From pascal77_C at hotmail.com Tue Dec 1 07:31:23 2015 From: pascal77_C at hotmail.com (PASCAL DIAFERIA) Date: Tue, 1 Dec 2015 12:31:23 +0000 Subject: [pypy-dev] Issue intalling Numpy module Message-ID: NARBONNE, 12/01/2015 Hi, I want to use Pypy ; I'm trying to install Numpy ; to that, I launched the next command: git clone https://bitbucket.org/pypy/numpy.git then in in Numpy directory, I launch the next command : pypy setup.py install I get the next error : ============================================ Traceback (most recent call last): File "setup.py", line 278, in setup_package() File "setup.py", line 261, in setup_package import setuptools ImportError: No module named setuptools ============================================ [Os = Win7 / 64-bits ; Python2.7.10] Pypy installed, pip, ... installed Maybe, you know the mean to bypass this step ; I have to say I'm not experienced and I'm looking for help to solve this problem. So, if you have any suggestion I will take and try it. In advance, thanks a lot (do not care of my written english as it's not my mother language). Pascal DIAFERIA FR Pascal DIAFERIA [http://gfx2.hotmail.com/mail/w2/ltr/../emoticons/rainbow.gif] -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Tue Dec 1 11:41:49 2015 From: matti.picus at gmail.com (Matti Picus) Date: Tue, 1 Dec 2015 18:41:49 +0200 Subject: [pypy-dev] Issue intalling Numpy module In-Reply-To: References: Message-ID: <565DCDCD.2030600@gmail.com> Installation of numpy requires setuptools. Try calling path\to\pypy\pip install setuptools Note setuptools (and pip) will be installed automatically if you have created a virtualenv: virtualenv -p path\to\pypy\pypy.exe path\for\virtualenv Matti On 01/12/15 14:31, PASCAL DIAFERIA wrote: > > NARBONNE, 12/01/2015 > > Hi, > I want to use Pypy ; I'm trying to install Numpy ; to that, I launched > the next command: > git clone https://bitbucket.org/pypy/numpy.git > then in in Numpy directory, I launch the next command : pypy setup.py > install > I get the next error : > ============================================ > Traceback (most recent call last): > File "setup.py", line 278, in > setup_package() > File "setup.py", line 261, in setup_package > import setuptools > ImportError: No module named setuptools > > ============================================ > > [Os = Win7 / 64-bits ; Python2.7.10] Pypy installed, pip, ... installed > > > > Maybe, you know the mean to bypass this step ; I have to say I'm not > experienced and I'm > looking for help to solve this problem. > So, if you have any suggestion I will take and try it. > > In advance, thanks a lot (do not care of my written english as it's > not my mother language). > > Pascal DIAFERIA > FR > > > */Pascal DIAFERIA/* > *//* > > > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From me at manueljacob.de Tue Dec 1 12:16:34 2015 From: me at manueljacob.de (Manuel Jacob) Date: Tue, 01 Dec 2015 18:16:34 +0100 Subject: [pypy-dev] Leysin Winter sprint? In-Reply-To: References: Message-ID: <9904363526be1c7bf53db0739b2719d4@indus.uberspace.de> Hi, I'd like to attend the sprint next year. My exams are between February 10th and 18th. January or after February 18th would be perfect for me. -Manuel On 2015-11-27 11:18, Armin Rigo wrote: > Hi all, > > Due to public pressure :-) I'm trying to organize this winter's > sprint, in Leysin, Switzerland. 
This is a fully public sprint of 7 > days, with possible skiing. > > If you'd like to come, please reply to this e-mail with dates that are > ok for you. I will give some priority to core people, but anyone else > is welcome to have preferences too. Traditionally, it is around > mid-January, but this year as far as I understand there is some push > to have it in February or March instead---which is fine too. The > following week should be avoided if possible, as it is holidays for > the canton's schools: 20-28 Feb 2016. > > > A bient?t, > > Armin. > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From vincent.legoll at gmail.com Sat Dec 5 05:14:48 2015 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Sat, 5 Dec 2015 11:14:48 +0100 Subject: [pypy-dev] Branch description Message-ID: Hello, in 81104:6a35beb87a56, I see that the now merged branch fix-setslice-can-resize was not described properly and now the what's new document does not contain any description. Is there something to do to fix this ? What should I have done before asking the PR ? Thanks -- Vincent Legoll From matti.picus at gmail.com Sat Dec 5 10:31:08 2015 From: matti.picus at gmail.com (Matti Picus) Date: Sat, 5 Dec 2015 17:31:08 +0200 Subject: [pypy-dev] Branch description In-Reply-To: References: Message-ID: <5663033C.4000802@gmail.com> On 05/12/15 12:14, Vincent Legoll wrote: > Hello, > > in 81104:6a35beb87a56, I see that the now merged branch fix-setslice-can-resize > was not described properly and now the what's new document does not contain > any description. > > Is there something to do to fix this ? > What should I have done before asking the PR ? > > Thanks > Thanks for the branch. You could send a patch to pypy/doc/whatsnew-head.rst either as a pull request or just a patch here, or we could just give you a commit bit to pypy/pypy Whatever is easiest for you Matti From vincent.legoll at gmail.com Sat Dec 5 12:17:47 2015 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Sat, 5 Dec 2015 17:17:47 +0000 Subject: [pypy-dev] Branch description In-Reply-To: <5663033C.4000802@gmail.com> References: <5663033C.4000802@gmail.com> Message-ID: thanks, i'll send a pr to describe it. Should it be comprehensive and add links to related bugs or just a single line desc? On Sat, Dec 5, 2015 at 3:31 PM, Matti Picus wrote: > On 05/12/15 12:14, Vincent Legoll wrote: >> >> Hello, >> >> in 81104:6a35beb87a56, I see that the now merged branch >> fix-setslice-can-resize >> was not described properly and now the what's new document does not >> contain >> any description. >> >> Is there something to do to fix this ? >> What should I have done before asking the PR ? >> >> Thanks >> > Thanks for the branch. You could send a patch to pypy/doc/whatsnew-head.rst > either as a pull request or just a patch here, > or we could just give you a commit bit to pypy/pypy > Whatever is easiest for you > Matti -- Vincent Legoll From matti.picus at gmail.com Sat Dec 5 12:22:34 2015 From: matti.picus at gmail.com (Matti Picus) Date: Sat, 5 Dec 2015 19:22:34 +0200 Subject: [pypy-dev] Branch description In-Reply-To: References: <5663033C.4000802@gmail.com> Message-ID: <56631D5A.2010707@gmail.com> On 05/12/15 19:17, Vincent Legoll wrote: > thanks, i'll send a pr to describe it. Should it be comprehensive and > add links to related bugs or just a single line desc? 
one or two lines, comprehensive but concise, and add the issue numbers as #xxx if possible Thanks Matti From matti.picus at gmail.com Mon Dec 7 17:22:50 2015 From: matti.picus at gmail.com (Matti Picus) Date: Tue, 8 Dec 2015 00:22:50 +0200 Subject: [pypy-dev] AppTestCpythonExtention tests Message-ID: <566606BA.2070005@gmail.com> in cpyext there are AppTestCpythonExtension tests, which compile a small capi module and then test it. It seems like there should be a way to run these tests with the -A flag, which would allow testing in cpython and on translated pypy Before I embark on this journey, has it been tried before and if so what were the conclusions? Matti From elmir at unity3d.com Fri Dec 11 09:07:02 2015 From: elmir at unity3d.com (Elmir Jagudin) Date: Fri, 11 Dec 2015 15:07:02 +0100 Subject: [pypy-dev] using python-ldap under pypy Message-ID: Hi I'm trying to run an application under pypy that authenticates user with LDAP. It is using python-ldap module and it fails to lookup the users. The problem is in python-ldap's c extension code. When it converts the LDAP search query from python format to C, parts of the query are corrupted. Is python-ldap supposed to work under pypy? How compatible is the python C API between cpython and pypy? Right now I can't figure out if this is a bug in python-ldap code or an compatibility with Pypy C API. Regards, Elmir Jagudin -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Fri Dec 11 09:39:19 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Fri, 11 Dec 2015 16:39:19 +0200 Subject: [pypy-dev] using python-ldap under pypy In-Reply-To: References: Message-ID: Hi Elmir. I would say that it should work, however, subtle bugs are a bit expected. I'm happy to help you debug it, let me know how I can reproduce it. On Fri, Dec 11, 2015 at 4:07 PM, Elmir Jagudin wrote: > Hi > > I'm trying to run an application under pypy that authenticates user with > LDAP. > > It is using python-ldap module and it fails to lookup the users. The problem > is in python-ldap's c extension code. When it converts the LDAP search query > from python format to C, parts of the query are corrupted. > > Is python-ldap supposed to work under pypy? How compatible is the python C > API between cpython and pypy? > > Right now I can't figure out if this is a bug in python-ldap code or an > compatibility with Pypy C API. > > Regards, > Elmir Jagudin > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From arigo at tunes.org Sun Dec 13 14:46:44 2015 From: arigo at tunes.org (Armin Rigo) Date: Sun, 13 Dec 2015 20:46:44 +0100 Subject: [pypy-dev] Leysin Winter sprint? In-Reply-To: <9904363526be1c7bf53db0739b2719d4@indus.uberspace.de> References: <9904363526be1c7bf53db0739b2719d4@indus.uberspace.de> Message-ID: Hi all, The sprint should be on the week of February, the 21-28th. A final confirmation will have to wait for around two more weeks, but unless there is really an unexpected problem, it should be this date. I will of course sent a proper sprint announcement then. A bient?t, Armin. From elmir at unity3d.com Sun Dec 13 16:19:47 2015 From: elmir at unity3d.com (Elmir Jagudin) Date: Sun, 13 Dec 2015 22:19:47 +0100 Subject: [pypy-dev] using python-ldap under pypy In-Reply-To: References: Message-ID: On Fri, Dec 11, 2015 at 3:39 PM, Maciej Fijalkowski wrote: > Hi Elmir. 
> > I would say that it should work, however, subtle bugs are a bit expected. > > Cool! We should try to fix the bug! > I'm happy to help you debug it, let me know how I can reproduce it. > The bug is pretty simple to reproduce, basically doing this query will show the bug: l = ldap.initialize(SERVER) l.simple_bind() res = l.search_s(BASE_DN, ldap.SCOPE_SUBTREE, FILTER, ["uid", "cn"]) # <-- these string will be mangled Here is the complete script which shows the bug: https://gist.github.com/elmirjagudin/6d7aadaa1825901ed73d The error happens in the python-ldap C code that converts ["uid", "cn"] array to char **. In this file: http://python-ldap.cvs.sourceforge.net/viewvc/python-ldap/python-ldap/Modules/LDAPObject.c?revision=1.91&view=markup in function attrs_from_List() there is this code (lines 289-290): 289: attrs[i] = PyString_AsString(item); 290: Py_DECREF(item); On line 289 the assigned string is correct, however after executing line 290, the string will be corrupted. I have noticed that under cpython, the refcount for 'item' is larger then 1. However under pypy it is always 1, and I guess after decreasing it, the 'item' is freed, and attrs[i] pointer becomes invalid. I don't know enough about python extension C API to know if this is a problem in python-ldap C code, or in the pypy code. Any help is appreciated! A general question, does pypy strive to be compatible with the API defined here: https://docs.python.org/2/c-api/ ? Thanks in advance, Elmir > > On Fri, Dec 11, 2015 at 4:07 PM, Elmir Jagudin wrote: > > Hi > > > > I'm trying to run an application under pypy that authenticates user with > > LDAP. > > > > It is using python-ldap module and it fails to lookup the users. The > problem > > is in python-ldap's c extension code. When it converts the LDAP search > query > > from python format to C, parts of the query are corrupted. > > > > Is python-ldap supposed to work under pypy? How compatible is the python > C > > API between cpython and pypy? > > > > Right now I can't figure out if this is a bug in python-ldap code or an > > compatibility with Pypy C API. > > > > Regards, > > Elmir Jagudin > > > > _______________________________________________ > > pypy-dev mailing list > > pypy-dev at python.org > > https://mail.python.org/mailman/listinfo/pypy-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Mon Dec 14 04:01:35 2015 From: arigo at tunes.org (Armin Rigo) Date: Mon, 14 Dec 2015 10:01:35 +0100 Subject: [pypy-dev] using python-ldap under pypy In-Reply-To: References: Message-ID: Hi Elmir, On Sun, Dec 13, 2015 at 10:19 PM, Elmir Jagudin wrote: > The error happens in the python-ldap C code that converts ["uid", "cn"] > array to char **. > > In this file: > http://python-ldap.cvs.sourceforge.net/viewvc/python-ldap/python-ldap/Modules/LDAPObject.c?revision=1.91&view=markup > > in function attrs_from_List() there is this code (lines 289-290): > > 289: attrs[i] = PyString_AsString(item); > 290: Py_DECREF(item); > > On line 289 the assigned string is correct, however after executing line > 290, the string will be corrupted. > > I have noticed that under cpython, the refcount for 'item' is larger then 1. > However under pypy it is always 1, and I guess after decreasing it, the > 'item' is freed, and attrs[i] pointer becomes invalid. Ok. However the sentence "under CPython the refcount for 'item' is larger than 1" is not true in all cases. It is true for simple lists or tuples, but not for more complex types. 
That means that you can probably get already-freed strings under CPython too. Try for example: class CustomSeq(object): def __getitem__(self, i): return str(i) # returns a refcount=1 result def __len__(self): return 2 res = l.search_s(BASE_DN, ldap.SCOPE_SUBTREE, FILTER, CustomSeq()) So it means it's really a bug of python-ldap, which just happens to crash more often on PyPy than on CPython. It should be fixed there. A bient?t, Armin. From arigo at tunes.org Mon Dec 14 04:09:30 2015 From: arigo at tunes.org (Armin Rigo) Date: Mon, 14 Dec 2015 10:09:30 +0100 Subject: [pypy-dev] using python-ldap under pypy In-Reply-To: References: Message-ID: Hi again, On Mon, Dec 14, 2015 at 10:01 AM, Armin Rigo wrote: > So it means it's really a bug of python-ldap, which just happens to > crash more often on PyPy than on CPython. It should be fixed there. Actually it's a known issue. See the comment line 255: XXX the strings should live longer than the resulting attrs pointer. A bient?t, Armin. From elmir at unity3d.com Mon Dec 14 06:38:10 2015 From: elmir at unity3d.com (Elmir Jagudin) Date: Mon, 14 Dec 2015 12:38:10 +0100 Subject: [pypy-dev] using python-ldap under pypy In-Reply-To: References: Message-ID: On Mon, Dec 14, 2015 at 10:01 AM, Armin Rigo wrote: > Hi Elmir, > > On Sun, Dec 13, 2015 at 10:19 PM, Elmir Jagudin wrote: > > The error happens in the python-ldap C code that converts ["uid", "cn"] > > array to char **. > > > > In this file: > > > http://python-ldap.cvs.sourceforge.net/viewvc/python-ldap/python-ldap/Modules/LDAPObject.c?revision=1.91&view=markup > > > > in function attrs_from_List() there is this code (lines 289-290): > > > > 289: attrs[i] = PyString_AsString(item); > > 290: Py_DECREF(item); > > > > On line 289 the assigned string is correct, however after executing line > > 290, the string will be corrupted. > > > > I have noticed that under cpython, the refcount for 'item' is larger > then 1. > > However under pypy it is always 1, and I guess after decreasing it, the > > 'item' is freed, and attrs[i] pointer becomes invalid. > > Ok. However the sentence "under CPython the refcount for 'item' is > larger than 1" is not true in all cases. It is true for simple lists > or tuples, but not for more complex types. That means that you can > probably get already-freed strings under CPython too. Try for > example: > > class CustomSeq(object): > def __getitem__(self, i): > return str(i) # returns a refcount=1 result > def __len__(self): > return 2 > > res = l.search_s(BASE_DN, > ldap.SCOPE_SUBTREE, > FILTER, > CustomSeq()) > > > So it means it's really a bug of python-ldap, which just happens to > crash more often on PyPy than on CPython. It should be fixed there. > Yepp, you are right. Following version of the code above clearly shows that it's broken under CPython as well: class CustomSeq(object): def __getitem__(self, i): return str(i) # returns a refcount=1 result def __len__(self): return 20 The resulting query send over network will be wrong. Thanks for clarification. /Elmir -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Wed Dec 16 07:39:24 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 16 Dec 2015 12:39:24 +0000 Subject: [pypy-dev] Current status of GUI support in PyPy Message-ID: After a casual bit of googling I'm not sure what is the current status of GUI support in PyPy. Does tkinter work with PyPy yet (It doesn't in the version I have installed here)? 
Is there any other GUI that you recommend? I'm looking to make a Python-based physics simulator that should be able to show animations in some kind of GUI. I would like to have some kind of web-based format as one output of this (so that results can be shown on the web) but I'm not sure how good that would be for interactive simulation. The physics calculations require that I need to use either PyPy or Cython or something (not plain CPython) to get reasonable performance. One possibility is that I could run PyPy in a subprocess from a CPython-based GUI which I guess is not dissimilar to getting it to work in the browser except I get to write the front-end in Python. -- Oscar From arigo at tunes.org Wed Dec 16 07:45:56 2015 From: arigo at tunes.org (Armin Rigo) Date: Wed, 16 Dec 2015 13:45:56 +0100 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: Hi Oscar, On Wed, Dec 16, 2015 at 1:39 PM, Oscar Benjamin wrote: > Does tkinter work with PyPy yet (It doesn't in > the version I have installed here)? It should work. Did you install a recent official release, or compile it yourself? In the latter case, do you have the "tk-dev" headers installed on your machine? If you get an obscure crash, please open a bug report. :-) A bient?t, Armin. From oscar.j.benjamin at gmail.com Wed Dec 16 08:05:43 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 16 Dec 2015 13:05:43 +0000 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: On 16 December 2015 at 12:45, Armin Rigo wrote: > > On Wed, Dec 16, 2015 at 1:39 PM, Oscar Benjamin > wrote: >> Does tkinter work with PyPy yet (It doesn't in >> the version I have installed here)? > > It should work. Did you install a recent official release, or compile > it yourself? In the latter case, do you have the "tk-dev" headers > installed on your machine? If you get an obscure crash, please open a > bug report. :-) Thanks Armin. I guess I just don't have a recent enough version. This machine is running Ubuntu 12.04 and I have PyPy from the repos: $ pypy Python 2.7.2 (1.8+dfsg-2, Feb 19 2012, 19:18:08) [PyPy 1.8.0 with GCC 4.6.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. And now for something completely different: ``pypy is a race between the industry trying to build machines with more and more resources, and the pypy developers trying to eat all of them. So far, the winner is still unclear'' >>>> import Tkinter Traceback (most recent call last): File "", line 1, in File "/usr/lib/pypy/lib-python/2.7/lib-tk/Tkinter.py", line 39, in import _tkinter # If this fails your Python may not be configured for Tk ImportError: No module named _tkinter I'll try installing a newer version... Looking here there seems to be binaries built for 12.04-14.04: http://pypy.org/download.html This one import tkinter without error: https://bitbucket.org/pypy/pypy/downloads/pypy3-2.4.0-linux64.tar.bz2 $ bin/pypy3 Python 3.2.5 (b2091e973da6, Oct 19 2014, 18:29:55) [PyPy 2.4.0 with GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>>> import Tkinter Traceback (most recent call last): File "", line 1, in ImportError: No module named Tkinter >>>> import tkinter # correct Py3 module name >>>> This one does not: https://bitbucket.org/pypy/pypy/downloads/pypy-4.0.1-linux64.tar.bz2 $ bin/pypy Python 2.7.10 (5f8302b8bf9f, Nov 18 2015, 10:46:46) [PyPy 4.0.1 with GCC 4.8.4] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>>> import Tkinter Traceback (most recent call last): File "", line 1, in File "/users/enojb/src/pypy-4.0.1-linux64/lib-python/2.7/lib-tk/Tkinter.py", line 39, in import _tkinter # If this fails your Python may not be configured for Tk File "/users/enojb/src/pypy-4.0.1-linux64/lib_pypy/_tkinter/__init__.py", line 13, in from .tklib_cffi import ffi as tkffi, lib as tklib ImportError: unable to load extension module '/users/enojb/src/pypy-4.0.1-linux64/lib_pypy/_tkinter/tklib_cffi.pypy-26.so': libtcl8.5.so: cannot open shared object file: No such file or directory Is that an obscure crash? -- Oscar From arigo at tunes.org Wed Dec 16 09:43:47 2015 From: arigo at tunes.org (Armin Rigo) Date: Wed, 16 Dec 2015 15:43:47 +0100 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: Hi Oscar, On Wed, Dec 16, 2015 at 2:05 PM, Oscar Benjamin wrote: > libtcl8.5.so: cannot open shared object file: No such file or > directory > > Is that an obscure crash? Sadly not. It just means it was compiled for libtcl8.5.so, and your machine has a different version installed. You can recompile it simply by running: cd /users/enojb/src/pypy-4.0.1-linux64/lib_pypy/_tkinter/ ../../bin/pypy tklib_build.py A bient?t, Armin. From rymg19 at gmail.com Wed Dec 16 10:46:22 2015 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Wed, 16 Dec 2015 09:46:22 -0600 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: <7F1FB41E-1089-451F-9A05-BA654AF2CE3F@gmail.com> On December 16, 2015 7:05:43 AM CST, Oscar Benjamin wrote: >On 16 December 2015 at 12:45, Armin Rigo wrote: >> >> On Wed, Dec 16, 2015 at 1:39 PM, Oscar Benjamin >> wrote: >>> Does tkinter work with PyPy yet (It doesn't in >>> the version I have installed here)? >> >> It should work. Did you install a recent official release, or >compile >> it yourself? In the latter case, do you have the "tk-dev" headers >> installed on your machine? If you get an obscure crash, please open >a >> bug report. :-) > >Thanks Armin. I guess I just don't have a recent enough version. This >machine is running Ubuntu 12.04 and I have PyPy from the repos: > >$ pypy >Python 2.7.2 (1.8+dfsg-2, Feb 19 2012, 19:18:08) >[PyPy 1.8.0 with GCC 4.6.2] on linux2 >Type "help", "copyright", "credits" or "license" for more information. >And now for something completely different: ``pypy is a race between >the >industry trying to build machines with more and more resources, and the >pypy >developers trying to eat all of them. So far, the winner is still >unclear'' >>>>> import Tkinter >Traceback (most recent call last): > File "", line 1, in >File "/usr/lib/pypy/lib-python/2.7/lib-tk/Tkinter.py", line 39, in > >import _tkinter # If this fails your Python may not be configured for >Tk >ImportError: No module named _tkinter > >I'll try installing a newer version... > The version Ubuntu 12.04 comes with is pretty old. PyPy 2.0 is WAY better. I like Ubuntu, but I hate how packages can go out of date so easily. 
>Looking here there seems to be binaries built for 12.04-14.04: >http://pypy.org/download.html > >This one import tkinter without error: >https://bitbucket.org/pypy/pypy/downloads/pypy3-2.4.0-linux64.tar.bz2 > >$ bin/pypy3 >Python 3.2.5 (b2091e973da6, Oct 19 2014, 18:29:55) >[PyPy 2.4.0 with GCC 4.6.3] on linux2 >Type "help", "copyright", "credits" or "license" for more information. >>>>> import Tkinter >Traceback (most recent call last): > File "", line 1, in >ImportError: No module named Tkinter >>>>> import tkinter # correct Py3 module name >>>>> > >This one does not: >https://bitbucket.org/pypy/pypy/downloads/pypy-4.0.1-linux64.tar.bz2 > >$ bin/pypy >Python 2.7.10 (5f8302b8bf9f, Nov 18 2015, 10:46:46) >[PyPy 4.0.1 with GCC 4.8.4] on linux2 >Type "help", "copyright", "credits" or "license" for more information. >>>>> import Tkinter >Traceback (most recent call last): > File "", line 1, in >File >"/users/enojb/src/pypy-4.0.1-linux64/lib-python/2.7/lib-tk/Tkinter.py", >line 39, in >import _tkinter # If this fails your Python may not be configured for >Tk >File >"/users/enojb/src/pypy-4.0.1-linux64/lib_pypy/_tkinter/__init__.py", >line 13, in > from .tklib_cffi import ffi as tkffi, lib as tklib >ImportError: unable to load extension module >'/users/enojb/src/pypy-4.0.1-linux64/lib_pypy/_tkinter/tklib_cffi.pypy-26.so': >libtcl8.5.so: cannot open shared object file: No such file or >directory > >Is that an obscure crash? > >-- >Oscar >_______________________________________________ >pypy-dev mailing list >pypy-dev at python.org >https://mail.python.org/mailman/listinfo/pypy-dev -- Sent from my Nexus 5 with K-9 Mail. Please excuse my brevity. From oscar.j.benjamin at gmail.com Thu Dec 17 09:33:17 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 17 Dec 2015 14:33:17 +0000 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: On 16 December 2015 at 14:43, Armin Rigo wrote: > > On Wed, Dec 16, 2015 at 2:05 PM, Oscar Benjamin > wrote: >> libtcl8.5.so: cannot open shared object file: No such file or >> directory >> >> Is that an obscure crash? > > Sadly not. It just means it was compiled for libtcl8.5.so, and your > machine has a different version installed. I'm not sure about that: enojb at it054759:~$ ls /usr/lib/libtcl* /usr/lib/libtcl8.5.so.0 enojb at it054759:~$ ls /usr/lib/libtk* /usr/lib/libtk8.5.so.0 Maybe it's just missing the symlink from libtcl8.5.so -> libtcl8.5.so.0. I'm not sure why it would be setup like that. > You can recompile it > simply by running: > > cd /users/enojb/src/pypy-4.0.1-linux64/lib_pypy/_tkinter/ > ../../bin/pypy tklib_build.py The above gave me an error about "tk.h" so I installed tk8.5-dev. I recompiled tkinter and can now import Tkinter without error. 
Afterwards I noticed that installing tk8.5-dev gave me this: enojb at it054759:~$ ls /usr/lib/libtk* /usr/lib/libtk8.5.a /usr/lib/libtk8.5.so /usr/lib/libtk8.5.so.0 /usr/lib/libtkstub8.5.a enojb at it054759:~$ ls /usr/lib/libtcl* /usr/lib/libtcl8.5.a /usr/lib/libtcl8.5.so /usr/lib/libtcl8.5.so.0 /usr/lib/libtclstub8.5.a -- Oscar From oscar.j.benjamin at gmail.com Thu Dec 17 09:36:22 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 17 Dec 2015 14:36:22 +0000 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: <7F1FB41E-1089-451F-9A05-BA654AF2CE3F@gmail.com> References: <7F1FB41E-1089-451F-9A05-BA654AF2CE3F@gmail.com> Message-ID: On 16 December 2015 at 15:46, Ryan Gonzalez wrote: > The version Ubuntu 12.04 comes with is pretty old. PyPy 2.0 is WAY better. > > I like Ubuntu, but I hate how packages can go out of date so easily. To be fair I should just update my Ubuntu version. I don't use 12.04 at home but the situation with IT here means that updating my OS is not as simple as it should be. -- Oscar From arigo at tunes.org Thu Dec 17 10:00:02 2015 From: arigo at tunes.org (Armin Rigo) Date: Thu, 17 Dec 2015 16:00:02 +0100 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: Hi Oscar, On Thu, Dec 17, 2015 at 3:33 PM, Oscar Benjamin wrote: > Maybe it's just missing the symlink from libtcl8.5.so -> > libtcl8.5.so.0. I'm not sure why it would be setup like that. Yes. I mostly gave up understanding the differences in binary distributions in Linux. It just turns out that on the particular distribution where this pypy was built, it's called "libtcl8.5.so". Maybe it is indeed just a symlink to "libtcl8.5.so.0", and so if we put "libtcl8.5.so.0" in the "_tklib_cffi.so" then everybody would be happy. However I have no clue how to do that in the .so without tons of non-portable tricks. gcc is invoked with "-ltcl", and figures out at compilation time that it means "link with libtcl8.5.so". A bient?t, Armin. From oscar.j.benjamin at gmail.com Thu Dec 17 10:11:24 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 17 Dec 2015 15:11:24 +0000 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: On 17 December 2015 at 15:00, Armin Rigo wrote: > On Thu, Dec 17, 2015 at 3:33 PM, Oscar Benjamin > wrote: >> Maybe it's just missing the symlink from libtcl8.5.so -> >> libtcl8.5.so.0. I'm not sure why it would be setup like that. > > Yes. I mostly gave up understanding the differences in binary > distributions in Linux. It just turns out that on the particular > distribution where this pypy was built, it's called "libtcl8.5.so". > Maybe it is indeed just a symlink to "libtcl8.5.so.0", and so if we > put "libtcl8.5.so.0" in the "_tklib_cffi.so" then everybody would be > happy. However I have no clue how to do that in the .so without tons > of non-portable tricks. gcc is invoked with "-ltcl", and figures out > at compilation time that it means "link with libtcl8.5.so". I just tested and re-downloading it works out of the box without any need to recompile. Perhaps an easier solution is just to mention that tk8.5-dev may be needed as a dependency for those binaries. -- Oscar From yury at shurup.com Thu Dec 17 12:33:19 2015 From: yury at shurup.com (Yury V. 
Zaytsev) Date: Thu, 17 Dec 2015 18:33:19 +0100 (CET) Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: Hi Armin, hi Oscar, I think that the mystery lies in the packaging differences between Ubuntu 12.04 and Ubuntu 14.04 releases: root at 1204:/# readelf -d /usr/lib/libtk8.5.so.0 | grep tk 0x000000000000000e (SONAME) Library soname: [libtk8.5.so.0] root at 1404:/# readelf -d /usr/lib/x86_64-linux-gnu/libtk8.5.so | grep tk 0x000000000000000e (SONAME) Library soname: [libtk8.5.so] On Ubuntu 12.04 they split libtk8.5.so.0 and libtk8.5.so, and the latter is only shipped in the tk8.5-dev package, whereas on Ubuntu 14.04 they ship libtk8.5.so.0 and libtk8.5.so together in libtk-8.5 package. So, to run a binary compiled on Ubuntu 14.04 on Ubuntu 12.04, you do, indeed, need to install tk8.5-dev to get the libtk8.5.so symlink, that is, unfortunately, backwards compatibility hasn't been preserved in this case, which you have just witnessed... However, a binary compiled on Ubuntu 12.04 should still "just work" on Ubuntu 14.04 (at least in as far as TK is concerned). The reasons why they changed the SONAME are not clear to me, but it might be that they decided to ditch the major number, because TK encodes it in the library name anyways, and left the symlink for forwards compatibility. -- Sincerely yours, Yury V. Zaytsev On Thu, 17 Dec 2015, Oscar Benjamin wrote: > On 17 December 2015 at 15:00, Armin Rigo wrote: >> On Thu, Dec 17, 2015 at 3:33 PM, Oscar Benjamin >> wrote: >>> Maybe it's just missing the symlink from libtcl8.5.so -> >>> libtcl8.5.so.0. I'm not sure why it would be setup like that. >> >> Yes. I mostly gave up understanding the differences in binary >> distributions in Linux. It just turns out that on the particular >> distribution where this pypy was built, it's called "libtcl8.5.so". >> Maybe it is indeed just a symlink to "libtcl8.5.so.0", and so if we >> put "libtcl8.5.so.0" in the "_tklib_cffi.so" then everybody would be >> happy. However I have no clue how to do that in the .so without tons >> of non-portable tricks. gcc is invoked with "-ltcl", and figures out >> at compilation time that it means "link with libtcl8.5.so". > > I just tested and re-downloading it works out of the box without any > need to recompile. Perhaps an easier solution is just to mention that > tk8.5-dev may be needed as a dependency for those binaries. > > -- > Oscar > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > From marky1991 at gmail.com Thu Dec 17 20:58:54 2015 From: marky1991 at gmail.com (marky1991 .) Date: Thu, 17 Dec 2015 20:58:54 -0500 Subject: [pypy-dev] Running Tests for the py3.3 branch Message-ID: I know I've asked this in irc at least 5 times at this point, but I am still running into issues when I try to run the tests. I have built a pypy binary locally. My exact steps: hg update default cd pypy/goal pypy ../../rpython/bin/rpython --opt=2 #Wait for it to finish... cd .. cd .. PYTHONPATH=. 
./pypy-c pypy/tool/build_cffi_imports.py hg update py3.3 pypy/goal/pypy-c pytest.py -sx pypy/module/ (My pypy-c's version info): pypy/goal/pypy-c --version Python 2.7.10 (2cf2803c6652, Dec 17 2015, 06:33:54) [PyPy 4.1.0-alpha0 with GCC 4.8.2] The output of the test line: ==================================================================== test session starts ===================================================================== platform linux2 -- Python 2.7.10[pypy-4.1.0-alpha] -- py-1.4.20 -- pytest-2.5.2 pytest-2.5.2 from /home/lgfdev/Proyectos/my_pypy/pytest.pyc collected 0 items / 1 errors =========================================================================== ERRORS =========================================================================== _____________________________________________________________________ ERROR collecting . _____________________________________________________________________ py/_path/common.py:327: in visit > for x in Visitor(fil, rec, ignore, bf, sort).gen(self): py/_path/common.py:363: in gen > if p.check(dir=1) and (rec is None or rec(p))]) _pytest/main.py:600: in _recurse > ihook.pytest_collect_directory(path=path, parent=self) _pytest/main.py:161: in call_matching_hooks > plugins = self.config._getmatchingplugins(self.fspath) _pytest/config.py:670: in _getmatchingplugins > plugins += self._conftest.getconftestmodules(fspath) _pytest/config.py:512: in getconftestmodules > clist.append(self.importconftest(conftestpath)) _pytest/config.py:538: in importconftest > self._conftestpath2mod[conftestpath] = mod = conftestpath.pyimport() py/_path/local.py:620: in pyimport > __import__(modname) pypy/module/select/__init__.py:8: in > class Module(MixedModule): pypy/module/select/__init__.py:22: in Module > from pypy.module.select.interp_epoll import public_symbols pypy/module/select/interp_epoll.py:9: in > from pypy.interpreter.typedef import TypeDef, GetSetProperty pypy/interpreter/typedef.py:66: in > @interp2app pypy/interpreter/gateway.py:929: in __new__ > doc=doc) pypy/interpreter/gateway.py:598: in __init__ > from pypy.interpreter import pycode pypy/interpreter/pycode.py:17: in > from pypy.tool.stdlib_opcode import opcodedesc, HAVE_ARGUMENT pypy/tool/stdlib_opcode.py:31: in > load_pypy_opcode() pypy/tool/stdlib_opcode.py:22: in load_pypy_opcode > from pypy.tool.lib_pypy import LIB_PYTHON pypy/tool/lib_pypy.py:4: in > from pypy.module.sys.version import CPYTHON_VERSION pypy/module/sys/version.py:5: in > from rpython.rlib import compilerinfo rpython/rlib/compilerinfo.py:2: in > from rpython.rtyper.lltypesystem import rffi rpython/rtyper/lltypesystem/rffi.py:5: in > from rpython.rtyper.lltypesystem import ll2ctypes rpython/rtyper/lltypesystem/ll2ctypes.py:4: in > import ctypes lib-python/2.7/ctypes/__init__.py:11: in > from _ctypes import Union, Structure, Array lib_pypy/_ctypes/__init__.py:2: in > from _ctypes.basics import _CData, sizeof, alignment, byref, addressof,\ E File "/home/lgfdev/Proyectos/my_pypy/lib_pypy/_ctypes/basics.py", line 138 E class _CData(object, metaclass=_CDataMeta): E ^ E SyntaxError: invalid syntax ================================================================== 1 error in 3.36 seconds =================================================================== What have I screwed up? 
Interestingly to me, if I run the tests using the apt-get-provided pypy (I think this one came from http://ppa.launchpad.net/pypy/ppa/ubuntu/, but I could be misremembering), it fails differently: pypy --version Python 2.7.10 (4.0.1+dfsg-1~ppa1~ubuntu14.04, Nov 20 2015, 19:34:15) [PyPy 4.0.1 with GCC 4.8.4] pypy pytest.py -sx pypy/module =================================================================================================================================================== test session starts ==================================================================================================================================================== platform linux2 -- Python 2.7.10[pypy-4.0.1-final] -- py-1.4.20 -- pytest-2.5.2 pytest-2.5.2 from /home/lgfdev/Proyectos/my_pypy/pytest.py [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /tmp/usession-py3.3-228/platcheck_10.c -o /tmp/usession-py3.3-228/platcheck_10.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /tmp/usession-py3.3-228/platcheck_15.c -o /tmp/usession-py3.3-228/platcheck_15.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /tmp/usession-py3.3-228/platcheck_18.c -o /tmp/usession-py3.3-228/platcheck_18.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /tmp/usession-py3.3-228/platcheck_19.c -o /tmp/usession-py3.3-228/platcheck_19.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /tmp/usession-py3.3-228/platcheck_20.c -o /tmp/usession-py3.3-228/platcheck_20.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /tmp/usession-py3.3-228/platcheck_25.c -o /tmp/usession-py3.3-228/platcheck_25.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /tmp/usession-py3.3-228/platcheck_36.c -o /tmp/usession-py3.3-228/platcheck_36.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /tmp/usession-py3.3-228/platcheck_38.c -o /tmp/usession-py3.3-228/platcheck_38.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -DRPY_EXTERN=RPY_EXPORTED -I/home/lgfdev/Proyectos/my_pypy/pypy/module/cppyy/include -I/home/lgfdev/Proyectos/my_pypy/pypy/module/cppyy/test -I/home/lgfdev/Proyectos/my_pypy/rpython/translator/c /home/lgfdev/Proyectos/my_pypy/pypy/module/cppyy/src/dummy_backend.cxx -o /tmp/usession-py3.3-228/pypy/module/cppyy/src/dummy_backend.o [platform:execute] g++ -shared /tmp/usession-py3.3-228/pypy/module/cppyy/src/dummy_backend.o -pthread -Wl,--export-dynamic -lrt -o /tmp/usession-py3.3-228/pypy/module/cppyy/src/libcppyy_dummy_backend.so [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /tmp/usession-py3.3-228/platcheck_50.c -o /tmp/usession-py3.3-228/platcheck_50.o [platform:execute] gcc /tmp/usession-py3.3-228/platcheck_50.o -pthread -Wl,--export-dynamic -lintl -lrt -o /tmp/usession-py3.3-228/platcheck_50 [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused /tmp/usession-py3.3-228/platcheck_61.c -o /tmp/usession-py3.3-228/platcheck_61.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -DPy_BUILD_CORE -I/home/lgfdev/Proyectos/my_pypy/pypy/module/cpyext/include -I/home/lgfdev/Proyectos/my_pypy/rpython/translator/c -I/tmp/usession-py3.3-228 /tmp/usession-py3.3-228/platcheck_62.c -o /tmp/usession-py3.3-228/platcheck_62.o [platform:execute] gcc /tmp/usession-py3.3-228/platcheck_62.o -pthread 
-Wl,--export-dynamic -lrt -o /tmp/usession-py3.3-228/platcheck_62 [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -DPy_BUILD_CORE -I/home/lgfdev/Proyectos/my_pypy/pypy/module/cpyext/include -I/home/lgfdev/Proyectos/my_pypy/rpython/translator/c -I/tmp/usession-py3.3-228 /tmp/usession-py3.3-228/platcheck_63.c -o /tmp/usession-py3.3-228/platcheck_63.o [platform:execute] gcc /tmp/usession-py3.3-228/platcheck_63.o -pthread -Wl,--export-dynamic -lrt -o /tmp/usession-py3.3-228/platcheck_63 [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -DPy_BUILD_CORE -I/home/lgfdev/Proyectos/my_pypy/pypy/module/cpyext/include -I/home/lgfdev/Proyectos/my_pypy/rpython/translator/c -I/tmp/usession-py3.3-228 /tmp/usession-py3.3-228/platcheck_64.c -o /tmp/usession-py3.3-228/platcheck_64.o [platform:execute] gcc /tmp/usession-py3.3-228/platcheck_64.o -pthread -Wl,--export-dynamic -lrt -o /tmp/usession-py3.3-228/platcheck_64 [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES -I/home/lgfdev/Proyectos/my_pypy/pypy/module/_codecs -I/home/lgfdev/Proyectos/my_pypy/rpython/translator/c /home/lgfdev/Proyectos/my_pypy/pypy/module/_codecs/locale_codec.c -o /tmp/usession-py3.3-228/pypy/module/_codecs/locale_codec.o [platform:execute] gcc -shared /tmp/usession-py3.3-228/pypy/module/_codecs/locale_codec.o -pthread -Wl,--export-dynamic -lrt -o /tmp/usession-py3.3-228/shared_cache/externmod.so [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_0.c -o /tmp/usession-py3.3-228/module_cache/module_0.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_1.c -o /tmp/usession-py3.3-228/module_cache/module_1.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_2.c -o /tmp/usession-py3.3-228/module_cache/module_2.o [platform:execute] gcc -shared /tmp/usession-py3.3-228/module_cache/module_0.o /tmp/usession-py3.3-228/module_cache/module_1.o /tmp/usession-py3.3-228/module_cache/module_2.o -pthread -Wl,--export-dynamic -lrt -o /tmp/usession-py3.3-228/shared_cache/externmod_0.so [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_3.c -o /tmp/usession-py3.3-228/module_cache/module_3.o [platform:execute] gcc -shared /tmp/usession-py3.3-228/module_cache/module_3.o -pthread -Wl,--export-dynamic -lrt -o /tmp/usession-py3.3-228/shared_cache/externmod_1.so [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_4.c -o /tmp/usession-py3.3-228/module_cache/module_4.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_5.c -o /tmp/usession-py3.3-228/module_cache/module_5.o [platform:execute] gcc -c -O3 -pthread 
-fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_6.c -o /tmp/usession-py3.3-228/module_cache/module_6.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_7.c -o /tmp/usession-py3.3-228/module_cache/module_7.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_8.c -o /tmp/usession-py3.3-228/module_cache/module_8.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_9.c -o /tmp/usession-py3.3-228/module_cache/module_9.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_10.c -o /tmp/usession-py3.3-228/module_cache/module_10.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_11.c -o /tmp/usession-py3.3-228/module_cache/module_11.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_12.c -o /tmp/usession-py3.3-228/module_cache/module_12.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_13.c -o /tmp/usession-py3.3-228/module_cache/module_13.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_14.c -o /tmp/usession-py3.3-228/module_cache/module_14.o [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -g -O0 -DRPY_EXTERN=RPY_EXPORTED -DRPYTHON_LL2CTYPES /tmp/usession-py3.3-228/module_cache/module_15.c -o /tmp/usession-py3.3-228/module_cache/module_15.o [platform:WARNING] /tmp/usession-py3.3-228/module_cache/module_15.c: In function ?pypy_macro_wrapper_mknod?: [platform:WARNING] /tmp/usession-py3.3-228/module_cache/module_15.c:88:1: warning: implicit declaration of function ?mknod? 
[-Wimplicit-function-declaration] [platform:WARNING] RPY_EXTERN int pypy_macro_wrapper_mknod(char *arg0, int arg1, int arg2) { return mknod(arg0, arg1, arg2); } [platform:WARNING] ^ [platform:execute] gcc -shared /tmp/usession-py3.3-228/module_cache/module_4.o /tmp/usession-py3.3-228/module_cache/module_5.o /tmp/usession-py3.3-228/module_cache/module_6.o /tmp/usession-py3.3-228/module_cache/module_7.o /tmp/usession-py3.3-228/module_cache/module_8.o /tmp/usession-py3.3-228/module_cache/module_9.o /tmp/usession-py3.3-228/module_cache/module_10.o /tmp/usession-py3.3-228/module_cache/module_11.o /tmp/usession-py3.3-228/module_cache/module_12.o /tmp/usession-py3.3-228/module_cache/module_13.o /tmp/usession-py3.3-228/module_cache/module_14.o /tmp/usession-py3.3-228/module_cache/module_15.o -pthread -Wl,--export-dynamic -lutil -lrt -o /tmp/usession-py3.3-228/shared_cache/externmod_2.so [platform:execute] gcc -c -O3 -pthread -fomit-frame-pointer -Wall -Wno-unused -fPIC -fvisibility=hidden -I/home/lgfdev/Proyectos/my_pypy/rpython/translator/c /home/lgfdev/Proyectos/my_pypy/pypy/module/test_lib_pypy/ctypes_tests/_ctypes_test.c -o /tmp/usession-py3.3-228/pypy/module/test_lib_pypy/ctypes_tests/_ctypes_test.o [platform:WARNING] /home/lgfdev/Proyectos/my_pypy/pypy/module/test_lib_pypy/ctypes_tests/_ctypes_test.c:256:14: warning: ?an_integer? initialized and declared ?extern? [enabled by default] [platform:WARNING] EXPORT (int) an_integer = 42; [platform:WARNING] ^ [platform:WARNING] /home/lgfdev/Proyectos/my_pypy/pypy/module/test_lib_pypy/ctypes_tests/_ctypes_test.c:263:14: warning: ?a_string? initialized and declared ?extern? [enabled by default] [platform:WARNING] EXPORT(char) a_string[16] = "0123456789abcdef"; [platform:WARNING] ^ [platform:WARNING] /home/lgfdev/Proyectos/my_pypy/pypy/module/test_lib_pypy/ctypes_tests/_ctypes_test.c:323:19: warning: ?last_tf_arg_s? initialized and declared ?extern? [enabled by default] [platform:WARNING] EXPORT(LONG_LONG) last_tf_arg_s = 0; [platform:WARNING] ^ [platform:WARNING] /home/lgfdev/Proyectos/my_pypy/pypy/module/test_lib_pypy/ctypes_tests/_ctypes_test.c:324:28: warning: ?last_tf_arg_u? initialized and declared ?extern? 
[enabled by default] [platform:WARNING] EXPORT(unsigned LONG_LONG) last_tf_arg_u = 0; [platform:WARNING] ^ [platform:execute] gcc -shared /tmp/usession-py3.3-228/pypy/module/test_lib_pypy/ctypes_tests/_ctypes_test.o -pthread -Wl,--export-dynamic -lrt -o /tmp/pytest-33/_ctypes_test/_ctypes_test.so collecting 1197 items / 1 errors ========================================================================================================================================================== ERRORS ========================================================================================================================================================== _________________________________________________________________________________________________________________________ ERROR collecting pypy/module/_posixsubprocess/test/test_ztranslation.py __________________________________________________________________________________________________________________________ import file mismatch: imported module 'test_ztranslation' has this __file__ attribute: /home/lgfdev/Proyectos/my_pypy/pypy/module/_hashlib/test/test_ztranslation.py which is not the same as the test file we want to collect: /home/lgfdev/Proyectos/my_pypy/pypy/module/_posixsubprocess/test/test_ztranslation.py HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ================================================================================================================================================= 1 error in 50.85 seconds ================================================================================================================================================= This test is not failing for the regular py3.3 buildbot. Similarly, if I run specific files which are currently failing for the py3.3 buildbot, they all just work on my machine when tested using the "pypy" executable. (A couple have actually failed the same as they have failed for the py3.3 buildbot, but every still-failing test that I have specifically run passes locally now, so something must be wrong.) Why is there a difference in behavior between my /usr/bin/pypy and my newly-built pypy? Why are neither of the pytest.py commands matching the output of the tests run by the buildbot? I don't really understand how lib_pypy is supposed to interact with the python used to invoke the tests. The behavior seen when I run the tests with my pypy-c makes sense to me, failing as soon as it runs into python3-specific code. I don't really get why the tests don't fail the same way when I run the tests using the "pypy" executable. If anyone could explain where I have gone wrong, it would be greatly, greatly appreciated. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From marky1991 at gmail.com Thu Dec 17 21:24:05 2015 From: marky1991 at gmail.com (marky1991 .) Date: Thu, 17 Dec 2015 21:24:05 -0500 Subject: [pypy-dev] Running Tests for the py3.3 branch In-Reply-To: References: Message-ID: Sorry, my OS in case it's relevant: Linux Mint 17 64 bit -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marky1991 at gmail.com Thu Dec 17 21:49:32 2015 From: marky1991 at gmail.com (marky1991 .) Date: Thu, 17 Dec 2015 21:49:32 -0500 Subject: [pypy-dev] Running Tests for the py3.3 branch In-Reply-To: References: Message-ID: The pypy/goal/pypy-c output: http://pastebin.com/b5MQPPH8 "pypy" output: http://pastebin.com/2kBPLKjy "python2" output: http://pastebin.com/hpWa24zZ -------------- next part -------------- An HTML attachment was scrubbed... URL: From marky1991 at gmail.com Thu Dec 17 22:27:15 2015 From: marky1991 at gmail.com (marky1991 .) Date: Thu, 17 Dec 2015 22:27:15 -0500 Subject: [pypy-dev] Running Tests for the py3.3 branch In-Reply-To: References: Message-ID: After talking to mjacob, I now understand why the pypy-c tests were failing so badly. Re the pypy/python2 errors: It looks like there might be some bug in pytest. If you try running this command: pypy pytest.py pypy/module/_hashlib/ pypy/module/faulthandler/ It should fail with the same error seen in the python2 pastebin above. I now understand that the buildbot must not run these two tests in the same test execution, explaining why the behavior I'm seeing is different. Given all this, I think I now understand this correctly. Thanks again to mjacob for explaining! -------------- next part -------------- An HTML attachment was scrubbed... URL: From marky1991 at gmail.com Thu Dec 17 22:45:39 2015 From: marky1991 at gmail.com (marky1991 .) Date: Thu, 17 Dec 2015 22:45:39 -0500 Subject: [pypy-dev] Running Tests for the py3.3 branch In-Reply-To: References: Message-ID: In case anyone is curious: The pypy-c tests were failing because ultimately, the pypy binary has to always use lib_pypy. When I updated my working copy to py3.3 after generating the pypy-c binary, my pypy-c binary, which was still just a python2 binary, was now incompatible with the working copy's lib_pypy, which expected a python 3.3 binary. Using my apt-get provided pypy works properly because it is using its own lib_pypy defined elsewhere (which is still correctly a python2-compatible lib_pypy, not my now-3.3 lib_pypy). And of course, using "python2" works fine because it doesn't use any lib_pypy at all. -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Fri Dec 18 07:56:57 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Fri, 18 Dec 2015 12:56:57 +0000 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: On 16 December 2015 at 12:45, Armin Rigo wrote: > If you get an obscure crash, please open a > bug report. :-) I found a bug with PyPy 3.2.5 (2.4.0). The offending code looks like: s = s.encode('utf-8') if '\x00' in s: # <- This line guaranteed to raise on Py3 raise TypeError It seems to have already been fixed on the py3k branch so I'm not sure if it needs reporting: https://bitbucket.org/pypy/pypy/src/6da866a9e7d54a016c8b554fdb961b56de7ac2da/lib_pypy/_tkinter/app.py?at=py3k&fileviewer=file-view-default#app.py-446 Is that the right branch? -- Oscar From phyo.arkarlwin at gmail.com Fri Dec 18 08:52:40 2015 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Fri, 18 Dec 2015 20:22:40 +0630 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: Best GUI for every language these days is HTML5. Use PyPy as backend process , and use HTML5 as UI. Check out Electron .http://electron.atom.io/ HTML5 Desktop UI that powers Atom editor. 
On Wed, Dec 16, 2015 at 7:09 PM, Oscar Benjamin wrote: > After a casual bit of googling I'm not sure what is the current status > of GUI support in PyPy. Does tkinter work with PyPy yet (It doesn't in > the version I have installed here)? Is there any other GUI that you > recommend? > > I'm looking to make a Python-based physics simulator that should be > able to show animations in some kind of GUI. I would like to have some > kind of web-based format as one output of this (so that results can be > shown on the web) but I'm not sure how good that would be for > interactive simulation. The physics calculations require that I need > to use either PyPy or Cython or something (not plain CPython) to get > reasonable performance. > > One possibility is that I could run PyPy in a subprocess from a > CPython-based GUI which I guess is not dissimilar to getting it to > work in the browser except I get to write the front-end in Python. > > -- > Oscar > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rymg19 at gmail.com Fri Dec 18 11:25:40 2015 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Fri, 18 Dec 2015 10:25:40 -0600 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: <6F60CD6B-5CC5-4143-9AEC-132CD324F317@gmail.com> I find HTML5 overkill if you're not using JS... On December 18, 2015 7:52:40 AM CST, Phyo Arkar wrote: >Best GUI for every language these days is HTML5. > >Use PyPy as backend process , and use HTML5 as UI. > >Check out Electron .http://electron.atom.io/ HTML5 Desktop UI that >powers >Atom editor. > >On Wed, Dec 16, 2015 at 7:09 PM, Oscar Benjamin > >wrote: > >> After a casual bit of googling I'm not sure what is the current >status >> of GUI support in PyPy. Does tkinter work with PyPy yet (It doesn't >in >> the version I have installed here)? Is there any other GUI that you >> recommend? >> >> I'm looking to make a Python-based physics simulator that should be >> able to show animations in some kind of GUI. I would like to have >some >> kind of web-based format as one output of this (so that results can >be >> shown on the web) but I'm not sure how good that would be for >> interactive simulation. The physics calculations require that I need >> to use either PyPy or Cython or something (not plain CPython) to get >> reasonable performance. >> >> One possibility is that I could run PyPy in a subprocess from a >> CPython-based GUI which I guess is not dissimilar to getting it to >> work in the browser except I get to write the front-end in Python. >> >> -- >> Oscar >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> > > >------------------------------------------------------------------------ > >_______________________________________________ >pypy-dev mailing list >pypy-dev at python.org >https://mail.python.org/mailman/listinfo/pypy-dev -- Sent from my Nexus 5 with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From phyo.arkarlwin at gmail.com Fri Dec 18 15:16:58 2015 From: phyo.arkarlwin at gmail.com (Phyo Arkar) Date: Sat, 19 Dec 2015 02:46:58 +0630 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: <6F60CD6B-5CC5-4143-9AEC-132CD324F317@gmail.com> References: <6F60CD6B-5CC5-4143-9AEC-132CD324F317@gmail.com> Message-ID: Rapydscript is pure pythonic javascript. HTML5 + http://www.pyjeon.com/rapydscript On Fri, Dec 18, 2015 at 10:55 PM, Ryan Gonzalez wrote: > I find HTML5 overkill if you're not using JS... > > > On December 18, 2015 7:52:40 AM CST, Phyo Arkar > wrote: >> >> Best GUI for every language these days is HTML5. >> >> Use PyPy as backend process , and use HTML5 as UI. >> >> Check out Electron .http://electron.atom.io/ HTML5 Desktop UI that >> powers Atom editor. >> >> On Wed, Dec 16, 2015 at 7:09 PM, Oscar Benjamin < >> oscar.j.benjamin at gmail.com> wrote: >> >>> After a casual bit of googling I'm not sure what is the current status >>> of GUI support in PyPy. Does tkinter work with PyPy yet (It doesn't in >>> the version I have installed here)? Is there any other GUI that you >>> recommend? >>> >>> I'm looking to make a Python-based physics simulator that should be >>> able to show animations in some kind of GUI. I would like to have some >>> kind of web-based format as one output of this (so that results can be >>> shown on the web) but I'm not sure how good that would be for >>> interactive simulation. The physics calculations require that I need >>> to use either PyPy or Cython or something (not plain CPython) to get >>> reasonable performance. >>> >>> One possibility is that I could run PyPy in a subprocess from a >>> CPython-based GUI which I guess is not dissimilar to getting it to >>> work in the browser except I get to write the front-end in Python. >>> >>> -- >>> Oscar >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> https://mail.python.org/mailman/listinfo/pypy-dev >>> >> >> ------------------------------ >> >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> >> > -- > Sent from my Nexus 5 with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Sat Dec 19 03:24:56 2015 From: arigo at tunes.org (Armin Rigo) Date: Sat, 19 Dec 2015 09:24:56 +0100 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: Hi Oscar, On Fri, Dec 18, 2015 at 1:56 PM, Oscar Benjamin wrote: > It seems to have already been fixed on the py3k branch so I'm not sure > if it needs reporting: We usually look at the py3.3 branch, which is for Python 3.3 compatibility; if it's also fixed there then there is nothing more to report. A bient?t, Armin. From arigo at tunes.org Tue Dec 22 02:30:31 2015 From: arigo at tunes.org (Armin Rigo) Date: Tue, 22 Dec 2015 08:30:31 +0100 Subject: [pypy-dev] stmgc: rethinking the GC? Message-ID: Hi Remi, Just thinking about stmgc: currently it requires large transactions in order to work efficiently. I think it is mostly caused by the fact that transaction commits always force a minor collection. Maybe we should rethink that at some point, by changing the GC completely... There are moving GCs that are really concurrent but they tend to be an advanced mess. Maybe a better alternative would be to experiment again with non-moving GCs. 
Non-moving generational GCs are not very common as far as I know, but they do exist; for example see http://wiki.luajit.org/New-Garbage-Collector . A bient?t, Armin. From remi.meier at gmail.com Tue Dec 22 05:24:15 2015 From: remi.meier at gmail.com (Remi Meier) Date: Tue, 22 Dec 2015 11:24:15 +0100 Subject: [pypy-dev] stmgc: rethinking the GC? In-Reply-To: References: Message-ID: Hi Armin, I think we need to quantify how bad the minor GCs really are. They are thread-local and they should also take less time for shorter transactions. The last benchmark I looked at had a different problem that I would summarize as (1) time spent creating backup copies, (2) huge commit log entry size, (3) time spent pulling the changes in, (4) major collections. All of these need to be amortized with longer transactions, too. I mention this because these problems come from writing to *old* objects and at least (3) and (4) are non-thread-local issues that prevent scaling. If we go for a different GC type, we need to make sure that we still have ways to reduce the cost of these problems (currently with overflow objs and ignoring young objs). I don't know if minor GCs are really that high on the list of things that cause overhead. It may just be that short transactions produce more old objs relative to their runtime, and these old objs cause more work in following transactions. So I currently see two important directions to look into: (1) quantify the cost of minor GCs and estimate the advantage of a non-moving GC, and (2) think about ways to make old objs cheaper (e.g. by doing less work for thread-local objs). But I'll first have to read up on that LuaJIT GC over the holidays :) Cheers, Remi On 22 December 2015 at 08:30, Armin Rigo wrote: > Hi Remi, > > Just thinking about stmgc: currently it requires large transactions in > order to work efficiently. I think it is mostly caused by the fact > that transaction commits always force a minor collection. Maybe we > should rethink that at some point, by changing the GC completely... > There are moving GCs that are really concurrent but they tend to be an > advanced mess. Maybe a better alternative would be to experiment > again with non-moving GCs. Non-moving generational GCs are not very > common as far as I know, but they do exist; for example see > http://wiki.luajit.org/New-Garbage-Collector . > > > A bient?t, > > Armin. From fijall at gmail.com Tue Dec 22 09:16:23 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 22 Dec 2015 16:16:23 +0200 Subject: [pypy-dev] stmgc: rethinking the GC? In-Reply-To: References: Message-ID: Armin: the luajit GC does not exist either, it's just a plan as far as I know On Tue, Dec 22, 2015 at 9:30 AM, Armin Rigo wrote: > Hi Remi, > > Just thinking about stmgc: currently it requires large transactions in > order to work efficiently. I think it is mostly caused by the fact > that transaction commits always force a minor collection. Maybe we > should rethink that at some point, by changing the GC completely... > There are moving GCs that are really concurrent but they tend to be an > advanced mess. Maybe a better alternative would be to experiment > again with non-moving GCs. Non-moving generational GCs are not very > common as far as I know, but they do exist; for example see > http://wiki.luajit.org/New-Garbage-Collector . > > > A bient?t, > > Armin. 
> _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev From hubo at jiedaibao.com Wed Dec 23 05:03:25 2015 From: hubo at jiedaibao.com (hubo) Date: Wed, 23 Dec 2015 18:03:25 +0800 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 Message-ID: 567A716C.3050308@jiedaibao.com Hello devs, A (possible) dead loop is found when I use python-daemon and multiprocessing together in PyPy 4.0.1, which does not appear in Python(2.6 or 2.7). Also it does not appear in earlier PyPy versions (2.0.2) Reproduce: First install python-daemon: pypy_pip install python-daemon Use the following test script (also available in attachment): #!/usr/bin/pypy import daemon import multiprocessing def test(): q = multiprocessing.Queue(64) if __name__ == '__main__': with daemon.DaemonContext(): test() When executing the script with pypy: pypy test.py The background service does not exit, and is consuming 100% CPU: ps aux | grep pypy root 7769 99.1 0.5 235332 46812 ? R 17:52 2:09 pypy test.py root 7775 0.0 0.0 103252 804 pts/1 S+ 17:54 0:00 grep pypy Executing the script with python: python2.7 test.py And the background service normally exits. Environment: I'm using CentOS 6.5, with portable PyPy distribution for linux (https://bitbucket.org/squeaky/portable-pypy/downloads/pypy-4.0.1-linux_x86_64-portable.tar.bz2) I run the script on system built-in python (python 2.6.6), a compiled CPython (2.7.11), and pypy from epel-release(pypy 2.0.2, python 2.7.2), and the problem does not appear. Though the compiled CPython is 2.7.11 and PyPy 4.0.4 is python 2.7.10, I think that does not matter much. Please contact if you have any questions or ideas. 2015-12-23 hubo -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Clip(12-23-17-55-26).png Type: image/png Size: 2516 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test.py Type: application/octet-stream Size: 177 bytes Desc: not available URL: From fijall at gmail.com Wed Dec 23 06:35:50 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 23 Dec 2015 13:35:50 +0200 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: Hi hubo Can you put it as a bug report? Those things get easily lost on the mailing list (and sadly I won't look at it right now, multiprocessing scares me) On Wed, Dec 23, 2015 at 12:03 PM, hubo wrote: > Hello devs, > > A (possible) dead loop is found when I use python-daemon and > multiprocessing together in PyPy 4.0.1, which does not appear in Python(2.6 > or 2.7). 
Also it does not appear in earlier PyPy versions (2.0.2) > > *Reproduce*: > > First install python-daemon: > pypy_pip install python-daemon > > Use the following test script (also available in attachment): > > #!/usr/bin/pypy > import daemon > import multiprocessing > def test(): > q = multiprocessing.Queue(64) > if __name__ == '__main__': > with daemon.DaemonContext(): > test() > > When executing the script with pypy: > pypy test.py > > The background service does not exit, and is consuming 100% CPU: > ps aux | grep pypy > root 7769 99.1 0.5 235332 46812 ? R 17:52 2:09 pypy > test.py > root 7775 0.0 0.0 103252 804 pts/1 S+ 17:54 0:00 grep pypy > > > > Executing the script with python: > python2.7 test.py > And the background service normally exits. > > *Environment:* > I'm using CentOS 6.5, with portable PyPy distribution for linux ( > https://bitbucket.org/squeaky/portable-pypy/downloads/pypy-4.0.1-linux_x86_64-portable.tar.bz2 > ) > I run the script on system built-in python (python 2.6.6), a compiled > CPython (2.7.11), and pypy from epel-release(pypy 2.0.2, python 2.7.2), and > the problem does not appear. Though the compiled CPython is 2.7.11 and PyPy > 4.0.4 is python 2.7.10, I think that does not matter much. > > Please contact if you have any questions or ideas. > > > 2015-12-23 > ------------------------------ > hubo > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Clip(12-23-17-55-26).png Type: image/png Size: 2516 bytes Desc: not available URL: From hubo at jiedaibao.com Wed Dec 23 06:54:44 2015 From: hubo at jiedaibao.com (hubo) Date: Wed, 23 Dec 2015 19:54:44 +0800 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: 567A8B83.8020207@jiedaibao.com Thanks for the response. Should I put it directly in the bug tracker? FYI, I've located the reason to be an incompatibility between python-daemon (or rather the standard unix-daemon behavior) and the PyPy posix.urandom implementation. It seems that in PyPy 4.0.1, when the random module is loaded, a file descriptor is created on /dev/urandom. I think the PyPy implementation uses that shared descriptor to read from /dev/urandom. Sadly, when python-daemon forks the process and turns it into a unix daemon, it closes all the currently open file descriptors. After that, all os.urandom calls fail with OSError. I think the other functions of the Random class may also use this file descriptor in C code and never check whether the return value is 0, which causes the dead loop. I think the problem would be solved if the implementation re-opened the handle when it has been closed somehow. multiprocessing uses random internally. There are also lots of other modules using random, like email etc. The dead loop occurs when you use any of these libraries in a daemon. 2015-12-23 hubo From: Maciej Fijalkowski Sent: 2015-12-23 19:35 Subject: Re: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 To: "hubo" Cc: "pypy-dev" Hi hubo Can you put it as a bug report?
Those things get easily lost on the mailing list (and sadly I won't look at it right now, multiprocessing scares me) On Wed, Dec 23, 2015 at 12:03 PM, hubo wrote: Hello devs, A (possible) dead loop is found when I use python-daemon and multiprocessing together in PyPy 4.0.1, which does not appear in Python(2.6 or 2.7). Also it does not appear in earlier PyPy versions (2.0.2) Reproduce: First install python-daemon: pypy_pip install python-daemon Use the following test script (also available in attachment): #!/usr/bin/pypy import daemon import multiprocessing def test(): q = multiprocessing.Queue(64) if __name__ == '__main__': with daemon.DaemonContext(): test() When executing the script with pypy: pypy test.py The background service does not exit, and is consuming 100% CPU: ps aux | grep pypy root 7769 99.1 0.5 235332 46812 ? R 17:52 2:09 pypy test.py root 7775 0.0 0.0 103252 804 pts/1 S+ 17:54 0:00 grep pypy Executing the script with python: python2.7 test.py And the background service normally exits. Environment: I'm using CentOS 6.5, with portable PyPy distribution for linux (https://bitbucket.org/squeaky/portable-pypy/downloads/pypy-4.0.1-linux_x86_64-portable.tar.bz2) I run the script on system built-in python (python 2.6.6), a compiled CPython (2.7.11), and pypy from epel-release(pypy 2.0.2, python 2.7.2), and the problem does not appear. Though the compiled CPython is 2.7.11 and PyPy 4.0.4 is python 2.7.10, I think that does not matter much. Please contact if you have any questions or ideas. 2015-12-23 hubo _______________________________________________ pypy-dev mailing list pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Clip(12-23-17-55-26)(2).png Type: image/png Size: 2516 bytes Desc: not available URL: From fijall at gmail.com Wed Dec 23 07:07:16 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 23 Dec 2015 14:07:16 +0200 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: That's very interesting, can you produce a standalone example that does not use multiprocessing? That would make it much easier to fix the bug (e.g. os.fork followed by os.urandom failing) On Wed, Dec 23, 2015 at 1:54 PM, hubo wrote: > Thanks for the response. Should I put it directly in the bug tracker? > > FYI, I've located the reason to be the incompatibility with python-daemon > (or rather the standard unix-daemon behavior) and PyPy *posix.urandom* > implementation. > > It seems that in PyPy 4.0.1, when module *random* loaded, a file > descriptor is created on /dev/urandom. I think PyPy implementation use the > shared descriptor to read from /dev/urandom. Sadly when python-daemon fork > the process and turns it into an unix daemon, it closes all the currently > open file descriptors. After that all os.urandom calls failed with OSError. > I think maybe the other functions of Random class is also using the file > descriptor in C code and just never detects if the return value is 0, and > causes the dead loop. > > I think the problem will be solved if the implementation re-open the > handle when it is closed somehow. 
> > multiprocessing is using random internally. Also there are lots of other > modules using random, like email etc. The dead loop occurs when you use any > of the libraries in a daemon. > > > > 2015-12-23 > ------------------------------ > hubo > ------------------------------ > > *????*Maciej Fijalkowski > *?????*2015-12-23 19:35 > *???*Re: [pypy-dev] Dead loop occurs when using python-daemon and > multiprocessing together in PyPy 4.0.1 > *????*"hubo" > *???*"pypy-dev" > > Hi hubo > > Can you put it as a bug report? Those things get easily lost on the > mailing list (and sadly I won't look at it right now, multiprocessing > scares me) > > On Wed, Dec 23, 2015 at 12:03 PM, hubo wrote: > >> Hello devs, >> >> A (possible) dead loop is found when I use python-daemon and >> multiprocessing together in PyPy 4.0.1, which does not appear in Python(2.6 >> or 2.7). Also it does not appear in earlier PyPy versions (2.0.2) >> >> *Reproduce*: >> >> First install python-daemon: >> pypy_pip install python-daemon >> >> Use the following test script (also available in attachment): >> >> #!/usr/bin/pypy >> import daemon >> import multiprocessing >> def test(): >> q = multiprocessing.Queue(64) >> if __name__ == '__main__': >> with daemon.DaemonContext(): >> test() >> >> When executing the script with pypy: >> pypy test.py >> >> The background service does not exit, and is consuming 100% CPU: >> ps aux | grep pypy >> root 7769 99.1 0.5 235332 46812 ? R 17:52 2:09 pypy >> test.py >> root 7775 0.0 0.0 103252 804 pts/1 S+ 17:54 0:00 grep pypy >> >> >> >> Executing the script with python: >> python2.7 test.py >> And the background service normally exits. >> >> *Environment:* >> I'm using CentOS 6.5, with portable PyPy distribution for linux ( >> https://bitbucket.org/squeaky/portable-pypy/downloads/pypy-4.0.1-linux_x86_64-portable.tar.bz2 >> ) >> I run the script on system built-in python (python 2.6.6), a compiled >> CPython (2.7.11), and pypy from epel-release(pypy 2.0.2, python 2.7.2), and >> the problem does not appear. Though the compiled CPython is 2.7.11 and PyPy >> 4.0.4 is python 2.7.10, I think that does not matter much. >> >> Please contact if you have any questions or ideas. >> >> >> 2015-12-23 >> ------------------------------ >> hubo >> >> _______________________________________________ >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Clip(12-23-17-55-26)(2).png Type: image/png Size: 2516 bytes Desc: not available URL: From hubo at jiedaibao.com Wed Dec 23 08:14:26 2015 From: hubo at jiedaibao.com (hubo) Date: Wed, 23 Dec 2015 21:14:26 +0800 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: 567A9E2F.7060808@jiedaibao.com I can only reproduce the OSError problem. Maybe the CPU 100% is not really a dead lock, but rather some kind of automatic crash report? Although it is quite easy to crash the program with os.urandom, it only stops responding when the crash happens in system libraries like multiprocessing or email. 
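For reference, a daemon-free sketch of the same failure would be to sweep the descriptors by hand, the way a daemonizing library does; as noted further down, the exact condition for the shared descriptor to appear is not clear, so this is only an illustration and may not always trigger the error:

#!/usr/bin/pypy
# rough sketch, no python-daemon involved: emulate the fd sweep by hand
import os
print repr(os.urandom(16))   # PyPy may open /dev/urandom here and keep the descriptor
os.closerange(3, 256)        # close everything above stderr, like daemonizing does
print repr(os.urandom(16))   # if the shared descriptor was created, this fails with
                             # OSError: [Errno 9] Bad file descriptor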
The posix.urandom problem is quite easy to reproduce: #!/usr/bin/pypy import os os.urandom(16) def test(): print repr(os.urandom(16)) import daemon import sys if __name__ == '__main__': with daemon.DaemonContext(initgroups=False, stderr=sys.stderr,stdout=sys.stdout): test() (stderr and stdout is kept open to show console messages in the daemon. initgroups=False is a workaround on python-daemon not working in Python2.6) Or, with module random: #!/usr/bin/pypy import random def test(): random.Random() import daemon import sys if __name__ == '__main__': with daemon.DaemonContext(initgroups=False, stderr=sys.stderr,stdout=sys.stdout): test() And when run scripts with pypy: pypy test3.py it crashes with OSError: Traceback (most recent call last): File "test2.py", line 13, in test() File "test2.py", line 6, in test random.Random() File "/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", line 95, in __init__ self.seed(x) File "/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", line 111, in seed a = long(_hexlify(_urandom(2500)), 16) OSError: [Errno 9] Bad file descriptor It is still not clear why it causes dead loop (or long-time no responding) in multiprocessing (should have thrown an ImportError) and the exact condition for the file descriptor of /dev/urandom appears (just call os.urandom and import random does not reproduce the result), but I believe it is definitely linked to the problem. 2015-12-23 hubo ????Maciej Fijalkowski ?????2015-12-23 20:07 ???Re: Re: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 ????"hubo" ???"pypy-dev" That's very interesting, can you produce a standalone example that does not use multiprocessing? That would make it much easier to fix the bug (e.g. os.fork followed by os.urandom failing) On Wed, Dec 23, 2015 at 1:54 PM, hubo wrote: Thanks for the response. Should I put it directly in the bug tracker? FYI, I've located the reason to be the incompatibility with python-daemon (or rather the standard unix-daemon behavior) and PyPy posix.urandom implementation. It seems that in PyPy 4.0.1, when module random loaded, a file descriptor is created on /dev/urandom. I think PyPy implementation use the shared descriptor to read from /dev/urandom. Sadly when python-daemon fork the process and turns it into an unix daemon, it closes all the currently open file descriptors. After that all os.urandom calls failed with OSError. I think maybe the other functions of Random class is also using the file descriptor in C code and just never detects if the return value is 0, and causes the dead loop. I think the problem will be solved if the implementation re-open the handle when it is closed somehow. multiprocessing is using random internally. Also there are lots of other modules using random, like email etc. The dead loop occurs when you use any of the libraries in a daemon. 2015-12-23 hubo ????Maciej Fijalkowski ?????2015-12-23 19:35 ???Re: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 ????"hubo" ???"pypy-dev" Hi hubo Can you put it as a bug report? Those things get easily lost on the mailing list (and sadly I won't look at it right now, multiprocessing scares me) On Wed, Dec 23, 2015 at 12:03 PM, hubo wrote: Hello devs, A (possible) dead loop is found when I use python-daemon and multiprocessing together in PyPy 4.0.1, which does not appear in Python(2.6 or 2.7). 
Also it does not appear in earlier PyPy versions (2.0.2) Reproduce: First install python-daemon: pypy_pip install python-daemon Use the following test script (also available in attachment): #!/usr/bin/pypy import daemon import multiprocessing def test(): q = multiprocessing.Queue(64) if __name__ == '__main__': with daemon.DaemonContext(): test() When executing the script with pypy: pypy test.py The background service does not exit, and is consuming 100% CPU: ps aux | grep pypy root 7769 99.1 0.5 235332 46812 ? R 17:52 2:09 pypy test.py root 7775 0.0 0.0 103252 804 pts/1 S+ 17:54 0:00 grep pypy Executing the script with python: python2.7 test.py And the background service normally exits. Environment: I'm using CentOS 6.5, with portable PyPy distribution for linux (https://bitbucket.org/squeaky/portable-pypy/downloads/pypy-4.0.1-linux_x86_64-portable.tar.bz2) I run the script on system built-in python (python 2.6.6), a compiled CPython (2.7.11), and pypy from epel-release(pypy 2.0.2, python 2.7.2), and the problem does not appear. Though the compiled CPython is 2.7.11 and PyPy 4.0.4 is python 2.7.10, I think that does not matter much. Please contact if you have any questions or ideas. 2015-12-23 hubo _______________________________________________ pypy-dev mailing list pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Clip(12-23-17-55-26)(2)(1).png Type: image/png Size: 2516 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test2.py Type: application/octet-stream Size: 222 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test3.py Type: application/octet-stream Size: 215 bytes Desc: not available URL: From fijall at gmail.com Wed Dec 23 08:22:33 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 23 Dec 2015 15:22:33 +0200 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: can you reproduce the OSError problem without having the daemon module involved either? On Wed, Dec 23, 2015 at 3:14 PM, hubo wrote: > I can only reproduce the *OSError* problem. Maybe the CPU 100% is not > really a dead lock, but rather some kind of automatic crash report? > Although it is quite easy to crash the program with os.urandom, it > only stops responding when the crash happens in system libraries like > multiprocessing or email. > > The posix.urandom problem is quite easy to reproduce: > > #!/usr/bin/pypy > import os > os.urandom(16) > def test(): > print repr(os.urandom(16)) > import daemon > import sys > if __name__ == '__main__': > with daemon.DaemonContext(initgroups=False, > stderr=sys.stderr,stdout=sys.stdout): > test() > > (stderr and stdout is kept open to show console messages in the daemon. 
> initgroups=False is a workaround on python-daemon not working in Python2.6) > > Or, with module random: > > #!/usr/bin/pypy > import random > def test(): > random.Random() > import daemon > import sys > if __name__ == '__main__': > with daemon.DaemonContext(initgroups=False, > stderr=sys.stderr,stdout=sys.stdout): > test() > And when run scripts with pypy: > > pypy test3.py > > it crashes with OSError: > Traceback (most recent call last): > File "test2.py", line 13, in > test() > File "test2.py", line 6, in test > random.Random() > File "/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", > line 95, in __init__ > self.seed(x) > File "/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", > line 111, in seed > a = long(_hexlify(_urandom(2500)), 16) > OSError: [Errno 9] Bad file descriptor > > It is still not clear why it causes dead loop (or long-time no responding) > in multiprocessing (should have thrown an ImportError) and the exact > condition for the file descriptor of /dev/urandom appears (just call > os.urandom and import random does not reproduce the result), but I believe > it is definitely linked to the problem. > > 2015-12-23 > ------------------------------ > hubo > ------------------------------ > > *????*Maciej Fijalkowski > *?????*2015-12-23 20:07 > *???*Re: Re: [pypy-dev] Dead loop occurs when using python-daemon and > multiprocessing together in PyPy 4.0.1 > *????*"hubo" > *???*"pypy-dev" > > That's very interesting, can you produce a standalone example that does > not use multiprocessing? That would make it much easier to fix the bug > (e.g. os.fork followed by os.urandom failing) > > On Wed, Dec 23, 2015 at 1:54 PM, hubo wrote: > >> Thanks for the response. Should I put it directly in the bug tracker? >> >> FYI, I've located the reason to be the incompatibility with python-daemon >> (or rather the standard unix-daemon behavior) and PyPy *posix.urandom* >> implementation. >> >> It seems that in PyPy 4.0.1, when module *random* loaded, a file >> descriptor is created on /dev/urandom. I think PyPy implementation use the >> shared descriptor to read from /dev/urandom. Sadly when python-daemon fork >> the process and turns it into an unix daemon, it closes all the currently >> open file descriptors. After that all os.urandom calls failed with OSError. >> I think maybe the other functions of Random class is also using the file >> descriptor in C code and just never detects if the return value is 0, and >> causes the dead loop. >> >> I think the problem will be solved if the implementation re-open the >> handle when it is closed somehow. >> >> multiprocessing is using random internally. Also there are lots of other >> modules using random, like email etc. The dead loop occurs when you use any >> of the libraries in a daemon. >> >> >> >> 2015-12-23 >> ------------------------------ >> hubo >> ------------------------------ >> >> *????*Maciej Fijalkowski >> *?????*2015-12-23 19:35 >> *???*Re: [pypy-dev] Dead loop occurs when using python-daemon and >> multiprocessing together in PyPy 4.0.1 >> *????*"hubo" >> *???*"pypy-dev" >> >> Hi hubo >> >> Can you put it as a bug report? Those things get easily lost on the >> mailing list (and sadly I won't look at it right now, multiprocessing >> scares me) >> >> On Wed, Dec 23, 2015 at 12:03 PM, hubo wrote: >> >>> Hello devs, >>> >>> A (possible) dead loop is found when I use python-daemon and >>> multiprocessing together in PyPy 4.0.1, which does not appear in Python(2.6 >>> or 2.7). 
Also it does not appear in earlier PyPy versions (2.0.2) >>> >>> *Reproduce*: >>> >>> First install python-daemon: >>> pypy_pip install python-daemon >>> >>> Use the following test script (also available in attachment): >>> >>> #!/usr/bin/pypy >>> import daemon >>> import multiprocessing >>> def test(): >>> q = multiprocessing.Queue(64) >>> if __name__ == '__main__': >>> with daemon.DaemonContext(): >>> test() >>> >>> When executing the script with pypy: >>> pypy test.py >>> >>> The background service does not exit, and is consuming 100% CPU: >>> ps aux | grep pypy >>> root 7769 99.1 0.5 235332 46812 ? R 17:52 2:09 pypy >>> test.py >>> root 7775 0.0 0.0 103252 804 pts/1 S+ 17:54 0:00 grep >>> pypy >>> >>> >>> >>> Executing the script with python: >>> python2.7 test.py >>> And the background service normally exits. >>> >>> *Environment:* >>> I'm using CentOS 6.5, with portable PyPy distribution for linux ( >>> https://bitbucket.org/squeaky/portable-pypy/downloads/pypy-4.0.1-linux_x86_64-portable.tar.bz2 >>> ) >>> I run the script on system built-in python (python 2.6.6), a compiled >>> CPython (2.7.11), and pypy from epel-release(pypy 2.0.2, python 2.7.2), and >>> the problem does not appear. Though the compiled CPython is 2.7.11 and PyPy >>> 4.0.4 is python 2.7.10, I think that does not matter much. >>> >>> Please contact if you have any questions or ideas. >>> >>> >>> 2015-12-23 >>> ------------------------------ >>> hubo >>> >>> _______________________________________________ >>> pypy-dev mailing list >>> pypy-dev at python.org >>> https://mail.python.org/mailman/listinfo/pypy-dev >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Clip(12-23-17-55-26)(2)(1).png Type: image/png Size: 2516 bytes Desc: not available URL: From hubo at jiedaibao.com Wed Dec 23 08:36:09 2015 From: hubo at jiedaibao.com (hubo) Date: Wed, 23 Dec 2015 21:36:09 +0800 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: 567AA348.1000408@jiedaibao.com No, the python-daemon module is critical in this problem, because it is the python-daemon module that closes the fd to /dev/urandom. When the process switches to a daemon, it forks itself and then closes all open fds (including stdin, stdout and stderr), so it also closes the fd for /dev/urandom which is used by the PyPy library. It is the standard behavior defined by https://www.python.org/dev/peps/pep-3143/#daemoncontext-objects and also the standard behavior for unix daemons. And unfortunately there is no way to prevent the fd from being closed without knowing exactly which number it is on. Without python-daemon (or similar libraries), it is only possible to reproduce the problem by closing the fd (usually 4) forcibly, but that does not make much sense. 2015-12-23 hubo From: Maciej Fijalkowski Sent: 2015-12-23 21:22 Subject: Re: Re: Re: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 To: "hubo" Cc: "pypy-dev" can you reproduce the OSError problem without having the daemon module involved either? On Wed, Dec 23, 2015 at 3:14 PM, hubo wrote: I can only reproduce the OSError problem.
Maybe the CPU 100% is not really a dead lock, but rather some kind of automatic crash report? Although it is quite easy to crash the program with os.urandom, it only stops responding when the crash happens in system libraries like multiprocessing or email. The posix.urandom problem is quite easy to reproduce: #!/usr/bin/pypy import os os.urandom(16) def test(): print repr(os.urandom(16)) import daemon import sys if __name__ == '__main__': with daemon.DaemonContext(initgroups=False, stderr=sys.stderr,stdout=sys.stdout): test() (stderr and stdout is kept open to show console messages in the daemon. initgroups=False is a workaround on python-daemon not working in Python2.6) Or, with module random: #!/usr/bin/pypy import random def test(): random.Random() import daemon import sys if __name__ == '__main__': with daemon.DaemonContext(initgroups=False, stderr=sys.stderr,stdout=sys.stdout): test() And when run scripts with pypy: pypy test3.py it crashes with OSError: Traceback (most recent call last): File "test2.py", line 13, in test() File "test2.py", line 6, in test random.Random() File "/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", line 95, in __init__ self.seed(x) File "/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", line 111, in seed a = long(_hexlify(_urandom(2500)), 16) OSError: [Errno 9] Bad file descriptor It is still not clear why it causes dead loop (or long-time no responding) in multiprocessing (should have thrown an ImportError) and the exact condition for the file descriptor of /dev/urandom appears (just call os.urandom and import random does not reproduce the result), but I believe it is definitely linked to the problem. 2015-12-23 hubo ????Maciej Fijalkowski ?????2015-12-23 20:07 ???Re: Re: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 ????"hubo" ???"pypy-dev" That's very interesting, can you produce a standalone example that does not use multiprocessing? That would make it much easier to fix the bug (e.g. os.fork followed by os.urandom failing) On Wed, Dec 23, 2015 at 1:54 PM, hubo wrote: Thanks for the response. Should I put it directly in the bug tracker? FYI, I've located the reason to be the incompatibility with python-daemon (or rather the standard unix-daemon behavior) and PyPy posix.urandom implementation. It seems that in PyPy 4.0.1, when module random loaded, a file descriptor is created on /dev/urandom. I think PyPy implementation use the shared descriptor to read from /dev/urandom. Sadly when python-daemon fork the process and turns it into an unix daemon, it closes all the currently open file descriptors. After that all os.urandom calls failed with OSError. I think maybe the other functions of Random class is also using the file descriptor in C code and just never detects if the return value is 0, and causes the dead loop. I think the problem will be solved if the implementation re-open the handle when it is closed somehow. multiprocessing is using random internally. Also there are lots of other modules using random, like email etc. The dead loop occurs when you use any of the libraries in a daemon. 2015-12-23 hubo ????Maciej Fijalkowski ?????2015-12-23 19:35 ???Re: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 ????"hubo" ???"pypy-dev" Hi hubo Can you put it as a bug report? 
Those things get easily lost on the mailing list (and sadly I won't look at it right now, multiprocessing scares me) On Wed, Dec 23, 2015 at 12:03 PM, hubo wrote: Hello devs, A (possible) dead loop is found when I use python-daemon and multiprocessing together in PyPy 4.0.1, which does not appear in Python(2.6 or 2.7). Also it does not appear in earlier PyPy versions (2.0.2) Reproduce: First install python-daemon: pypy_pip install python-daemon Use the following test script (also available in attachment): #!/usr/bin/pypy import daemon import multiprocessing def test(): q = multiprocessing.Queue(64) if __name__ == '__main__': with daemon.DaemonContext(): test() When executing the script with pypy: pypy test.py The background service does not exit, and is consuming 100% CPU: ps aux | grep pypy root 7769 99.1 0.5 235332 46812 ? R 17:52 2:09 pypy test.py root 7775 0.0 0.0 103252 804 pts/1 S+ 17:54 0:00 grep pypy Executing the script with python: python2.7 test.py And the background service normally exits. Environment: I'm using CentOS 6.5, with portable PyPy distribution for linux (https://bitbucket.org/squeaky/portable-pypy/downloads/pypy-4.0.1-linux_x86_64-portable.tar.bz2) I run the script on system built-in python (python 2.6.6), a compiled CPython (2.7.11), and pypy from epel-release(pypy 2.0.2, python 2.7.2), and the problem does not appear. Though the compiled CPython is 2.7.11 and PyPy 4.0.4 is python 2.7.10, I think that does not matter much. Please contact if you have any questions or ideas. 2015-12-23 hubo _______________________________________________ pypy-dev mailing list pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Clip(12-23-17-55-26)(2)(1)(1).png Type: image/png Size: 2516 bytes Desc: not available URL: From fijall at gmail.com Wed Dec 23 08:54:23 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 23 Dec 2015 15:54:23 +0200 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: well, ok, but that does not sound like a pypy bug then - "close all existing fds and complain that some of them are closed" is a bit not good - maybe it's a bug in python-daemon and the PEP? On Wed, Dec 23, 2015 at 3:36 PM, hubo wrote: > No, the python-daemon module is critical in this problem, because it is > the python-daemon module who closed the fd to /dev/urandom. When process > swith to daemon, it forks itself, and then close all open fds (including > stdin, stdout and stderr), so it also closes the fd for /dev/urandom which > is used by PyPy library. It is the standard behavior defined by > https://www.python.org/dev/peps/pep-3143/#daemoncontext-objects and also > the standard behavior for unix daemons. And unfortunately there is not a > way to prevent the fd to be closed without knowing exactly what number it > is on. > > Without python-daemon (or similar libraries), it is only possible to > reproduce the problem by closing the fd (usually 4) forcely, but it does > not make much sense. 
> > > 2015-12-23 > ------------------------------ > hubo > ------------------------------ > > *????*Maciej Fijalkowski > *?????*2015-12-23 21:22 > *???*Re: Re: Re: [pypy-dev] Dead loop occurs when using python-daemon and > multiprocessing together in PyPy 4.0.1 > *????*"hubo" > *???*"pypy-dev" > > can you reproduce the OSError problem without having the daemon module > involved either? > > On Wed, Dec 23, 2015 at 3:14 PM, hubo wrote: > >> I can only reproduce the *OSError* problem. Maybe the CPU 100% is not >> really a dead lock, but rather some kind of automatic crash report? >> Although it is quite easy to crash the program with os.urandom, it >> only stops responding when the crash happens in system libraries like >> multiprocessing or email. >> >> The posix.urandom problem is quite easy to reproduce: >> >> #!/usr/bin/pypy >> import os >> os.urandom(16) >> def test(): >> print repr(os.urandom(16)) >> import daemon >> import sys >> if __name__ == '__main__': >> with daemon.DaemonContext(initgroups=False, >> stderr=sys.stderr,stdout=sys.stdout): >> test() >> >> (stderr and stdout is kept open to show console messages in the daemon. >> initgroups=False is a workaround on python-daemon not working in Python2.6) >> >> Or, with module random: >> >> #!/usr/bin/pypy >> import random >> def test(): >> random.Random() >> import daemon >> import sys >> if __name__ == '__main__': >> with daemon.DaemonContext(initgroups=False, >> stderr=sys.stderr,stdout=sys.stdout): >> test() >> And when run scripts with pypy: >> >> pypy test3.py >> >> it crashes with OSError: >> Traceback (most recent call last): >> File "test2.py", line 13, in >> test() >> File "test2.py", line 6, in test >> random.Random() >> File "/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", >> line 95, in __init__ >> self.seed(x) >> File "/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", >> line 111, in seed >> a = long(_hexlify(_urandom(2500)), 16) >> OSError: [Errno 9] Bad file descriptor >> >> It is still not clear why it causes dead loop (or long-time no >> responding) in multiprocessing (should have thrown an ImportError) and the >> exact condition for the file descriptor of /dev/urandom appears (just call >> os.urandom and import random does not reproduce the result), but I believe >> it is definitely linked to the problem. >> >> 2015-12-23 >> ------------------------------ >> hubo >> ------------------------------ >> >> *????*Maciej Fijalkowski >> *?????*2015-12-23 20:07 >> *???*Re: Re: [pypy-dev] Dead loop occurs when using python-daemon and >> multiprocessing together in PyPy 4.0.1 >> *????*"hubo" >> *???*"pypy-dev" >> >> That's very interesting, can you produce a standalone example that does >> not use multiprocessing? That would make it much easier to fix the bug >> (e.g. os.fork followed by os.urandom failing) >> >> On Wed, Dec 23, 2015 at 1:54 PM, hubo wrote: >> >>> Thanks for the response. Should I put it directly in the bug tracker? >>> >>> FYI, I've located the reason to be the incompatibility with >>> python-daemon (or rather the standard unix-daemon behavior) and PyPy >>> *posix.urandom* implementation. >>> >>> It seems that in PyPy 4.0.1, when module *random* loaded, a file >>> descriptor is created on /dev/urandom. I think PyPy implementation use the >>> shared descriptor to read from /dev/urandom. Sadly when python-daemon fork >>> the process and turns it into an unix daemon, it closes all the currently >>> open file descriptors. After that all os.urandom calls failed with OSError. 
>>> I think maybe the other functions of Random class is also using the file >>> descriptor in C code and just never detects if the return value is 0, and >>> causes the dead loop. >>> >>> I think the problem will be solved if the implementation re-open the >>> handle when it is closed somehow. >>> >>> multiprocessing is using random internally. Also there are lots of other >>> modules using random, like email etc. The dead loop occurs when you use any >>> of the libraries in a daemon. >>> >>> >>> >>> 2015-12-23 >>> ------------------------------ >>> hubo >>> ------------------------------ >>> >>> *????*Maciej Fijalkowski >>> *?????*2015-12-23 19:35 >>> *???*Re: [pypy-dev] Dead loop occurs when using python-daemon and >>> multiprocessing together in PyPy 4.0.1 >>> *????*"hubo" >>> *???*"pypy-dev" >>> >>> Hi hubo >>> >>> Can you put it as a bug report? Those things get easily lost on the >>> mailing list (and sadly I won't look at it right now, multiprocessing >>> scares me) >>> >>> On Wed, Dec 23, 2015 at 12:03 PM, hubo wrote: >>> >>>> Hello devs, >>>> >>>> A (possible) dead loop is found when I use python-daemon and >>>> multiprocessing together in PyPy 4.0.1, which does not appear in Python(2.6 >>>> or 2.7). Also it does not appear in earlier PyPy versions (2.0.2) >>>> >>>> *Reproduce*: >>>> >>>> First install python-daemon: >>>> pypy_pip install python-daemon >>>> >>>> Use the following test script (also available in attachment): >>>> >>>> #!/usr/bin/pypy >>>> import daemon >>>> import multiprocessing >>>> def test(): >>>> q = multiprocessing.Queue(64) >>>> if __name__ == '__main__': >>>> with daemon.DaemonContext(): >>>> test() >>>> >>>> When executing the script with pypy: >>>> pypy test.py >>>> >>>> The background service does not exit, and is consuming 100% CPU: >>>> ps aux | grep pypy >>>> root 7769 99.1 0.5 235332 46812 ? R 17:52 2:09 pypy >>>> test.py >>>> root 7775 0.0 0.0 103252 804 pts/1 S+ 17:54 0:00 grep >>>> pypy >>>> >>>> >>>> >>>> Executing the script with python: >>>> python2.7 test.py >>>> And the background service normally exits. >>>> >>>> *Environment:* >>>> I'm using CentOS 6.5, with portable PyPy distribution for linux ( >>>> https://bitbucket.org/squeaky/portable-pypy/downloads/pypy-4.0.1-linux_x86_64-portable.tar.bz2 >>>> ) >>>> I run the script on system built-in python (python 2.6.6), a compiled >>>> CPython (2.7.11), and pypy from epel-release(pypy 2.0.2, python 2.7.2), and >>>> the problem does not appear. Though the compiled CPython is 2.7.11 and PyPy >>>> 4.0.4 is python 2.7.10, I think that does not matter much. >>>> >>>> Please contact if you have any questions or ideas. >>>> >>>> >>>> 2015-12-23 >>>> ------------------------------ >>>> hubo >>>> >>>> _______________________________________________ >>>> pypy-dev mailing list >>>> pypy-dev at python.org >>>> https://mail.python.org/mailman/listinfo/pypy-dev >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Clip(12-23-17-55-26)(2)(1)(1).png Type: image/png Size: 2516 bytes Desc: not available URL: From cfbolz at gmx.de Wed Dec 23 08:57:35 2015 From: cfbolz at gmx.de (Carl Friedrich Bolz) Date: Wed, 23 Dec 2015 14:57:35 +0100 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: Hi Maciek, Of course it's a difference between pypy and cpython. if you close all fds, cpython's random module still works, but PyPy's doesn't. Cheers, Carl Friedrich On December 23, 2015 2:54:23 PM GMT+01:00, Maciej Fijalkowski wrote: >well, ok, but that does not sound like a pypy bug then - "close all >existing fds and complain that some of them are closed" is a bit not >good - >maybe it's a bug in python-daemon and the PEP? > >On Wed, Dec 23, 2015 at 3:36 PM, hubo wrote: > >> No, the python-daemon module is critical in this problem, because it >is >> the python-daemon module who closed the fd to /dev/urandom. When >process >> swith to daemon, it forks itself, and then close all open fds >(including >> stdin, stdout and stderr), so it also closes the fd for /dev/urandom >which >> is used by PyPy library. It is the standard behavior defined by >> https://www.python.org/dev/peps/pep-3143/#daemoncontext-objects and >also >> the standard behavior for unix daemons. And unfortunately there is >not a >> way to prevent the fd to be closed without knowing exactly what >number it >> is on. >> >> Without python-daemon (or similar libraries), it is only possible to >> reproduce the problem by closing the fd (usually 4) forcely, but it >does >> not make much sense. >> >> >> 2015-12-23 >> ------------------------------ >> hubo >> ------------------------------ >> >> *????*Maciej Fijalkowski >> *?????*2015-12-23 21:22 >> *???*Re: Re: Re: [pypy-dev] Dead loop occurs when using python-daemon >and >> multiprocessing together in PyPy 4.0.1 >> *????*"hubo" >> *???*"pypy-dev" >> >> can you reproduce the OSError problem without having the daemon >module >> involved either? >> >> On Wed, Dec 23, 2015 at 3:14 PM, hubo wrote: >> >>> I can only reproduce the *OSError* problem. Maybe the CPU 100% is >not >>> really a dead lock, but rather some kind of automatic crash report? >>> Although it is quite easy to crash the program with os.urandom, it >>> only stops responding when the crash happens in system libraries >like >>> multiprocessing or email. >>> >>> The posix.urandom problem is quite easy to reproduce: >>> >>> #!/usr/bin/pypy >>> import os >>> os.urandom(16) >>> def test(): >>> print repr(os.urandom(16)) >>> import daemon >>> import sys >>> if __name__ == '__main__': >>> with daemon.DaemonContext(initgroups=False, >>> stderr=sys.stderr,stdout=sys.stdout): >>> test() >>> >>> (stderr and stdout is kept open to show console messages in the >daemon. 
>>> initgroups=False is a workaround on python-daemon not working in >Python2.6) >>> >>> Or, with module random: >>> >>> #!/usr/bin/pypy >>> import random >>> def test(): >>> random.Random() >>> import daemon >>> import sys >>> if __name__ == '__main__': >>> with daemon.DaemonContext(initgroups=False, >>> stderr=sys.stderr,stdout=sys.stdout): >>> test() >>> And when run scripts with pypy: >>> >>> pypy test3.py >>> >>> it crashes with OSError: >>> Traceback (most recent call last): >>> File "test2.py", line 13, in >>> test() >>> File "test2.py", line 6, in test >>> random.Random() >>> File >"/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", >>> line 95, in __init__ >>> self.seed(x) >>> File >"/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", >>> line 111, in seed >>> a = long(_hexlify(_urandom(2500)), 16) >>> OSError: [Errno 9] Bad file descriptor >>> >>> It is still not clear why it causes dead loop (or long-time no >>> responding) in multiprocessing (should have thrown an ImportError) >and the >>> exact condition for the file descriptor of /dev/urandom appears >(just call >>> os.urandom and import random does not reproduce the result), but I >believe >>> it is definitely linked to the problem. >>> >>> 2015-12-23 >>> ------------------------------ >>> hubo >>> ------------------------------ >>> >>> *????*Maciej Fijalkowski >>> *?????*2015-12-23 20:07 >>> *???*Re: Re: [pypy-dev] Dead loop occurs when using python-daemon >and >>> multiprocessing together in PyPy 4.0.1 >>> *????*"hubo" >>> *???*"pypy-dev" >>> >>> That's very interesting, can you produce a standalone example that >does >>> not use multiprocessing? That would make it much easier to fix the >bug >>> (e.g. os.fork followed by os.urandom failing) >>> >>> On Wed, Dec 23, 2015 at 1:54 PM, hubo wrote: >>> >>>> Thanks for the response. Should I put it directly in the bug >tracker? >>>> >>>> FYI, I've located the reason to be the incompatibility with >>>> python-daemon (or rather the standard unix-daemon behavior) and >PyPy >>>> *posix.urandom* implementation. >>>> >>>> It seems that in PyPy 4.0.1, when module *random* loaded, a file >>>> descriptor is created on /dev/urandom. I think PyPy implementation >use the >>>> shared descriptor to read from /dev/urandom. Sadly when >python-daemon fork >>>> the process and turns it into an unix daemon, it closes all the >currently >>>> open file descriptors. After that all os.urandom calls failed with >OSError. >>>> I think maybe the other functions of Random class is also using the >file >>>> descriptor in C code and just never detects if the return value is >0, and >>>> causes the dead loop. >>>> >>>> I think the problem will be solved if the implementation re-open >the >>>> handle when it is closed somehow. >>>> >>>> multiprocessing is using random internally. Also there are lots of >other >>>> modules using random, like email etc. The dead loop occurs when you >use any >>>> of the libraries in a daemon. >>>> >>>> >>>> >>>> 2015-12-23 >>>> ------------------------------ >>>> hubo >>>> ------------------------------ >>>> >>>> *????*Maciej Fijalkowski >>>> *?????*2015-12-23 19:35 >>>> *???*Re: [pypy-dev] Dead loop occurs when using python-daemon and >>>> multiprocessing together in PyPy 4.0.1 >>>> *????*"hubo" >>>> *???*"pypy-dev" >>>> >>>> Hi hubo >>>> >>>> Can you put it as a bug report? 
Those things get easily lost on the >>>> mailing list (and sadly I won't look at it right now, >multiprocessing >>>> scares me) >>>> >>>> On Wed, Dec 23, 2015 at 12:03 PM, hubo wrote: >>>> >>>>> Hello devs, >>>>> >>>>> A (possible) dead loop is found when I use python-daemon and >>>>> multiprocessing together in PyPy 4.0.1, which does not appear in >Python(2.6 >>>>> or 2.7). Also it does not appear in earlier PyPy versions (2.0.2) >>>>> >>>>> *Reproduce*: >>>>> >>>>> First install python-daemon: >>>>> pypy_pip install python-daemon >>>>> >>>>> Use the following test script (also available in attachment): >>>>> >>>>> #!/usr/bin/pypy >>>>> import daemon >>>>> import multiprocessing >>>>> def test(): >>>>> q = multiprocessing.Queue(64) >>>>> if __name__ == '__main__': >>>>> with daemon.DaemonContext(): >>>>> test() >>>>> >>>>> When executing the script with pypy: >>>>> pypy test.py >>>>> >>>>> The background service does not exit, and is consuming 100% CPU: >>>>> ps aux | grep pypy >>>>> root 7769 99.1 0.5 235332 46812 ? R 17:52 2:09 >pypy >>>>> test.py >>>>> root 7775 0.0 0.0 103252 804 pts/1 S+ 17:54 0:00 >grep >>>>> pypy >>>>> >>>>> >>>>> >>>>> Executing the script with python: >>>>> python2.7 test.py >>>>> And the background service normally exits. >>>>> >>>>> *Environment:* >>>>> I'm using CentOS 6.5, with portable PyPy distribution for linux ( >>>>> >https://bitbucket.org/squeaky/portable-pypy/downloads/pypy-4.0.1-linux_x86_64-portable.tar.bz2 >>>>> ) >>>>> I run the script on system built-in python (python 2.6.6), a >compiled >>>>> CPython (2.7.11), and pypy from epel-release(pypy 2.0.2, python >2.7.2), and >>>>> the problem does not appear. Though the compiled CPython is 2.7.11 >and PyPy >>>>> 4.0.4 is python 2.7.10, I think that does not matter much. >>>>> >>>>> Please contact if you have any questions or ideas. >>>>> >>>>> >>>>> 2015-12-23 >>>>> ------------------------------ >>>>> hubo >>>>> >>>>> _______________________________________________ >>>>> pypy-dev mailing list >>>>> pypy-dev at python.org >>>>> https://mail.python.org/mailman/listinfo/pypy-dev >>>>> >>>>> >>>> >>> >> > > >------------------------------------------------------------------------ > >_______________________________________________ >pypy-dev mailing list >pypy-dev at python.org >https://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Wed Dec 23 09:48:41 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 23 Dec 2015 16:48:41 +0200 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: I bet the difference is due to "we see more FDs in pypy". If you close the correct FD in CPython, it would break too I presume? Or is there special code to handle that? On Wed, Dec 23, 2015 at 3:57 PM, Carl Friedrich Bolz wrote: > Hi Maciek, > > Of course it's a difference between pypy and cpython. if you close all > fds, cpython's random module still works, but PyPy's doesn't. 
> > Cheers, > > Carl Friedrich > > > On December 23, 2015 2:54:23 PM GMT+01:00, Maciej Fijalkowski < > fijall at gmail.com> wrote: >> >> well, ok, but that does not sound like a pypy bug then - "close all >> existing fds and complain that some of them are closed" is a bit not good - >> maybe it's a bug in python-daemon and the PEP? >> >> On Wed, Dec 23, 2015 at 3:36 PM, hubo wrote: >> >>> No, the python-daemon module is critical in this problem, because it is >>> the python-daemon module who closed the fd to /dev/urandom. When process >>> swith to daemon, it forks itself, and then close all open fds (including >>> stdin, stdout and stderr), so it also closes the fd for /dev/urandom which >>> is used by PyPy library. It is the standard behavior defined by >>> https://www.python.org/dev/peps/pep-3143/#daemoncontext-objects and >>> also the standard behavior for unix daemons. And unfortunately there is not >>> a way to prevent the fd to be closed without knowing exactly what number it >>> is on. >>> >>> Without python-daemon (or similar libraries), it is only possible to >>> reproduce the problem by closing the fd (usually 4) forcely, but it does >>> not make much sense. >>> >>> >>> 2015-12-23 >>> ------------------------------ >>> hubo >>> ------------------------------ >>> >>> *????*Maciej Fijalkowski >>> *?????*2015-12-23 21:22 >>> *???*Re: Re: Re: [pypy-dev] Dead loop occurs when using python-daemon >>> and multiprocessing together in PyPy 4.0.1 >>> *????*"hubo" >>> *???*"pypy-dev" >>> >>> can you reproduce the OSError problem without having the daemon module >>> involved either? >>> >>> On Wed, Dec 23, 2015 at 3:14 PM, hubo wrote: >>> >>>> I can only reproduce the *OSError* problem. Maybe the CPU 100% is not >>>> really a dead lock, but rather some kind of automatic crash report? >>>> Although it is quite easy to crash the program with os.urandom, it >>>> only stops responding when the crash happens in system libraries like >>>> multiprocessing or email. >>>> >>>> The posix.urandom problem is quite easy to reproduce: >>>> >>>> #!/usr/bin/pypy >>>> import os >>>> os.urandom(16) >>>> def test(): >>>> print repr(os.urandom(16)) >>>> import daemon >>>> import sys >>>> if __name__ == '__main__': >>>> with daemon.DaemonContext(initgroups=False, >>>> stderr=sys.stderr,stdout=sys.stdout): >>>> test() >>>> >>>> (stderr and stdout is kept open to show console messages in the daemon. 
>>>> initgroups=False is a workaround on python-daemon not working in Python2.6) >>>> >>>> Or, with module random: >>>> >>>> #!/usr/bin/pypy >>>> import random >>>> def test(): >>>> random.Random() >>>> import daemon >>>> import sys >>>> if __name__ == '__main__': >>>> with daemon.DaemonContext(initgroups=False, >>>> stderr=sys.stderr,stdout=sys.stdout): >>>> test() >>>> And when run scripts with pypy: >>>> >>>> pypy test3.py >>>> >>>> it crashes with OSError: >>>> Traceback (most recent call last): >>>> File "test2.py", line 13, in >>>> test() >>>> File "test2.py", line 6, in test >>>> random.Random() >>>> File >>>> "/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", line 95, >>>> in __init__ >>>> self.seed(x) >>>> File >>>> "/opt/pypy-4.0.1-linux_x86_64-portable/lib-python/2.7/random.py", line 111, >>>> in seed >>>> a = long(_hexlify(_urandom(2500)), 16) >>>> OSError: [Errno 9] Bad file descriptor >>>> >>>> It is still not clear why it causes dead loop (or long-time no >>>> responding) in multiprocessing (should have thrown an ImportError) and the >>>> exact condition for the file descriptor of /dev/urandom appears (just call >>>> os.urandom and import random does not reproduce the result), but I believe >>>> it is definitely linked to the problem. >>>> >>>> 2015-12-23 >>>> ------------------------------ >>>> hubo >>>> ------------------------------ >>>> >>>> *????*Maciej Fijalkowski >>>> *?????*2015-12-23 20:07 >>>> *???*Re: Re: [pypy-dev] Dead loop occurs when using python-daemon and >>>> multiprocessing together in PyPy 4.0.1 >>>> *????*"hubo" >>>> *???*"pypy-dev" >>>> >>>> That's very interesting, can you produce a standalone example that does >>>> not use multiprocessing? That would make it much easier to fix the bug >>>> (e.g. os.fork followed by os.urandom failing) >>>> >>>> On Wed, Dec 23, 2015 at 1:54 PM, hubo wrote: >>>> >>>>> Thanks for the response. Should I put it directly in the bug tracker? >>>>> >>>>> FYI, I've located the reason to be the incompatibility with >>>>> python-daemon (or rather the standard unix-daemon behavior) and PyPy >>>>> *posix.urandom* implementation. >>>>> >>>>> It seems that in PyPy 4.0.1, when module *random* loaded, a file >>>>> descriptor is created on /dev/urandom. I think PyPy implementation use the >>>>> shared descriptor to read from /dev/urandom. Sadly when python-daemon fork >>>>> the process and turns it into an unix daemon, it closes all the currently >>>>> open file descriptors. After that all os.urandom calls failed with OSError. >>>>> I think maybe the other functions of Random class is also using the file >>>>> descriptor in C code and just never detects if the return value is 0, and >>>>> causes the dead loop. >>>>> >>>>> I think the problem will be solved if the implementation re-open the >>>>> handle when it is closed somehow. >>>>> >>>>> multiprocessing is using random internally. Also there are lots of >>>>> other modules using random, like email etc. The dead loop occurs when you >>>>> use any of the libraries in a daemon. >>>>> >>>>> >>>>> >>>>> 2015-12-23 >>>>> ------------------------------ >>>>> hubo >>>>> ------------------------------ >>>>> >>>>> *????*Maciej Fijalkowski >>>>> *?????*2015-12-23 19:35 >>>>> *???*Re: [pypy-dev] Dead loop occurs when using python-daemon and >>>>> multiprocessing together in PyPy 4.0.1 >>>>> *????*"hubo" >>>>> *???*"pypy-dev" >>>>> >>>>> Hi hubo >>>>> >>>>> Can you put it as a bug report? 
Those things get easily lost on the >>>>> mailing list (and sadly I won't look at it right now, multiprocessing >>>>> scares me) >>>>> >>>>> On Wed, Dec 23, 2015 at 12:03 PM, hubo wrote: >>>>> >>>>>> Hello devs, >>>>>> >>>>>> A (possible) dead loop is found when I use python-daemon and >>>>>> multiprocessing together in PyPy 4.0.1, which does not appear in Python(2.6 >>>>>> or 2.7). Also it does not appear in earlier PyPy versions (2.0.2) >>>>>> >>>>>> *Reproduce*: >>>>>> >>>>>> First install python-daemon: >>>>>> pypy_pip install python-daemon >>>>>> >>>>>> Use the following test script (also available in attachment): >>>>>> >>>>>> #!/usr/bin/pypy >>>>>> import daemon >>>>>> import multiprocessing >>>>>> def test(): >>>>>> q = multiprocessing.Queue(64) >>>>>> if __name__ == '__main__': >>>>>> with daemon.DaemonContext(): >>>>>> test() >>>>>> >>>>>> When executing the script with pypy: >>>>>> pypy test.py >>>>>> >>>>>> The background service does not exit, and is consuming 100% CPU: >>>>>> ps aux | grep pypy >>>>>> root 7769 99.1 0.5 235332 46812 ? R 17:52 2:09 pypy >>>>>> test.py >>>>>> root 7775 0.0 0.0 103252 804 pts/1 S+ 17:54 0:00 grep >>>>>> pypy >>>>>> >>>>>> >>>>>> >>>>>> Executing the script with python: >>>>>> python2.7 test.py >>>>>> And the background service normally exits. >>>>>> >>>>>> *Environment:* >>>>>> I'm using CentOS 6.5, with portable PyPy distribution for linux ( >>>>>> https://bitbucket.org/squeaky/portable-pypy/downloads/pypy-4.0.1-linux_x86_64-portable.tar.bz2 >>>>>> ) >>>>>> I run the script on system built-in python (python 2.6.6), a compiled >>>>>> CPython (2.7.11), and pypy from epel-release(pypy 2.0.2, python 2.7.2), and >>>>>> the problem does not appear. Though the compiled CPython is 2.7.11 and PyPy >>>>>> 4.0.4 is python 2.7.10, I think that does not matter much. >>>>>> >>>>>> Please contact if you have any questions or ideas. >>>>>> >>>>>> >>>>>> 2015-12-23 >>>>>> ------------------------------ >>>>>> hubo >>>>>> >>>>>> _______________________________________________ >>>>>> pypy-dev mailing list >>>>>> pypy-dev at python.org >>>>>> https://mail.python.org/mailman/listinfo/pypy-dev >>>>>> >>>>>> >>>>> >>>> >>> >> ------------------------------ >> >> pypy-dev mailing list >> pypy-dev at python.org >> https://mail.python.org/mailman/listinfo/pypy-dev >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Wed Dec 23 12:24:54 2015 From: arigo at tunes.org (Armin Rigo) Date: Wed, 23 Dec 2015 18:24:54 +0100 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: Hi Maciej, On Wed, Dec 23, 2015 at 3:48 PM, Maciej Fijalkowski wrote: > I bet the difference is due to "we see more FDs in pypy". If you close the > correct FD in CPython, it would break too I presume? Or is there special > code to handle that? > There is special code to handle that in CPython. It is actually very carefully checking that the FD is still open *and* still pointing to what it used to. See Python/random.c. 
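In rough Python terms, the idea is something like this (only a sketch to
illustrate, not the actual CPython code; the cache variables are made-up
names here, and the real thing also worries about CLOEXEC and about
re-checking after the GIL is reacquired):

import os

_urandom_fd = -1      # cached fd for /dev/urandom
_urandom_stat = None  # os.fstat() result taken when the fd was opened

def _cached_fd_still_valid():
    # Trust the cached fd only if fstat() still succeeds and the
    # device/inode pair matches what we recorded at open() time.
    if _urandom_fd < 0 or _urandom_stat is None:
        return False
    try:
        st = os.fstat(_urandom_fd)
    except OSError:
        return False   # the fd was closed behind our back
    return (st.st_dev == _urandom_stat.st_dev and
            st.st_ino == _urandom_stat.st_ino)

def urandom(n):
    global _urandom_fd, _urandom_stat
    if not _cached_fd_still_valid():
        _urandom_fd = os.open("/dev/urandom", os.O_RDONLY)
        _urandom_stat = os.fstat(_urandom_fd)
    result = b""
    while len(result) < n:
        chunk = os.read(_urandom_fd, n - len(result))
        if not chunk:
            # never spin forever on a 0-byte read
            raise OSError("failed to read from /dev/urandom")
        result += chunk
    return result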
Just thinking aloud: what could occur here is that the forked process reopens an unrelated file at this descriptor, and then our os.urandom() tries to read from it---and, as a guess, the file happens to be a socket opened in non-blocking mode, and our implementation of os.urandom() gets 0 bytes and keeps trying to get more, which throws it into an infinite loop. It seems it would be a good idea to copy the careful behavior of CPython. (There used to be another case of file descriptor kept open internally, in rpython/rtyper/lltypesystem/llarena.py for /dev/zero, but this is gone.) A bient?t, Armin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincent.legoll at gmail.com Wed Dec 23 13:05:35 2015 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Wed, 23 Dec 2015 19:05:35 +0100 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: Do you thought about that removed code ? https://bitbucket.org/vincentlegoll/pypy/commits/992e29624c5fd8a7e76677acf085fea0520f576d The filedesc was not kept opened there, or do you meant another thing in this file ? On Wed, Dec 23, 2015 at 6:24 PM, Armin Rigo wrote: > Hi Maciej, > > On Wed, Dec 23, 2015 at 3:48 PM, Maciej Fijalkowski > wrote: >> >> I bet the difference is due to "we see more FDs in pypy". If you close the >> correct FD in CPython, it would break too I presume? Or is there special >> code to handle that? > > > There is special code to handle that in CPython. It is actually very > carefully checking that the FD is still open *and* still pointing to what it > used to. See Python/random.c. > > Just thinking aloud: what could occur here is that the forked process > reopens an unrelated file at this descriptor, and then our os.urandom() > tries to read from it---and, as a guess, the file happens to be a socket > opened in non-blocking mode, and our implementation of os.urandom() gets 0 > bytes and keeps trying to get more, which throws it into an infinite loop. > > It seems it would be a good idea to copy the careful behavior of CPython. > (There used to be another case of file descriptor kept open internally, in > rpython/rtyper/lltypesystem/llarena.py for /dev/zero, but this is gone.) > > > A bient?t, > > Armin. > > _______________________________________________ > pypy-dev mailing list > pypy-dev at python.org > https://mail.python.org/mailman/listinfo/pypy-dev > -- Vincent Legoll From arigo at tunes.org Thu Dec 24 02:02:08 2015 From: arigo at tunes.org (Armin Rigo) Date: Thu, 24 Dec 2015 08:02:08 +0100 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: Hi Vincent, On Wed, Dec 23, 2015 at 7:05 PM, Vincent Legoll wrote: > Do you thought about that removed code ? 
> https://bitbucket.org/vincentlegoll/pypy/commits/992e29624c5fd8a7e76677acf085fea0520f576d > > The filedesc was not kept opened there, or do you meant another thing > in this file ? Thanks for looking it up :-) Yes, that's exactly what I have in mind, and indeed I see now that the file descriptor was not kept opened, so it didn't have the problem anyway. However rpython.rlib.rurandom line 101 keeps a file descriptor opened and reuses it without checking, which is the problem here. A bient?t, Armin. From vincent.legoll at gmail.com Thu Dec 24 09:34:36 2015 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Thu, 24 Dec 2015 14:34:36 +0000 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: I may give this an eye if no one beats me to it, just not today, time to overeat! On Thu, Dec 24, 2015 at 7:02 AM, Armin Rigo wrote: > Hi Vincent, > > On Wed, Dec 23, 2015 at 7:05 PM, Vincent Legoll > wrote: >> Do you thought about that removed code ? >> https://bitbucket.org/vincentlegoll/pypy/commits/992e29624c5fd8a7e76677acf085fea0520f576d >> >> The filedesc was not kept opened there, or do you meant another thing >> in this file ? > > Thanks for looking it up :-) Yes, that's exactly what I have in mind, > and indeed I see now that the file descriptor was not kept opened, so > it didn't have the problem anyway. However rpython.rlib.rurandom line > 101 keeps a file descriptor opened and reuses it without checking, > which is the problem here. > > > A bient?t, > > Armin. -- Vincent Legoll From arigo at tunes.org Thu Dec 24 10:04:37 2015 From: arigo at tunes.org (Armin Rigo) Date: Thu, 24 Dec 2015 16:04:37 +0100 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: Hi Vincent, On Thu, Dec 24, 2015 at 3:34 PM, Vincent Legoll wrote: > I may give this an eye if no one beats me to it, just not today, time > to overeat! :-) In the meantime I reverted the older checkin that caches the file descriptor, with a comment, in c6be1b27fa1d. A bient?t, Armin. From vincent.legoll at gmail.com Thu Dec 24 11:50:20 2015 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Thu, 24 Dec 2015 17:50:20 +0100 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: > :-) In the meantime I reverted the older checkin that caches the file > descriptor, with a comment, in c6be1b27fa1d. Are we really seeing a closed and reopened fd ? 
And if that's the case it may even be intentional ;-) because if we're only seeing a closed python file object, we could go the simpler way: - if not context[0]: + if (not context[0] or + type(context[0]) == file and context[0].closed): context[0] = os.open("/dev/urandom", os.O_RDONLY, 0777) I'll try to reproduce first then test if this simple fix is enough... Frohe Weihnachten ! -- Vincent Legoll From arigo at tunes.org Thu Dec 24 14:28:53 2015 From: arigo at tunes.org (Armin Rigo) Date: Thu, 24 Dec 2015 20:28:53 +0100 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: Hi Vincent, On Thu, Dec 24, 2015 at 5:50 PM, Vincent Legoll wrote: > Are we really seeing a closed and reopened fd ? > And if that's the case it may even be intentional ;-) > > because if we're only seeing a closed python file object, we could go > the simpler way: > > - if not context[0]: > + if (not context[0] or > + type(context[0]) == file and context[0].closed): > context[0] = os.open("/dev/urandom", os.O_RDONLY, 0777) > > I'll try to reproduce first then test if this simple fix is enough... No, please don't commit "half fixes" like that. We know that there is a situation in which bad things occurs, so we need to deal with it. Fixing half the problem might make this particular case work, but not all of them. At least the current situation is not very good performance-wise but should work. A bient?t, Armin. From vincent.legoll at gmail.com Sat Dec 26 07:45:49 2015 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Sat, 26 Dec 2015 13:45:49 +0100 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: OK, I understand we want to have behavior as identical to cpython as possible. So does that mean implementing the same : "fstat() & check we're still on the same inode & device as before" ? See cpython's Python/random.c:208 This I have done, but I'll need help to make the test work (as in detect something is working after the patch that was not before)... Do we also need to do the "could have been opened in another thread" dance ? And what about the CLO_EXEC thing, this does seem sensible to do too, albeit not directly tied to the bug, even if it would make it happen in more cases (I think) On Thu, Dec 24, 2015 at 8:28 PM, Armin Rigo wrote: > Hi Vincent, > > On Thu, Dec 24, 2015 at 5:50 PM, Vincent Legoll > wrote: >> Are we really seeing a closed and reopened fd ? >> And if that's the case it may even be intentional ;-) >> >> because if we're only seeing a closed python file object, we could go >> the simpler way: >> >> - if not context[0]: >> + if (not context[0] or >> + type(context[0]) == file and context[0].closed): >> context[0] = os.open("/dev/urandom", os.O_RDONLY, 0777) >> >> I'll try to reproduce first then test if this simple fix is enough... > > No, please don't commit "half fixes" like that. 
We know that there is > a situation in which bad things occurs, so we need to deal with it. > Fixing half the problem might make this particular case work, but not > all of them. At least the current situation is not very good > performance-wise but should work. > > > A bient?t, > > Armin. -- Vincent Legoll From arigo at tunes.org Sat Dec 26 18:42:15 2015 From: arigo at tunes.org (Armin Rigo) Date: Sun, 27 Dec 2015 00:42:15 +0100 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: Hi Vincent, On Sat, Dec 26, 2015 at 1:45 PM, Vincent Legoll wrote: > OK, I understand we want to have behavior as identical to cpython as possible. > > So does that mean implementing the same : "fstat() & check we're still > on the same inode & device as before" ? > Do we also need to do the "could have been opened in another thread" dance ? > And what about the CLO_EXEC thing, this does seem sensible to do too, This is what I started to do, but I stopped because there are additional issues on PyPy: you need to make sure that the GIL is not released at places where CPython does not release it either. Otherwise, you're opening the code to "race conditions" again. The goal would be to be *at least* as safe as CPython. There are a lot of corner cases that are not discussed in the source code of CPython. I'm pretty sure that some of them are possible (but rare). As an extreme example, if one thread does os.urandom() and another thread does os.close(4) in parallel, where 4 happens to be the file descriptor returned by open() in urandom.c, then it is possible that the open()'s result is closed after open() but before urandom.c reacquires the GIL. Then urandom.c gets a closed file descriptor. If additionally the other thread (or a 3rd one) opens a different file at file descriptor 4, then urandom.c will think it successfully opened /dev/urandom but actually the file descriptor is for some unrelated file. However, this is probably an issue that cannot be dealt with at all on Posix even in C. That would mean that it is always wrong to close() unknown file descriptors from one thread when other threads might be running... This would not hurt being clarified. A bient?t, Armin. From vincent.legoll at gmail.com Sun Dec 27 05:45:41 2015 From: vincent.legoll at gmail.com (Vincent Legoll) Date: Sun, 27 Dec 2015 11:45:41 +0100 Subject: [pypy-dev] Dead loop occurs when using python-daemon and multiprocessing together in PyPy 4.0.1 In-Reply-To: References: <567a7198.01911c0a.85ee3.ffffce95SMTPIN_ADDED_BROKEN@mx.google.com> <567a8b8b.e89c420a.3f3a4.ffffb2d7SMTPIN_ADDED_BROKEN@mx.google.com> <567a9e35.470c620a.772a6.02b5SMTPIN_ADDED_BROKEN@mx.google.com> <567aa34e.6a92420a.9259.0756SMTPIN_ADDED_BROKEN@mx.google.com> Message-ID: So to stay on the safe side, you prefer to keep reopening every time ? Fine, I've put my work here: https://bitbucket.org/vincentlegoll/pypy/commits/branch/fix-urandom-closed it contains a test for the "close /dev/urandom fd from under our feet" and a typo fix that you may still want to cherry pick... 
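The shape of the test is roughly this (an illustrative sketch only, not the
exact code in that branch; it assumes rurandom's init_urandom()/urandom()
interface and that context[0] holds the cached fd, as in the diff earlier
in this thread):

import os
from rpython.rlib import rurandom

def test_urandom_survives_fd_closed_behind_our_back():
    context = rurandom.init_urandom()
    # First call makes the implementation open (and possibly cache)
    # its /dev/urandom file descriptor.
    s1 = rurandom.urandom(context, 16)
    assert len(s1) == 16
    # Simulate what python-daemon does on daemonization: close every
    # fd it does not know about, including the cached one.
    if context[0]:
        os.close(context[0])
    # The next call must not raise EBADF or loop forever; it has to
    # notice the stale fd and reopen /dev/urandom.
    s2 = rurandom.urandom(context, 16)
    assert len(s2) == 16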
Tell me if there's something else I can do

On Sun, Dec 27, 2015 at 12:42 AM, Armin Rigo wrote:
> Hi Vincent,
>
> On Sat, Dec 26, 2015 at 1:45 PM, Vincent Legoll
> wrote:
>> OK, I understand we want to have behavior as identical to cpython as possible.
>>
>> So does that mean implementing the same : "fstat() & check we're still
>> on the same inode & device as before" ?
> Do we also need to do the "could have been opened in another thread" dance ? > And what about the CLO_EXEC thing, this does seem sensible to do too, This is what I started to do, but I stopped because there are additional issues on PyPy: you need to make sure that the GIL is not released at places where CPython does not release it either. Otherwise, you're opening the code to "race conditions" again. The goal would be to be *at least* as safe as CPython. There are a lot of corner cases that are not discussed in the source code of CPython. I'm pretty sure that some of them are possible (but rare). As an extreme example, if one thread does os.urandom() and another thread does os.close(4) in parallel, where 4 happens to be the file descriptor returned by open() in urandom.c, then it is possible that the open()'s result is closed after open() but before urandom.c reacquires the GIL. Then urandom.c gets a closed file descriptor. If additionally the other thread (or a 3rd one) opens a different file at file descriptor 4, then urandom.c will think it successfully opened /dev/urandom but actually the file descriptor is for some unrelated file. However, this is probably an issue that cannot be dealt with at all on Posix even in C. That would mean that it is always wrong to close() unknown file descriptors from one thread when other threads might be running... This would not hurt being clarified. A bient?t, Armin. _______________________________________________ pypy-dev mailing list pypy-dev at python.org https://mail.python.org/mailman/listinfo/pypy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Thu Dec 31 07:26:07 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 31 Dec 2015 12:26:07 +0000 Subject: [pypy-dev] Current status of GUI support in PyPy In-Reply-To: References: Message-ID: On 19 December 2015 at 08:24, Armin Rigo wrote: > On Fri, Dec 18, 2015 at 1:56 PM, Oscar Benjamin > wrote: >> It seems to have already been fixed on the py3k branch so I'm not sure >> if it needs reporting: > > We usually look at the py3.3 branch, which is for Python 3.3 > compatibility; if it's also fixed there then there is nothing more to > report. Finally found time to check this. It's fixed on py3k and py3.3 so nothing to report. -- Oscar