From emmanuelarias30 at gmail.com Fri Feb 1 06:57:10 2019 From: emmanuelarias30 at gmail.com (eamanu15) Date: Fri, 1 Feb 2019 08:57:10 -0300 Subject: [Python-Dev] How to update namedtuple asdict() to use dict instead of OrderedDict In-Reply-To: References: Message-ID: Hi! > Option 4) Just make the change directly in 3.8, s/OrderedDict/dict/, and > be done with it. This gives users the benefits right away and doesn't > annoy them with warnings that they likely don't care about. There is some > precedent for this. To make namedtuple class creation faster, the > *verbose* option was dropped without any deprecation period. It looks like > no one missed that feature at all, but they did get the immediate benefit > of faster import times. In the case of using regular dicts in named > tuples, people will get immediate and significant space savings as well as > a speed benefit. > +1 for option 4 Regards! eamanu -------------- next part -------------- An HTML attachment was scrubbed... URL: From status at bugs.python.org Fri Feb 1 13:07:59 2019 From: status at bugs.python.org (Python tracker) Date: Fri, 01 Feb 2019 18:07:59 +0000 Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20190201180759.1.9D569A7627081917@roundup.psfhosted.org> ACTIVITY SUMMARY (2019-01-25 - 2019-02-01) Python tracker at https://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 6985 (+32) closed 40649 (+23) total 47634 (+55) Open issues with patches: 2775 Issues opened (45) ================== #25592: distutils docs: data_files always uses sys.prefix https://bugs.python.org/issue25592 reopened by pitrou #30670: pprint for dict in sorted order or insert order?
https://bugs.python.org/issue30670 reopened by josephsmeng #35829: datetime: parse "Z" timezone suffix in fromisoformat() https://bugs.python.org/issue35829 opened by rdb #35830: building multiple (binary) packages from a single project https://bugs.python.org/issue35830 opened by stefan #35832: Installation error https://bugs.python.org/issue35832 opened by Stefano Bonalumi #35833: IDLE: revise doc for control chars sent to Shell https://bugs.python.org/issue35833 opened by Dude Roast #35834: get_type_hints exposes an instance of ForwardRef (internal cla https://bugs.python.org/issue35834 opened by Lincoln Quirk #35838: ConfigParser calls optionxform twice when assigning dict https://bugs.python.org/issue35838 opened by Phil Kang #35839: Suggestion: Ignore sys.modules entries with no __spec__ attrib https://bugs.python.org/issue35839 opened by ncoghlan #35840: Control flow inconsistency on closed asyncio stream https://bugs.python.org/issue35840 opened by schlamar #35841: Datetime strftime() does not return correct week numbers for 2 https://bugs.python.org/issue35841 opened by tr12 #35843: importlib.util docs for namespace packages innaccurate https://bugs.python.org/issue35843 opened by Anthony Sottile #35844: Calling `Multiprocessing.Queue.close()` too quickly causes int https://bugs.python.org/issue35844 opened by charmonium #35845: Can't read a F-contiguous memoryview in physical order https://bugs.python.org/issue35845 opened by pitrou #35846: Incomplete documentation for re.sub https://bugs.python.org/issue35846 opened by pbugnion #35847: RISC-V needs CTYPES_PASS_BY_REF_HACK https://bugs.python.org/issue35847 opened by schwab #35848: readinto is not a method on io.TextIOBase https://bugs.python.org/issue35848 opened by steverpalmer #35849: Added thousands separators to Lib/pstats.py final report https://bugs.python.org/issue35849 opened by addons_zz #35851: Make search result in online docs keep their position when sea https://bugs.python.org/issue35851 
opened by roelschroeven #35852: Fixed tests regenerating using CRLF when running it on Windows https://bugs.python.org/issue35852 opened by addons_zz #35854: EnvBuilder and venv symlinks do not work on Windows on 3.7.2 https://bugs.python.org/issue35854 opened by steve.dower #35855: IDLE squeezer: improve unsqueezing and autosqueeze default https://bugs.python.org/issue35855 opened by terry.reedy #35856: bundled pip syntaxwarning https://bugs.python.org/issue35856 opened by Dima.Tisnek #35857: Stacktrace shows lines from updated file on disk, not code act https://bugs.python.org/issue35857 opened by Steve Pryde #35859: Capture behavior depends on the order of an alternation https://bugs.python.org/issue35859 opened by davisjam #35860: ProcessPoolExecutor subprocesses crash & break pool when raisi https://bugs.python.org/issue35860 opened by underyx #35861: test_named_expressions raises SyntaxWarning https://bugs.python.org/issue35861 opened by xtreak #35862: Change the environment for a new process https://bugs.python.org/issue35862 opened by r-or #35866: concurrent.futures deadlock https://bugs.python.org/issue35866 opened by jwilk #35867: NameError is not caught at Task execution https://bugs.python.org/issue35867 opened by Sampsa Riikonen #35868: Support ALL_PROXY environment variable in urllib https://bugs.python.org/issue35868 opened by Oleh Khoma #35869: io.BufferReader.read() returns None https://bugs.python.org/issue35869 opened by steverpalmer #35870: readline() specification is unclear https://bugs.python.org/issue35870 opened by porton #35871: Pdb NameError in generator param and list comprehension https://bugs.python.org/issue35871 opened by jayanth #35872: Creating venv from venv no longer works in 3.7.2 https://bugs.python.org/issue35872 opened by schlamar #35873: Controlling venv from venv no longer works in 3.7.2 https://bugs.python.org/issue35873 opened by schlamar #35874: Clarify that the (...) convertor to PyArg_ParseTuple... 
accept https://bugs.python.org/issue35874 opened by Antony.Lee #35875: Crash - algos.cp36-win_amd64.pyd join.cp36-win_amd64.pyd https://bugs.python.org/issue35875 opened by AxelArnoldBangert #35876: test_start_new_session for posix_spawnp fails https://bugs.python.org/issue35876 opened by pablogsal #35877: parenthesis is mandatory for named expressions in while statem https://bugs.python.org/issue35877 opened by xtreak #35878: ast.c: end_col_offset may be used uninitialized in this functi https://bugs.python.org/issue35878 opened by vstinner #35879: test_type_comments leaks references https://bugs.python.org/issue35879 opened by vstinner #35880: math.sin has no backward error; this isn't documented https://bugs.python.org/issue35880 opened by jneb #35882: distutils fails with UnicodeEncodeError with strange filename https://bugs.python.org/issue35882 opened by scjody #35883: Change invalid unicode characters to replacement characters in https://bugs.python.org/issue35883 opened by Neui Most recent 15 issues with no replies (15) ========================================== #35882: distutils fails with UnicodeEncodeError with strange filename https://bugs.python.org/issue35882 #35878: ast.c: end_col_offset may be used uninitialized in this functi https://bugs.python.org/issue35878 #35874: Clarify that the (...) convertor to PyArg_ParseTuple... 
accept https://bugs.python.org/issue35874 #35873: Controlling venv from venv no longer works in 3.7.2 https://bugs.python.org/issue35873 #35867: NameError is not caught at Task execution https://bugs.python.org/issue35867 #35860: ProcessPoolExecutor subprocesses crash & break pool when raisi https://bugs.python.org/issue35860 #35852: Fixed tests regenerating using CRLF when running it on Windows https://bugs.python.org/issue35852 #35844: Calling `Multiprocessing.Queue.close()` too quickly causes int https://bugs.python.org/issue35844 #35827: C API dictionary views type checkers are not documented https://bugs.python.org/issue35827 #35813: shared memory construct to avoid need for serialization betwee https://bugs.python.org/issue35813 #35812: Don't log an exception from the main coroutine in asyncio.run( https://bugs.python.org/issue35812 #35807: Update bundled pip to 19.0 https://bugs.python.org/issue35807 #35803: Test and document that `dir=...` in tempfile may be PathLike https://bugs.python.org/issue35803 #35801: venv in 3.7 references python3 executable https://bugs.python.org/issue35801 #35792: Specifying AbstractEventLoop.run_in_executor as a coroutine co https://bugs.python.org/issue35792 Most recent 15 issues waiting for review (15) ============================================= #35877: parenthesis is mandatory for named expressions in while statem https://bugs.python.org/issue35877 #35876: test_start_new_session for posix_spawnp fails https://bugs.python.org/issue35876 #35862: Change the environment for a new process https://bugs.python.org/issue35862 #35861: test_named_expressions raises SyntaxWarning https://bugs.python.org/issue35861 #35854: EnvBuilder and venv symlinks do not work on Windows on 3.7.2 https://bugs.python.org/issue35854 #35852: Fixed tests regenerating using CRLF when running it on Windows https://bugs.python.org/issue35852 #35849: Added thousands separators to Lib/pstats.py final report https://bugs.python.org/issue35849 #35847: RISC-V 
needs CTYPES_PASS_BY_REF_HACK https://bugs.python.org/issue35847 #35843: importlib.util docs for namespace packages innaccurate https://bugs.python.org/issue35843 #35826: Typo in example for async with statement with condition https://bugs.python.org/issue35826 #35824: http.cookies._CookiePattern modifying regular expressions https://bugs.python.org/issue35824 #35823: Use vfork() in subprocess on Linux https://bugs.python.org/issue35823 #35813: shared memory construct to avoid need for serialization betwee https://bugs.python.org/issue35813 #35810: Object Initialization Bug with Heap-allocated Types https://bugs.python.org/issue35810 #35803: Test and document that `dir=...` in tempfile may be PathLike https://bugs.python.org/issue35803 Top 10 most discussed issues (10) ================================= #35431: Add a function for computing binomial coefficients to the math https://bugs.python.org/issue35431 12 msgs #25592: distutils docs: data_files always uses sys.prefix https://bugs.python.org/issue25592 10 msgs #35857: Stacktrace shows lines from updated file on disk, not code act https://bugs.python.org/issue35857 10 msgs #35854: EnvBuilder and venv symlinks do not work on Windows on 3.7.2 https://bugs.python.org/issue35854 7 msgs #35848: readinto is not a method on io.TextIOBase https://bugs.python.org/issue35848 6 msgs #35859: Capture behavior depends on the order of an alternation https://bugs.python.org/issue35859 6 msgs #35823: Use vfork() in subprocess on Linux https://bugs.python.org/issue35823 5 msgs #35829: datetime: parse "Z" timezone suffix in fromisoformat() https://bugs.python.org/issue35829 5 msgs #30670: pprint for dict in sorted order or insert order? 
https://bugs.python.org/issue30670 4 msgs #32834: test_gdb fails with Posix locale in 3.7 https://bugs.python.org/issue32834 4 msgs Issues closed (22) ================== #2212: Cookie.BaseCookie has ambiguous unicode handling https://bugs.python.org/issue2212 closed by martin.panter #29235: Allow profile/cProfile to be used as context managers https://bugs.python.org/issue29235 closed by cheryl.sabella #34003: csv.DictReader can return basic dict instead of OrderedDict https://bugs.python.org/issue34003 closed by rhettinger #35196: IDLE text squeezer is too aggressive and is slow https://bugs.python.org/issue35196 closed by terry.reedy #35717: enum.Enum error on sys._getframe(2) https://bugs.python.org/issue35717 closed by vstinner #35769: IDLE: change new file name from ''Untitled" to "untitled" https://bugs.python.org/issue35769 closed by cheryl.sabella #35780: Recheck logic in the C version of the lru_cache() https://bugs.python.org/issue35780 closed by rhettinger #35797: concurrent.futures.ProcessPoolExecutor does not work in venv o https://bugs.python.org/issue35797 closed by steve.dower #35811: py.exe should unset the __PYVENV_LAUNCHER__ environment variab https://bugs.python.org/issue35811 closed by steve.dower #35825: Py_UNICODE_SIZE=4 fails to link on Windows https://bugs.python.org/issue35825 closed by inada.naoki #35831: Format Spec example says limited to 3.1+ but works in 2.7 https://bugs.python.org/issue35831 closed by fdrake #35835: There is no mention of breakpoint() in the pdb documentation https://bugs.python.org/issue35835 closed by Mariatta #35836: ZeroDivisionError class should have a __name__ attr https://bugs.python.org/issue35836 closed by steven.daprano #35837: smtpd PureProxy breaks on mail_options keyword argument https://bugs.python.org/issue35837 closed by r.david.murray #35842: A potential bug about use of uninitialised variable https://bugs.python.org/issue35842 closed by josh.r #35850: CKAN installation went on script error 
https://bugs.python.org/issue35850 closed by christian.heimes #35853: Extend the functools module with more higher order function co https://bugs.python.org/issue35853 closed by rhettinger #35858: Consider adding the option of running shell/console commands i https://bugs.python.org/issue35858 closed by jcrmatos #35863: email.headers wraps headers badly https://bugs.python.org/issue35863 closed by r.david.murray #35864: Replace OrderedDict with regular dict in namedtuple's _asdict( https://bugs.python.org/issue35864 closed by rhettinger #35865: configparser document refers about random dict order https://bugs.python.org/issue35865 closed by inada.naoki #35881: test_type_comments leaks references and memory blocks https://bugs.python.org/issue35881 closed by vstinner From solipsis at pitrou.net Sun Feb 3 10:19:25 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 3 Feb 2019 16:19:25 +0100 Subject: [Python-Dev] Difference between Include/internal and Include/cpython ? Message-ID: <20190203161925.0115cf68@fsol> Hello, Can someone explain why we have two separate directories Include/internal and Include/cpython? What is the rule for declaring an API inside one or another? At first sight, it seems to me we're having gratuitous complication here. For example, I notice that PyFloat_Fini() is declared in Include/cpython/pylifecycle.h but PyLong_Fini() is declared in Include/internal/pycore_pylifecycle.h? (and why the additional "pycore_XXX.h" naming convention for some of those files?) Regards Antoine. From ammar at ammaraskar.com Sun Feb 3 11:10:16 2019 From: ammar at ammaraskar.com (Ammar Askar) Date: Sun, 3 Feb 2019 11:10:16 -0500 Subject: [Python-Dev] Difference between Include/internal and Include/cpython ? 
In-Reply-To: <20190203161925.0115cf68@fsol> References: <20190203161925.0115cf68@fsol> Message-ID: This is the discussion where it was named: https://discuss.python.org/t/poll-what-is-your-favorite-name-for-the-new-include-subdirectory/477?u=ammaraskar and the bug explaining the motivation: https://bugs.python.org/issue35134 >(and why the additional "pycore_XXX.h" naming convention for some of those files?) "* Include/internal/pycore_*.h is the "internal" API" On Sun, Feb 3, 2019 at 10:20 AM Antoine Pitrou wrote: > > > Hello, > > Can someone explain why we have two separate directories > Include/internal and Include/cpython? What is the rule for declaring an > API inside one or another? > > At first sight, it seems to me we're having gratuitous complication > here. For example, I notice that PyFloat_Fini() is declared in > Include/cpython/pylifecycle.h but PyLong_Fini() is declared in > Include/internal/pycore_pylifecycle.h? > > (and why the additional "pycore_XXX.h" naming convention for some of > those files?) > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ammar%40ammaraskar.com From solipsis at pitrou.net Sun Feb 3 11:43:28 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 3 Feb 2019 17:43:28 +0100 Subject: [Python-Dev] Difference between Include/internal and Include/cpython ? In-Reply-To: References: <20190203161925.0115cf68@fsol> Message-ID: <20190203174328.482c8175@fsol> But in practice the distinction doesn't seem very conclusive. Some internal APIs end up in either of those two directories without any clear reason why. Regards Antoine.
On Sun, 3 Feb 2019 11:10:16 -0500 Ammar Askar wrote: > This is the discussion where it was named: > https://discuss.python.org/t/poll-what-is-your-favorite-name-for-the-new-include-subdirectory/477?u=ammaraskar > and the bug explaining the motivation: https://bugs.python.org/issue35134 > > >(and why the additional "pycore_XXX.h" naming convention for some ofthose files?) > > "* Include/internal/pycore_*.h is the "internal" API" > > On Sun, Feb 3, 2019 at 10:20 AM Antoine Pitrou wrote: > > > > > > Hello, > > > > Can someone explain why we have two separate directories > > Include/internal and Include/cpython? What is the rule for declaring an > > API inside one or another? > > > > At first sight, it seems to me we're having gratuitous complication > > here. For example, I notice that PyFloat_Fini() is declared in > > Include/cpython/pylifecycle.h but PyLong_Fini() is declared in > > Include/internal/pycore_pylifecycle.h? > > > > (and why the additional "pycore_XXX.h" naming convention for some of > > those files?) > > > > Regards > > > > Antoine. > > > > > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ammar%40ammaraskar.com From solipsis at pitrou.net Sun Feb 3 16:03:40 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 3 Feb 2019 22:03:40 +0100 Subject: [Python-Dev] Asking for reversion Message-ID: <20190203220340.3158b236@fsol> Hello, I'd like to ask for the reversion of the changes done in https://github.com/python/cpython/pull/11664 The reason is simple: the PR isn't complete, it lacks docs and tests. It also didn't pass any review (this was pointed out by Ronald), even though it adds 1300 lines of code. No programmer is perfect, so it's statistically likely that the PR is defective.
With git, forks and branches, we definitely don't need to commit unfinished PRs to the main repo. It's perfectly fine to maintain some non-trivial piece of work in a separate fork. People do it on a regular basis (for example I have currently two such long-lived branches: one for PEP 556 and one for PEP 574). Also, this is not the first time this happened. Another multiprocessing PR was merged some years ago without any docs or tests: https://bugs.python.org/issue28053 Today that work /still/ lacks docs or tests, and there is a suspicion that it doesn't work as intended (see issue comments). It's probably too late to revert it, but it's definitely a slippery slope. Regards Antoine. From vstinner at redhat.com Sun Feb 3 17:22:25 2019 From: vstinner at redhat.com (Victor Stinner) Date: Sun, 3 Feb 2019 23:22:25 +0100 Subject: [Python-Dev] Difference between Include/internal and Include/cpython ? In-Reply-To: <20190203161925.0115cf68@fsol> References: <20190203161925.0115cf68@fsol> Message-ID: Hi Antoine, The rules to decide what goes where have been discussed in the issues which created Include/cpython/ and the issue moving more headers to Include/internal/. In short, internal/ should not be used outside the CPython codebase. In Python 3.7, these headers were not even installed. I chose to install them because I moved more headers into internal/ which is a backward incompatible change. You should not use these headers outside the CPython code base, but the typical use cases for them are debug tools: debuggers, tracers and profilers. The internal/ subdir is not included in Python's default search path when you use python-config --cflags for example. It is a deliberate choice that these headers are not easily accessible. Their file names are prefixed by pycore_ for practical reasons: if 2 header files have the same name in internal/ and Include/, the C preprocessor can pick the wrong one. See the internal/ issue which gives a concrete example (but in Python 3.7).
cpython/ is just a practical separation to force developers to decide if a new API is part of the stable API or not. Previously, too many APIs have been added to the stable API by mistake (not on purpose). About inconsistencies, I invite you to open issues. I worked by small steps. I tried to not move too much code from "one API" (stable, cpython, internal) to another. IMHO all _Init() and _Fini() APIs must be internal. For historical reasons, they are even part of the public API (!) which is a mistake. I don't see the point of calling them explicitly. I tried to take notes at https://pythoncapi.readthedocs.io/ for the rationale, examples and track progress, but I didn't update this site with the work I did the last 6 months. I hope that it makes more sense to you now? Victor Le dimanche 3 février 2019, Antoine Pitrou a écrit : > > Hello, > > Can someone explain why we have two separate directories > Include/internal and Include/cpython? What is the rule for declaring an > API inside one or another? > > At first sight, it seems to me we're having gratuitous complication > here. For example, I notice that PyFloat_Fini() is declared in > Include/cpython/pylifecycle.h but PyLong_Fini() is declared in > Include/internal/pycore_pylifecycle.h? > > (and why the additional "pycore_XXX.h" naming convention for some of > those files?) > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com > -- Night gathers, and now my watch begins. It shall not end until my death. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From barry at python.org Sun Feb 3 17:25:00 2019 From: barry at python.org (Barry Warsaw) Date: Sun, 3 Feb 2019 14:25:00 -0800 Subject: [Python-Dev] Asking for reversion In-Reply-To: <20190203220340.3158b236@fsol> References: <20190203220340.3158b236@fsol> Message-ID: <714898D1-F99E-46CE-BE12-AC885629E49F@python.org> On Feb 3, 2019, at 13:03, Antoine Pitrou wrote: > > I'd like to ask for the reversion of the changes done in > https://github.com/python/cpython/pull/11664 > > The reason is simple: the PR isn't complete, it lacks docs and tests. > It also didn't pass any review (this was pointed by Ronald), even > though it adds 1300 lines of code. No programmer is perfect, so it's > statistically likely that the PR is defective. I concur. I actually think CI shouldn't even pass without sufficiently covering tests and docs (sans a "trivial" or other short circuiting label), but that might be unpopular. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From solipsis at pitrou.net Sun Feb 3 17:40:59 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 3 Feb 2019 23:40:59 +0100 Subject: [Python-Dev] Difference between Include/internal and Include/cpython ? In-Reply-To: References: <20190203161925.0115cf68@fsol> Message-ID: <20190203234059.2ba49a84@fsol> On Sun, 3 Feb 2019 23:22:25 +0100 Victor Stinner wrote: > Hi Antoine, > > The rules to decide what goes where have been discussed in the issues which > created Include/cpython/ and the issue moving more headers to > Include/internal/. > > In short, internal/ should not be used outside CPython codebase. In Python > 3.7, these headers were even not installed. I chose to install them because > I moved more headers into internal/ which is a backward incompatible > change.
You should not use these headers outside CPython code base, but the > typical use case to use them are debug tools: debugger, tracer and > profiler. The internal/ subdir is not included in Python default search > path when you use python-config --cflags for example. It is a deliberate > choice that these headers are not easily accessible. > > There file names are prefixed by pycore_ for practical reasons: if 2 header > files have the same name in internal/ and Include/, the C preprocessor can > pick the wrong one. See the internal/ issue which gives a concrete example > (but in Python 3.7). > > cpython/ is just a practical separation to force developers to decide if a > new API is part of the stable API or not. Previously, too many APIs have > been added to the stable API by mistake (not on purpose). Hmm, I see. Thanks for the explanation. Regards Antoine. From tjreedy at udel.edu Sun Feb 3 19:31:18 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 3 Feb 2019 19:31:18 -0500 Subject: [Python-Dev] Asking for reversion In-Reply-To: <20190203220340.3158b236@fsol> References: <20190203220340.3158b236@fsol> Message-ID: On 2/3/2019 4:03 PM, Antoine Pitrou wrote: > > Hello, > > I'd like to ask for the reversion of the changes done in > https://github.com/python/cpython/pull/11664 > > The reason is simple: [over 1000 lines not reviewed, no tests, no docs] Aside from the technical reasons Antoine gave, which I agree with, I think the merge was legally questionable, as a non-contributor is listed as a copyright holder. Message 334805. https://bugs.python.org/issue35813 -- Terry Jan Reedy From guido at python.org Sun Feb 3 19:49:13 2019 From: guido at python.org (Guido van Rossum) Date: Sun, 3 Feb 2019 16:49:13 -0800 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> Message-ID: I think this is now up to the 3.8 release manager. 
On Sun, Feb 3, 2019 at 4:34 PM Terry Reedy wrote: > On 2/3/2019 4:03 PM, Antoine Pitrou wrote: > > > > Hello, > > > > I'd like to ask for the reversion of the changes done in > > https://github.com/python/cpython/pull/11664 > > > > The reason is simple: [over 1000 lines not reviewed, no tests, no docs] > > Aside from the technical reasons Antoine gave, which I agree with, I > think the merge was legally questionable, as a non-contributor is listed > as a copyright holder. Message 334805. https://bugs.python.org/issue35813 > > -- > Terry Jan Reedy > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Sun Feb 3 19:55:43 2019 From: guido at python.org (Guido van Rossum) Date: Sun, 3 Feb 2019 16:55:43 -0800 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> Message-ID: Also, did anyone ask Davin directly to roll it back? On Sun, Feb 3, 2019 at 4:49 PM Guido van Rossum wrote: > I think this is now up to the 3.8 release manager. > > On Sun, Feb 3, 2019 at 4:34 PM Terry Reedy wrote: > >> On 2/3/2019 4:03 PM, Antoine Pitrou wrote: >> > >> > Hello, >> > >> > I'd like to ask for the reversion of the changes done in >> > https://github.com/python/cpython/pull/11664 >> > >> > The reason is simple: [over 1000 lines not reviewed, no tests, no docs] >> >> Aside from the technical reasons Antoine gave, which I agree with, I >> think the merge was legally questionable, as a non-contributor is listed >> as a copyright holder. Message 334805. 
>> https://bugs.python.org/issue35813 >> >> -- >> Terry Jan Reedy >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > -- > --Guido van Rossum (python.org/~guido) > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Sun Feb 3 20:40:54 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 3 Feb 2019 20:40:54 -0500 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> Message-ID: On 2/3/2019 7:55 PM, Guido van Rossum wrote: > Also, did anyone ask Davin directly to roll it back? Antoine posted on the issue, along with Robert O. Robert reviewed and made several suggestions. -- Terry Jan Reedy From raymond.hettinger at gmail.com Sun Feb 3 20:52:55 2019 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sun, 3 Feb 2019 17:52:55 -0800 Subject: [Python-Dev] Asking for reversion In-Reply-To: <20190203220340.3158b236@fsol> References: <20190203220340.3158b236@fsol> Message-ID: > On Feb 3, 2019, at 1:03 PM, Antoine Pitrou wrote: > > I'd like to ask for the reversion of the changes done in > https://github.com/python/cpython/pull/11664 Please work *with* Davin on this one. It was only recently that you edited his name out of the list of maintainers for multiprocessing even though that is what he's been working on for the last two years and at the last two sprints. I'd like to see more teamwork here rather than applying social pressures via python-dev (which is a *very* public list).
Raymond From raymond.hettinger at gmail.com Sun Feb 3 21:10:43 2019 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sun, 3 Feb 2019 18:10:43 -0800 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> Message-ID: <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> > On Feb 3, 2019, at 5:40 PM, Terry Reedy wrote: > > On 2/3/2019 7:55 PM, Guido van Rossum wrote: >> Also, did anyone ask Davin directly to roll it back? > > Antoine posted on the issue, along with Robert O. Robert reviewed and make several suggestions. I think the PR sat in a stable state for many months, and it looks like RO's review comments came *after* the commit. FWIW, with dataclasses we decided to get the PR committed early, long before most of the tests and all of the docs. The principle was that bigger changes needed to go in as early as possible in the release cycle so that we could thoroughly exercise it (something that almost never happens while something is in the PR stage). It would be great if the same could happen here. IIRC, shared memory has long been the holy grail for multiprocessing, helping to mitigate its principal disadvantage (the cost of moving data between processes). It's something we really want. But let's see what the 3.8 release manager has to say. Raymond From python+python_dev at discontinuity.net Sun Feb 3 22:12:38 2019 From: python+python_dev at discontinuity.net (Davin Potts) Date: Sun, 3 Feb 2019 21:12:38 -0600 Subject: [Python-Dev] Asking for reversion In-Reply-To: <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: I am attempting to do the right thing and am following the advice of other core devs in what I have done thus far. Borrowing heavily from what I've added to issue35813 just now: This work is the result of ~1.5 years of development effort, much of it accomplished at the last two core dev sprints.
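[Archive editor's aside: the shared memory work debated in this thread eventually shipped in Python 3.8 as the multiprocessing.shared_memory module. The sketch below is illustrative only; the names used (SharedMemory, its buf attribute, close() and unlink()) come from that released module, not from any message quoted above.]

```python
from multiprocessing import shared_memory

# Create a named block of shared memory and write into it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# Another process (or here, simply another handle) attaches to the
# same block by name; no pickling or copying of the payload occurs.
other = shared_memory.SharedMemory(name=shm.name)
data = bytes(other.buf[:5])

other.close()
shm.close()
shm.unlink()   # release the block once no process needs it anymore
print(data)    # b'hello'
```

The attraction discussed in the thread is visible even in this toy example: the second handle reads the bytes in place instead of receiving a serialized copy through a pipe or queue.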
The code behind it has been stable since September 2018 and tested as an independently installable package by multiple people. I was encouraged by Lukasz, Yury, and others to check in this code early, not waiting for tests and docs, in order to both solicit more feedback and provide for broader testing. I understand that doing such a thing is not at all a novelty. Thankfully it is doing that -- I hope that feedback remains constructive and supportive. There are some tests to be found in a branch (enh-tests-shmem) of github.com/applio/cpython which I think should become more comprehensive before inclusion. Temporarily deferring and not including them as part of the first alpha should reduce the complexity of that release. Regarding the BSD license on the C code being adopted, my conversations with Brett and subsequently Van have not raised concerns, far from it -- there is a process which is being followed to the letter. If there are other reasons to object to the thoughtful adoption of code licensed like this one, that deserves a decoupled and larger discussion first. Davin On Sun, Feb 3, 2019 at 8:12 PM Raymond Hettinger < raymond.hettinger at gmail.com> wrote: > > > On Feb 3, 2019, at 5:40 PM, Terry Reedy wrote: > > > > On 2/3/2019 7:55 PM, Guido van Rossum wrote: > >> Also, did anyone ask Davin directly to roll it back? > > > > Antoine posted on the issue, along with Robert O. Robert reviewed and > make several suggestions. > > I think the PR sat in a stable state for many months, and it looks like > RO's review comments came *after* the commit. > > FWIW, with dataclasses we decided to get the PR committed early, long > before most of the tests and all of the docs. The principle was that bigger > changes needed to go in as early as possible in the release cycle so that > we could thoroughly exercise it (something that almost never happens while > something is in the PR stage). It would be great if the same came happen > here. 
IIRC, shared memory has long been the holy grail for > multiprocessing, helping to mitigate its principal disadvantage (the cost > of moving data between processes). It's something we really want. > > But let's see what the 3.8 release manager has to say. > > > Raymond > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/python%2Bpython_dev%40discontinuity.net > -------------- next part -------------- An HTML attachment was scrubbed... URL: From python+python_dev at discontinuity.net Sun Feb 3 22:25:27 2019 From: python+python_dev at discontinuity.net (Davin Potts) Date: Sun, 3 Feb 2019 21:25:27 -0600 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: On 2/3/2019 7:55 PM, Guido van Rossum wrote: > Also, did anyone ask Davin directly to roll it back? Simply put: no. There have been a number of reactionary comments in the last 16 hours but no attempt to reach out to me directly during that time. On Sun, Feb 3, 2019 at 8:12 PM Raymond Hettinger < raymond.hettinger at gmail.com> wrote: > It was only recently that you edited his name out of the list of maintainers for multiprocessing > even though that is what he's been working on for the last two years and at the last two sprints. I think it would be best to discuss Antoine's decision to take this particular action without first consulting me, elsewhere and not part of this thread. As I said, I am happy to do the most constructive thing possible and I sought the advice of those I highly respect first before doing as I have. Davin On Sun, Feb 3, 2019 at 9:12 PM Davin Potts < python+python_dev at discontinuity.net> wrote: > I am attempting to do the right thing and am following the advice of other > core devs in what I have done thus far.
> > Borrowing heavily from what I've added to issue35813 just now: > > This work is the result of ~1.5 years of development effort, much of it > accomplished at the last two core dev sprints. The code behind it has been > stable since September 2018 and tested as an independently installable > package by multiple people. > > I was encouraged by Lukasz, Yury, and others to check in this code early, > not waiting for tests and docs, in order to both solicit more feedback and > provide for broader testing. I understand that doing such a thing is not > at all a novelty. Thankfully it is doing that -- I hope that feedback > remains constructive and supportive. > > There are some tests to be found in a branch (enh-tests-shmem) of > github.com/applio/cpython which I think should become more comprehensive > before inclusion. Temporarily deferring and not including them as part of > the first alpha should reduce the complexity of that release. > > Regarding the BSD license on the C code being adopted, my conversations > with Brett and subsequently Van have not raised concerns, far from it -- > there is a process which is being followed to the letter. If there are > other reasons to object to the thoughtful adoption of code licensed like > this one, that deserves a decoupled and larger discussion first. > > > Davin > > On Sun, Feb 3, 2019 at 8:12 PM Raymond Hettinger < > raymond.hettinger at gmail.com> wrote: > >> >> > On Feb 3, 2019, at 5:40 PM, Terry Reedy wrote: >> > >> > On 2/3/2019 7:55 PM, Guido van Rossum wrote: >> >> Also, did anyone ask Davin directly to roll it back? >> > >> > Antoine posted on the issue, along with Robert O. Robert reviewed and >> make several suggestions. >> >> I think the PR sat in a stable state for many months, and it looks like >> RO's review comments came *after* the commit. >> >> FWIW, with dataclasses we decided to get the PR committed early, long >> before most of the tests and all of the docs. 
The principle was that bigger >> changes needed to go in as early as possible in the release cycle so that >> we could thoroughly exercise it (something that almost never happens while >> something is in the PR stage). It would be great if the same could happen >> here. IIRC, shared memory has long been the holy grail for >> multiprocessing, helping to mitigate its principal disadvantage (the cost >> of moving data between processes). It's something we really want. >> >> But let's see what the 3.8 release manager has to say. >> >> >> Raymond >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/python%2Bpython_dev%40discontinuity.net >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Feb 4 00:52:40 2019 From: barry at python.org (Barry Warsaw) Date: Sun, 3 Feb 2019 21:52:40 -0800 Subject: [Python-Dev] Asking for reversion In-Reply-To: <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: <80174879-97D5-4516-97DF-093C980F363D@python.org> On Feb 3, 2019, at 18:10, Raymond Hettinger wrote: > > FWIW, with dataclasses we decided to get the PR committed early, long before most of the tests and all of the docs. The principle was that bigger changes needed to go in as early as possible in the release cycle so that we could thoroughly exercise it (something that almost never happens while something is in the PR stage). I think that should generally be the exception, but if it does happen, there ought to be a release blocker issue for the tests and docs. The problem then is if those things *don't* happen and we get too late in the release cycle to roll the change back.
-Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From ronaldoussoren at mac.com Mon Feb 4 01:53:49 2019 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 4 Feb 2019 07:53:49 +0100 Subject: [Python-Dev] Asking for reversion In-Reply-To: <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: <90A448A1-9854-4D66-A6E7-A1F97FEBC3B0@mac.com> > On 4 Feb 2019, at 03:10, Raymond Hettinger wrote: > > >> On Feb 3, 2019, at 5:40 PM, Terry Reedy wrote: >> >> On 2/3/2019 7:55 PM, Guido van Rossum wrote: >>> Also, did anyone ask Davin directly to roll it back? >> >> Antoine posted on the issue, along with Robert O. Robert reviewed and made several suggestions. @Terry: Robert is usually called Ronald :-) > > I think the PR sat in a stable state for many months, and it looks like RO's review comments came *after* the commit. That's because I only noticed the PR after commit: The PR was merged within an hour of creating the BPO issue. > > FWIW, with dataclasses we decided to get the PR committed early, long before most of the tests and all of the docs. The principle was that bigger changes needed to go in as early as possible in the release cycle so that we could thoroughly exercise it (something that almost never happens while something is in the PR stage). It would be great if the same could happen here. IIRC, shared memory has long been the holy grail for multiprocessing, helping to mitigate its principal disadvantage (the cost of moving data between processes). It's something we really want. But with dataclasses there was public discussion on the API. This is a new API with no documentation in a part of the library that is known to be complex in nature.
Ronald -- Twitter: @ronaldoussoren Blog: https://blog.ronaldoussoren.net/ From ronaldoussoren at mac.com Mon Feb 4 04:23:09 2019 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 4 Feb 2019 10:23:09 +0100 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: <2CCB0A9B-3B0B-460B-86FD-DE441F64B459@mac.com> > On 4 Feb 2019, at 04:25, Davin Potts wrote: > > On 2/3/2019 7:55 PM, Guido van Rossum wrote: > > Also, did anyone ask Davin directly to roll it back? > > Simply put: no. There have been a number of reactionary comments in the last 16 hours but no attempt to reach out to me directly during that time. > I asked a question about the commit yesterday night in the tracker and was waiting for a response (which I fully expected to take some time due to timezone differences and this being a volunteer driven project). Ronald From solipsis at pitrou.net Mon Feb 4 04:27:44 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 4 Feb 2019 10:27:44 +0100 Subject: [Python-Dev] Asking for reversion References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: <20190204102744.18e0b141@fsol> On Sun, 3 Feb 2019 21:25:27 -0600 Davin Potts wrote: > On 2/3/2019 7:55 PM, Guido van Rossum wrote: > > Also, did anyone ask Davin directly to roll it back? > > Simply put: no. There have been a number of reactionary comments in the > last 16 hours but no attempt to reach out to me directly during that time. By construction, if I post a comment on an issue you opened yourself on the bug tracker, you are receiving those comments. I'm not sure why a private message would be necessary. Generally, we refrain from doing things in private except if there are personal issues. Regards Antoine. 
From solipsis at pitrou.net Mon Feb 4 04:33:22 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 4 Feb 2019 10:33:22 +0100 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> Message-ID: <20190204103322.5b549fd1@fsol> On Sun, 3 Feb 2019 17:52:55 -0800 Raymond Hettinger wrote: > > On Feb 3, 2019, at 1:03 PM, Antoine Pitrou wrote: > > > > I'd like to ask for the reversion of the changes done in > > https://github.com/python/cpython/pull/11664 > > Please work *with* Davin on this one. You know, Raymond, I'm a volunteer and I dedicate my time to whatever I want. If someone pushes some unfinished work, it is perfectly normal to ask for reversion instead of feeling obliged to finish the work myself. Regards Antoine. From solipsis at pitrou.net Mon Feb 4 04:31:02 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 4 Feb 2019 10:31:02 +0100 Subject: [Python-Dev] Asking for reversion References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: <20190204103102.76c74feb@fsol> On Sun, 3 Feb 2019 18:10:43 -0800 Raymond Hettinger wrote: > > On Feb 3, 2019, at 5:40 PM, Terry Reedy wrote: > > > > On 2/3/2019 7:55 PM, Guido van Rossum wrote: > >> Also, did anyone ask Davin directly to roll it back? > > > > Antoine posted on the issue, along with Robert O. Robert reviewed and make several suggestions. > > I think the PR sat in a stable state for many months, According to Github, it was opened 11 days ago. The first commit itself is 12 days old (again according to Github): https://github.com/python/cpython/pull/11664/commits/90f4a6cb2da8e187fa38b05c3f347cd602dd69c5 Now, perhaps the work itself is much older. But regardless, you cannot expect someone to take notice of a PR or issue if they are not put in CC. It is very much against our usual conventions to check in large pieces of code without asking anyone for review. Regards Antoine. 
From solipsis at pitrou.net Mon Feb 4 04:39:06 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 4 Feb 2019 10:39:06 +0100 Subject: [Python-Dev] Asking for reversion References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: <20190204103906.13a8e20d@fsol> On Sun, 3 Feb 2019 21:12:38 -0600 Davin Potts wrote: > > I was encouraged by Lukasz, Yury, and others to check in this code early, > not waiting for tests and docs, in order to both solicit more feedback and > provide for broader testing. For the record: submitting a PR without tests or docs is perfectly fine, and a reasonable way to ask for feedback. Merging that PR is not, usually (especially as you didn't wait for feedback). So there might have been a misunderstanding between you and Lukasz, Yury and the "others". Or perhaps this is another instance of taking a disruptive decision in private... Regards Antoine. From lukasz at langa.pl Mon Feb 4 05:36:47 2019 From: lukasz at langa.pl (=?utf-8?Q?=C5=81ukasz_Langa?=) Date: Mon, 4 Feb 2019 11:36:47 +0100 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> Message-ID: > On 4 Feb 2019, at 01:49, Guido van Rossum wrote: > > I think this is now up to the 3.8 release manager. I responded on the tracker: https://bugs.python.org/issue35813#msg334817 I wrote: > @Davin, in what time can you fill in the missing tests and documentation? If this is something you can finish before alpha2, I'm inclined to leave the change in. > > As it stands, I missed the controversy yesterday as I was busy making my first release. So the merge *got released* in alpha1. I would prefer to fix the missing pieces forward instead of reverting and re-submitting which will only thrash blame and history at this point. > > FTR, I do agree with Antoine, Ronald and others that in the future such big changes should be as close to their ready state at merge time.
@Raymond, would you be willing to work with Davin on finishing this work in time for alpha2? - Ł -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From solipsis at pitrou.net Mon Feb 4 05:37:16 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 4 Feb 2019 11:37:16 +0100 Subject: [Python-Dev] About multiprocessing maintainership Message-ID: <20190204113716.4368387b@fsol> Hello, In a recent message, Raymond dramatically pretends that I would have "edited out" Davin of the maintainers list for the multiprocessing module. What I did (*) is different: I asked to mark Davin inactive and to stop auto-assigning him on bug tracker issues. Davin was /still/ listed in the experts list, along with me and others. IOW, there was no "editing out". (*) https://github.com/python/devguide/pull/435 The reason I did this is simple: Davin does not do, and has almost never done, any actual maintenance work on multiprocessing (if you are not convinced, just go through the git history, and the PRs that were merged in the ~4 last years). He usually does not respond to tracker issues opened by users. He does not review PRs. The only sizable piece of work he committed is, as I mentioned in the previous thread, still untested and undocumented. Auto-assigning someone who never (AFAICT) responds to issues ultimately does a disservice to users, whose complaints go unanswered; while other people, who /do/ respond to users, are not aware of those stale issues. Regards Antoine. From stephane at wirtel.be Mon Feb 4 05:58:27 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Mon, 4 Feb 2019 11:58:27 +0100 Subject: [Python-Dev] Why a merge for 3.8.0a1? Message-ID: <20190204105827.GA24384@xps> Hi all, After a git pull, I have seen there is a merge for v3.8.0a1 by Łukasz Langa, why?
I think the code should keep a linear commit history and in this case, it's against the "commit&squash" of CPython and Github :/ Thank you for your response. Stéphane -- Stéphane Wirtel - https://wirtel.be - @matrixise From lukasz at langa.pl Mon Feb 4 06:03:08 2019 From: lukasz at langa.pl (=?utf-8?Q?=C5=81ukasz_Langa?=) Date: Mon, 4 Feb 2019 12:03:08 +0100 Subject: [Python-Dev] Why a merge for 3.8.0a1? In-Reply-To: <20190204105827.GA24384@xps> References: <20190204105827.GA24384@xps> Message-ID: <30BDE9B6-C213-4A3E-ADD8-0FCB3E3385D0@langa.pl> > On 4 Feb 2019, at 11:58, Stephane Wirtel wrote: > > Hi all, > > After a git pull, I have seen there is a merge for v3.8.0a1 by Łukasz > Langa, why? I think the code should keep a linear commit and in this > case, it's against the "commit&squash" of CPython and Github :/ > > Thank you for your response. Tagging a release is different from a regular PR in the sense that you want the commit hash that you tagged as a given version to *remain the same*. In the meantime, other developers can (and will!) merge pull requests. If you were to rebase *the release tag* over their changes, the commit hash wouldn't match anymore.
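The hash-stability point can be illustrated with a toy model of how a commit id is derived (a sketch only: real Git hashes also cover the tree, author, and timestamps, and the helper below is purely hypothetical, not Git's actual format):

```python
import hashlib

def commit_id(message, parent):
    # A commit's identity covers its content *and* its parent's id,
    # so rewriting any ancestor rewrites every descendant id as well.
    return hashlib.sha1(f"{parent}:{message}".encode()).hexdigest()

# Original history: a <- b, with the release tag pinned to b's hash.
a = commit_id("feature work", None)
b = commit_id("release commit", a)
tag_v380a1 = b

# "Rebasing" the release commit onto a newer parent gives it a new id,
# so a tag pinned to the old hash would no longer point at the release.
c = commit_id("someone else's merged PR", a)
b_rebased = commit_id("release commit", c)
print(b_rebased != tag_v380a1)  # True
```

This is why a merge commit is used instead: it adds the tagged commit to history without rewriting the hash that the tag (and any signed release) refers to.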
During the alpha phase, Python 3.8 remains under heavy development: additional features will be added and existing features may be modified or deleted. Please keep in mind that this is a preview release and its use is not recommended for production environments. The next preview release, 3.8.0a2, is planned for 2019-02-24. Apart from building the Mac installers, Ned helped me a lot with the process, thank you! Ernest was super quick providing me with all required access and fixing a Unicode problem I found in Salt, thank you! Finally, this release was made on a train to Düsseldorf. There's a PyPy sprint there. The train is pretty cool, makes this "Wasm! Wasm!" sound. - Ł -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From stephane at wirtel.be Mon Feb 4 07:50:51 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Mon, 4 Feb 2019 13:50:51 +0100 Subject: [Python-Dev] Why a merge for 3.8.0a1? In-Reply-To: <30BDE9B6-C213-4A3E-ADD8-0FCB3E3385D0@langa.pl> References: <20190204105827.GA24384@xps> <30BDE9B6-C213-4A3E-ADD8-0FCB3E3385D0@langa.pl> Message-ID: <20190204125051.GA29197@xps> On 02/04, Łukasz Langa wrote: > >> On 4 Feb 2019, at 11:58, Stephane Wirtel wrote: >> >> Hi all, >> >> After a git pull, I have seen there is a merge for v3.8.0a1 by Łukasz >> Langa, why? I think the code should keep a linear commit and in this >> case, it's against the "commit&squash" of CPython and Github :/ >> >> Thank you for your response. > >Tagging a release is different from a regular PR in the sense that you want the commit hash that you tagged as a given version to *remain the same*. In the meantime, other developers can (and will!) merge pull requests. If you were to rebase *the release tag* over their changes, the commit hash wouldn't match anymore.
If you were to rebase *their changes* over your release tag, you'd have to force-push to update their changes. > >This is described in PEP 101. > >- Ł Hi Łukasz, Thank you for this explanation and I have checked the PEP 101 and also the way for 3.7 and there is also a merge. Thanks, today, I have learned one thing. Have a nice day, PS: Really sorry for this bad ping but I wanted to have an explanation. Stéphane -- Stéphane Wirtel - https://wirtel.be - @matrixise From stephane at wirtel.be Mon Feb 4 08:02:38 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Mon, 4 Feb 2019 14:02:38 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: References: Message-ID: <20190204130238.GB29197@xps> On 02/04, Łukasz Langa wrote: >I packaged my first release. *wipes sweat off of face* > >Go get it here: >https://www.python.org/downloads/release/python-380a1/ > >Python 3.8.0a1 is the first of four planned alpha releases of Python 3.8, >the next feature release of Python. During the alpha phase, Python 3.8 >remains under heavy development: additional features will be added >and existing features may be modified or deleted. Please keep in mind >that this is a preview release and its use is not recommended for >production environments. The next preview release, 3.8.0a2, is planned >for 2019-02-24. > >Apart from building the Mac installers, Ned helped me a lot with the >process, thank you! Ernest was super quick providing me with all >required access and fixing a Unicode problem I found in Salt, >thank you! > >Finally, this release was made on a train to Düsseldorf. There's a PyPy >sprint there. The train is pretty cool, makes this "Wasm! Wasm!" sound. > >- Ł > Hi Lukasz, Just one idea, we could create a Docker image with this alpha version. This Docker image could be used with the CI of the main projects and the test suites of these projects. If we have some issues, we should create an issue for python 3.8.0a1. Good idea?
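A minimal sketch of what such a CI image could look like (hypothetical: the `python:3.8.0a1` base tag, the project layout, and the pytest entry point are all assumptions, not an official recipe):

```dockerfile
# Hypothetical image for exercising a project's test suite on 3.8.0a1.
# The base tag below is an assumption -- substitute whichever image
# actually carries the prerelease (e.g. matrixise/python:3.8.0a1).
FROM python:3.8.0a1
WORKDIR /app
COPY . .
RUN pip install ".[test]"
CMD ["python", "-m", "pytest"]
```

Projects could then run this image as an extra, allowed-to-fail CI job and report any 3.8.0a1 breakage back to bugs.python.org.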
Have a nice day, Stéphane -- Stéphane Wirtel - https://wirtel.be - @matrixise From stephane at wirtel.be Mon Feb 4 08:33:38 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Mon, 4 Feb 2019 14:33:38 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: <20190204130238.GB29197@xps> References: <20190204130238.GB29197@xps> Message-ID: <20190204133338.GA32737@xps> It's unofficial but I used the Dockerfile for 3.7 and created this Docker image: https://cloud.docker.com/u/matrixise/repository/docker/matrixise/python docker pull matrixise/python:3.8.0a1 I am not an expert about the releasing of a Docker image but we could work with that and try to improve it. If one person uses Gitlab-CI, this person can add a new test for this version and use this image (matrixise/python:3.8.0a1) or an official image, just for the tests... Stéphane -- Stéphane Wirtel - https://wirtel.be - @matrixise From stephane at wirtel.be Mon Feb 4 08:41:11 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Mon, 4 Feb 2019 14:41:11 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: <20190204133338.GA32737@xps> References: <20190204130238.GB29197@xps> <20190204133338.GA32737@xps> Message-ID: <20190204134111.GA2035@xps> On 02/04, Stephane Wirtel wrote: >It's unofficial but I used the Dockerfile for 3.7 and created this >Docker image: > >https://cloud.docker.com/u/matrixise/repository/docker/matrixise/python Sorry: here is the right link https://hub.docker.com/r/matrixise/python -- Stéphane Wirtel - https://wirtel.be - @matrixise From paul at ganssle.io Mon Feb 4 08:48:05 2019 From: paul at ganssle.io (Paul Ganssle) Date: Mon, 4 Feb 2019 08:48:05 -0500 Subject: [Python-Dev] Return type of datetime subclasses added to timedelta In-Reply-To: References: <1059740e-cc65-205d-5986-a9397463a315@ganssle.io> <2415dd60-b6b4-30b0-90d2-c0c8b22314c7@ganssle.io> Message-ID:
<8faf2a5b-305c-e83b-33bd-e5eacc8609a7@ganssle.io> Hey all, This thread about the return type of datetime operations seems to have stopped without any explicit decision - I think I responded to everyone who had objections, but I think only Guido has given a +1 to whether or not we should go ahead. Have we got agreement to go ahead with this change? Are we still targeting Python 3.8 here? For those who don't want to dig through your old e-mails, here's the archive link for this thread: https://mail.python.org/pipermail/python-dev/2019-January/155984.html If you want to start commenting on the actual implementation, it's available here (though it's pretty simple): https://github.com/python/cpython/pull/10902 Best, Paul On 1/6/19 7:17 PM, Guido van Rossum wrote: > OK, I concede your point (and indeed I only tested this on 3.6). If we > could break the backward compatibility for now() we presumably can > break it for this purpose. > > On Sun, Jan 6, 2019 at 11:02 AM Paul Ganssle > wrote: > > I did address this in the original post - the assumption that the > subclass constructor will have the same arguments as the base > constructor is baked into many alternate constructors of datetime. > I acknowledge that this is a breaking change, but it is a small > one - anyone creating such a subclass that /cannot/ handle the > class being created this way would be broken in myriad ways. > > We have also in recent years changed several alternate > constructors (including `replace`) to retain the original > subclass, which by your same standard would be a breaking change. > I believe there have been no complaints. In fact, between Python > 3.6 and 3.7, the very example you showed broke: > > Python 3.6.6: > > >>> class D(datetime.datetime): > ...     def __new__(cls): > ...         return cls.now() > ... > >>> D() > D(2019, 1, 6, 13, 49, 38, 842033) > > Python 3.7.2: > > >>> class D(datetime.datetime): > ...     def __new__(cls): > ...         return cls.now() > ...
> >>> D() > Traceback (most recent call last): >   File "<stdin>", line 1, in <module> >   File "<stdin>", line 3, in __new__ > TypeError: __new__() takes 1 positional argument but 9 were given > > We haven't seen any bug reports about this sort of thing; what we > /have/ been getting is bug reports that subclassing datetime > doesn't retain the subclass in various ways (because people /are/ > using datetime subclasses). This is likely to cause very little in > the way of problems, but it will improve convenience for people > making datetime subclasses and almost certainly performance for > people using them (e.g. pendulum and arrow, which now need to take > a slow pure python route in many situations to work around this > problem). > > If we're /really/ concerned with this backward compatibility > breaking, we could do the equivalent of: > > try: >     return new_behavior(...) > except TypeError: >     warnings.warn("The semantics of timedelta addition have " >                   "changed in a way that raises an error in " >                   "this subclass. Please implement __add__ " >                   "if you need the old behavior.", DeprecationWarning) > > Then after a suitable notice period drop the warning and turn it > to a hard error. > > Best, > > Paul > > On 1/6/19 1:43 PM, Guido van Rossum wrote: >> I don't think datetime and builtins like int necessarily need to >> be aligned. But I do see a problem -- the __new__ and __init__ >> methods defined in the subclass (if any) should allow for being >> called with the same signature as the base datetime class. >> Currently you can have a subclass of datetime whose __new__ has >> no arguments (or, more realistically, interprets its arguments >> differently). Instances of such a class can still be added to a >> timedelta. The proposal would cause this to break (since such an >> addition has to create a new instance, which calls __new__ and >> __init__).
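For readers following along, the workaround that third-party libraries currently need can be sketched as a subclass that rebuilds its own type after arithmetic. `MyDateTime` is a hypothetical illustration, and it assumes the subclass keeps datetime's constructor signature, which is exactly the assumption the thread is debating:

```python
from datetime import datetime, timedelta

class MyDateTime(datetime):
    """Toy subclass that preserves its type across timedelta arithmetic
    instead of decaying to plain datetime."""

    def __add__(self, other):
        if not isinstance(other, timedelta):
            return NotImplemented
        # Do the arithmetic on a plain datetime, then rebuild the result
        # as our own type (assumes our __new__ keeps datetime's signature).
        base = datetime(self.year, self.month, self.day, self.hour,
                        self.minute, self.second, self.microsecond,
                        tzinfo=self.tzinfo) + other
        return type(self)(base.year, base.month, base.day, base.hour,
                          base.minute, base.second, base.microsecond,
                          tzinfo=base.tzinfo)

    __radd__ = __add__

d = MyDateTime(2019, 1, 6, 10, 33, 37)
result = d + timedelta(days=1)
print(type(result).__name__, result.day)  # MyDateTime 7
```

Under the proposed change, an override like this would become unnecessary, because the default timedelta arithmetic would itself return `type(self)`.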
Since this is a backwards incompatibility, I don't see >> how it can be done -- and I also don't see many use cases, so I >> think it's not worth pursuing further. >> >> Note that the same problem already happens with the >> .fromordinal() class method, though it doesn't happen with >> .fromdatetime() or .now(): >> >> >>> class D(datetime.datetime): >> ...   def __new__(cls): return cls.now() >> ... >> >>> D() >> D(2019, 1, 6, 10, 33, 37, 161606) >> >>> D.fromordinal(100) >> Traceback (most recent call last): >>   File "<stdin>", line 1, in <module> >> TypeError: __new__() takes 1 positional argument but 4 were given >> >>> D.fromtimestamp(123456789) >> D(1973, 11, 29, 13, 33, 9) >> >>> >> >> On Sun, Jan 6, 2019 at 9:05 AM Paul Ganssle > > wrote: >> >> I can think of many reasons why datetime is different from >> builtins, though to be honest I'm not sure that consistency >> for its own sake is really a strong argument for keeping a >> counter-intuitive behavior - and to be honest I'm open to the >> idea that /all/ arithmetic types /should/ have some form of >> this change. >> >> That said, I would say that the biggest difference between >> datetime and builtins (other than the fact that datetime is >> /not/ a builtin, and as such doesn't necessarily need to be >> categorized in this group), is that unlike almost all other >> arithmetic types, /datetime/ has a special, dedicated type >> for describing differences in datetimes. Using your example >> of a float subclass, consider that without the behavior of >> "addition of floats returns floats", it would be hard to >> predict what would happen in this situation: >> >> >>> F(1.2) + 3.4 >> >> Would that always return a float, even though F(1.2) + F(3.4) >> returns an F? Would that return an F because F is the >> left-hand operand? Would it return a float because float is >> the right-hand operand? Would you walk the MROs and find the >> lowest type in common between the operands and return that?
>> It's not entirely clear which subtype predominates. With >> datetime, you have: >> >> datetime - datetime -> timedelta >> datetime ± timedelta -> datetime >> timedelta ± timedelta -> timedelta >> >> There's no operation between two datetime objects that would >> return a datetime object, so it's always clear: operations >> between datetime subclasses return timedelta, operations >> between a datetime object and a timedelta return the subclass >> of the datetime that it was added to or subtracted from. >> >> Of course, the real way to resolve whether datetime should be >> different from int/float/string/etc is to look at why this >> choice was actually made for those types in the first place, >> and decide whether datetime is like them /in this respect/. >> The heterogeneous operations problem may be a reasonable >> justification for leaving the other builtins alone but >> changing datetime, but if someone knows of other fundamental >> reasons why the decision to have arithmetic operations always >> create the base class was chosen, please let me know. >> >> Best, >> Paul >> >> On 1/5/19 3:55 AM, Alexander Belopolsky wrote: >>> >>> >>> On Wed, Jan 2, 2019 at 10:18 PM Paul Ganssle >>> > wrote: >>> >>> .. the original objection was that this implementation >>> assumes that the datetime subclass has a constructor >>> with the same (or a sufficiently similar) signature as >>> datetime. >>> >>> While this was used as a possible rationale for the way >>> standard types behave, the main objection to changing >>> datetime classes is that it will make them behave >>> differently from builtins. For example: >>> >>> >>> class F(float): >>> ...     pass >>> ... >>> >>> type(F.fromhex('AA')) >>> >>> >>> type(F(1) + F(2)) >>> >>> >>> This may be a legitimate gripe, but unfortunately that >>> ship has sailed long ago. All of datetime's alternate >>> constructors make this assumption.
Any subclass that >>> does not meet this requirement must have worked around >>> it long ago (or they don't care about alternate >>> constructors). >>> >>> This is right, but the same argument is equally applicable >>> to int, float, etc. subclasses. If you want to limit your >>> change to datetime types you should explain what makes these >>> types special. >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> >> >> -- >> --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > -- > --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From stephane at wirtel.be Mon Feb 4 09:26:08 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Mon, 4 Feb 2019 15:26:08 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: References: Message-ID: <20190204142608.GB2035@xps> Hi Łukasz, I have some issues with pytest and this release, you can see this BPO https://bugs.python.org/issue35895 Have a nice day and thank you for your job.
Stéphane -- Stéphane Wirtel - https://wirtel.be - @matrixise From python+python_dev at discontinuity.net Mon Feb 4 10:24:38 2019 From: python+python_dev at discontinuity.net (Davin Potts) Date: Mon, 4 Feb 2019 09:24:38 -0600 Subject: [Python-Dev] About multiprocessing maintainership In-Reply-To: <20190204113716.4368387b@fsol> References: <20190204113716.4368387b@fsol> Message-ID: Antoine's change to the devguide was made on the basis that "he doesn't contribute anymore" which, going by Antoine's own description in this thread, he contradicts. My current effort, mentioned in Antoine's other thread, is not my single largest contribution. I have been impressed by the volume of time that Antoine is able to spend on the issue tracker. Because he and I generally agree on what actions to take on an issue, when he is quick to jump on issues it is uncommon for me to feel the need to say something myself just to play a numbers game or to make my presence felt -- that's part of being a team. There have been incidents where I disagree with Antoine and explain the reasoning in an issue but later Antoine goes on to do whatever he wants, disregarding what I wrote -- because I am not in a position to necessarily react or respond as frequently, I've generally only discovered this much later. I regard this latter interaction as unhealthy. I have been part of several group discussions (among core developers) now regarding how to balance the efforts of contributors with copious time to devote versus those which must be extra judicious in how they spend their more limited time. We recognize this as an ongoing concern and here it is again. If we are supportive of one another, we can find a way to work through such things. I joined the core developer team to help others and give back especially when it involved the then-neglected multiprocessing module.
When I am personally attacked in a discussion on an issue by someone I do not know, it hurts and demoralizes me -- I know that all of the core developers experience this. When I am spending time on multiprocessing only to be surprised by a claim that I don't contribute anymore, it hurts and demoralizes me. When I read hand-picked statistics to support a slanted narrative designed to belittle my contributions, it hurts and demoralizes me. I know different people react to such things differently but in my case I have occasionally needed to take time away from cpython to detox -- in 2018, such incidents led to my losing more than a month of time more than once. Regarding support for one another: At the core developer sprint last year, I volunteered to remotely host Antoine on my laptop so that he could video-conference into the governance discussions we were having there. A few weeks later, Antoine is "editing me out" of the maintainers list without any further communication. If we only let the loudest people contribute then we lose the quiet contributors and push them out. Davin On Mon, Feb 4, 2019 at 4:39 AM Antoine Pitrou wrote: > > Hello, > > In a recent message, Raymond dramatically pretends that I would have > "edited out" Davin of the maintainers list for the multiprocessing > module. > > What I did (*) is different: I asked to mark Davin inactive and to stop > auto-assigning him on bug tracker issues. Davin was /still/ listed in > the experts list, along with me and others. IOW, there was no "editing > out". > > (*) https://github.com/python/devguide/pull/435 > > The reason I did this is simple: Davin does not do, and has almost > never done, any actual maintenance work on multiprocessing (if you are > not convinced, just go through the git history, and the PRs that were > merged in the ~4 last years). He usually does not respond to tracker > issues opened by users. He does not review PRs. 
The only sizable > piece of work he committed is, as I mentioned in the previous thread, > still untested and undocumented. > > Auto-assigning someone who never (AFAICT) responds to issues ultimately > does a disservice to users, whose complaints go unanswered; while other > people, who /do/ respond to users, are not aware of those stale issues. > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/python%2Bpython_dev%40discontinuity.net > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.ware+pydev at gmail.com Mon Feb 4 10:45:39 2019 From: zachary.ware+pydev at gmail.com (Zachary Ware) Date: Mon, 4 Feb 2019 09:45:39 -0600 Subject: [Python-Dev] About multiprocessing maintainership In-Reply-To: <20190204113716.4368387b@fsol> References: <20190204113716.4368387b@fsol> Message-ID: On Mon, Feb 4, 2019 at 4:39 AM Antoine Pitrou wrote: > What I did (*) is different: I asked to mark Davin inactive and to stop > auto-assigning him on bug tracker issues. Davin was /still/ listed in > the experts list, along with me and others. IOW, there was no "editing > out". Auto-assignment (and auto-add-to-nosy-list, for that matter) is handled by the "components" of the bug tracker, see bugs.python.org/component. The experts list is used just for populating the auto-completion for the nosy-list (that is, typing "multi" in the nosy list entry field brings up "multiprocessing: davin,pitrou" currently). Marking a dev as "(inactive)" in the experts list removes them from that auto-completion. We've long discussed the possibility of rearranging how bpo does auto-nosy/auto-assign such that a reporter can tag the affected module(s) and auto-nosy based on the experts list. 
That would take significant effort which probably isn't worth doing unless PEP581 winds up rejected, but in the meantime we could easily add a `multiprocessing` component that does whatever auto-nosy and/or auto-assignment we want. -- Zach From antoine at python.org Mon Feb 4 10:48:17 2019 From: antoine at python.org (Antoine Pitrou) Date: Mon, 4 Feb 2019 16:48:17 +0100 Subject: [Python-Dev] About multiprocessing maintainership In-Reply-To: References: <20190204113716.4368387b@fsol> Message-ID: <84b82b3a-55be-91a2-3388-7e3c5360d1d7@python.org> Hello Davin, I would like this discussion to be constructive and not vindicative. So I would ask that we leave personal attacks out of this. > I have been part of several group discussions (among core developers) > now regarding how to balance the efforts of contributors with copious > time to devote versus those which must be extra judicious in how they > spend their more limited time. It is a misconception to think that I would have "copious time" to devote to the bug tracker and general contribution work. I don't. Actually, I find I am not responsive enough on such issues when they fall in my areas of expertise. For users and contributors, it is generally demotivating to have to wait several weeks before a response comes. > A few weeks later, Antoine is "editing me out" of the > maintainers list without any further communication. You haven't been "edited out". You were left in the maintainers list (really an "experts list"), together with Richard Oudkirk who is the original author of multiprocessing, but also Jesse Noller and me. Again, I have found that frequently multiprocessing issues get neglected. This is in part because you are auto-assigned on such issues, and therefore users and other contributors think you'll deal with the issues, which you don't. I could not guess by myself that you have been busy working in private on a feature for the past 1.5 years. 
So by all accounts you definitely seemed to be "inactive" as far as multiprocessing maintenance goes. My concern is to improve the likelihood of users getting a response on multiprocessing issues. One important factor is to be honest to users who is actually available to respond to issues. If you have another suggestion as to how act on this, please do. Regards Antoine. From guido at python.org Mon Feb 4 11:38:34 2019 From: guido at python.org (Guido van Rossum) Date: Mon, 4 Feb 2019 08:38:34 -0800 Subject: [Python-Dev] Return type of datetime subclasses added to timedelta In-Reply-To: <8faf2a5b-305c-e83b-33bd-e5eacc8609a7@ganssle.io> References: <1059740e-cc65-205d-5986-a9397463a315@ganssle.io> <2415dd60-b6b4-30b0-90d2-c0c8b22314c7@ganssle.io> <8faf2a5b-305c-e83b-33bd-e5eacc8609a7@ganssle.io> Message-ID: I recommend that you submit a PR so we can get it into 3.8 alpha 2. On Mon, Feb 4, 2019 at 5:50 AM Paul Ganssle wrote: > Hey all, > > This thread about the return type of datetime operations seems to have > stopped without any explicit decision - I think I responded to everyone who > had objections, but I think only Guido has given a +1 to whether or not we > should go ahead. > > Have we got agreement to go ahead with this change? Are we still targeting > Python 3.8 here? > > For those who don't want to dig through your old e-mails, here's the > archive link for this thread: > https://mail.python.org/pipermail/python-dev/2019-January/155984.html > > If you want to start commenting on the actual implementation, it's > available here (though it's pretty simple): > https://github.com/python/cpython/pull/10902 > > Best, > > Paul > > > On 1/6/19 7:17 PM, Guido van Rossum wrote: > > OK, I concede your point (and indeed I only tested this on 3.6). If we > could break the backward compatibility for now() we presumably can break it > for this purpose. 
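[Editor's note: the "slow pure python route" that libraries such as pendulum and arrow are said to take can be sketched as a subclass that re-wraps the base-class result of timedelta arithmetic. The subclass name below is hypothetical, and the sketch assumes the subclass keeps datetime's constructor signature, per the discussion in this thread.]

```python
from datetime import datetime, timedelta

class MyDatetime(datetime):
    """Hypothetical subclass illustrating the re-wrapping workaround."""

    def __add__(self, other):
        result = super().__add__(other)
        if isinstance(result, datetime) and not isinstance(result, MyDatetime):
            # On interpreters where datetime + timedelta returns the base
            # class, rebuild the result as the subclass (this assumes a
            # datetime-compatible constructor, as discussed in the thread).
            result = MyDatetime(result.year, result.month, result.day,
                                result.hour, result.minute, result.second,
                                result.microsecond, result.tzinfo)
        return result

    __radd__ = __add__

d = MyDatetime(2019, 1, 6) + timedelta(days=1)
print(type(d).__name__, d.day)  # MyDatetime 7
```

The same treatment would be needed for subtracting a timedelta; the proposal under discussion is meant to make this boilerplate unnecessary.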
> > On Sun, Jan 6, 2019 at 11:02 AM Paul Ganssle wrote: > >> I did address this in the original post - the assumption that the >> subclass constructor will have the same arguments as the base constructor >> is baked into many alternate constructors of datetime. I acknowledge that >> this is a breaking change, but it is a small one - anyone creating such a >> subclass that *cannot* handle the class being created this way would be >> broken in myriad ways. >> >> We have also in recent years changed several alternate constructors >> (including `replace`) to retain the original subclass, which by your same >> standard would be a breaking change. I believe there have been no >> complaints. In fact, between Python 3.6 and 3.7, the very example you >> showed broke: >> >> Python 3.6.6: >> >> >>> class D(datetime.datetime): >> ... def __new__(cls): >> ... return cls.now() >> ... >> >>> D() >> D(2019, 1, 6, 13, 49, 38, 842033) >> >> Python 3.7.2: >> >> >>> class D(datetime.datetime): >> ... def __new__(cls): >> ... return cls.now() >> ... >> >>> D() >> Traceback (most recent call last): >> File "", line 1, in >> File "", line 3, in __new__ >> TypeError: __new__() takes 1 positional argument but 9 were given >> >> >> We haven't seen any bug reports about this sort of thing; what we *have* >> been getting is bug reports that subclassing datetime doesn't retain the >> subclass in various ways (because people *are* using datetime >> subclasses). This is likely to cause very little in the way of problems, >> but it will improve convenience for people making datetime subclasses and >> almost certainly performance for people using them (e.g. pendulum and >> arrow, which now need to take a slow pure python route in many situations >> to work around this problem). >> >> If we're *really* concerned with this backward compatibility breaking, >> we could do the equivalent of: >> >> try: >> return new_behavior(...)
>> except TypeError: >> warnings.warn("The semantics of timedelta addition have " >> "changed in a way that raises an error in " >> "this subclass. Please implement __add__ " >> "if you need the old behavior.", DeprecationWarning) >> >> Then after a suitable notice period drop the warning and turn it to a >> hard error. >> >> Best, >> >> Paul >> On 1/6/19 1:43 PM, Guido van Rossum wrote: >> >> I don't think datetime and builtins like int necessarily need to be >> aligned. But I do see a problem -- the __new__ and __init__ methods defined >> in the subclass (if any) should allow for being called with the same >> signature as the base datetime class. Currently you can have a subclass of >> datetime whose __new__ has no arguments (or, more realistically, interprets >> its arguments differently). Instances of such a class can still be added to >> a timedelta. The proposal would cause this to break (since such an addition >> has to create a new instance, which calls __new__ and __init__). Since this >> is a backwards incompatibility, I don't see how it can be done -- and I >> also don't see many use cases, so I think it's not worth pursuing further. >> >> Note that the same problem already happens with the .fromordinal() class >> method, though it doesn't happen with .fromdatetime() or .now(): >> >> >>> class D(datetime.datetime): >> ... def __new__(cls): return cls.now() >> ... 
>> >>> D() >> D(2019, 1, 6, 10, 33, 37, 161606) >> >>> D.fromordinal(100) >> Traceback (most recent call last): >> File "", line 1, in >> TypeError: __new__() takes 1 positional argument but 4 were given >> >>> D.fromtimestamp(123456789) >> D(1973, 11, 29, 13, 33, 9) >> >>> >> >> On Sun, Jan 6, 2019 at 9:05 AM Paul Ganssle wrote: >> >>> I can think of many reasons why datetime is different from builtins, >>> though to be honest I'm not sure that consistency for its own sake is >>> really a strong argument for keeping a counter-intuitive behavior - and to >>> be honest I'm open to the idea that *all* arithmetic types *should* >>> have some form of this change. >>> >>> That said, I would say that the biggest difference between datetime and >>> builtins (other than the fact that datetime is *not* a builtin, and as >>> such doesn't necessarily need to be categorized in this group), is that >>> unlike almost all other arithmetic types, *datetime* has a special, >>> dedicated type for describing differences in datetimes. Using your example >>> of a float subclass, consider that without the behavior of "addition of >>> floats returns floats", it would be hard to predict what would happen in >>> this situation: >>> >>> >>> F(1.2) + 3.4 >>> >>> Would that always return a float, even though F(1.2) + F(3.4) returns an >>> F? Would that return an F because F is the left-hand operand? Would it >>> return a float because float is the right-hand operand? Would you walk the >>> MROs and find the lowest type in common between the operands and return >>> that? It's not entirely clear which subtype predominates. With datetime, >>> you have: >>> >>> datetime - datetime -> timedelta >>> datetime ± timedelta -> datetime >>> timedelta ±
timedelta -> timedelta >>> >>> There's no operation between two datetime objects that would return a >>> datetime object, so it's always clear: operations between datetime >>> subclasses return timedelta, operations between a datetime object and a >>> timedelta return the subclass of the datetime that it was added to or >>> subtracted from. >>> >>> Of course, the real way to resolve whether datetime should be different >>> from int/float/string/etc is to look at why this choice was actually made >>> for those types in the first place, and decide whether datetime is like >>> them *in this respect*. The heterogeneous operations problem may be a >>> reasonable justification for leaving the other builtins alone but changing >>> datetime, but if someone knows of other fundamental reasons why the >>> decision to have arithmetic operations always create the base class was >>> chosen, please let me know. >>> >>> Best, >>> Paul >>> On 1/5/19 3:55 AM, Alexander Belopolsky wrote: >>> >>> >>> >>> On Wed, Jan 2, 2019 at 10:18 PM Paul Ganssle wrote: >>> >>>> .. the original objection was that this implementation assumes that the >>>> datetime subclass has a constructor with the same (or a sufficiently >>>> similar) signature as datetime. >>>> >>> While this was used as a possible rationale for the way standard types >>> behave, the main objection to changing datetime classes is that it will >>> make them behave differently from builtins. For example: >>> >>> >>> class F(float): >>> ... pass >>> ... >>> >>> type(F.fromhex('AA')) >>> >>> >>> type(F(1) + F(2)) >>> >>> >>> This may be a legitimate gripe, but unfortunately that ship has sailed >>>> long ago. All of datetime's alternate constructors make this assumption. >>>> Any subclass that does not meet this requirement must have worked around it >>>> long ago (or they don't care about alternate constructors). >>>> >>> >>> This is right, but the same argument is equally applicable to int, >>> float, etc. subclasses. 
If you want to limit your change to datetime types >>> you should explain what makes these types special. >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >> >> >> -- >> --Guido van Rossum (python.org/~guido) >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > -- > --Guido van Rossum (python.org/~guido) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at ganssle.io Mon Feb 4 11:39:24 2019 From: paul at ganssle.io (Paul Ganssle) Date: Mon, 4 Feb 2019 11:39:24 -0500 Subject: [Python-Dev] Return type of datetime subclasses added to timedelta In-Reply-To: References: <1059740e-cc65-205d-5986-a9397463a315@ganssle.io> <2415dd60-b6b4-30b0-90d2-c0c8b22314c7@ganssle.io> <8faf2a5b-305c-e83b-33bd-e5eacc8609a7@ganssle.io> Message-ID: There's already a PR, actually, #10902: https://github.com/python/cpython/pull/10902 Victor reviewed and approved it, I think before I started this thread, so now it's just waiting on merge. On 2/4/19 11:38 AM, Guido van Rossum wrote: > I recommend that you submit a PR so we can get it into 3.8 alpha 2. 
> > On Mon, Feb 4, 2019 at 5:50 AM Paul Ganssle > wrote: > > Hey all, > > This thread about the return type of datetime operations seems to > have stopped without any explicit decision - I think I responded > to everyone who had objections, but I think only Guido has given a > +1 to whether or not we should go ahead. > > Have we got agreement to go ahead with this change? Are we still > targeting Python 3.8 here? > > For those who don't want to dig through your old e-mails, here's > the archive link for this thread: > https://mail.python.org/pipermail/python-dev/2019-January/155984.html > > If you want to start commenting on the actual implementation, it's > available here (though it's pretty simple): > https://github.com/python/cpython/pull/10902 > > Best, > > Paul > > > On 1/6/19 7:17 PM, Guido van Rossum wrote: >> OK, I concede your point (and indeed I only tested this on 3.6). >> If we could break the backward compatibility for now() we >> presumably can break it for this purpose. >> >> On Sun, Jan 6, 2019 at 11:02 AM Paul Ganssle > > wrote: >> >> I did address this in the original post - the assumption that >> the subclass constructor will have the same arguments as the >> base constructor is baked into many alternate constructors of >> datetime. I acknowledge that this is a breaking change, but >> it is a small one - anyone creating such a subclass that >> /cannot/ handled the class being created this way would be >> broken in myriad ways. >> >> We have also in recent years changed several alternate >> constructors (including `replace`) to retain the original >> subclass, which by your same standard would be a breaking >> change. I believe there have been no complaints. In fact, >> between Python 3.6 and 3.7, the very example you showed broke: >> >> Python 3.6.6: >> >> >>> class D(datetime.datetime): >> ...???? def __new__(cls): >> ...???????? return cls.now() >> ... 
>> >>> D() >> D(2019, 1, 6, 13, 49, 38, 842033) >> >> Python 3.7.2: >> >> >>> class D(datetime.datetime): >> ...???? def __new__(cls): >> ...???????? return cls.now() >> ... >> >>> D() >> Traceback (most recent call last): >> ? File "", line 1, in >> ? File "", line 3, in __new__ >> TypeError: __new__() takes 1 positional argument but 9 were given >> >> >> We haven't seen any bug reports about this sort of thing; >> what we /have/ been getting is bug reports that subclassing >> datetime doesn't retain the subclass in various ways (because >> people /are/ using datetime subclasses). This is likely to >> cause very little in the way of problems, but it will improve >> convenience for people making datetime subclasses and almost >> certainly performance for people using them (e.g. pendulum >> and arrow, which now need to take a slow pure python route in >> many situations to work around this problem). >> >> If we're /really/ concerned with this backward compatibility >> breaking, we could do the equivalent of: >> >> try: >> ??? return new_behavior(...) >> except TypeError: >> ??? warnings.warn("The semantics of timedelta addition have " >> ????????????????? "changed in a way that raises an error in " >> ????????????????? "this subclass. Please implement __add__ " >> ????????????????? "if you need the old behavior.", >> DeprecationWarning) >> >> Then after a suitable notice period drop the warning and turn >> it to a hard error. >> >> Best, >> >> Paul >> >> On 1/6/19 1:43 PM, Guido van Rossum wrote: >>> I don't think datetime and builtins like int necessarily >>> need to be aligned. But I do see a problem -- the __new__ >>> and __init__ methods defined in the subclass (if any) should >>> allow for being called with the same signature as the base >>> datetime class. Currently you can have a subclass of >>> datetime whose __new__ has no arguments (or, more >>> realistically, interprets its arguments differently). 
>>> Instances of such a class can still be added to a timedelta. >>> The proposal would cause this to break (since such an >>> addition has to create a new instance, which calls __new__ >>> and __init__). Since this is a backwards incompatibility, I >>> don't see how it can be done -- and I also don't see many >>> use cases, so I think it's not worth pursuing further. >>> >>> Note that the same problem already happens with the >>> .fromordinal() class method, though it doesn't happen with >>> .fromdatetime() or .now(): >>> >>> >>> class D(datetime.datetime): >>> ...?? def __new__(cls): return cls.now() >>> ... >>> >>> D() >>> D(2019, 1, 6, 10, 33, 37, 161606) >>> >>> D.fromordinal(100) >>> Traceback (most recent call last): >>> ? File "", line 1, in >>> TypeError: __new__() takes 1 positional argument but 4 were >>> given >>> >>> D.fromtimestamp(123456789) >>> D(1973, 11, 29, 13, 33, 9) >>> >>> >>> >>> On Sun, Jan 6, 2019 at 9:05 AM Paul Ganssle >> > wrote: >>> >>> I can think of many reasons why datetime is different >>> from builtins, though to be honest I'm not sure that >>> consistency for its own sake is really a strong argument >>> for keeping a counter-intuitive behavior - and to be >>> honest I'm open to the idea that /all/ arithmetic types >>> /should/ have some form of this change. >>> >>> That said, I would say that the biggest difference >>> between datetime and builtins (other than the fact that >>> datetime is /not/ a builtin, and as such doesn't >>> necessarily need to be categorized in this group), is >>> that unlike almost all other arithmetic types, >>> /datetime/ has a special, dedicated type for describing >>> differences in datetimes. Using your example of a float >>> subclass, consider that without the behavior of >>> "addition of floats returns floats", it would be hard to >>> predict what would happen in this situation: >>> >>> >>> F(1.2) + 3.4 >>> >>> Would that always return a float, even though F(1.2) + >>> F(3.4) returns an F? 
Would that return an F because F is >>> the left-hand operand? Would it return a float because >>> float is the right-hand operand? Would you walk the MROs >>> and find the lowest type in common between the operands >>> and return that? It's not entirely clear which subtype >>> predominates. With datetime, you have: >>> >>> datetime - datetime -> timedelta >>> datetime ± timedelta -> datetime >>> timedelta ± timedelta -> timedelta >>> >>> There's no operation between two datetime objects that >>> would return a datetime object, so it's always clear: >>> operations between datetime subclasses return timedelta, >>> operations between a datetime object and a timedelta >>> return the subclass of the datetime that it was added to >>> or subtracted from. >>> >>> Of course, the real way to resolve whether datetime >>> should be different from int/float/string/etc is to look >>> at why this choice was actually made for those types in >>> the first place, and decide whether datetime is like >>> them /in this respect/. The heterogeneous operations >>> problem may be a reasonable justification for leaving >>> the other builtins alone but changing datetime, but if >>> someone knows of other fundamental reasons why the >>> decision to have arithmetic operations always create the >>> base class was chosen, please let me know. >>> >>> Best, >>> Paul >>> >>> On 1/5/19 3:55 AM, Alexander Belopolsky wrote: >>>> >>>> >>>> On Wed, Jan 2, 2019 at 10:18 PM Paul Ganssle >>>> > wrote: >>>> >>>> .. the original objection was that this >>>> implementation assumes that the datetime subclass >>>> has a constructor with the same (or a sufficiently >>>> similar) signature as datetime. >>>> >>>> While this was used as a possible rationale for the way >>>> standard types behave, the main objection to changing >>>> datetime classes is that it will make them behave >>>> differently from builtins. For example: >>>> >>>> >>> class F(float): >>>> ... pass >>>> ...
>>>> >>> type(F.fromhex('AA')) >>>> >>>> >>> type(F(1) + F(2)) >>>> >>>> >>>> This may be a legitimate gripe, but unfortunately >>>> that ship has sailed long ago. All of datetime's >>>> alternate constructors make this assumption. Any >>>> subclass that does not meet this requirement must >>>> have worked around it long ago (or they don't care >>>> about alternate constructors). >>>> >>>> >>>> This is right, but the same argument is equally >>>> applicable to int, float, etc. subclasses.? If you want >>>> to limit your change to datetime types you should >>>> explain what makes these types special.?? >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >>> >>> >>> -- >>> --Guido van Rossum (python.org/~guido >>> ) >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> >> >> -- >> --Guido van Rossum (python.org/~guido ) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > -- > --Guido van Rossum (python.org/~guido ) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From solipsis at pitrou.net Mon Feb 4 12:10:44 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 4 Feb 2019 18:10:44 +0100 Subject: [Python-Dev] About multiprocessing maintainership References: <20190204113716.4368387b@fsol> Message-ID: <20190204181044.306a2dfe@fsol> On Mon, 4 Feb 2019 09:45:39 -0600 Zachary Ware wrote: > On Mon, Feb 4, 2019 at 4:39 AM Antoine Pitrou wrote: > > What I did (*) is different: I asked to mark Davin inactive and to stop > > auto-assigning him on bug tracker issues. Davin was /still/ listed in > > the experts list, along with me and others. IOW, there was no "editing > > out". > > Auto-assignment (and auto-add-to-nosy-list, for that matter) is > handled by the "components" of the bug tracker, see > bugs.python.org/component. The experts list is used just for > populating the auto-completion for the nosy-list (that is, typing > "multi" in the nosy list entry field brings up "multiprocessing: > davin,pitrou" currently). Marking a dev as "(inactive)" in the > experts list removes them from that auto-completion. Thanks for the clarification. In any case, here is how things usually happen. A user files a bug report for a certain module M. A triager takes notice, looks up the relevant expert(s) in the developer's guide. If an expert is listed with issue assignment allowed (the asterisk "*" besides the name), then the triager assumes that expert is available and assigns the issue to them. If the expert with an asterisk doesn't respond to the issue, the issue may very well get forgotten. So it's important that experts with an asterisk are actually available to deal with user reports. Regards Antoine. 
From barry at python.org Mon Feb 4 13:14:41 2019 From: barry at python.org (Barry Warsaw) Date: Mon, 4 Feb 2019 10:14:41 -0800 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: <20190204130238.GB29197@xps> References: <20190204130238.GB29197@xps> Message-ID: <23ED9695-7715-40DE-9DD5-F6A3C482612F@python.org> On Feb 4, 2019, at 05:02, Stephane Wirtel wrote: > > Just one idea, we could create a Docker image with this alpha version. > > This Docker image could be used with the CI of the main projects and the > test suites of these projects. > > If we have some issues, we should create an issue for python 3.8.0a1. The time machine strikes again! https://gitlab.com/python-devs/ci-images/tree/master We call these "semi-official"! The current image takes a slightly different approach, by including all the latest Python versions from 2.7, and 3.4-3.8, plus git head. I just pushed an update for the latest Python 3.8 alpha and 3.7.2. It's building now, but the image should be published on quay.io as soon as that's done. Contributions most welcome! -Barry From ericsnowcurrently at gmail.com Mon Feb 4 13:35:48 2019 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Mon, 4 Feb 2019 11:35:48 -0700 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> Message-ID: The main problem here seems to be a shortage of communication. :/ Also, I agree on the exceptional nature of merging incomplete PRs. -eric On Mon, Feb 4, 2019 at 3:37 AM Łukasz Langa wrote: > > > > On 4 Feb 2019, at 01:49, Guido van Rossum wrote: > > > > I think this is now up to the 3.8 release manager.
> > I responded on the tracker: https://bugs.python.org/issue35813#msg334817 > > I wrote: > > > @Davin, in what time can you fill in the missing tests and documentation? If this is something you can finish before alpha2, I'm inclined to leave the change in. > > > > As it stands, I missed the controversy yesterday as I was busy making my first release. So the merge *got released* in alpha1. I would prefer to fix the missing pieces forward instead of reverting and re-submitting which will only thrash blame and history at this point. > > > > FTR, I do agree with Antoine, Ronald and others that in the future such big changes should be as close to their ready state at merge time. > > > > @Raymond, would you be willing to work with Davin on finishing this work in time for alpha2? > > > - Ł > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ericsnowcurrently%40gmail.com From guido at python.org Mon Feb 4 14:19:16 2019 From: guido at python.org (Guido van Rossum) Date: Mon, 4 Feb 2019 11:19:16 -0800 Subject: [Python-Dev] Return type of datetime subclasses added to timedelta In-Reply-To: References: <1059740e-cc65-205d-5986-a9397463a315@ganssle.io> <2415dd60-b6b4-30b0-90d2-c0c8b22314c7@ganssle.io> <8faf2a5b-305c-e83b-33bd-e5eacc8609a7@ganssle.io> Message-ID: OK, I approved the PR. Can some other core dev ensure that it gets merged? No backports though! On Mon, Feb 4, 2019 at 8:46 AM Paul Ganssle wrote: > There's already a PR, actually, #10902: > https://github.com/python/cpython/pull/10902 > > Victor reviewed and approved it, I think before I started this thread, so > now it's just waiting on merge. > On 2/4/19 11:38 AM, Guido van Rossum wrote: > > I recommend that you submit a PR so we can get it into 3.8 alpha 2.
> > On Mon, Feb 4, 2019 at 5:50 AM Paul Ganssle wrote: > >> Hey all, >> >> This thread about the return type of datetime operations seems to have >> stopped without any explicit decision - I think I responded to everyone who >> had objections, but I think only Guido has given a +1 to whether or not we >> should go ahead. >> >> Have we got agreement to go ahead with this change? Are we still >> targeting Python 3.8 here? >> >> For those who don't want to dig through your old e-mails, here's the >> archive link for this thread: >> https://mail.python.org/pipermail/python-dev/2019-January/155984.html >> >> If you want to start commenting on the actual implementation, it's >> available here (though it's pretty simple): >> https://github.com/python/cpython/pull/10902 >> >> Best, >> >> Paul >> >> >> On 1/6/19 7:17 PM, Guido van Rossum wrote: >> >> OK, I concede your point (and indeed I only tested this on 3.6). If we >> could break the backward compatibility for now() we presumably can break it >> for this purpose. >> >> On Sun, Jan 6, 2019 at 11:02 AM Paul Ganssle wrote: >> >>> I did address this in the original post - the assumption that the >>> subclass constructor will have the same arguments as the base constructor >>> is baked into many alternate constructors of datetime. I acknowledge that >>> this is a breaking change, but it is a small one - anyone creating such a >>> subclass that *cannot* handle the class being created this way would >>> be broken in myriad ways. >>> >>> We have also in recent years changed several alternate constructors >>> (including `replace`) to retain the original subclass, which by your same >>> standard would be a breaking change. I believe there have been no >>> complaints. In fact, between Python 3.6 and 3.7, the very example you >>> showed broke: >>> >>> Python 3.6.6: >>> >>> >>> class D(datetime.datetime): >>> ... def __new__(cls): >>> ... return cls.now() >>> ... 
>>> >>> D() >>> D(2019, 1, 6, 13, 49, 38, 842033) >>> >>> Python 3.7.2: >>> >>> >>> class D(datetime.datetime): >>> ... def __new__(cls): >>> ... return cls.now() >>> ... >>> >>> D() >>> Traceback (most recent call last): >>> File "", line 1, in >>> File "", line 3, in __new__ >>> TypeError: __new__() takes 1 positional argument but 9 were given >>> >>> >>> We haven't seen any bug reports about this sort of thing; what we *have* >>> been getting is bug reports that subclassing datetime doesn't retain the >>> subclass in various ways (because people *are* using datetime >>> subclasses). This is likely to cause very little in the way of problems, >>> but it will improve convenience for people making datetime subclasses and >>> almost certainly performance for people using them (e.g. pendulum and >>> arrow, which now need to take a slow pure python route in many situations >>> to work around this problem). >>> >>> If we're *really* concerned with this backward compatibility breaking, >>> we could do the equivalent of: >>> >>> try: >>> return new_behavior(...) >>> except TypeError: >>> warnings.warn("The semantics of timedelta addition have " >>> "changed in a way that raises an error in " >>> "this subclass. Please implement __add__ " >>> "if you need the old behavior.", DeprecationWarning) >>> >>> Then after a suitable notice period drop the warning and turn it to a >>> hard error. >>> >>> Best, >>> >>> Paul >>> On 1/6/19 1:43 PM, Guido van Rossum wrote: >>> >>> I don't think datetime and builtins like int necessarily need to be >>> aligned. But I do see a problem -- the __new__ and __init__ methods defined >>> in the subclass (if any) should allow for being called with the same >>> signature as the base datetime class. Currently you can have a subclass of >>> datetime whose __new__ has no arguments (or, more realistically, interprets >>> its arguments differently). Instances of such a class can still be added to >>> a timedelta. 
The proposal would cause this to break (since such an addition >>> has to create a new instance, which calls __new__ and __init__). Since this >>> is a backwards incompatibility, I don't see how it can be done -- and I >>> also don't see many use cases, so I think it's not worth pursuing further. >>> >>> Note that the same problem already happens with the .fromordinal() class >>> method, though it doesn't happen with .fromdatetime() or .now(): >>> >>> >>> class D(datetime.datetime): >>> ... def __new__(cls): return cls.now() >>> ... >>> >>> D() >>> D(2019, 1, 6, 10, 33, 37, 161606) >>> >>> D.fromordinal(100) >>> Traceback (most recent call last): >>> File "", line 1, in >>> TypeError: __new__() takes 1 positional argument but 4 were given >>> >>> D.fromtimestamp(123456789) >>> D(1973, 11, 29, 13, 33, 9) >>> >>> >>> >>> On Sun, Jan 6, 2019 at 9:05 AM Paul Ganssle wrote: >>> >>>> I can think of many reasons why datetime is different from builtins, >>>> though to be honest I'm not sure that consistency for its own sake is >>>> really a strong argument for keeping a counter-intuitive behavior - and to >>>> be honest I'm open to the idea that *all* arithmetic types *should* >>>> have some form of this change. >>>> >>>> That said, I would say that the biggest difference between datetime and >>>> builtins (other than the fact that datetime is *not* a builtin, and as >>>> such doesn't necessarily need to be categorized in this group), is that >>>> unlike almost all other arithmetic types, *datetime* has a special, >>>> dedicated type for describing differences in datetimes. Using your example >>>> of a float subclass, consider that without the behavior of "addition of >>>> floats returns floats", it would be hard to predict what would happen in >>>> this situation: >>>> >>>> >>> F(1.2) + 3.4 >>>> >>>> Would that always return a float, even though F(1.2) + F(3.4) returns >>>> an F? Would that return an F because F is the left-hand operand? 
Would it >>>> return a float because float is the right-hand operand? Would you walk the >>>> MROs and find the lowest type in common between the operands and return >>>> that? It's not entirely clear which subtype predominates. With datetime, >>>> you have: >>>> >>>> datetime - datetime -> timedelta >>>> datetime ± timedelta -> datetime >>>> timedelta ± timedelta -> timedelta >>>> >>>> There's no operation between two datetime objects that would return a >>>> datetime object, so it's always clear: operations between datetime >>>> subclasses return timedelta, operations between a datetime object and a >>>> timedelta return the subclass of the datetime that it was added to or >>>> subtracted from. >>>> >>>> Of course, the real way to resolve whether datetime should be different >>>> from int/float/string/etc is to look at why this choice was actually made >>>> for those types in the first place, and decide whether datetime is like >>>> them *in this respect*. The heterogeneous operations problem may be a >>>> reasonable justification for leaving the other builtins alone but changing >>>> datetime, but if someone knows of other fundamental reasons why the >>>> decision to have arithmetic operations always create the base class was >>>> chosen, please let me know. >>>> >>>> Best, >>>> Paul >>>> On 1/5/19 3:55 AM, Alexander Belopolsky wrote: >>>> >>>> >>>> >>>> On Wed, Jan 2, 2019 at 10:18 PM Paul Ganssle wrote: >>>> >>>>> .. the original objection was that this implementation assumes that >>>>> the datetime subclass has a constructor with the same (or a sufficiently >>>>> similar) signature as datetime. >>>>> >>>> While this was used as a possible rationale for the way standard types >>>> behave, the main objection to changing datetime classes is that it will >>>> make them behave differently from builtins. For example: >>>> >>>> >>> class F(float): >>>> ... pass >>>> ... 
>>>> >>> type(F.fromhex('AA')) >>>> >>>> >>> type(F(1) + F(2)) >>>> >>>> >>>> This may be a legitimate gripe, but unfortunately that ship has sailed >>>>> long ago. All of datetime's alternate constructors make this assumption. >>>>> Any subclass that does not meet this requirement must have worked around it >>>>> long ago (or they don't care about alternate constructors). >>>>> >>>> >>>> This is right, but the same argument is equally applicable to int, >>>> float, etc. subclasses. If you want to limit your change to datetime types >>>> you should explain what makes these types special. >>>> >>>> _______________________________________________ >>>> Python-Dev mailing list >>>> Python-Dev at python.org >>>> https://mail.python.org/mailman/listinfo/python-dev >>>> Unsubscribe: >>>> https://mail.python.org/mailman/options/python-dev/guido%40python.org >>>> >>> >>> >>> -- >>> --Guido van Rossum (python.org/~guido) >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >> >> >> -- >> --Guido van Rossum (python.org/~guido) >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > -- > --Guido van Rossum (python.org/~guido) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexander.belopolsky at gmail.com Mon Feb 4 14:38:00 2019 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 4 Feb 2019 14:38:00 -0500 Subject: [Python-Dev] Return type of datetime subclasses added to timedelta In-Reply-To: References: <1059740e-cc65-205d-5986-a9397463a315@ganssle.io> <2415dd60-b6b4-30b0-90d2-c0c8b22314c7@ganssle.io> <8faf2a5b-305c-e83b-33bd-e5eacc8609a7@ganssle.io> Message-ID: I'll merge it tonight. On Mon, Feb 4, 2019 at 2:22 PM Guido van Rossum wrote: > OK, I approved the PR. Can some other core dev ensure that it gets merged? > No backports though! > > On Mon, Feb 4, 2019 at 8:46 AM Paul Ganssle wrote: > >> There's already a PR, actually, #10902: >> https://github.com/python/cpython/pull/10902 >> >> Victor reviewed and approved it, I think before I started this thread, so >> now it's just waiting on merge. >> On 2/4/19 11:38 AM, Guido van Rossum wrote: >> >> I recommend that you submit a PR so we can get it into 3.8 alpha 2. >> >> On Mon, Feb 4, 2019 at 5:50 AM Paul Ganssle wrote: >> >>> Hey all, >>> >>> This thread about the return type of datetime operations seems to have >>> stopped without any explicit decision - I think I responded to everyone who >>> had objections, but I think only Guido has given a +1 to whether or not we >>> should go ahead. >>> >>> Have we got agreement to go ahead with this change? Are we still >>> targeting Python 3.8 here? >>> >>> For those who don't want to dig through your old e-mails, here's the >>> archive link for this thread: >>> https://mail.python.org/pipermail/python-dev/2019-January/155984.html >>> >>> If you want to start commenting on the actual implementation, it's >>> available here (though it's pretty simple): >>> https://github.com/python/cpython/pull/10902 >>> >>> Best, >>> >>> Paul >>> >>> >>> On 1/6/19 7:17 PM, Guido van Rossum wrote: >>> >>> OK, I concede your point (and indeed I only tested this on 3.6). 
If we >>> could break the backward compatibility for now() we presumably can break it >>> for this purpose. >>> >>> On Sun, Jan 6, 2019 at 11:02 AM Paul Ganssle wrote: >>> >>>> I did address this in the original post - the assumption that the >>>> subclass constructor will have the same arguments as the base constructor >>>> is baked into many alternate constructors of datetime. I acknowledge that >>>> this is a breaking change, but it is a small one - anyone creating such a >>>> subclass that *cannot* handle the class being created this way would >>>> be broken in myriad ways. >>>> >>>> We have also in recent years changed several alternate constructors >>>> (including `replace`) to retain the original subclass, which by your same >>>> standard would be a breaking change. I believe there have been no >>>> complaints. In fact, between Python 3.6 and 3.7, the very example you >>>> showed broke: >>>> >>>> Python 3.6.6: >>>> >>>> >>> class D(datetime.datetime): >>>> ... def __new__(cls): >>>> ... return cls.now() >>>> ... >>>> >>> D() >>>> D(2019, 1, 6, 13, 49, 38, 842033) >>>> >>>> Python 3.7.2: >>>> >>>> >>> class D(datetime.datetime): >>>> ... def __new__(cls): >>>> ... return cls.now() >>>> ... >>>> >>> D() >>>> Traceback (most recent call last): >>>> File "", line 1, in >>>> File "", line 3, in __new__ >>>> TypeError: __new__() takes 1 positional argument but 9 were given >>>> >>>> >>>> We haven't seen any bug reports about this sort of thing; what we >>>> *have* been getting is bug reports that subclassing datetime doesn't >>>> retain the subclass in various ways (because people *are* using >>>> datetime subclasses). This is likely to cause very little in the way of >>>> problems, but it will improve convenience for people making datetime >>>> subclasses and almost certainly performance for people using them (e.g. >>>> pendulum and arrow, which now need to take a slow pure python route in many >>>> situations to work around this problem. 
>>>> >>>> If we're *really* concerned with this backward compatibility breaking, >>>> we could do the equivalent of: >>>> >>>> try: >>>> return new_behavior(...) >>>> except TypeError: >>>> warnings.warn("The semantics of timedelta addition have " >>>> "changed in a way that raises an error in " >>>> "this subclass. Please implement __add__ " >>>> "if you need the old behavior.", DeprecationWarning) >>>> >>>> Then after a suitable notice period drop the warning and turn it to a >>>> hard error. >>>> >>>> Best, >>>> >>>> Paul >>>> On 1/6/19 1:43 PM, Guido van Rossum wrote: >>>> >>>> I don't think datetime and builtins like int necessarily need to be >>>> aligned. But I do see a problem -- the __new__ and __init__ methods defined >>>> in the subclass (if any) should allow for being called with the same >>>> signature as the base datetime class. Currently you can have a subclass of >>>> datetime whose __new__ has no arguments (or, more realistically, interprets >>>> its arguments differently). Instances of such a class can still be added to >>>> a timedelta. The proposal would cause this to break (since such an addition >>>> has to create a new instance, which calls __new__ and __init__). Since this >>>> is a backwards incompatibility, I don't see how it can be done -- and I >>>> also don't see many use cases, so I think it's not worth pursuing further. >>>> >>>> Note that the same problem already happens with the .fromordinal() >>>> class method, though it doesn't happen with .fromdatetime() or .now(): >>>> >>>> >>> class D(datetime.datetime): >>>> ... def __new__(cls): return cls.now() >>>> ... 
>>>> >>> D() >>>> D(2019, 1, 6, 10, 33, 37, 161606) >>>> >>> D.fromordinal(100) >>>> Traceback (most recent call last): >>>> File "", line 1, in >>>> TypeError: __new__() takes 1 positional argument but 4 were given >>>> >>> D.fromtimestamp(123456789) >>>> D(1973, 11, 29, 13, 33, 9) >>> >>>> >>>> On Sun, Jan 6, 2019 at 9:05 AM Paul Ganssle wrote: >>>> >>>>> I can think of many reasons why datetime is different from builtins, >>>>> though to be honest I'm not sure that consistency for its own sake is >>>>> really a strong argument for keeping a counter-intuitive behavior - and to >>>>> be honest I'm open to the idea that *all* arithmetic types *should* >>>>> have some form of this change. >>>>> >>>>> That said, I would say that the biggest difference between datetime >>>>> and builtins (other than the fact that datetime is *not* a builtin, >>>>> and as such doesn't necessarily need to be categorized in this group), is >>>>> that unlike almost all other arithmetic types, *datetime* has a >>>>> special, dedicated type for describing differences in datetimes. Using your >>>>> example of a float subclass, consider that without the behavior of >>>>> "addition of floats returns floats", it would be hard to predict what would >>>>> happen in this situation: >>>>> >>>>> >>> F(1.2) + 3.4 >>>>> >>>>> Would that always return a float, even though F(1.2) + F(3.4) returns >>>>> an F? Would that return an F because F is the left-hand operand? Would it >>>>> return a float because float is the right-hand operand? Would you walk the >>>>> MROs and find the lowest type in common between the operands and return >>>>> that? It's not entirely clear which subtype predominates. With datetime, >>>>> you have: >>>>> >>>>> datetime - datetime -> timedelta >>>>> datetime ± timedelta -> datetime >>>>> timedelta ± 
timedelta -> timedelta >>>>> >>>>> There's no operation between two datetime objects that would return a >>>>> datetime object, so it's always clear: operations between datetime >>>>> subclasses return timedelta, operations between a datetime object and a >>>>> timedelta return the subclass of the datetime that it was added to or >>>>> subtracted from. >>>>> >>>>> Of course, the real way to resolve whether datetime should be >>>>> different from int/float/string/etc is to look at why this choice was >>>>> actually made for those types in the first place, and decide whether >>>>> datetime is like them *in this respect*. The heterogeneous operations >>>>> problem may be a reasonable justification for leaving the other builtins >>>>> alone but changing datetime, but if someone knows of other fundamental >>>>> reasons why the decision to have arithmetic operations always create the >>>>> base class was chosen, please let me know. >>>>> >>>>> Best, >>>>> Paul >>>>> On 1/5/19 3:55 AM, Alexander Belopolsky wrote: >>>>> >>>>> >>>>> >>>>> On Wed, Jan 2, 2019 at 10:18 PM Paul Ganssle wrote: >>>>> >>>>>> .. the original objection was that this implementation assumes that >>>>>> the datetime subclass has a constructor with the same (or a sufficiently >>>>>> similar) signature as datetime. >>>>>> >>>>> While this was used as a possible rationale for the way standard types >>>>> behave, the main objection to changing datetime classes is that it will >>>>> make them behave differently from builtins. For example: >>>>> >>>>> >>> class F(float): >>>>> ... pass >>>>> ... >>>>> >>> type(F.fromhex('AA')) >>>>> >>>>> >>> type(F(1) + F(2)) >>>>> >>>>> >>>>> This may be a legitimate gripe, but unfortunately that ship has sailed >>>>>> long ago. All of datetime's alternate constructors make this assumption. >>>>>> Any subclass that does not meet this requirement must have worked around it >>>>>> long ago (or they don't care about alternate constructors). 
>>>>>> >>>>> >>>>> This is right, but the same argument is equally applicable to int, >>>>> float, etc. subclasses. If you want to limit your change to datetime types >>>>> you should explain what makes these types special. >>>>> >>>>> _______________________________________________ >>>>> Python-Dev mailing list >>>>> Python-Dev at python.org >>>>> https://mail.python.org/mailman/listinfo/python-dev >>>>> Unsubscribe: >>>>> https://mail.python.org/mailman/options/python-dev/guido%40python.org >>>>> >>>> >>>> >>>> -- >>>> --Guido van Rossum (python.org/~guido) >>>> >>>> _______________________________________________ >>>> Python-Dev mailing list >>>> Python-Dev at python.org >>>> https://mail.python.org/mailman/listinfo/python-dev >>>> Unsubscribe: >>>> https://mail.python.org/mailman/options/python-dev/guido%40python.org >>>> >>> >>> >>> -- >>> --Guido van Rossum (python.org/~guido) >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >> >> >> -- >> --Guido van Rossum (python.org/~guido) >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > -- > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/alexander.belopolsky%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From raymond.hettinger at gmail.com Mon Feb 4 15:26:12 2019 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Mon, 4 Feb 2019 12:26:12 -0800 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> Message-ID: <3C809B94-0C17-4574-BB2A-2B32917BB5C3@gmail.com> > On Feb 4, 2019, at 2:36 AM, Łukasz Langa wrote: > > @Raymond, would you be willing to work with Davin on finishing this work in time for alpha2? I would be happy to help, but this is beyond my technical ability. The people who are qualified to work on this have already chimed in on the discussion. Fortunately, I think this is a feature that everyone wants. So it just a matter of getting the experts on the subject to team-up and help get it done. Raymond From stephane at wirtel.be Tue Feb 5 05:24:20 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Tue, 5 Feb 2019 11:24:20 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: <23ED9695-7715-40DE-9DD5-F6A3C482612F@python.org> References: <20190204130238.GB29197@xps> <23ED9695-7715-40DE-9DD5-F6A3C482612F@python.org> Message-ID: <20190205102420.GA7969@xps> Hi Barry, I was not aware of this image. So it's true that it's very useful. Thank you very much, Stéphane On 02/04, Barry Warsaw wrote: >On Feb 4, 2019, at 05:02, Stephane Wirtel wrote: >> >> Just one idea, we could create a Docker image with this alpha version. >> >> This Docker image could be used with the CI of the main projects and the >> test suites of these projects. >> >> If we have some issues, we should create an issue for python 3.8.0a1. > >The time machine strikes again! > >https://gitlab.com/python-devs/ci-images/tree/master > >We call these "semi-official"! The current image takes a slightly different approach, by including all the latest Python versions from 2.7, and 3.4-3.8, plus git head. I just pushed an update for the latest Python 3.8 alpha and 3.7.2. 
It's building now, but the image should be published on quay.io as soon as that's done. > >Contributions most welcome! > >-Barry > >_______________________________________________ >Python-Dev mailing list >Python-Dev at python.org >https://mail.python.org/mailman/listinfo/python-dev >Unsubscribe: https://mail.python.org/mailman/options/python-dev/stephane%40wirtel.be -- Stéphane Wirtel - https://wirtel.be - @matrixise From daveshawley at gmail.com Tue Feb 5 07:05:05 2019 From: daveshawley at gmail.com (David Shawley) Date: Tue, 5 Feb 2019 07:05:05 -0500 Subject: [Python-Dev] bpo-32972: Add unittest.AsyncioTestCase review (for 3.8?) Message-ID: <6BC0939D-0388-45F0-B1EC-E7AE9201FBCE@gmail.com> Hi everyone, I added a PR to add a sub-class of unittest.TestCase that makes it possible to write async test methods. I wrote this a few months ago and it is waiting on core review. Is there a core dev that can take up this review? I would love to have this functionality in the core. Lukasz - should we add this to Python 3.8 or is it too late for feature additions? BPO link: https://bugs.python.org/issue32972 Github PR: https://github.com/python/cpython/pull/10296 cheers, dave. -- "State and behavior. State and behavior. If it doesn't bundle state and behavior in a sensible way, it should not be an object, and there should not be a class that produces it." eevee -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Feb 5 11:44:20 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 5 Feb 2019 17:44:20 +0100 Subject: [Python-Dev] bpo-32972: Add unittest.AsyncioTestCase review (for 3.8?) References: <6BC0939D-0388-45F0-B1EC-E7AE9201FBCE@gmail.com> Message-ID: <20190205174420.00fef68d@fsol> Hi David, I cannot comment on the PR, but since the functionality is asyncio-specific, I would suggest moving it to a dedicated `asyncio.testing` module, or something similar, rather than leaving it in `unittest` proper. 
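For readers unfamiliar with the problem the PR solves: without an async-aware TestCase, every test has to drive the event loop by hand around each coroutine it exercises. A minimal sketch of that boilerplate (the class and coroutine names here are illustrative, not taken from the PR):

```python
import asyncio
import unittest


class EchoTests(unittest.TestCase):
    """Without an async-aware TestCase, each test method must set up,
    run, and tear down an event loop around the coroutine under test."""

    def test_echo(self):
        async def echo(value):
            await asyncio.sleep(0)  # stand-in for real async work
            return value

        loop = asyncio.new_event_loop()
        try:
            result = loop.run_until_complete(echo("hello"))
        finally:
            loop.close()
        self.assertEqual(result, "hello")
```

The proposed class would let the test itself be written as `async def test_echo(self)`, with the loop management handled once by the base class.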
Regards Antoine. On Tue, 5 Feb 2019 07:05:05 -0500 David Shawley wrote: > Hi everyone, I added a PR to add a sub-class of unittest.TestCase that makes it possible to write async test methods. I wrote this a few months ago and it is waiting on core review. Is there a core dev that can take up this review? I would love to have this functionality in the core. > > Lukasz - should we add this to Python 3.8 or is it too late for feature additions? > > BPO link: https://bugs.python.org/issue32972 > Github PR: https://github.com/python/cpython/pull/10296 > > cheers, dave. > -- > "State and behavior. State and behavior. If it doesn't bundle state and behavior in a sensible way, it should not be an object, and there should not be a class that produces it." eevee > > From g.rodola at gmail.com Tue Feb 5 12:52:58 2019 From: g.rodola at gmail.com (Giampaolo Rodola') Date: Tue, 5 Feb 2019 18:52:58 +0100 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: On Mon, Feb 4, 2019 at 4:21 AM Davin Potts < python+python_dev at discontinuity.net> wrote: > I am attempting to do the right thing and am following the advice of other > core devs in what I have done thus far. > > Borrowing heavily from what I've added to issue35813 just now: > > This work is the result of ~1.5 years of development effort, much of it > accomplished at the last two core dev sprints. The code behind it has been > stable since September 2018 and tested as an independently installable > package by multiple people. > > I was encouraged by Lukasz, Yury, and others to check in this code early, > not waiting for tests and docs, in order to both solicit more feedback and > provide for broader testing. I understand that doing such a thing is not > at all a novelty. > Actually it is a novelty (you should wait for review and approval). 
The main problem I have with this PR is that it seems to introduce 8 brand new APIs, but since there is no doc, docstrings or tests it's unclear which ones are supposed to be used, how or whether they are supposed to supersede or deprecate older (slower) ones involving inter process communication. The introduction of new APIs in the stdlib is a sensitive topic because once they get in they stay in, so a discussion should occur early on, definitively not at alphaX stage. Don't mean to point fingers here, the goal in itself (zero-copy, a topic I recently contributed to myself for the shutil module) is certainly valuable, but I concur and think this change should be reverted and post-poned for 3.9. -- Giampaolo - http://grodola.blogspot.com -- Giampaolo - http://grodola.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nas-python at arctrix.com Tue Feb 5 14:07:40 2019 From: nas-python at arctrix.com (Neil Schemenauer) Date: Tue, 5 Feb 2019 13:07:40 -0600 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: <20190205190740.atccbo33wprrxonw@python.ca> On 2019-02-05, Giampaolo Rodola' wrote: > The main problem I have with this PR is that it seems to introduce > 8 brand new APIs, but since there is no doc, docstrings or tests > it's unclear which ones are supposed to be used, how or whether > they are supposed to supersede or deprecate older (slower) ones > involving inter process communication. New or changed APIs are my major concern as well. Localized problems can be fixed later without much trouble. However, APIs "lock" us in and make it harder to change things later. Also, will new APIs need to be eventually supported by other Python implementations? I would imagine that doing zero-copy mixed with alternative garbage collection strategies could be complicated. 
Could we somehow mark these APIs as experimental in 3.8? My gut reaction is that we shouldn't revert. However, looking at the changes, it seems 'multiprocessing.shared_memory' could be an external extension package that lives in PyPI. It doesn't require changes to other interpreter internals. It doesn't seem to require internal Python header files. Regards, Neil From raymond.hettinger at gmail.com Tue Feb 5 14:35:49 2019 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Tue, 5 Feb 2019 11:35:49 -0800 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: > On Feb 5, 2019, at 9:52 AM, Giampaolo Rodola' wrote: > > The main problem I have with this PR is that it seems to introduce 8 brand new APIs, but since there is no doc, docstrings or tests it's unclear which ones are supposed to be used, how or whether they are supposed to supersede or deprecate older (slower) ones involving inter process communication. The release manager already opined that if tests and docs get finished for the second alpha, he prefers not to have a reversion and would rather build on top of what already shipped in the first alpha. FWIW, the absence of docs isn't desirable but it isn't atypical. PEP 572 code landed without the docs. Docs for dataclasses arrived much after the code. The same was true for the decimal module. Hopefully, everyone will team up with Davin and help him get the ball over the goal line. BTW, this is a feature we really want. Our multicore story for Python isn't a good one. Due to the GIL, threading usually can't exploit multiple cores for better performance. Async has lower overhead than threading but achieves its gains by keeping all the data in a single process. That leaves us with multiprocessing where the primary obstacle has been the heavy cost of moving data between processes. If that cost can be reduced, we've got a winning story for multicore. 
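For readers unfamiliar with the feature under discussion: multiprocessing.shared_memory lets two processes attach to the same block of memory instead of pickling data through a pipe. A minimal single-process sketch of the API (as it later stabilized for 3.8; the exact surface was still in flux at the time of this thread):

```python
from multiprocessing import shared_memory

# Create a named block of shared memory; a second process could attach
# to the very same block with SharedMemory(name=shm.name).
shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[:5] = b"hello"     # write directly into the block
    data = bytes(shm.buf[:5])  # read it back -- no pickling, no pipe
    print(data)                # b'hello'
finally:
    shm.close()                # detach this process from the block
    shm.unlink()               # destroy the block (creator's responsibility)
```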
This patch is one of the better things that is happening to Python. Aside from last week's procedural missteps and communication issues surrounding the commit, the many months of prior work on this have been stellar. How about we stop using a highly public forum to pile up on Davin (being the subject of a thread like this can be a soul crushing experience). Right now, he could really use some help and support from everyone on the team. Raymond From nas-python at arctrix.com Tue Feb 5 14:41:14 2019 From: nas-python at arctrix.com (Neil Schemenauer) Date: Tue, 5 Feb 2019 13:41:14 -0600 Subject: [Python-Dev] Asking for reversion In-Reply-To: <20190205190740.atccbo33wprrxonw@python.ca> References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> <20190205190740.atccbo33wprrxonw@python.ca> Message-ID: <20190205194114.m2pgnlz6bdubkjus@python.ca> I wrote: > Could we somehow mark these APIs as experimental in 3.8? It seems the change "e5ef45b8f519a9be9965590e1a0a587ff584c180" is the one we are discussing. It adds two new files: Lib/multiprocessing/shared_memory.py Modules/_multiprocessing/posixshmem.c It doesn't introduce new C APIs. So, only multiprocessing.shared_memory seems public. I see we have PEP 411 that should cover this case: https://www.python.org/dev/peps/pep-0411/ The setup.py code could be more defensive. Maybe only build on platforms that have supported word sizes etc? For 3.8, could it be activated by uncommenting a line in Modules/Setup, rather than by setup.py? What happens in shared_memory if the _posixshmem module is not available? On Windows it seems like an import error is raised. Otherwise, _PosixSharedMemory becomes 'object'. Does that mean the API still works but you lose the zero-copy speed? 
Regards, Neil From ethan at stoneleaf.us Tue Feb 5 16:12:21 2019 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 5 Feb 2019 13:12:21 -0800 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: <3d865bbe-071d-3b97-a34c-5b877704d1cc@stoneleaf.us> On 02/05/2019 11:35 AM, Raymond Hettinger wrote: > How about we stop using a highly public forum to pile up on Davin (being the subject of a thread like this can be a soul-crushing experience). Thank you for the reminder. > Right now, he could really use some help and support from everyone on the team. I am really looking forward to this enhancement. Thank you, Davin, and everyone else who has, and will, work on it. -- ~Ethan~ From tjreedy at udel.edu Tue Feb 5 17:20:47 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 5 Feb 2019 17:20:47 -0500 Subject: [Python-Dev] bpo-32972: Add unittest.AsyncioTestCase review (for 3.8?) In-Reply-To: <20190205174420.00fef68d@fsol> References: <6BC0939D-0388-45F0-B1EC-E7AE9201FBCE@gmail.com> <20190205174420.00fef68d@fsol> Message-ID: On 2/5/2019 11:44 AM, Antoine Pitrou wrote: > I cannot comment on the PR, but since the functionality is > asyncio-specific, I would suggest moving it to a dedicated > `asyncio.testing` module, or something similar, rather than leaving it > in `unittest` proper. That is one of the options discussed on the issue. On Tue, 5 Feb 2019 07:05:05 -0500 > David Shawley wrote: >> Hi everyone, I added a PR to add a sub-class of unittest.TestCase that makes it possible to write async test methods. I wrote this a few months ago and it is waiting on core review. Is there a core dev that can take up this review? I would love to have this functionality in the core. >> >> Lukasz - should we add this to Python 3.8 or is it too late for feature additions? Features can be added until beta1, and until then, additions are not the release manager's decision.
>> BPO link: https://bugs.python.org/issue32972 >> Github PR: https://github.com/python/cpython/pull/10296 All or most of the relevant people are nosy on the issue. So a reminder there would be appropriate. However, from my cursory scan, it is not clear if the 5 core devs involved (marked by blue and yellow snakes) agree on exactly what more should be added. Perhaps you should summarize what you think there is and is not agreement on. -- Terry Jan Reedy From barry at python.org Tue Feb 5 17:33:02 2019 From: barry at python.org (Barry Warsaw) Date: Tue, 5 Feb 2019 14:33:02 -0800 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: <20190205102420.GA7969@xps> References: <20190204130238.GB29197@xps> <23ED9695-7715-40DE-9DD5-F6A3C482612F@python.org> <20190205102420.GA7969@xps> Message-ID: <37A1D61F-AF75-4F66-9279-677505623EDE@python.org> On Feb 5, 2019, at 02:24, Stephane Wirtel wrote: > > I was not aware of this image. So it's true that it's very useful. > > Thank you very much, You're welcome! I just pushed an update to add 3.8.0a1 to the set of Pythons (including git head). Do you think there's a better way to publicize these images? Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From Paul.Monson at microsoft.com Tue Feb 5 20:09:23 2019 From: Paul.Monson at microsoft.com (Paul Monson) Date: Wed, 6 Feb 2019 01:09:23 +0000 Subject: [Python-Dev] CPython on Windows ARM32 Message-ID: Hi Python Developers, I'm Paul Monson, I've spent about 20 years working with embedded software. Since 2010 I've worked for Microsoft as a developer. Our team is working with CPython on Azure IoT Edge devices that run on x64-based devices. We would like to extend that support to Windows running on ARM 32-bit devices and have a working proof-of-concept.
Our team is prepared to provide support for CPython for Windows on ARM32 for 10 years, and to provide build bots for ARM32. I'd like to propose that the initial sequence of PRs could be:
- Update to OpenSSL 1.1.1 (without anything ARM specific) - ready to go
- Migrate to libffi directly (finish https://github.com/python/cpython/pull/3806)
- Build file updates for OpenSSL ARM and check into cpython-bin-deps
- Build file updates for CPython ARM
- ctypes updates for ARM
- Test module skips for ARM
- Library updates and related test fixes for ARM
Updating OpenSSL and libffi is independent of ARM support, but both need to be done as prerequisites. OpenSSL 1.1.0 doesn't have support for ARM32 on Windows but OpenSSL 1.1.1 does. I have OpenSSL 1.1.1a ready to check in to master with all tests passing on x86 and x64 on Windows. Since work has already been done on this for other platforms, only very small changes were needed for Windows. I have also integrated and tested the current libffi on Windows x64. Some additional porting of x86 assembler to MSVC tools will need to be done. I have a working port of ARM32 assembler for MSVC but it may need to be brought up to date and cleaned up. The last four all need to go in together, but can be reviewed separately. We are not planning to support Tk/Tcl on ARM32 because Windows IoT Core and Windows containers don't support GDI, which is a dependency of Tk/Tcl. Since Windows IoT Core and Windows containers don't support the .msi or .exe installers found on python.org, my team at Microsoft will build CPython for Windows ARM32 from the official repo and distribute it. Thanks in advance, Paul -------------- next part -------------- An HTML attachment was scrubbed...
URL: From steve.dower at python.org Tue Feb 5 20:35:28 2019 From: steve.dower at python.org (Steve Dower) Date: Tue, 5 Feb 2019 17:35:28 -0800 Subject: [Python-Dev] CPython on Windows ARM32 In-Reply-To: References: Message-ID: Just confirming for the list that I'm aware of this and supportive, but am not the dedicated support for this effort. I also haven't reviewed the changes yet, but provided nobody is strongly opposed to taking on a supported platform (without additional releases on python.org), I expect I'll do a big part of the reviewing then. Cheers, Steve On 05Feb.2019 1709, Paul Monson via Python-Dev wrote: > Hi Python Developers, > > I'm Paul Monson, I've spent about 20 years working with embedded > software. Since 2010 I've worked for Microsoft as a developer. > > Our team is working with CPython on Azure IoT Edge devices that run on > x64-based devices. > > We would like to extend that support to Windows running on ARM 32-bit > devices and have a working proof-of-concept. Our team is prepared to > provide support for CPython for Windows on ARM32 for 10 years, and to > provide build bots for ARM32. > > I'd like to propose that the initial sequence of PRs could be: > - Update to OpenSSL 1.1.1 (without anything ARM specific) - ready to go > - Migrate to libffi directly (finish https://github.com/python/cpython/pull/3806) > - Build file updates for OpenSSL ARM and check into cpython-bin-deps > - Build file updates for CPython ARM > - ctypes updates for ARM > - Test module skips for ARM > - Library updates and related test fixes for ARM > > Updating OpenSSL and libffi are independent of ARM support but need to > be done as prerequisites. OpenSSL 1.1.0 doesn't have support for ARM32 > on Windows but OpenSSL 1.1.1 does. > > I have OpenSSL 1.1.1a ready to check in to master with all tests passing > on x86 and x64 on Windows. Since work has already been done on this for > other platforms only very small changes were needed for Windows.
> > I have also integrated and tested the current libffi on Windows x64. > Some additional porting of x86 assembler to MSVC tools will need to be > done. I have a working port of ARM32 assembler for MSVC but it may need > to be brought up to date and cleaned up. > > The last four all need to go in together, but can be reviewed separately. > > We are not planning to support Tk/Tcl on ARM32 because Windows IoT Core > and Windows containers don't support GDI, which is a dependency of Tk/Tcl. > > Since Windows IoT Core and Windows containers don't support the .msi or > .exe installers found on python.org, my team at Microsoft will build the > CPython for Windows ARM32 from the official repo and distribute it. > > Thanks in advance, > > Paul From zachary.ware+pydev at gmail.com Tue Feb 5 22:10:36 2019 From: zachary.ware+pydev at gmail.com (Zachary Ware) Date: Tue, 5 Feb 2019 21:10:36 -0600 Subject: [Python-Dev] CPython on Windows ARM32 In-Reply-To: References: Message-ID: On Tue, Feb 5, 2019 at 7:37 PM Steve Dower wrote: > I also haven't reviewed the changes yet, but provided nobody is strongly > opposed to taking on a supported platform (without additional releases > on python.org), I expect I'll do a big part of the reviewing then. I'm all for the first two changes (especially the second), and if 10 years of pledged corporate support for a new platform is the price we have to pay for them, I'm ok with that :). I expect I'll be automatically added to any issues/PRs that come of this, but I'll keep an eye out for them anyway and give reviews as I'm able. I'll also help get the build bots set up when we're ready for them.
-- Zach From stephane at wirtel.be Wed Feb 6 02:43:39 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Wed, 6 Feb 2019 08:43:39 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: <37A1D61F-AF75-4F66-9279-677505623EDE@python.org> References: <20190204130238.GB29197@xps> <23ED9695-7715-40DE-9DD5-F6A3C482612F@python.org> <20190205102420.GA7969@xps> <37A1D61F-AF75-4F66-9279-677505623EDE@python.org> Message-ID: <20190206074339.GA23428@xps> On 02/05, Barry Warsaw wrote: >On Feb 5, 2019, at 02:24, Stephane Wirtel wrote: >You're welcome! I just pushed an update to add 3.8.0a1 to the set of Pythons (including git head). Do you think there's a better way to publicize these images? I know that Julien Palard wanted a docker image with all the versions of Python, see: https://github.com/docker-library/python/issues/373 For my part, I wanted to propose a docker image with the latest version of Python and try to use it for the detection of bugs in the main Python projects (django, numpy, flask, pandas, etc.) with a CI (example: Gitlab-CI) First issue: pytest uses the ast module of Python, and since 3.8.0a1 the tests do not pass -> new issue for pytest Cheers, Stéphane -- Stéphane Wirtel - https://wirtel.be - @matrixise From tjreedy at udel.edu Wed Feb 6 03:54:52 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 6 Feb 2019 03:54:52 -0500 Subject: [Python-Dev] CPython on Windows ARM32 In-Reply-To: References: Message-ID: On 2/5/2019 10:10 PM, Zachary Ware wrote: > I'm all for the first two changes (especially the second), and if 10 > years of pledged corporate support for a new platform is the price we > have to pay for them, I'm ok with that :). I would expect that the main question should be the density of WinArm32-specific ifdefs in the main code and extensions other than ctypes.
-- Terry Jan Reedy From ncoghlan at gmail.com Wed Feb 6 06:28:49 2019 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 6 Feb 2019 21:28:49 +1000 Subject: [Python-Dev] Asking for reversion In-Reply-To: <20190205190740.atccbo33wprrxonw@python.ca> References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> <20190205190740.atccbo33wprrxonw@python.ca> Message-ID: On Wed, 6 Feb 2019 at 05:17, Neil Schemenauer wrote: > My gut reaction is that we shouldn't revert. However, looking at > the changes, it seems 'multiprocessing.shared_memory' could be an > external extension package that lives on PyPI. It doesn't require > changes to other interpreter internals. It doesn't seem to require > internal Python header files. The desired dependency in this case goes the other way: we'd like this in the standard library so that other standard library components can use it, and it can eventually become part of the "assumed baseline" that the reference Python interpreter offers to projects building on top of it. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From g.rodola at gmail.com Wed Feb 6 06:51:09 2019 From: g.rodola at gmail.com (Giampaolo Rodola') Date: Wed, 6 Feb 2019 12:51:09 +0100 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: Davin, I am not familiar with the multiprocessing module, so take the following with a big grain of salt. I took a look at the PR, then I got an idea of how the multiprocessing module is organized by reading the doc. Here's some food for thought in terms of API reorganization.

SharedMemoryManager, SharedMemoryServer
---------------------------------------
It appears to me these are the 2 main public classes, and after reading the doc it seems they really belong to "managers" (the multiprocessing.managers namespace).
Also:
* SharedMemoryManager is a subclass of multiprocessing.managers.SyncManager
* SharedMemoryServer is a subclass of multiprocessing.managers.Server

shared_memory.py could be renamed to _shared_memory.py, and managers.py could import and expose these 2 classes only.

Support APIs
------------
These are objects which seem to be used in support of the 2 classes above, but apparently are not meant to be public. As such they could simply live in _shared_memory.py and not be exposed:
- shareable_wrap(): used only in SharedMemoryTracker.wrap()
- SharedMemoryTracker: used only by SharedMemoryServer
- SharedMemory, WindowsNamedSharedMemory, PosixSharedMemory: used by shareable_wrap() and SharedMemoryTracker
- ShareableList: it appears this is not used, but by reading here I have a doubt: shouldn't it be register()ed against SharedMemoryManager?

C extension module
------------------
- ExistentialError, Error: it appears these are not used
- PermissionsException, ExistentialException: I concur with Ronald Oussoren's review: you could simply use PyErr_SetFromErrno() and let the original OSError exception bubble up. Same for O_CREAT, O_EXCL, O_CREX, O_TRUNC, which are already exposed in the os module.

I have a couple of other minor nitpicks re. the code but I will comment on the PR.

Compatibility
-------------
I'm not sure if SyncManager and SharedMemoryManager are fully interchangeable, so I think the doc should clarify this. SyncManager handles a certain set of types. It appears SharedMemoryManager is supposedly able to do the same except for lists. Is my assumption correct? Also, multiprocessing.Manager() by default returns a SyncManager. If we get to a point where SyncManager and SharedMemoryManager are able to handle the same types, it'd be good to return SharedMemoryManager as the default, but it's probably safer to leave it for later.
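For reference, the low-level block API that both manager classes build on can be exercised directly, without spawning a manager process (a sketch against the module as it landed in the first alpha; exact names may still change before 3.8 final):

```python
from multiprocessing import shared_memory

# Create a named block, then attach to it by name the way a second
# process would, and observe the write with no copying involved.
shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[0] = 42
    other = shared_memory.SharedMemory(name=shm.name)
    value = other.buf[0]   # reads the same physical memory
    other.close()
finally:
    shm.close()
    shm.unlink()  # free the block once no one needs it
print(value)
```

The same name-based attach is what lets two unrelated processes share the block; the manager classes mainly add lifetime tracking on top.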
Unless they are already there (I don't know), it would be good to have a full set of unit-tests for all the register()ed types and test them against SyncManager and SharedMemoryManager. That would give an idea of the real interchangeability of these 2 classes and would also help writing a comprehensive doc. Hope this helps. -- Giampaolo - http://grodola.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From encukou at gmail.com Wed Feb 6 07:23:42 2019 From: encukou at gmail.com (Petr Viktorin) Date: Wed, 6 Feb 2019 13:23:42 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: <20190206074339.GA23428@xps> References: <20190204130238.GB29197@xps> <23ED9695-7715-40DE-9DD5-F6A3C482612F@python.org> <20190205102420.GA7969@xps> <37A1D61F-AF75-4F66-9279-677505623EDE@python.org> <20190206074339.GA23428@xps> Message-ID: On 2/6/19 8:43 AM, Stephane Wirtel wrote: > On 02/05, Barry Warsaw wrote: >> On Feb 5, 2019, at 02:24, Stephane Wirtel wrote: >> You're welcome! I just pushed an update to add 3.8.0a1 to the set of >> Pythons (including git head). Do you think there's a better way to >> publicize these images? > > I know that Julien Palard wanted a docker image with all the versions of > Python, see: https://github.com/docker-library/python/issues/373 > > For my part, I wanted to propose a docker image with the last version of > Python and try to use it for the detection of bugs in the main python > projects (django, numpy, flask, pandas, etc...) with a CI (example: > Gitlab-CI) > > First issue: pytest uses the ast module of python and since 3.8.0a1, the > tests do not pass -> new issue for pytest FWIW, we're preparing to rebuild all Fedora packages with the 3.8 alphas/betas, so everything's tested when 3.8.0 is released: https://fedoraproject.org/wiki/Changes/Python3.8 That should cover the main Python projects, too.
From doko at ubuntu.com Wed Feb 6 08:26:55 2019 From: doko at ubuntu.com (Matthias Klose) Date: Wed, 6 Feb 2019 14:26:55 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: References: <20190204130238.GB29197@xps> <23ED9695-7715-40DE-9DD5-F6A3C482612F@python.org> <20190205102420.GA7969@xps> <37A1D61F-AF75-4F66-9279-677505623EDE@python.org> <20190206074339.GA23428@xps> Message-ID: On 06.02.19 13:23, Petr Viktorin wrote: > FWIW, we're preparing to rebuild all Fedora packages with the 3.8 alphas/betas, > so everything's tested when 3.8.0 is released: > https://fedoraproject.org/wiki/Changes/Python3.8 > > That should cover the main Python projects, too. well, the real challenge is that all test suites of third party packages still pass on all architectures. From past transitions, I know that this costs the most time and resources. But yes, targeting 3.8 for Ubuntu 20.04 LTS as well. Matthias From encukou at gmail.com Wed Feb 6 08:49:56 2019 From: encukou at gmail.com (Petr Viktorin) Date: Wed, 6 Feb 2019 14:49:56 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: References: <20190204130238.GB29197@xps> <23ED9695-7715-40DE-9DD5-F6A3C482612F@python.org> <20190205102420.GA7969@xps> <37A1D61F-AF75-4F66-9279-677505623EDE@python.org> <20190206074339.GA23428@xps> Message-ID: On 2/6/19 2:26 PM, Matthias Klose wrote: > On 06.02.19 13:23, Petr Viktorin wrote: >> FWIW, we're preparing to rebuild all Fedora packages with the 3.8 alphas/betas, >> so everything's tested when 3.8.0 is released: >> https://fedoraproject.org/wiki/Changes/Python3.8 >> >> That should cover the main Python projects, too. > > well, the real challenge is that all test suites of third party packages still > pass on all architectures. From past transitions, I know that this costs the > most time and resources. Same experience here. In Fedora, tests are generally run as part of the build. 
(Sorry, that was definitely not obvious from my message!) > But yes, targeting 3.8 for Ubuntu 20.04 LTS as well. \o/ From g.rodola at gmail.com Wed Feb 6 11:58:32 2019 From: g.rodola at gmail.com (Giampaolo Rodola') Date: Wed, 6 Feb 2019 17:58:32 +0100 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: On Wed, Feb 6, 2019 at 12:51 PM Giampaolo Rodola' wrote: > > Unless they are already there (I don't know) it would be good to have a > full set of unit-tests for all the register()ed types and test them against > SyncManager and SharedMemoryManager. That would give an idea of the real > interchangeability of these 2 classes and would also help writing a > comprehensive doc. > In order to speed up the alpha2 inclusion process I created a PR which implements what I said above: https://github.com/python/cpython/pull/11772 https://bugs.python.org/issue35917 Apparently SharedMemoryManager works out of the box and presents no differences with SyncManager, but the list type is not using ShareableList. When I attempted to register it with "SharedMemoryManager.register('list', list, ShareableList)" I got the following error:

Traceback (most recent call last):
  File "foo.py", line 137, in test_list
    o = self.manager.list()
  File "/home/giampaolo/svn/cpython/Lib/multiprocessing/managers.py", line 702, in temp
    proxy = proxytype(
TypeError: __init__() got an unexpected keyword argument 'manager'

I am not sure how to fix that (I'll leave it up to Davin). The tests as-is are independent of PR-11772, so I suppose they can be reviewed/checked-in regardless of the changes which will affect shared_memory.py. -- Giampaolo - http://grodola.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From solipsis at pitrou.net Wed Feb 6 12:06:06 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 6 Feb 2019 18:06:06 +0100 Subject: [Python-Dev] About the future of multi-process Python Message-ID: <20190206180606.0dbcd927@fsol> Hello, For the record there are a number of initiatives currently to boost the usefulness and efficiency of multi-process computation in Python. One of them is PEP 574 (zero-copy pickling with out-of-band buffers), which I'm working on. Another is Pierre Glaser's work on allowing pickling of dynamic functions and classes with the C-accelerated _pickle module (rather than the slow pure Python implementation): https://bugs.python.org/issue35900 https://bugs.python.org/issue35911 Another is Davin's work on shared memory managers. There are also emerging standards like Apache Arrow that provide a shared, runtime-agnostic, compute-friendly representation for in-memory tabular data, and third-party frameworks like Dask which are potentially able to work on top of that and expose nice end-user APIs. For maximum synergy between these initiatives and the resulting APIs, it is better if things are done in the open ;-) Regards Antoine. From steve.dower at python.org Wed Feb 6 13:23:38 2019 From: steve.dower at python.org (Steve Dower) Date: Wed, 6 Feb 2019 10:23:38 -0800 Subject: [Python-Dev] About the future of multi-process Python In-Reply-To: <20190206180606.0dbcd927@fsol> References: <20190206180606.0dbcd927@fsol> Message-ID: On 06Feb2019 0906, Antoine Pitrou wrote: > For the record there are a number of initiatives currently to boost the > usefulness and efficiency of multi-process computation in Python. > > One of them is PEP 574 (zero-copy pickling with out-of-band buffers), > which I'm working on.
> > Another is Pierre Glaser's work on allowing pickling of dynamic > functions and classes with the C-accelerated _pickle module (rather than > the slow pure Python implementation): > https://bugs.python.org/issue35900 > https://bugs.python.org/issue35911 > > Another is Davin's work on shared memory managers. > > There are also emerging standards like Apache Arrow that provide a > shared, runtime-agnostic, compute-friendly representation for in-memory > tabular data, and third-party frameworks like Dask which are > potentially able to work on top of that and expose nice end-user APIs. > > For maximum synergy between these initiatives and the resulting APIs, > it is better if things are done in the open ;-) Hopefully our steering council can determine (or delegate the determination of) the direction we should go here so we can all be pulling in the same direction :) That said, there are certainly a number of interacting components and not a lot of information about how they interact and overlap. A good start would be to identify the likely overlap of this work to see where they can build upon each other rather than competing, as well as estimating the long-term burden of standardising. Cheers, Steve From steve.dower at python.org Wed Feb 6 14:15:53 2019 From: steve.dower at python.org (Steve Dower) Date: Wed, 6 Feb 2019 11:15:53 -0800 Subject: [Python-Dev] CPython on Windows ARM32 In-Reply-To: References: Message-ID: On 06Feb2019 0054, Terry Reedy wrote: > On 2/5/2019 10:10 PM, Zachary Ware wrote: > >> I'm all for the first two changes (especially the second), and if 10 >> years of pledged corporate support for a new platform is the price we >> have to pay for them, I'm ok with that :). > > I would expect that the main question should be the density of > WinArm32-specific ifdefs in the main code and extensions other than ctypes. > Agreed. 
I've asked Paul to post the "final" PR early, even though it will take some refactoring as other PRs go in, so that we can see the broader picture now. There's also an option to create an ARM-specific pyconfig.h if necessary, but I don't believe it will be. I created https://bugs.python.org/issue35920 for this work. Cheers, Steve From Paul.Monson at microsoft.com Wed Feb 6 14:50:45 2019 From: Paul.Monson at microsoft.com (Paul Monson) Date: Wed, 6 Feb 2019 19:50:45 +0000 Subject: [Python-Dev] CPython on Windows ARM32 In-Reply-To: References: Message-ID: The PR is here: https://github.com/python/cpython/pull/11774 Searching _M_ARM I see these #ifdef changes outside of ctypes:
* Include\pyport.h - adds on to existing MSVC ifdef
* Include\pythonrun.h - adds on to existing MSVC ifdef
* Modules\_decimal\libmpdec\bits.h
* Python\ceval.c - workaround for a compiler bug; could be replaced with #pragma optimize around the entire function.
-----Original Message----- From: Steve Dower Sent: Wednesday, February 6, 2019 11:16 AM To: Terry Reedy ; python-dev at python.org; Paul Monson Subject: Re: [Python-Dev] CPython on Windows ARM32 On 06Feb2019 0054, Terry Reedy wrote: > On 2/5/2019 10:10 PM, Zachary Ware wrote: > >> I'm all for the first two changes (especially the second), and if 10 >> years of pledged corporate support for a new platform is the price we >> have to pay for them, I'm ok with that :). > > I would expect that the main question should be the density of > WinArm32-specific ifdefs in the main code and extensions other than ctypes. > Agreed. I've asked Paul to post the "final" PR early, even though it will take some refactoring as other PRs go in, so that we can see the broader picture now. There's also an option to create an ARM-specific pyconfig.h if necessary, but I don't believe it will be.
I created https://bugs.python.org/issue35920 for this work. Cheers, Steve From christian at python.org Wed Feb 6 17:23:50 2019 From: christian at python.org (Christian Heimes) Date: Wed, 6 Feb 2019 23:23:50 +0100 Subject: [Python-Dev] CPython on Windows ARM32 In-Reply-To: References: Message-ID: <436fca54-dae6-e5d6-e5e0-42614588c717@python.org> On 06/02/2019 02.09, Paul Monson via Python-Dev wrote: > Updating OpenSSL and libffi are independent of ARM support but need to > be done as prerequisites. OpenSSL 1.1.0 doesn't have support for ARM32 > on Windows but OpenSSL 1.1.1 does. > > I have OpenSSL 1.1.1a ready to check in to master with all tests passing > on x86 and x64 on Windows. Since work has already been done on this for > other platforms only very small changes were needed for Windows. +1 for OpenSSL 1.1.1 from the maintainer of the ssl module. The new version also introduces TLS 1.3 support. Linux distributions have been switching to OpenSSL 1.1.1 for a while. If it's good enough for RHEL 8, then it's good enough for us, too. Do you want to update Python 3.8 (master) only or also 3.7? I'm not strictly against updating 3.7. However we have traditionally kept the OpenSSL version of each branch stable. 1.1.1 comes with new features, stricter security settings and some ciphers removed.
Christian From steve.dower at python.org Wed Feb 6 18:28:05 2019 From: steve.dower at python.org (Steve Dower) Date: Wed, 6 Feb 2019 15:28:05 -0800 Subject: [Python-Dev] CPython on Windows ARM32 In-Reply-To: <436fca54-dae6-e5d6-e5e0-42614588c717@python.org> References: <436fca54-dae6-e5d6-e5e0-42614588c717@python.org> Message-ID: <1c36ef6d-9e54-f78c-9672-7e7beffb2301@python.org> On 06Feb2019 1423, Christian Heimes wrote: > Do you want to update Python 3.8 (master) only or also 3.7? I'm not > strictly against updating 3.7. However we have traditionally kept the > OpenSSL version of each branch stable. 1.1.1 comes with new features, > stricter security settings and some ciphers removed. I would prefer to stay on 1.1.0 for 3.7, but it's up to the release manager. Cheers, Steve From nad at python.org Wed Feb 6 18:41:03 2019 From: nad at python.org (Ned Deily) Date: Wed, 6 Feb 2019 18:41:03 -0500 Subject: [Python-Dev] CPython on Windows ARM32 In-Reply-To: <1c36ef6d-9e54-f78c-9672-7e7beffb2301@python.org> References: <436fca54-dae6-e5d6-e5e0-42614588c717@python.org> <1c36ef6d-9e54-f78c-9672-7e7beffb2301@python.org> Message-ID: On Feb 6, 2019, at 18:28, Steve Dower wrote: > On 06Feb2019 1423, Christian Heimes wrote: >> Do you want to update Python 3.8 (master) only or also 3.7? I'm not >> strictly against updating 3.7. However we have traditionally kept the >> OpenSSL version of each branch stable. 1.1.1 comes with new features, >> stricter security settings and some ciphers removed. > I would prefer to stay on 1.1.0 for 3.7, but it's up to the release manager. Me, too. I am concerned that 1.1.1 support has not had a lot of exposure yet. Even the "What's New" document for 3.7 states: "The ssl module has preliminary and experimental support for TLS 1.3 and OpenSSL 1.1.1. " I am OK with fixes for 1.1.1 support but I think it would be premature to change the Windows and/or macOS installers from 1.1.0 to 1.1.1. 
-- Ned Deily nad at python.org -- [] From Paul.Monson at microsoft.com Wed Feb 6 21:58:10 2019 From: Paul.Monson at microsoft.com (Paul Monson) Date: Thu, 7 Feb 2019 02:58:10 +0000 Subject: [Python-Dev] CPython on Windows ARM32 In-Reply-To: <1c36ef6d-9e54-f78c-9672-7e7beffb2301@python.org> References: <436fca54-dae6-e5d6-e5e0-42614588c717@python.org> <1c36ef6d-9e54-f78c-9672-7e7beffb2301@python.org> Message-ID: Here are the current OpenSSL 1.1.1a changes I have, in a separate PR. I did some additional testing and have some test failures to investigate tomorrow:
- test_parse_cert_CVE_2019_5010: only fails win32 debug (access violation); works for amd64 debug/release and win32 release
- test_load_default_certs_env_windows: fails on win32 and amd64 retail; skipped on debug
-----Original Message----- From: Steve Dower Sent: Wednesday, February 6, 2019 3:28 PM To: Christian Heimes ; Paul Monson ; python-dev at python.org; Ned Deily Subject: Re: [Python-Dev] CPython on Windows ARM32 On 06Feb2019 1423, Christian Heimes wrote: > Do you want to update Python 3.8 (master) only or also 3.7? I'm not > strictly against updating 3.7. However we have traditionally kept the > OpenSSL version of each branch stable. 1.1.1 comes with new features, > stricter security settings and some ciphers removed. I would prefer to stay on 1.1.0 for 3.7, but it's up to the release manager.
Cheers, Steve From stephane at wirtel.be Thu Feb 7 11:16:22 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Thu, 7 Feb 2019 17:16:22 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: References: <20190204130238.GB29197@xps> <23ED9695-7715-40DE-9DD5-F6A3C482612F@python.org> <20190205102420.GA7969@xps> <37A1D61F-AF75-4F66-9279-677505623EDE@python.org> <20190206074339.GA23428@xps> Message-ID: <20190207161622.GA29057@xps> On 02/06, Petr Viktorin wrote: >On 2/6/19 8:43 AM, Stephane Wirtel wrote: >>On 02/05, Barry Warsaw wrote: >>>On Feb 5, 2019, at 02:24, Stephane Wirtel wrote: >>>You're welcome! I just pushed an update to add 3.8.0a1 to the set >>>of Pythons (including git head). Do you think there's a better >>>way to publicize these images? >> >>I know that Julien Palard wanted a docker image with all the versions of >>Python, see: https://github.com/docker-library/python/issues/373 >> >>For my part, I wanted to propose a docker image with the last version of >>Python and try to use it for the detection of bugs in the main python >>projects (django, numpy, flask, pandas, etc...) with a CI (example: >>Gitlab-CI) >> >>First issue: pytest uses the ast module of python and since 3.8.0a1, the >>tests do not pass -> new issue for pytest >FWIW, we're preparing to rebuild all Fedora packages with the 3.8 >alphas/betas, so everything's tested when 3.8.0 is released: >https://fedoraproject.org/wiki/Changes/Python3.8 Hi Petr, Will you execute the tests of these packages? I had a small discussion with Julien Palard, and I wanted to create a small CI where I will execute the tests of the updated packages from the RSS feed of PyPI.
The first one was pytest -- Stéphane Wirtel - https://wirtel.be - @matrixise From stephane at wirtel.be Thu Feb 7 11:17:21 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Thu, 7 Feb 2019 17:17:21 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: References: <20190204130238.GB29197@xps> <23ED9695-7715-40DE-9DD5-F6A3C482612F@python.org> <20190205102420.GA7969@xps> <37A1D61F-AF75-4F66-9279-677505623EDE@python.org> <20190206074339.GA23428@xps> Message-ID: <20190207161721.GB29057@xps> Sorry Petr, I didn't see this message with the test suites. -- Stéphane Wirtel - https://wirtel.be - @matrixise From nas-python at arctrix.com Thu Feb 7 13:19:14 2019 From: nas-python at arctrix.com (Neil Schemenauer) Date: Thu, 7 Feb 2019 12:19:14 -0600 Subject: [Python-Dev] About the future of multi-process Python In-Reply-To: <20190206180606.0dbcd927@fsol> References: <20190206180606.0dbcd927@fsol> Message-ID: <20190207181914.veoz32gymxrdj2ki@python.ca> On 2019-02-06, Antoine Pitrou wrote: > For maximum synergy between these initiatives and the resulting APIs, > it is better if things are done in the open ;-) Hi Antoine, It would be good if we could have some feedback from alternative Python implementations as well. I suspect they might want to support these APIs. Doing zero-copy or sharing memory areas could be a challenge with a compacting GC, for example. In that case, having something in the API that tells the VM that a certain chunk of memory cannot move would be helpful. 
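As a point of reference, CPython's existing buffer protocol (PEP 3118) already expresses a weaker form of this "cannot move" contract: while a buffer is exported, the owning object must keep its memory fixed. A minimal sketch of that behavior (the specific strings here are illustrative, not from the discussion):

```python
buf = bytearray(b"hello")
view = memoryview(buf)   # export the underlying buffer (PEP 3118)

# While the export is alive, CPython refuses any operation that would
# move or resize the memory backing the bytearray.
try:
    buf.extend(b" world")
except BufferError as exc:
    print("resize refused:", exc)

view.release()           # drop the export...
buf.extend(b" world")    # ...and resizing works again
print(buf)               # bytearray(b'hello world')
```

A compacting-GC implementation would have to honor the same invariant, which is why the buffer API is a plausible foundation for the shared-memory work.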
Regards, Neil From christian at python.org Fri Feb 8 05:21:03 2019 From: christian at python.org (Christian Heimes) Date: Fri, 8 Feb 2019 11:21:03 +0100 Subject: [Python-Dev] CPython on Windows ARM32 In-Reply-To: References: <436fca54-dae6-e5d6-e5e0-42614588c717@python.org> <1c36ef6d-9e54-f78c-9672-7e7beffb2301@python.org> Message-ID: <3b7f78b0-89a6-0ce8-5e1e-3c606faf1ce1@python.org> On 07/02/2019 00.41, Ned Deily wrote: > On Feb 6, 2019, at 18:28, Steve Dower wrote: >> On 06Feb2019 1423, Christian Heimes wrote: >>> Do you want to update Python 3.8 (master) only or also 3.7? I'm not >>> strictly against updating 3.7. However we have traditionally kept the >>> OpenSSL version of each branch stable. 1.1.1 comes with new features, >>> stricter security settings and some ciphers removed. >> I would prefer to stay on 1.1.0 for 3.7, but it's up to the release manager. > > Me, too. I am concerned that 1.1.1 support has not had a lot of exposure yet. Even the "What's New" document for 3.7 states: "The ssl module has preliminary and experimental support for TLS 1.3 and OpenSSL 1.1.1. " That's from the alpha and beta phase of OpenSSL. Support for 1.1.1 is as stable as it can get. > I am OK with fixes for 1.1.1 support but I think it would be premature to change the Windows and/or macOS installers from 1.1.0 to 1.1.1. 1.1.1a is a solid release. Debian testing, Fedora, and RHEL 8 beta have been shipping and testing 1.1.1 for a while. In my professional opinion it's less about stability but more about backwards compatibility issues. TLS 1.3 behaves slightly differently and 1.1.1 has dropped some weak ciphers. 
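One way for applications to cope with the behavioral differences between 1.1.0 and 1.1.1 builds is to feature-test the ssl module rather than parse version strings. A minimal sketch (assumes Python 3.7+, where `ssl.HAS_TLSv1_3` exists; not taken from the original message):

```python
import ssl

# Report what this interpreter was linked against.
print(ssl.OPENSSL_VERSION)   # e.g. "OpenSSL 1.1.0j ..." or "OpenSSL 1.1.1a ..."

# Feature-test instead of comparing version strings: HAS_TLSv1_3 is
# False on 1.1.0 builds and normally True on 1.1.1 builds.
if ssl.HAS_TLSv1_3:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    print("TLS 1.3 available; negotiable up to:", ctx.maximum_version)
else:
    print("TLS 1.3 not available with this OpenSSL build")
```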
Christian From encukou at gmail.com Fri Feb 8 06:21:36 2019 From: encukou at gmail.com (Petr Viktorin) Date: Fri, 8 Feb 2019 12:21:36 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: <20190207161622.GA29057@xps> References: <20190204130238.GB29197@xps> <23ED9695-7715-40DE-9DD5-F6A3C482612F@python.org> <20190205102420.GA7969@xps> <37A1D61F-AF75-4F66-9279-677505623EDE@python.org> <20190206074339.GA23428@xps> <20190207161622.GA29057@xps> Message-ID: On 2/7/19 5:16 PM, Stephane Wirtel wrote: > On 02/06, Petr Viktorin wrote: >> On 2/6/19 8:43 AM, Stephane Wirtel wrote: >>> On 02/05, Barry Warsaw wrote: >>>> On Feb 5, 2019, at 02:24, Stephane Wirtel wrote: >>>> You're welcome! I just pushed an update to add 3.8.0a1 to the set >>>> of Python's (including git head). Do you think there's a better way >>>> to publicize these images? >>> >>> I know that Julien Palard wanted a docker image with all the versions of >>> Python, see: https://github.com/docker-library/python/issues/373 >>> >>> For my part, I wanted to propose a docker image with the last version of >>> Python and try to use it for the detection of bugs in the main python >>> projects (django, numpy, flask, pandas, etc...) with a CI (example: >>> Gitlab-CI) >>> >>> First issue: pytest uses the ast module of python and since 3.8.0a1, the >>> tests do not pass -> new issue for pytest >> >> FWIW, we're preparing to rebuild all Fedora packages with the 3.8 >> alphas/betas, so everything's tested when 3.8.0 is released: >> https://fedoraproject.org/wiki/Changes/Python3.8 > Hi Petr, > > Will you execute the tests of these packages? It's best practice to include the test suite in Fedora packages. Sometimes it's not -- e.g. if the tests need network access, or all extra testing dependencies aren't available, or most frequently, the maintainer is just lazy. If you have a specific package in mind, I can check. Currently django & numpy get tested; flask & pandas don't. 
For 3.7, we did the rebuild much later in the cycle. The builds themselves caught async/await SyntaxErrors, and tests caught a lot of StopIteration leaking. At the time it felt like no one really knew what porting to 3.7.0 would look like -- similar to how people didn't think "unicode" would be a big problem in py3k. That's what we're trying to avoid for 3.8.0. > I have a small discussion with Julien Palard and I wanted to create a > small CI where I will execute the tests of the updated packages from > the RSS feed of PyPI. > > The first one was pytest That sounds exciting! Something like that is on my "interesting possible projects" list, but alas, not at the top :( From j.castillo.2nd at gmail.com Fri Feb 8 11:13:34 2019 From: j.castillo.2nd at gmail.com (Javier Castillo II) Date: Fri, 8 Feb 2019 10:13:34 -0600 Subject: [Python-Dev] find_library and issue21622 Message-ID: Ran into some issues trying to deploy in an alpine container, where I wound up coming across the issue. I found a solution ( not sure if an ideal solution can exist ) that walks the paths in the environment variable LD_LIBRARY_PATH. This was submitted in github PR 10460, but not sure if there were any technical issues with this impacting its review. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Fri Feb 8 12:43:34 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 8 Feb 2019 18:43:34 +0100 Subject: [Python-Dev] About the future of multi-process Python In-Reply-To: <20190207181914.veoz32gymxrdj2ki@python.ca> References: <20190206180606.0dbcd927@fsol> <20190207181914.veoz32gymxrdj2ki@python.ca> Message-ID: <20190208184334.2ba482f4@fsol> On Thu, 7 Feb 2019 12:19:14 -0600 Neil Schemenauer wrote: > On 2019-02-06, Antoine Pitrou wrote: > > For maximum synergy between these initiatives and the resulting APIs, > > it is better if things are done in the open ;-) > > Hi Antoine, > > It would be good if we could have some feedback from alternative > Python implementations as well. I suspect they might want to > support these APIs. Doing zero-copy or sharing memory areas could > be a challenge with a compacting GC, for example. In that case, > having something in the API that tells the VM that a certain chunk > of memory cannot move would be helpful. Both PEP 574 and Davin's shared-memory work build on top of the PEP 3118 buffer API. So I would expect that any Python implementation with support for the buffer API to have the required infrastructure to also support those initiatives. The details may deserve to be clarified, though. I'll try to send an e-mail and ask for feedback. Regards Antoine. From status at bugs.python.org Fri Feb 8 13:07:53 2019 From: status at bugs.python.org (Python tracker) Date: Fri, 08 Feb 2019 18:07:53 +0000 Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> ACTIVITY SUMMARY (2019-02-01 - 2019-02-08) Python tracker at https://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 6998 (+13) closed 40696 (+47) total 47694 (+60) Open issues with patches: 2783 Issues opened (43) ================== #35885: configparser: indentation https://bugs.python.org/issue35885 opened by mrs.red #35886: Move PyInterpreterState into Include/internal/pycore_pystate.h https://bugs.python.org/issue35886 opened by eric.snow #35887: Doc string for updating the frozen version of importlib in _bo https://bugs.python.org/issue35887 opened by nnja #35888: ssl module - could not get the server certificate w/o complete https://bugs.python.org/issue35888 opened by Lee Eric #35889: sqlite3.Row doesn't have useful repr https://bugs.python.org/issue35889 opened by vlad #35891: urllib.parse.splituser has no suitable replacement https://bugs.python.org/issue35891 opened by jaraco #35892: Fix awkwardness of statistics.mode() for multimodal datasets https://bugs.python.org/issue35892 opened by rhettinger #35893: distutils fails to build extension on windows when it is a pac https://bugs.python.org/issue35893 opened by ronaldoussoren #35898: The TARGETDIR variable must be provided when invoking this ins https://bugs.python.org/issue35898 opened by Thomas Trummer #35899: '_is_sunder' function in 'enum' module fails on empty string https://bugs.python.org/issue35899 opened by Maxpxt #35900: Add pickler hook for the user to customize the serialization o https://bugs.python.org/issue35900 opened by pierreglaser #35901: json.dumps infinite recurssion https://bugs.python.org/issue35901 opened by MultiSosnooley #35903: Build of posixshmem.c should probe for required OS functions https://bugs.python.org/issue35903 opened by nascheme #35904: Add statistics.fmean(seq) https://bugs.python.org/issue35904 opened by rhettinger #35905: macOS build docs need refresh (2019) https://bugs.python.org/issue35905 opened by jaraco #35906: Header Injection in urllib https://bugs.python.org/issue35906 opened by push0ebp #35907: Unnecessary URL scheme exists to allow 
file:// reading file i https://bugs.python.org/issue35907 opened by push0ebp #35912: _testembed.c fails to compile when using --with-cxx-main in th https://bugs.python.org/issue35912 opened by pablogsal #35913: asyncore: allow handling of half closed connections https://bugs.python.org/issue35913 opened by Isaac Boukris #35915: re.search extreme slowness (looks like hang/livelock), searchi https://bugs.python.org/issue35915 opened by benspiller #35918: multiprocessing's SyncManager.dict.has_key() method is broken https://bugs.python.org/issue35918 opened by giampaolo.rodola #35919: multiprocessing: shared manager Pool fails with AttributeError https://bugs.python.org/issue35919 opened by giampaolo.rodola #35920: Windows 10 ARM32 platform support https://bugs.python.org/issue35920 opened by steve.dower #35921: Use ccache by default https://bugs.python.org/issue35921 opened by pitrou #35922: robotparser crawl_delay and request_rate do not work with no m https://bugs.python.org/issue35922 opened by joseph_myers #35923: Update the BuiltinImporter in importlib to use loader._ORIGIN https://bugs.python.org/issue35923 opened by nnja #35924: curses segfault resizing window https://bugs.python.org/issue35924 opened by Josiah Ulfers #35925: test_httplib test_nntplib test_ssl fail on ARMv7 Debian buster https://bugs.python.org/issue35925 opened by pablogsal #35926: Need openssl 1.1.1 support on Windows for ARM and ARM64 https://bugs.python.org/issue35926 opened by Paul Monson #35927: Intra-package References Documentation Incomplete https://bugs.python.org/issue35927 opened by ADataGman #35928: socket makefile read-write discards received data https://bugs.python.org/issue35928 opened by pravn #35930: Raising an exception raised in a "future" instance will create https://bugs.python.org/issue35930 opened by jcea #35931: pdb: "debug print(" crashes with SyntaxError https://bugs.python.org/issue35931 opened by blueyed #35933: python doc does not say that the state kwarg in 
Pickler.save_r https://bugs.python.org/issue35933 opened by pierreglaser #35934: Add socket.bind_socket() utility function https://bugs.python.org/issue35934 opened by giampaolo.rodola #35935: threading.Event().wait() not interruptable with Ctrl-C on Wind https://bugs.python.org/issue35935 opened by Chris Billington #35936: Give modulefinder some much-needed updates. https://bugs.python.org/issue35936 opened by brandtbucher #35937: Add instancemethod to types.py https://bugs.python.org/issue35937 opened by bup #35939: Remove urllib.parse._splittype from mimetypes.guess_type https://bugs.python.org/issue35939 opened by corona10 #35940: multiprocessing manager tests fail in the Refleaks buildbots https://bugs.python.org/issue35940 opened by pablogsal #35941: ssl.enum_certificates() regression https://bugs.python.org/issue35941 opened by schlenk #35942: posixmodule.c:path_converter() returns an invalid exception me https://bugs.python.org/issue35942 opened by lukasz.langa #35943: PyImport_GetModule() can return partially-initialized module https://bugs.python.org/issue35943 opened by pitrou Most recent 15 issues with no replies (15) ========================================== #35942: posixmodule.c:path_converter() returns an invalid exception me https://bugs.python.org/issue35942 #35940: multiprocessing manager tests fail in the Refleaks buildbots https://bugs.python.org/issue35940 #35939: Remove urllib.parse._splittype from mimetypes.guess_type https://bugs.python.org/issue35939 #35936: Give modulefinder some much-needed updates. 
https://bugs.python.org/issue35936 #35934: Add socket.bind_socket() utility function https://bugs.python.org/issue35934 #35931: pdb: "debug print(" crashes with SyntaxError https://bugs.python.org/issue35931 #35930: Raising an exception raised in a "future" instance will create https://bugs.python.org/issue35930 #35928: socket makefile read-write discards received data https://bugs.python.org/issue35928 #35927: Intra-package References Documentation Incomplete https://bugs.python.org/issue35927 #35926: Need openssl 1.1.1 support on Windows for ARM and ARM64 https://bugs.python.org/issue35926 #35924: curses segfault resizing window https://bugs.python.org/issue35924 #35920: Windows 10 ARM32 platform support https://bugs.python.org/issue35920 #35919: multiprocessing: shared manager Pool fails with AttributeError https://bugs.python.org/issue35919 #35918: multiprocessing's SyncManager.dict.has_key() method is broken https://bugs.python.org/issue35918 #35912: _testembed.c fails to compile when using --with-cxx-main in th https://bugs.python.org/issue35912 Most recent 15 issues waiting for review (15) ============================================= #35936: Give modulefinder some much-needed updates. 
https://bugs.python.org/issue35936 #35934: Add socket.bind_socket() utility function https://bugs.python.org/issue35934 #35931: pdb: "debug print(" crashes with SyntaxError https://bugs.python.org/issue35931 #35926: Need openssl 1.1.1 support on Windows for ARM and ARM64 https://bugs.python.org/issue35926 #35922: robotparser crawl_delay and request_rate do not work with no m https://bugs.python.org/issue35922 #35921: Use ccache by default https://bugs.python.org/issue35921 #35920: Windows 10 ARM32 platform support https://bugs.python.org/issue35920 #35913: asyncore: allow handling of half closed connections https://bugs.python.org/issue35913 #35906: Header Injection in urllib https://bugs.python.org/issue35906 #35903: Build of posixshmem.c should probe for required OS functions https://bugs.python.org/issue35903 #35900: Add pickler hook for the user to customize the serialization o https://bugs.python.org/issue35900 #35887: Doc string for updating the frozen version of importlib in _bo https://bugs.python.org/issue35887 #35886: Move PyInterpreterState into Include/internal/pycore_pystate.h https://bugs.python.org/issue35886 #35878: ast.c: end_col_offset may be used uninitialized in this functi https://bugs.python.org/issue35878 #35876: test_start_new_session for posix_spawnp fails https://bugs.python.org/issue35876 Top 10 most discussed issues (10) ================================= #35813: shared memory construct to avoid need for serialization betwee https://bugs.python.org/issue35813 19 msgs #35904: Add statistics.fmean(seq) https://bugs.python.org/issue35904 11 msgs #35913: asyncore: allow handling of half closed connections https://bugs.python.org/issue35913 11 msgs #35921: Use ccache by default https://bugs.python.org/issue35921 9 msgs #35706: Make it easier to use a venv with an embedded Python interpret https://bugs.python.org/issue35706 7 msgs #35893: distutils fails to build extension on windows when it is a pac https://bugs.python.org/issue35893 7 msgs 
#30670: pprint for dict in sorted order or insert order? https://bugs.python.org/issue30670 6 msgs #35907: Unnecessary URL scheme exists to allow file:// reading file i https://bugs.python.org/issue35907 6 msgs #34572: C unpickling bypasses import thread safety https://bugs.python.org/issue34572 5 msgs #35933: python doc does not say that the state kwarg in Pickler.save_r https://bugs.python.org/issue35933 5 msgs Issues closed (46) ================== #20001: pathlib inheritance diagram too large https://bugs.python.org/issue20001 closed by inada.naoki #22474: No explanation of how a task gets destroyed in asyncio 'task' https://bugs.python.org/issue22474 closed by cheryl.sabella #24087: Documentation doesn't explain the term "coroutine" (PEP 342) https://bugs.python.org/issue24087 closed by paul.moore #24209: Allow IPv6 bind in http.server https://bugs.python.org/issue24209 closed by jaraco #26256: Fast decimalisation and conversion to other bases https://bugs.python.org/issue26256 closed by skrah #27344: zipfile *does* support utf-8 filenames https://bugs.python.org/issue27344 closed by cheryl.sabella #29734: os.stat handle leak https://bugs.python.org/issue29734 closed by steve.dower #30130: array.array is not an instance of collections.MutableSequence https://bugs.python.org/issue30130 closed by cheryl.sabella #32560: [EASY C] inherit the py launcher's STARTUPINFO https://bugs.python.org/issue32560 closed by steve.dower #33316: Windows: PyThread_release_lock always fails https://bugs.python.org/issue33316 closed by steve.dower #33895: LoadLibraryExW called with GIL held can cause deadlock https://bugs.python.org/issue33895 closed by steve.dower #34691: _contextvars missing in xmaster branch Windows build? 
https://bugs.python.org/issue34691 closed by steve.dower #35299: LGHT0091: Duplicate symbol 'File:include_pyconfig.h' found https://bugs.python.org/issue35299 closed by steve.dower #35358: Document that importlib.import_module accepts names that are n https://bugs.python.org/issue35358 closed by matrixise #35485: tkinter windows turn black while resized using Tk 8.6.9.1 on m https://bugs.python.org/issue35485 closed by ned.deily #35605: backported patch requires new sphinx, minimum sphinx version w https://bugs.python.org/issue35605 closed by ned.deily #35606: Add prod() function to the math module https://bugs.python.org/issue35606 closed by rhettinger #35615: "RuntimeError: Dictionary changed size during iteration" when https://bugs.python.org/issue35615 closed by pitrou #35642: _asynciomodule.c compiled in both pythoncore.vcxproj and _asyn https://bugs.python.org/issue35642 closed by steve.dower #35686: BufferError with memory.release() https://bugs.python.org/issue35686 closed by skrah #35692: pathlib.Path.exists() on non-existent drive raises WinError in https://bugs.python.org/issue35692 closed by steve.dower #35758: Disable x87 control word for MSVC ARM compiler https://bugs.python.org/issue35758 closed by Minmin.Gong #35851: Make search result in online docs keep their position when sea https://bugs.python.org/issue35851 closed by xtreak #35861: test_named_expressions raises SyntaxWarning https://bugs.python.org/issue35861 closed by emilyemorehouse #35862: Change the environment for a new process https://bugs.python.org/issue35862 closed by steve.dower #35872: Creating venv from venv no longer works in 3.7.2 https://bugs.python.org/issue35872 closed by steve.dower #35873: Controlling venv from venv no longer works in 3.7.2 https://bugs.python.org/issue35873 closed by steve.dower #35877: parenthesis is mandatory for named expressions in while statem https://bugs.python.org/issue35877 closed by emilyemorehouse #35879: test_type_comments leaks references 
https://bugs.python.org/issue35879 closed by gvanrossum #35884: Add variable access benchmark to Tools/Scripts https://bugs.python.org/issue35884 closed by rhettinger #35890: Cleanup some non-consistent API callings https://bugs.python.org/issue35890 closed by steve.dower #35894: Apparent regression in 3.8-dev: 'TypeError: required field "ty https://bugs.python.org/issue35894 closed by gvanrossum #35895: the test suite of pytest failed with 3.8.0a1 https://bugs.python.org/issue35895 closed by gvanrossum #35896: sysconfig.get_platform returns wrong value when Python 32b is https://bugs.python.org/issue35896 closed by steve.dower #35897: Support list as argument to .startswith() https://bugs.python.org/issue35897 closed by rhettinger #35902: Forking from background thread https://bugs.python.org/issue35902 closed by pitrou #35908: build with building extension modules as builtins is broken in https://bugs.python.org/issue35908 closed by doko #35909: Zip Slip Vulnerability https://bugs.python.org/issue35909 closed by christian.heimes #35910: Curious problem with my choice of variables https://bugs.python.org/issue35910 closed by matrixise #35911: add a cell construtor, and expose the cell type in Lib/types.p https://bugs.python.org/issue35911 closed by pitrou #35914: [2.7] PyStructSequence objects not behaving like nametuple https://bugs.python.org/issue35914 closed by eric.snow #35916: 3.6.5 try/except/else/finally block executes code with typos, https://bugs.python.org/issue35916 closed by SilentGhost #35917: multiprocessing: provide unit-tests for manager classes and sh https://bugs.python.org/issue35917 closed by pitrou #35929: Spam https://bugs.python.org/issue35929 closed by Mariatta #35932: Interpreter gets stuck while applying a regex pattern https://bugs.python.org/issue35932 closed by tim.peters #35938: crash of METADATA file cannot be fixed by reinstall of python https://bugs.python.org/issue35938 closed by steven.daprano From aixtools at felt.demon.nl Sat 
Feb 9 09:45:58 2019 From: aixtools at felt.demon.nl (Michael Felt (aixtools)) Date: Sat, 9 Feb 2019 14:45:58 +0000 Subject: [Python-Dev] Python 3.8 alpha and AIX buildbot "support" moving forward. Message-ID: Congratulations on the official beginning of the alpha phase of Python3-3.8. I hope there will be time to consider three of my PRs so that during this phase at least one of the AIX buildbots (not mine, I fear) passes all the tests and can finally serve its real purpose: signaling when a change toggles its status from PASS to FAIL. Next week I hope to have some time to dig deeper and try to establish why my bot fails additional tests (in the multiprocessing module(s)), as well as why AIX fails test_bdb when utf8 support is (additionally) installed but passes when utf8 support is not installed. While I am also concerned about AIX status, I also hope that my inspection is helping to improve Python. Sincerely, Michael Sent from my iPhone -------------- next part -------------- An HTML attachment was scrubbed... URL: From vstinner at redhat.com Mon Feb 11 06:18:45 2019 From: vstinner at redhat.com (Victor Stinner) Date: Mon, 11 Feb 2019 12:18:45 +0100 Subject: [Python-Dev] find_library and issue21622 In-Reply-To: References: Message-ID: Hi, Would you mind to elaborate "some issues trying to deploy in an alpine container"? What are you trying to do? What is the error message? Some more context: * https://github.com/python/cpython/pull/10460 "bpo-21622: ctypes.util find_library walk LD_LIBRARY_PATH" * https://bugs.python.org/issue21622 reported in 2014: "ctypes.util incorrectly fails for libraries without DT_SONAME" Victor On Fri, 8 Feb 2019 at 17:27, Javier Castillo II wrote: > > Ran into some issues trying to deploy in an alpine container, where I wound up coming across the issue. I found a solution ( not sure if an ideal solution can exist ) that walks the paths in the environment variable LD_LIBRARY_PATH. 
This was submitted in github PR 10460, but not sure if there were any technical issues with this impacting its review. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com -- Night gathers, and now my watch begins. It shall not end until my death. From j.castillo.2nd at gmail.com Mon Feb 11 15:50:15 2019 From: j.castillo.2nd at gmail.com (Javier Castillo II) Date: Mon, 11 Feb 2019 14:50:15 -0600 Subject: [Python-Dev] find_library and issue21622 In-Reply-To: References: Message-ID: This is the overarching issue: https://github.com/docker-library/python/issues/111 In short, libraries that rely on find_library to bind to libs failed to start as find_library returned nothing. In particular, a build of python and saltstack in a single container, and a few other packages when using the alpine base. On Mon, Feb 11, 2019 at 5:18 AM Victor Stinner wrote: > Hi, > > Would you mind to elaborate "some issues trying to deploy in an alpine > container"? What are you trying to do? What is the error message? > > Some more context: > > * https://github.com/python/cpython/pull/10460 "bpo-21622: ctypes.util > find_library walk LD_LIBRARY_PATH" > * https://bugs.python.org/issue21622 reported in 2014: "ctypes.util > incorrectly fails for libraries without DT_SONAME" > > Victor > > On Fri, 8 Feb 2019 at 17:27, Javier Castillo II > wrote: > > > > Ran into some issues trying to deploy in an alpine container, where I > wound up coming across the issue. I found a solution ( not sure if an ideal > solution can exist ) that walks the paths in the environment variable > LD_LIBRARY_PATH. 
> > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com > > > > -- > Night gathers, and now my watch begins. It shall not end until my death. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.Demeyer at UGent.be Mon Feb 11 17:52:37 2019 From: J.Demeyer at UGent.be (Jeroen Demeyer) Date: Mon, 11 Feb 2019 23:52:37 +0100 Subject: [Python-Dev] Reviewing PEP 580 Message-ID: <5C61FCB5.80804@UGent.be> Hello, I would like to propose to the new steering council to review PEP 580. Is there already a process for that? Thanks, Jeroen. From turnbull.stephen.fw at u.tsukuba.ac.jp Mon Feb 11 22:27:26 2019 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Tue, 12 Feb 2019 12:27:26 +0900 Subject: [Python-Dev] Reviewing PEP 580 In-Reply-To: <5C61FCB5.80804@UGent.be> References: <5C61FCB5.80804@UGent.be> Message-ID: <23650.15646.542253.859787@turnbull.sk.tsukuba.ac.jp> Jeroen Demeyer writes: > I would like to propose to the new steering council to review PEP 580. > Is there already a process for that? I hope we can start with "same as it ever was." Looking at the list, it's not like anything needs to change immediately. Guido, Barry, Nick, and Brett have all been extremely active in general governance as well as the PEP process. They know what they're doing, but the Council is new. It will take some time to get going. Carol has not been so prominent on these lists, but I bet she has ideas -- they all have ideas. But ideas take time to implement. They're also all very busy. They are not experts in everything -- even Guido has been happy to delegate because he acknowledges that there are people who know more about specific requirements and implementations than he does. 
Delegation is explicitly permitted in the Steering Council model. At least at the start, it should be employed while the Council is figuring out their own business, IMO. So, how has this been done in the past? For many PEPs, the pattern has been 1. Proponent(s) write PEP, discuss on -ideas. 2. Proponent(s) stick a fork in it, it's done enough. Either the BDFL Delegate is obvious from the discussion, or they negotiate with somebody, and propose a delegate. 3. Guido decides, including anointing a delegate if he wants. On Reject -- stop. Half-baked -- go to 1. (Never seen an inappropriate delegate proposed.) Approve -- go to 4. 4. Delegate, with the help of (usually) python-dev or some appropriate SIG, picks over the PEP and comes up with an implementation plan. 5. When brown and toasty (but not perfect, nothing ever is) delegate accepts, proponent commits, and the beta testers get to work. This is *good enough*, with the exception of s/Guido/Council/ in Step 3 -- for now. I'm sure it will evolve. I'm not proposing the following as an application form to be adopted. The Council knows what they need, they'll come up with something in due time. In view of the stylized process above, I believe this format will help speed things up for proponents and relieve some of the burden on the Council at this time when things are still pretty fluid: Hi, I'm the proponent of PEP 666 "Adding Perl ~ Regexp Operators to Python", along with Mad Max, who is doing most of the implementation. We've been discussing the PEP on Python Ideas, and we believe it's ready for pronouncement. Max is by far the most informed about the API and implementation, and is well-qualified to be Delegate. Rufus T Firefly has been deeply involved in the discussion, is very expert, and would also be a good delegate. With apologies to the real PEP 666, which I'm pretty sure exists and has nothing to do with Perl or regexps. 
:-) Of course one could go on to give more information, a full status report, open issues that the delegate or Council should decide, etc. But a lot of that could also be left for the delegate to deal with -- the only thing the Council *must* do is pick a supervisor for the approval process, and this format helps with that. Also, the Council might decide they're not confident in any of the candidates for delegate (or it's an empty set), and pick a different person or do it themselves. If they do it themselves, I'm sure it will be for good reason, but it's likely to take more time than if there's a single delegate. Proponents will need to be prepared to accept that outcome. I am not criticizing Jeroen here. I'm a social scientist -- group, and especially organization, dynamics are what I think about all day every day. Rather, Jeroen's post was a good thing -- "hey, we've done stuff! now how do we get it in?" If he didn't post, given the above, why would that particular PEP get attention? The Council is not necessarily on top of the progress of every PEP! I am merely suggesting some additional information to help move things along. Y'r ob'd't servant, -- Associate Professor Division of Policy and Planning Science http://turnbull.sk.tsukuba.ac.jp/ Faculty of Systems and Information Email: turnbull at sk.tsukuba.ac.jp University of Tsukuba Tel: 029-853-5175 Tennodai 1-1-1, Tsukuba 305-8573 JAPAN From ncoghlan at gmail.com Tue Feb 12 07:49:39 2019 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 12 Feb 2019 22:49:39 +1000 Subject: [Python-Dev] Reviewing PEP 580 In-Reply-To: <5C61FCB5.80804@UGent.be> References: <5C61FCB5.80804@UGent.be> Message-ID: On Tue., 12 Feb. 2019, 9:04 am Jeroen Demeyer wrote: > Hello, > > I would like to propose to the new steering council to review PEP 580. > Is there already a process for that? > Hi Jeroen, We're still considering the details of how PEP 1 is going to be adjusted for a Steering Council rather than a BDFL. 
Once the Council members are clear on how *we* think that should work (probably via discussion on a draft PR against PEP 1), then python-dev will be the first to know. Cheers, Nick. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liu.denton at gmail.com Tue Feb 12 05:08:35 2019 From: liu.denton at gmail.com (Denton Liu) Date: Tue, 12 Feb 2019 02:08:35 -0800 Subject: [Python-Dev] [docs] [issue35155] Requesting a review Message-ID: <20190212100835.GA28426@archbookpro.localdomain> Hello all, A couple months back, I reported bpo-35155[1] and I submitted a PR for it[2]. After a couple of reviews, it seems like progress has stalled. Would it be possible for someone to review this? Thanks, Denton [1]: https://bugs.python.org/issue35155 [2]: https://github.com/python/cpython/pull/10313 From liu.denton at gmail.com Tue Feb 12 05:14:55 2019 From: liu.denton at gmail.com (Denton Liu) Date: Tue, 12 Feb 2019 02:14:55 -0800 Subject: [Python-Dev] [bpo-35155] Requesting a review Message-ID: <20190212101455.GA29427@archbookpro.localdomain> Hello all, A couple months back, I reported bpo-35155[1] and I submitted a PR for consideration[2]. After a couple of reviews, it seems like progress has stalled. Would it be possible for someone to review this? Thanks, Denton [1]: https://bugs.python.org/issue35155 [2]: https://github.com/python/cpython/pull/10313 From tjreedy at udel.edu Tue Feb 12 15:24:03 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 12 Feb 2019 15:24:03 -0500 Subject: [Python-Dev] [bpo-35155] Requesting a review In-Reply-To: <20190212101455.GA29427@archbookpro.localdomain> References: <20190212101455.GA29427@archbookpro.localdomain> Message-ID: On 2/12/2019 5:14 AM, Denton Liu wrote: > Hello all, > > A couple months back, I reported bpo-35155[1] and I submitted a PR for > consideration[2]. After a couple of reviews, it seems like progress has > stalled. Would it be possible for someone to review this? 
> > Thanks, > > Denton > > [1]: https://bugs.python.org/issue35155 > [2]: https://github.com/python/cpython/pull/10313 The problem is that the urllib.request doc has several 'placeholder-literal' and 'literal-placeholder' constructs where 'literal' is text that the user *must copy* while 'placeholder' is text that the user *must replace* with one of several strings, with no evident indication of which is which. (The constructs indicate possible allowed names of user-supplied functions.) The only issue to me is how to indicate in the .rst source (and resulting html) that 'placeholder' is a placeholder and not a literal. -- Terry Jan Reedy From tjreedy at udel.edu Tue Feb 12 15:31:36 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 12 Feb 2019 15:31:36 -0500 Subject: [Python-Dev] [bpo-35155] Requesting a review In-Reply-To: References: <20190212101455.GA29427@archbookpro.localdomain> Message-ID: On 2/12/2019 3:24 PM, Terry Reedy wrote: > The problem is that the urllib.request doc has several > 'placeholder-literal' and 'literal-placeholder' constructs where Correction: The result must be a legal function name, so that should be 'placeholder_literal' and 'literal_placeholder', where the '_' is part of the literal. -- Terry Jan Reedy From benjamin at python.org Tue Feb 12 23:45:03 2019 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 12 Feb 2019 23:45:03 -0500 Subject: [Python-Dev] 2.7.16 release dates Message-ID: Greetings, I've set the dates for the 2.7.16 release in PEP 373. The release candidate will happen on February 16 with a final release 2 weeks later on March 2 if all goes well. 
Servus, Benjamin

From g.rodola at gmail.com Wed Feb 13 07:24:53 2019 From: g.rodola at gmail.com (Giampaolo Rodola') Date: Wed, 13 Feb 2019 13:24:53 +0100 Subject: [Python-Dev] Adding test.support.safe_rmpath() Message-ID: Hello, after discovering os.makedirs() has no unit-tests ( https://bugs.python.org/issue35982) I was thinking about working on a PR to increase the test coverage of fs-related os.* functions. In order to do so I think it would be useful to add a convenience function to "just delete something if it exists", regardless if it's a file, directory, directory tree, etc., and include it into test.support module. Basically it would be very similar to "rm -rf". I use something like this into psutil: https://github.com/giampaolo/psutil/blob/3ea94c1b8589891a8d1a5781f0445cb5080b7c3e/psutil/tests/__init__.py#L696 I find this paradigm especially useful when testing functions involving two files ("src" and "dst"). E.g. in case of os.renames() unit-tests I would write something like this:

    class RenamesTest(unittest.TestCase):
        srcname = support.TESTFN
        dstname = support.TESTFN + '2'

        def setUp(self):
            test.support.rmpath(self.srcname)
            test.support.rmpath(self.dstname)
        tearDown = setUp

        def test_rename_file(self):
            ...
        def test_rename_dir(self):
            ...
        def test_rename_failure(self):
            # both src and dst will not exist
            ...

With the current utilities included in test.support the setUp function above would be written as such:

    def setUp(self):
        for path in (self.srcname, self.dstname):
            if os.path.isdir(path):
                test.support.rmtree(path)
            elif os.path.exists(path):
                test.support.unlink(path)

Extra: one may argue whether this utility could be included into shutil module instead. The extra advantage of test.support.rmtree and test.support.unlink though, is that on Windows they use a timeout, catching "file is currently in use" exceptions for some time before giving up.
That IMO would probably make this utility function not palatable for inclusion into the shutil module, so test.support would probably be a better landing place. Thoughts? -- Giampaolo - http://grodola.blogspot.com
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ronaldoussoren at mac.com Wed Feb 13 08:27:06 2019 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Wed, 13 Feb 2019 14:27:06 +0100 Subject: [Python-Dev] Adding test.support.safe_rmpath() In-Reply-To: References: Message-ID: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> > On 13 Feb 2019, at 13:24, Giampaolo Rodola' wrote: > > > Hello, > after discovering os.makedirs() has no unit-tests (https://bugs.python.org/issue35982 ) I was thinking about working on a PR to increase the test coverage of fs-related os.* functions. In order to do so I think it would be useful to add a convenience function to "just delete something if it exists", regardless if it's a file, directory, directory tree, etc., and include it into test.support module. Something like shutil.rmtree() with ignore_errors=True? Ronald
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From vstinner at redhat.com Wed Feb 13 08:32:25 2019 From: vstinner at redhat.com (Victor Stinner) Date: Wed, 13 Feb 2019 14:32:25 +0100 Subject: [Python-Dev] Adding test.support.safe_rmpath() In-Reply-To: References: Message-ID: Bikeshedding: I suggest to remove "safe_" from the name, it's hard to guarantee that removal is "safe", especially on Windows where a removal can be blocked for many reasons. Victor

On Wed, 13 Feb 2019 at 13:28, Giampaolo Rodola' wrote: > > > Hello, > after discovering os.makedirs() has no unit-tests (https://bugs.python.org/issue35982) I was thinking about working on a PR to increase the test coverage of fs-related os.* functions.
In order to do so I think it would be useful to add a convenience function to "just delete something if it exists", regardless if it's a file, directory, directory tree, etc., and include it into test.support module. Basically it would be very similar to "rm -rf". I use something like this into psutil: > https://github.com/giampaolo/psutil/blob/3ea94c1b8589891a8d1a5781f0445cb5080b7c3e/psutil/tests/__init__.py#L696 > I find this paradigm especially useful when testing functions involving two files ("src" and "dst"). E.g. in case of os.renames() unit-tests I would write something like this: > > > class RenamesTest(unittest.TestCase): > srcname = support.TESTFN > dstname = support.TESTFN + '2' > > def setUp(self): > test.support.rmpath(self.srcname) > test.support.rmpath(self.dstname) > tearDown = setUp > > def test_rename_file(self): > ... > def test_rename_dir(self): > ... > def test_rename_failure(self): > # both src and dst will not exist > ... > > With the current utilities included in test.support the setUp function above would be written as such: > > def setUp(self): > for path in (self.srcname, self.dstname): > if os.path.isdir(path): > test.support.rmtree(path) > elif os.path.exists(path): > test.support.unlink(path) > > Extra: one may argue whether this utility could be included into shutil module instead. The extra advantage of test.support.rmtree and test.support.unlink though, is that on Windows they use a timeout, catching "file is currently in use" exceptions for some time before giving up. That IMO would probably make this utility function not palatable for inclusion into shutil module, so test.support would probably be a better landing place. > > Thoughts? 
> > -- > Giampaolo - http://grodola.blogspot.com > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com -- Night gathers, and now my watch begins. It shall not end until my death. From g.rodola at gmail.com Wed Feb 13 10:10:48 2019 From: g.rodola at gmail.com (Giampaolo Rodola') Date: Wed, 13 Feb 2019 16:10:48 +0100 Subject: [Python-Dev] Adding test.support.safe_rmpath() In-Reply-To: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> Message-ID: On Wed, Feb 13, 2019 at 2:27 PM Ronald Oussoren wrote: > > > On 13 Feb 2019, at 13:24, Giampaolo Rodola' wrote: > > > Hello, > after discovering os.makedirs() has no unit-tests ( > https://bugs.python.org/issue35982) I was thinking about working on a PR > to increase the test coverage of fs-related os.* functions. In order to do > so I think it would be useful to add a convenience function to "just delete > something if it exists", regardless if it's a file, directory, directory > tree, etc., and include it into test.support module. > > > Something like shutil.rmtree() with ignore_errors=True? > shutil.rmtree() is about directories and can't be used against files. support.rmpath() would take a path (meaning anything) and try to remove it. -- Giampaolo - http://grodola.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From g.rodola at gmail.com Wed Feb 13 10:12:38 2019 From: g.rodola at gmail.com (Giampaolo Rodola') Date: Wed, 13 Feb 2019 16:12:38 +0100 Subject: [Python-Dev] Adding test.support.safe_rmpath() In-Reply-To: References: Message-ID: On Wed, Feb 13, 2019 at 2:32 PM Victor Stinner wrote: > Bikeshedding: I suggest to remove "safe_" from the name, it's hard to > guarantee that removal is "safe", especially on Windows where a > removal can be blocked for many reasons. > > Victor > Agree. I actually meant "rmpath()" (which I used in my examples) but I misspelled that in the mail title. =) -- Giampaolo - http://grodola.blogspot.com
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From encukou at gmail.com Wed Feb 13 10:24:48 2019 From: encukou at gmail.com (Petr Viktorin) Date: Wed, 13 Feb 2019 16:24:48 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems Message-ID: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> PEP 394 says: > This recommendation will be periodically reviewed over the next few > years, and updated when the core development team judges it > appropriate. As a point of reference, regular maintenance releases > for the Python 2.7 series will continue until at least 2020. I think it's time for another review. I'm especially worried about the implication of these: - If the `python` command is installed, it should invoke the same version of Python as the `python2` command - scripts that are deliberately written to be source compatible with both Python 2.x and 3.x [...] may continue to use `python` on their shebang line.
So, to support scripts that adhere to the recommendation, Python 2 needs to be installed :( Please see this PR for details and a suggested change: https://github.com/python/peps/pull/893 From solipsis at pitrou.net Wed Feb 13 10:46:03 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 13 Feb 2019 16:46:03 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> Message-ID: <20190213164603.3894316f@fsol> On Wed, 13 Feb 2019 16:24:48 +0100 Petr Viktorin wrote: > PEP 394 says: > > > This recommendation will be periodically reviewed over the next few > > years, and updated when the core development team judges it > > appropriate. As a point of reference, regular maintenance releases > > for the Python 2.7 series will continue until at least 2020. > > I think it's time for another review. > I'm especially worried about the implication of these: > > - If the `python` command is installed, it should invoke the same > version of Python as the `python2` command > - scripts that are deliberately written to be source compatible > with both Python 2.x and 3.x [...] may continue to use `python` on > their shebang line. > > So, to support scripts that adhere to the recommendation, Python 2 > needs to be installed :( I think PEP 394 should acknowledge that there are now years of established usage of `python` as Python 3 for many conda users. Regards Antoine. 
From encukou at gmail.com Wed Feb 13 11:18:15 2019 From: encukou at gmail.com (Petr Viktorin) Date: Wed, 13 Feb 2019 17:18:15 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <20190213164603.3894316f@fsol> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: <32650e90-45d2-633f-c309-51518657aa2b@gmail.com> On 2/13/19 4:46 PM, Antoine Pitrou wrote: > On Wed, 13 Feb 2019 16:24:48 +0100 > Petr Viktorin wrote: >> PEP 394 says: >> >> > This recommendation will be periodically reviewed over the next few >> > years, and updated when the core development team judges it >> > appropriate. As a point of reference, regular maintenance releases >> > for the Python 2.7 series will continue until at least 2020. >> >> I think it's time for another review. >> I'm especially worried about the implication of these: >> >> - If the `python` command is installed, it should invoke the same >> version of Python as the `python2` command >> - scripts that are deliberately written to be source compatible >> with both Python 2.x and 3.x [...] may continue to use `python` on >> their shebang line. >> >> So, to support scripts that adhere to the recommendation, Python 2 >> needs to be installed :( > > I think PEP 394 should acknowledge that there are now years of > established usage of `python` as Python 3 for many conda users. The intention is that Conda environments are treated the same as venv environments, i.e.: When a virtual environment (created by the PEP 405 venv package or a similar tool) is active, the python command should refer to the virtual environment's interpreter. In other words, activating a virtual environment counts as deliberate user action to change the default python interpreter. Do you think conda should be listed explicitly along with venv? 
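The venv rule Petr quotes can be observed from inside the interpreter itself: a PEP 405 environment gets its own `sys.prefix`, while `sys.base_prefix` keeps pointing at the base installation. A minimal illustrative sketch (mine, not code from the thread):

```python
import sys

def in_virtual_environment() -> bool:
    """Return True when running inside a PEP 405 virtual environment.

    A venv interpreter gets its own sys.prefix, while sys.base_prefix
    still points at the base installation; outside a venv the two are
    equal. getattr() covers pre-3.3 interpreters without base_prefix.
    """
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

if __name__ == "__main__":
    kind = "a virtual environment" if in_virtual_environment() else "the base interpreter"
    print(f"{sys.executable} runs in {kind}")
```

Note that conda environments are relevant to the open question above precisely because they are full installations that adjust PATH rather than PEP 405 venvs, so a check like this reports False in many conda environments.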
From vstinner at redhat.com Wed Feb 13 11:20:55 2019 From: vstinner at redhat.com (Victor Stinner) Date: Wed, 13 Feb 2019 17:20:55 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <20190213164603.3894316f@fsol> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: Hi, I'm a (strong) supporter of providing a "python" command which would be the latest Python version! As php does nowadays (after previous issues with "php4" vs "php5".) I don't recall that perl had "perl4" vs "perl5", the command was always "perl", no? Same for Ruby: it was still "ruby" after for Ruby 2, no? Only Python and PHP used different program names depending on the language version, no? And PHP now moved back to a single "php" program. In the container and virtualenv era, it's now easy to get your favorite Python version for the "python" command. On my Windows VM, "python" is Python 3.7 :-) In virtual environments, "python" can also be Python 3 as well. I recall that I saw commands using "python" rather than "python3" in the *official* Python 3 documentation: see examples below (*). Problem: On Windows, "python" is the right command. "python3" doesn't work (doesn't exist) on Windows. Should we write the doc for Windows or for Unix? Oooops. There was an interesting discussion about the Python version following Python 3.9: Python 3.10 or Python 4? And what are the issues which would make us prefer 3.10 rather than 4.0? https://mail.python.org/pipermail/python-committers/2018-September/006152.html One practical issue is that right now, six.PY3 is defined by "PY3 = sys.version_info[0] == 3" and so "if six.PY3:" will be false on Python 4. 
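The fragility is easy to see side by side; a small sketch (mine, not from the thread) contrasting six's exact-match definition with a forward-compatible variant:

```python
import sys

# How six.PY3 is defined today: an exact match on the major version,
# so it would be False on a hypothetical Python 4.
PY3_EXACT = sys.version_info[0] == 3

# Forward-compatible spelling: True on Python 3 and any later major
# version, which is what most "not Python 2" checks really mean.
PY3_OR_LATER = sys.version_info[0] >= 3

print(PY3_EXACT, PY3_OR_LATER)
```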
Another interesting thing to mention is the Unix Python launcher ("py") written in Rust by Brett Cannon: https://github.com/brettcannon/python-launcher

(*) A few examples of "python" commands in the Python official documentation:

"$ python prog.py -h" https://docs.python.org/dev/library/argparse.html
"$ python logctx.py" https://docs.python.org/dev/howto/logging-cookbook.html
"python setup.py install" https://docs.python.org/dev/install/index.html
"python --help" https://docs.python.org/dev/howto/argparse.html
"python setup.py build" https://docs.python.org/dev/extending/building.html
"exec python $0 ${1+"$@"}" https://docs.python.org/dev/faq/library.html
"python setup.py --help build_ext" https://docs.python.org/dev/distutils/configfile.html

Victor

On Wed, 13 Feb 2019 at 16:49, Antoine Pitrou wrote: > > On Wed, 13 Feb 2019 16:24:48 +0100 > Petr Viktorin wrote: > > PEP 394 says: > > > > > This recommendation will be periodically reviewed over the next few > > > years, and updated when the core development team judges it > > > appropriate. As a point of reference, regular maintenance releases > > > for the Python 2.7 series will continue until at least 2020. > > > > I think it's time for another review. > > I'm especially worried about the implication of these: > > > > - If the `python` command is installed, it should invoke the same > > version of Python as the `python2` command > > - scripts that are deliberately written to be source compatible > > with both Python 2.x and 3.x [...] may continue to use `python` on > > their shebang line. > > > > So, to support scripts that adhere to the recommendation, Python 2 > > needs to be installed :( > > I think PEP 394 should acknowledge that there are now years of > established usage of `python` as Python 3 for many conda users. > > Regards > > Antoine.
> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com -- Night gathers, and now my watch begins. It shall not end until my death. From solipsis at pitrou.net Wed Feb 13 11:28:59 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 13 Feb 2019 17:28:59 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <32650e90-45d2-633f-c309-51518657aa2b@gmail.com> Message-ID: <20190213172859.41d00642@fsol> On Wed, 13 Feb 2019 17:18:15 +0100 Petr Viktorin wrote: > On 2/13/19 4:46 PM, Antoine Pitrou wrote: > > On Wed, 13 Feb 2019 16:24:48 +0100 > > Petr Viktorin wrote: > >> PEP 394 says: > >> > >> > This recommendation will be periodically reviewed over the next few > >> > years, and updated when the core development team judges it > >> > appropriate. As a point of reference, regular maintenance releases > >> > for the Python 2.7 series will continue until at least 2020. > >> > >> I think it's time for another review. > >> I'm especially worried about the implication of these: > >> > >> - If the `python` command is installed, it should invoke the same > >> version of Python as the `python2` command > >> - scripts that are deliberately written to be source compatible > >> with both Python 2.x and 3.x [...] may continue to use `python` on > >> their shebang line. > >> > >> So, to support scripts that adhere to the recommendation, Python 2 > >> needs to be installed :( > > > > I think PEP 394 should acknowledge that there are now years of > > established usage of `python` as Python 3 for many conda users. 
> > The intention is that Conda environments are treated the same as venv > environments, i.e.: > > When a virtual environment (created by the PEP 405 venv package or a > similar tool) is active, the python command should refer to the virtual > environment's interpreter. In other words, activating a virtual > environment counts as deliberate user action to change the default > python interpreter. Anaconda is often used to provide not only virtual environments, but the "main" user Python. At least it certainly is so on Windows and macOS, but I'm sure it is used like that as well on Linux, especially on ancient distributions such as RHEL 6 or Ubuntu 14.04. In any case, the fact that many people are used to "python" pointing to Python 3 is IMHO an important data point. Regards Antoine.

From vstinner at redhat.com Wed Feb 13 11:45:44 2019 From: vstinner at redhat.com (Victor Stinner) Date: Wed, 13 Feb 2019 17:45:44 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> Message-ID: Some more context about Petr's change, Fedora, RHEL and Red Hat. At the latest Language Summit (2018), Petr detailed the state of the migration to Python 3 and how Python 2 is and will be handled at Red Hat; "Linux distributions and Python 2" talk with Matthias Klose (who works on Debian/Ubuntu): https://lwn.net/Articles/756628/ Petr explained in a comment on this change that /usr/bin/python is configurable in the incoming RHEL8: https://github.com/python/peps/pull/893#issuecomment-463240453 "I'm responsible for how this is handled in RHEL 8 beta, where `/usr/bin/python` is configurable (though configuring it is discouraged). I don't recommend that in the PEP -- I don't think it needs to cover distros that need to lock in the behavior of `/usr/bin/python` for a decade."
More details in his nice "Python in RHEL 8" article: https://developers.redhat.com/blog/2018/11/14/python-in-rhel-8/ RHEL8 has specific challenges since it will be released around Python2 end-of-life with customers who are still running Python 2, but it also has to be prepared for the bright Python 3 world, since RHEL is usually supported for 10 years (if not longer). Petr and I are working for Red Hat on Fedora and RHEL. My team is helping to actively remove Python 2 from Fedora: https://fedoraproject.org/wiki/Changes/Mass_Python_2_Package_Removal "Python 2 will be deprecated in Fedora. Packagers can mark any other Python 2 packages as deprecated as well." See also: * https://fedoraproject.org/wiki/FinalizingFedoraSwitchtoPython3 -- work-in-progress * https://fedoraproject.org/wiki/Changes/Python_3_as_Default -- implemented in Fedora 23 The base installation of Fedora only provides "python3" (no "python" nor "python2") since Fedora 23 (released in 2015), as does Ubuntu nowadays. You can get "python" on Fedora by installing a special "python-unversioned-command" package :-) Victor

On Wed, 13 Feb 2019 at 16:28, Petr Viktorin wrote: > > PEP 394 says: > > > This recommendation will be periodically reviewed over the next few > > years, and updated when the core development team judges it > > appropriate. As a point of reference, regular maintenance releases > > for the Python 2.7 series will continue until at least 2020. > > I think it's time for another review. > I'm especially worried about the implication of these: > > - If the `python` command is installed, it should invoke the same > version of Python as the `python2` command > - scripts that are deliberately written to be source compatible > with both Python 2.x and 3.x [...] may continue to use `python` on > their shebang line.
> > So, to support scripts that adhere to the recommendation, Python 2 > needs to be installed :( > > > Please see this PR for details and a suggested change: > https://github.com/python/peps/pull/893 > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com -- Night gathers, and now my watch begins. It shall not end until my death. From steve.dower at python.org Wed Feb 13 12:13:06 2019 From: steve.dower at python.org (Steve Dower) Date: Wed, 13 Feb 2019 09:13:06 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: <2fecf1f1-2754-e961-f0f9-6a6d01728e9a@python.org> On 13Feb2019 0820, Victor Stinner wrote: > On my Windows VM, "python" is Python 3.7 :-) In virtual environments, > "python" can also be Python 3 as well. > > I recall that I saw commands using "python" rather than "python3" in > the *official* Python 3 documentation: see examples below (*). > Problem: On Windows, "python" is the right command. "python3" doesn't > work (doesn't exist) on Windows. Should we write the doc for Windows > or for Unix? Oooops. With the Windows Store package of Python, you get "python", "python3", and "python3.x" links added to your PATH, and I'm still thinking about ways to make this reasonable/reliable through the full installer as well (the difference is that the OS manages the links through the Store package, whereas each individual installer has to do it on their own otherwise). I'm inclined to view "python" as the default, official command, with the versioned ones being workarounds added by distributors. 
So:
* our docs should say "python" consistently
* we should recommend that distributors use the same workaround
* our docs should describe the recommended workaround in any places people are likely to first encounter it (tutorial, sys.executable, etc.)

(And maybe this isn't currently how things are done, but I'd rather hold up an ideal than pretend that the status quo can't be changed - this list is literally for discussing changing the status quo of anything in core Python ;) )

Cheers,
Steve

From chris.barker at noaa.gov Wed Feb 13 12:41:05 2019 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Wed, 13 Feb 2019 12:41:05 -0500 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <2fecf1f1-2754-e961-f0f9-6a6d01728e9a@python.org> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <2fecf1f1-2754-e961-f0f9-6a6d01728e9a@python.org> Message-ID: > On Feb 13, 2019, at 9:13 AM, Steve Dower > > I'm inclined to view "python" as the default, official command, with the versioned ones being workarounds added by distributors. +1 -- almost. I agree that "python" be the default, but it would be good to require (or at least highly encourage) that there be a "python3" as well. There will be folks wanting to run python3 on systems where there is still a "python" pointing to py2 -- particularly since that is still the "correct" way to do it! > (And maybe this isn't currently how things are done, but I'd rather hold up an ideal than pretend that the status quo can't be changed - Exactly.
-CHB > this list is literally for discussing changing the status quo of anything in core Python ;) ) > > Cheers, > Steve > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris.barker%40noaa.gov

From barry at python.org Wed Feb 13 15:26:08 2019 From: barry at python.org (Barry Warsaw) Date: Wed, 13 Feb 2019 12:26:08 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: On Feb 13, 2019, at 08:20, Victor Stinner wrote: > I'm a (strong) supporter of providing a "python" command which would > be the latest Python version! I think we should aspire for this to be the case too, eventually. When this has come up in the past, we've said that it's not appropriate to change PEP 394 until Python 2 is officially deprecated. OTOH, I appreciate that distros and others have to make decisions on this now. I think it's worth discussing where we eventually want to be as a community, even if we continue to recommend no official change until 2020. I personally would like for `python` to be the latest Python 3 version (or perhaps Brett's launcher), `python2` to be Python 2.7 where installed (and not mandatory). `python3` would be an alias for the latest Python 3. > There was an interesting discussion about the Python version following > Python 3.9: Python 3.10 or Python 4? And what are the issues which > would make us prefer 3.10 rather than 4.0? > https://mail.python.org/pipermail/python-committers/2018-September/006152.html I don't think this should be conflated with PEP 394. IMHO, 3.10 is just fine. Python 4 should be reserved for some future mythical GIL-less interpreter or other major API breaking change. It might never happen.
-Barry
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL:

From neil at python.ca Wed Feb 13 16:00:52 2019 From: neil at python.ca (Neil Schemenauer) Date: Wed, 13 Feb 2019 15:00:52 -0600 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: <20190213210052.w35eezwfwmhurj43@python.ca> On 2019-02-13, Barry Warsaw wrote: > I personally would like for `python` to be the latest Python 3 > version (or perhaps Brett's launcher), `python2` to be Python 2.7 > where installed (and not mandatory). `python3` would be an alias > for the latest Python 3. To me, having 'py' on Unix would be a good thing(tm). If we have that then I suppose we will encourage people to prefer it over 'python', 'python3', and 'python2'. At that point, where 'python' points would be less of an issue. I'm not opposed to making 'python' configurable or eventually pointing it to python3. However, if we do go with 'py' as the preferred command in the future, it seems to be some pain for little gain. If the OS already allows it to be re-directed, maybe that's good enough. Regards, Neil

From tjreedy at udel.edu Wed Feb 13 17:16:54 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 13 Feb 2019 17:16:54 -0500 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: On 2/13/2019 3:26 PM, Barry Warsaw wrote: > I personally would like for `python` to be the latest Python 3 version (or perhaps Brett's launcher), `python2` to be Python 2.7 where installed (and not mandatory). `python3` would be an alias for the latest Python 3.
It appears python is already python3 for a large majority of human users (as opposed to machines). https://www.jetbrains.com/research/python-developers-survey-2018/ Nearly 20000 valid responses, Oct-Nov. "Which version of Python do you use the most" Python 3: 75% in 2017, 84% in 2018. The figures for other public surveys were 22%, 34%, 40% in Dec 2013, Dec 2014, Jan 2016. I expect at least 90% by next January. Py3 is already 90% among data scientists. -- Terry Jan Reedy From steve at pearwood.info Wed Feb 13 18:00:05 2019 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 14 Feb 2019 10:00:05 +1100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: <20190213230004.GV1834@ando.pearwood.info> On Wed, Feb 13, 2019 at 05:16:54PM -0500, Terry Reedy wrote: > It appears python is already python3 for a large majority of human users > (as opposed to machines). > > https://www.jetbrains.com/research/python-developers-survey-2018/ > Nearly 20000 valid responses, Oct-Nov. They may be valid responses, but we don't know if they are representative of the broader Python community. It's a self-selected survey of people, which always makes the results statistically suspect. (By definition, an Internet survey eliminates responses from people who don't fill out surveys on the Internet.) But even if representative, this survey only tells us what version people are using, not how they invoke it. We can't conclude that the command "python" means Python 3 for these users. We simply don't know one way or another (and I personally wouldn't want to hazard a guess.)
-- Steven From vstinner at redhat.com Wed Feb 13 18:07:35 2019 From: vstinner at redhat.com (Victor Stinner) Date: Thu, 14 Feb 2019 00:07:35 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: Le mer. 13 févr. 2019 à 21:26, Barry Warsaw a écrit : > I don't think this should be conflated with PEP 394. IMHO, 3.10 is just fine. Python 4 should be reserved for some future mythical GIL-less interpreter or other major API breaking change. It might never happen. My point is that changing the major version from 3 to 4 *will* break things. We have to prepare the community to support such a change. For example, advice to replace "if major_version == 3: ... else: ..." with "if major_version >= 3: ... else: ...". Victor -- Night gathers, and now my watch begins. It shall not end until my death. From chris.barker at noaa.gov Wed Feb 13 18:31:10 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 13 Feb 2019 15:31:10 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: On Wed, Feb 13, 2019 at 12:29 PM Barry Warsaw wrote: > I think we should aspire for this to be the case too, eventually. When > this has come up in the past, we've said that it's not appropriate to > change PEP 394 until Python 2 is officially deprecated. OTOH, I appreciate > that distros and others have to make decisions on this now. I think it's > worth discussing where we eventually want to be as a community, even if we > continue to recommend no official change until 2020. > I don't think end-users will see any discontinuity in 2020. For quite a while now, more and more people have shifted from using python2 as the default to using python3 as the default.
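[Editorial note: Victor's `major_version` advice above, sketched concretely. This is a hypothetical illustration, not code from the thread; `sys.version_info` is the usual way to obtain the major version.]

```python
import sys

# Sketch of the advice above: test with >= rather than ==, so the branch
# keeps working if the major version ever jumps from 3 to 4.
if sys.version_info >= (3,):
    text_type = str       # Python 3 (and any hypothetical later major version)
else:
    text_type = unicode   # Python 2 only; undefined on Python 3  # noqa: F821

print(text_type)
```

An `== 3` test would silently fall into the Python 2 branch on a future "Python 4", which is exactly the breakage Victor warns about.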
On the other hand, some folks are still using python2, and will continue to do so after 2020 (old, still-supported versions of RedHat, anyone?) Hopefully, after 2020 no one will start anything new with py2, but it's going to be around for a long, long time. So as there will be no time for a "clear break", we might as well make changes when the time "seems" right. And as has been pointed out in this thread, there are a lot of folks not following the PEP anyway (virtual environments, conda, ??). I myself have been telling my newbie students to make a link from "python" to python3 for a couple years (before I saw that PEP!). I personally would like for `python` to be the latest Python 3 version (or > perhaps Brett's launcher), `python2` to be Python 2.7 where installed (and > not mandatory). `python3` would be an alias for the latest Python 3. > +1 Starting now. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Wed Feb 13 18:33:21 2019 From: barry at python.org (Barry Warsaw) Date: Wed, 13 Feb 2019 15:33:21 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: On Feb 13, 2019, at 15:07, Victor Stinner wrote: > > Le mer. 13 févr. 2019 à 21:26, Barry Warsaw a écrit : >> I don't think this should be conflated with PEP 394. IMHO, 3.10 is just fine. Python 4 should be reserved for some future mythical GIL-less interpreter or other major API breaking change. It might never happen. > > My point is that changing the major version from 3 to 4 *will* break > things. We have to prepare the community to support such a change. Perhaps.
I just don't think Python 4 is anything but distant vaporware. There's a cost to freaking everyone out that Python 4 is coming and will be as disruptive as Python 3. Calling Python 3.9+1 Python 4 feeds into that FUD for no reason that I can tell except for an aversion to two digit minor version numbers. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From nas-python at arctrix.com Wed Feb 13 18:41:12 2019 From: nas-python at arctrix.com (Neil Schemenauer) Date: Wed, 13 Feb 2019 17:41:12 -0600 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: <20190213234112.eyzc4dxk6pjqezo6@python.ca> On 2019-02-13, Terry Reedy wrote: > It appears python is already python3 for a large majority of human users (as > opposed to machines). IMHO, the question about where /usr/bin/python points is more important for machines than for humans. Think about changing /bin/sh to some different version of the Bourne shell that changes 'echo'. Or changing 'awk' to some incompatible version. That's going to break a lot of scripts (cron jobs, etc). I experienced the bad old days when you couldn't rely on /bin/sh to be a proper POSIX shell. It was a mess and it wasted countless hours of human life to work around all the flavours. Python is not as fundamental as the Unix shell but it has replaced a lot of shell scripting. How can we avoid making a lot of work for people? I don't see an easy answer. We don't want Python to become frozen forever (whether it is called 'python', 'python3', or 'py'). OTOH, making /usr/bin/python point to the most recent X.Y release doesn't seem like a good solution either. For example, if I used 'async' as a variable in some of my scripts and then 3.7 broke them.
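[Editorial note: Neil's 'async' example is concrete: `async` (and `await`) became reserved keywords in Python 3.7, so a script that used `async` as a variable name stopped even compiling. A minimal illustration:]

```python
# "async" was an ordinary identifier through Python 3.6 but is a reserved
# keyword from 3.7 on, so this source no longer compiles there.
source = "async = 1"
try:
    compile(source, "<example>", "exec")
    print("compiles: pre-3.7 interpreter")
except SyntaxError:
    print("SyntaxError: 3.7 or later")
```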
Should we dust off PEP 407 "New release cycle and introducing long-term support versions"? Having /usr/bin/python point to a LTS release seems better to me. I don't know if the core developers are willing to support PEP 407 though. Maybe OS packagers like Red Hat and Debian will already do something like LTS releases and core developers don't need to. /usr/bin/python in Red Hat has behaved like that, as far as I know. Another idea is that we could adopt something like the Rust "language edition" system. Obviously lots of details to be worked out. If we had that, the 'py' command could take an argument to specify the Python edition. OTOH, perhaps deprecation warnings and __future__ achieves most of the same benefits. Maintaining different editions sounds like a lot of work. More work than doing LTS releases. Maybe the solution is just that we become a lot more careful about making incompatible changes. To me, that would seem to reduce the rate that Python is improving. However, a less evolved but more stable Python could actually have a higher value to society. We could create an experimental branch of Python, e.g. python-ng. Then, all the crazy new ideas go in there. Only after they are stable would we merge them into the stable version of Python. I'm not sure how well that works in practice though. That's similar to what Linux did with the even/odd version numbering. It turned into a mess because the experimental (next) version quickly outran the stable version and merging fixes between them was difficult. They abandoned that and now use something like PEP 407 for LTS releases. Regards, Neil From turnbull.stephen.fw at u.tsukuba.ac.jp Wed Feb 13 19:17:47 2019 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. 
Turnbull) Date: Thu, 14 Feb 2019 09:17:47 +0900 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <20190213230004.GV1834@ando.pearwood.info> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <20190213230004.GV1834@ando.pearwood.info> Message-ID: <23652.45995.691039.338687@turnbull.sk.tsukuba.ac.jp> Steven D'Aprano writes: > But even if representative, this survey only tells us what version > people are using, now how they invoke it. We can't conclude that the > command "python" means Python 3 for these users. We simply don't know > one way or another (and I personally wouldn't want to hazard a > guess.) Agreed on "can't tell invocation". I've been using "pythonX.Y" since the last time I used Red Hat a lot (which was when Red Hat required Python 1.5.2 or it almost wouldn't boot, and before several core developers were born, I suspect). We should also remember that Python is often invoked implicitly in scripts that may be even older than that. I don't think that Perl and PHP experience are sufficiently analogous. As far as I can tell, they're pretty much backward compatible, except that errors became valid code. The unicode -> str, str -> bytes upgrade in Python 3 means that an awful lot of scripts break if you use the wrong one. I think in the spirit of saving keystrokes ;-), we should encourage the use of the "py" wrapper. 
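[Editorial note: there is no cross-platform "py" for Unix in the stdlib, but the interpreter lookup such a wrapper would do can be sketched in a few lines. The candidate names and their fallback order below are assumptions for illustration, not anything PEP 394 specifies.]

```python
import shutil

def pick_interpreter(candidates=("python3", "python2", "python")):
    """Return the first interpreter name found on PATH, or None."""
    for name in candidates:
        if shutil.which(name) is not None:
            return name
    return None

print(pick_interpreter())
```

A real launcher would also parse a requested version (e.g. `py -2`, `py -3.7`) and `exec` the chosen binary; this sketch only shows the discovery step.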
Yet another Steve -- Associate Professor Division of Policy and Planning Science http://turnbull.sk.tsukuba.ac.jp/ Faculty of Systems and Information Email: turnbull at sk.tsukuba.ac.jp University of Tsukuba Tel: 029-853-5175 Tennodai 1-1-1, Tsukuba 305-8573 JAPAN From njs at pobox.com Wed Feb 13 20:32:44 2019 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 13 Feb 2019 17:32:44 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <20190213230004.GV1834@ando.pearwood.info> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <20190213230004.GV1834@ando.pearwood.info> Message-ID: On Wed, Feb 13, 2019 at 3:02 PM Steven D'Aprano wrote: > > On Wed, Feb 13, 2019 at 05:16:54PM -0500, Terry Reedy wrote: > > > It appears python is already python3 for a large majority of human users > > (as opposed to machines). > > > > https://www.jetbrains.com/research/python-developers-survey-2018/ > > Nearly 20000 valid responses, Oct-Nov. > > They may be valid responses, but we don't know if they are > representative of the broader Python community. Its a self-selected > survey of people which always makes the results statistically suspect. > > (By definition, an Internet survey eliminates responses from people who > don't fill out surveys on the Internet.) > > BUt even if representative, this survey only tells us what version > people are using, now how they invoke it. We can't conclude that the > command "python" means Python 3 for these users. We simply don't know > one way or another (and I personally wouldn't want to hazard a guess.) Can we gather data? What if pip started reporting info on how it was run when contacting pypi? What info would be useful? I guess whether it's pip/pip3/python -m pip/python3 -m pip would be nice to know. I don't know if sys.executable would tell us anything useful or not. 
pip knows where the current python's script directory is; maybe it should report whether it contains 'python2', 'python3', 'python', and perhaps which ones are the same as each other? -n -- Nathaniel J. Smith -- https://vorpus.org From steve at pearwood.info Wed Feb 13 22:25:33 2019 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 14 Feb 2019 14:25:33 +1100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: <20190214032533.GY1834@ando.pearwood.info> On Wed, Feb 13, 2019 at 03:33:21PM -0800, Barry Warsaw wrote: > I just don't think Python 4 is anything but distant vaporware. If Python 4 follows 3.9, that could be as little as 3-4 years away :-) > There's a cost to freaking everyone out that Python 4 is coming and > will be as disruptive as Python 3. Indeed. I do my bit to combat that in two ways: - remind people that Guido has pronounced that Python 4 will not be a disruptive, backwards-incompatible change like Python 3 was; - and use "Python 5000" to refer to any such hypothetical and very unlikely incompatible version. > Calling Python 3.9+1 Python 4 > feeds into that FUD for no reason that I can tell except for an > aversion to two digit minor version numbers. I haven't come across this FUD about Python 4, so I wonder whether it exists more in our fears than the reality. I daresay there are a few people out there who will instantly react to even a casual mention of "Python 4" as if it were a concrete upgrade that just broke their servers, but I would hope the average Python coder had more sense. I know that we have to plan for the community we have rather than the community we want, but I would be very sad if we had decisions forced on us by the most ignorant, Dunning-Kruger, unteachable and proud-of-it segment of the community.
Any such hypothetical Python 3.10/4.0 version is at least three or four years away. Let's not limit our options until we know whether or not this FUD is widespread. Whatever we plan, we should allow for *both* a Python 3.10 and a Python 4, and then we'll be good even if 4.0 follows 3.12 :-) -- Steven From tjreedy at udel.edu Wed Feb 13 23:59:25 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 13 Feb 2019 23:59:25 -0500 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <20190214032533.GY1834@ando.pearwood.info> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <20190214032533.GY1834@ando.pearwood.info> Message-ID: On 2/13/2019 10:25 PM, Steven D'Aprano wrote: > I haven't come across this FUD about Python 4, I have, on StackOverflow, induced by people reading something like "deprecated now, removed in 4.0" -- Terry Jan Reedy From rbellevi at google.com Thu Feb 14 00:05:54 2019 From: rbellevi at google.com (Richard Belleville) Date: Wed, 13 Feb 2019 21:05:54 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds Message-ID: In a recent code review, the following snippet was called out as reinventing the wheel: _MICROSECONDS_PER_SECOND = 1000000 def _timedelta_to_microseconds(delta): return int(delta.total_seconds() * _MICROSECONDS_PER_SECOND) The reviewer thought that there must already exist a standard library function that fulfills this functionality. After we had both satisfied ourselves that we hadn't simply missed something in the documentation, we decided that we had better raise the issue with a wider audience. Does this functionality already exist within the standard library? If not, would a datetime.timedelta.total_microseconds function be a reasonable addition? I would be happy to submit a patch for such a thing. Richard Belleville -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tahafut at gmail.com Thu Feb 14 00:23:38 2019 From: tahafut at gmail.com (Henry Chen) Date: Wed, 13 Feb 2019 21:23:38 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: Message-ID: Looks like timedelta has a microseconds property. Would this work for your needs? In [12]: d Out[12]: datetime.timedelta(0, 3, 398407) In [13]: d.microseconds Out[13]: 398407 On Wed, Feb 13, 2019 at 9:08 PM Richard Belleville via Python-Dev < python-dev at python.org> wrote: > In a recent code review, the following snippet was called out as > reinventing the > wheel: > > _MICROSECONDS_PER_SECOND = 1000000 > > > def _timedelta_to_microseconds(delta): > return int(delta.total_seconds() * _MICROSECONDS_PER_SECOND) > > > The reviewer thought that there must already exist a standard library > function > that fulfills this functionality. After we had both satisfied ourselves > that we > hadn't simply missed something in the documentation, we decided that we had > better raise the issue with a wider audience. > > Does this functionality already exist within the standard library? If not, > would > a datetime.timedelta.total_microseconds function be a reasonable addition? > I > would be happy to submit a patch for such a thing. > > Richard Belleville > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/tahafut%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tahafut at gmail.com Thu Feb 14 00:35:20 2019 From: tahafut at gmail.com (Henry Chen) Date: Wed, 13 Feb 2019 21:35:20 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: Message-ID: Oops. That isn't the TOTAL microseconds, but just the microseconds portion. Sorry for the confusion. 
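[Editorial note: for the archives, floor-dividing one timedelta by another yields an exact integer, which avoids both the float rounding in the original `total_seconds() * 1000000` snippet and the partial-value trap of the `microseconds` attribute Henry notes. A sketch, not an existing stdlib API:]

```python
from datetime import timedelta

def timedelta_to_microseconds(delta):
    # Exact: timedelta // timedelta returns an int (Python 3.2+), unlike
    # int(delta.total_seconds() * 1000000), which rounds through a float.
    return delta // timedelta(microseconds=1)

d = timedelta(days=0, seconds=3, microseconds=398407)
print(d.microseconds)                # 398407 -- only the sub-second part
print(timedelta_to_microseconds(d))  # 3398407 -- the total
```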
On Wed, Feb 13, 2019 at 9:23 PM Henry Chen wrote: > Looks like timedelta has a microseconds property. Would this work for your > needs? > > In [12]: d > Out[12]: datetime.timedelta(0, 3, 398407) > > In [13]: d.microseconds > Out[13]: 398407 > > On Wed, Feb 13, 2019 at 9:08 PM Richard Belleville via Python-Dev < > python-dev at python.org> wrote: > >> In a recent code review, the following snippet was called out as >> reinventing the >> wheel: >> >> _MICROSECONDS_PER_SECOND = 1000000 >> >> >> def _timedelta_to_microseconds(delta): >> return int(delta.total_seconds() * _MICROSECONDS_PER_SECOND) >> >> >> The reviewer thought that there must already exist a standard library >> function >> that fulfills this functionality. After we had both satisfied ourselves >> that we >> hadn't simply missed something in the documentation, we decided that we >> had >> better raise the issue with a wider audience. >> >> Does this functionality already exist within the standard library? If >> not, would >> a datetime.timedelta.total_microseconds function be a reasonable >> addition? I >> would be happy to submit a patch for such a thing. >> >> Richard Belleville >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/tahafut%40gmail.com >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jason.swails at gmail.com Thu Feb 14 00:57:36 2019 From: jason.swails at gmail.com (Jason Swails) Date: Thu, 14 Feb 2019 00:57:36 -0500 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> Message-ID: On Wed, Feb 13, 2019 at 10:26 AM Petr Viktorin wrote: > PEP 394 says: > > > This recommendation will be periodically reviewed over the next few > > years, and updated when the core development team judges it > > appropriate. As a point of reference, regular maintenance releases > > for the Python 2.7 series will continue until at least 2020. > > I think it's time for another review. > I'm especially worried about the implication of these: > > - If the `python` command is installed, it should invoke the same > version of Python as the `python2` command > - scripts that are deliberately written to be source compatible > with both Python 2.x and 3.x [...] may continue to use `python` on > their shebang line. > > So, to support scripts that adhere to the recommendation, Python 2 > needs to be installed :( > I literally just ran into this problem now. Part of a software suite I've written uses Python to fetch updates during the installation process. Due to the target audience, it needs to access the system Python (only), and support systems as old as RHEL 5 (Python 2.4 and later, including Python 3.x in the same code base, using nothing but the stdlib). The shebang line was "#!/usr/bin/env python" It's been working for years, but was only now reported broken by a user that upgraded their Ubuntu distribution and suddenly had no "python" executable anywhere. But they had python3. I suspect suddenly not having any "python" executable in a Linux system will screw up a lot more people than just me. The workaround was ugly. 
I'd like to see there always be a `python` executable available if any version of Python is installed. Thanks, Jason -- Jason M. Swails -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcepl at cepl.eu Thu Feb 14 02:08:48 2019 From: mcepl at cepl.eu (Matěj Cepl) Date: Thu, 14 Feb 2019 08:08:48 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> Message-ID: On 2019-02-13, 23:33 GMT, Barry Warsaw wrote: > Perhaps. I just don't think Python 4 is anything but distant > vaporware. There's a cost to freaking everyone out that > Python 4 is coming and will be as disruptive as Python 3. > Calling Python 3.9+1 Python 4 feeds into that FUD for no > reason that I can tell except for an aversion to two digit > minor version numbers. Is this relevant to the discussion at hand? We are talking about the binary /usr/bin/python3, which will surely be provided even by Python 4, won't it? Matěj -- https://matej.ceplovi.cz/blog/, Jabber: mcepl at ceplovi.cz GPG Finger: 3C76 A027 CA45 AD70 98B5 BC1D 7920 5802 880B C9D8 Reading after a certain age diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking, just as the man who spends too much time in the theater is tempted to be content with living vicariously instead of living his own life.
-- Albert Einstein to The Saturday Evening Post, October 1929 From solipsis at pitrou.net Thu Feb 14 03:41:20 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 14 Feb 2019 09:41:20 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <20190213230004.GV1834@ando.pearwood.info> Message-ID: <20190214094120.536c2b13@fsol> On Wed, 13 Feb 2019 17:32:44 -0800 Nathaniel Smith wrote: > On Wed, Feb 13, 2019 at 3:02 PM Steven D'Aprano wrote: > > > > On Wed, Feb 13, 2019 at 05:16:54PM -0500, Terry Reedy wrote: > > > > > It appears python is already python3 for a large majority of human users > > > (as opposed to machines). > > > > > > https://www.jetbrains.com/research/python-developers-survey-2018/ > > > Nearly 20000 valid responses, Oct-Nov. > > > > They may be valid responses, but we don't know if they are > > representative of the broader Python community. Its a self-selected > > survey of people which always makes the results statistically suspect. > > > > (By definition, an Internet survey eliminates responses from people who > > don't fill out surveys on the Internet.) > > > > BUt even if representative, this survey only tells us what version > > people are using, now how they invoke it. We can't conclude that the > > command "python" means Python 3 for these users. We simply don't know > > one way or another (and I personally wouldn't want to hazard a guess.) > > Can we gather data? What if pip started reporting info on how it was > run when contacting pypi? The most important information pip should report is whether it's running on a CI platform (should be doable by looking at a few environment variables, at least for the most popular platforms). Currently nobody knows what the PyPI download stats mean, because they could be 99% human or 99% CI. Regards Antoine. 
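[Editorial note: Antoine's CI-detection idea can be sketched via environment variables. The variable names below are common conventions (generic `CI`, Travis, AppVeyor, Jenkins); they are an illustrative guess, not a complete or official list.]

```python
import os

_CI_ENV_VARS = (
    "CI",                      # generic; set by Travis, GitLab CI, and others
    "CONTINUOUS_INTEGRATION",  # generic
    "TRAVIS",
    "APPVEYOR",
    "JENKINS_URL",
)

def looks_like_ci(environ=None):
    """Heuristic: does this environment look like a CI platform?"""
    env = os.environ if environ is None else environ
    return any(var in env for var in _CI_ENV_VARS)

print(looks_like_ci({"CI": "true"}))  # True
```

Something along these lines could let pip tag download requests as "probably CI" versus "probably human".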
From solipsis at pitrou.net Thu Feb 14 03:44:17 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 14 Feb 2019 09:44:17 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> Message-ID: <20190214094417.26f6a2e7@fsol> On Thu, 14 Feb 2019 00:57:36 -0500 Jason Swails wrote: > > I literally just ran into this problem now. Part of a software suite I've > written uses Python to fetch updates during the installation process. Due > to the target audience, it needs to access the system Python (only), and > support systems as old as RHEL 5 (Python 2.4 and later, including Python > 3.x in the same code base, using nothing but the stdlib). The shebang line > was "#!/usr/bin/env python" > > It's been working for years, but was only now reported broken by a user > that upgraded their Ubuntu distribution and suddenly had no "python" > executable anywhere. But they had python3. > > I suspect suddenly not having any "python" executable in a Linux system > will screw up a lot more people than just me. The workaround was ugly. I'm not sure what you mean. Isn't the workaround to install Python 2 in this case? Regards Antoine. 
From njs at pobox.com Thu Feb 14 03:51:43 2019 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 14 Feb 2019 00:51:43 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <20190214094120.536c2b13@fsol> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <20190213230004.GV1834@ando.pearwood.info> <20190214094120.536c2b13@fsol> Message-ID: On Thu, Feb 14, 2019 at 12:43 AM Antoine Pitrou wrote: > > On Wed, 13 Feb 2019 17:32:44 -0800 > Nathaniel Smith wrote: > > On Wed, Feb 13, 2019 at 3:02 PM Steven D'Aprano wrote: > > > > > > On Wed, Feb 13, 2019 at 05:16:54PM -0500, Terry Reedy wrote: > > > > > > > It appears python is already python3 for a large majority of human users > > > > (as opposed to machines). > > > > > > > > https://www.jetbrains.com/research/python-developers-survey-2018/ > > > > Nearly 20000 valid responses, Oct-Nov. > > > > > > They may be valid responses, but we don't know if they are > > > representative of the broader Python community. Its a self-selected > > > survey of people which always makes the results statistically suspect. > > > > > > (By definition, an Internet survey eliminates responses from people who > > > don't fill out surveys on the Internet.) > > > > > > BUt even if representative, this survey only tells us what version > > > people are using, now how they invoke it. We can't conclude that the > > > command "python" means Python 3 for these users. We simply don't know > > > one way or another (and I personally wouldn't want to hazard a guess.) > > > > Can we gather data? What if pip started reporting info on how it was > > run when contacting pypi? > > The most important information pip should report is whether it's > running on a CI platform (should be doable by looking at a few > environment variables, at least for the most popular platforms). 
> Currently nobody knows what the PyPI download stats mean, because they > could be 99% human or 99% CI. I agree :-) https://github.com/pypa/pip/issues/5499#issuecomment-406840712 That's kind of orthogonal to this discussion though. -n -- Nathaniel J. Smith -- https://vorpus.org From encukou at gmail.com Thu Feb 14 03:56:08 2019 From: encukou at gmail.com (Petr Viktorin) Date: Thu, 14 Feb 2019 09:56:08 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> Message-ID: On 2/13/19 4:24 PM, Petr Viktorin wrote: > I think it's time for another review. [...] > Please see this PR for details and a suggested change: > https://github.com/python/peps/pull/893 Summary of the thread so far. Antoine Pitrou noted that the PEP should acknowledge that there are now years of established usage of `python` as Python 3 for many conda users, often as the "main" Python. Victor Stinner expressed support for "python" being the latest Python version, citing PHP, Ruby, Perl; containers; mentions of "python" in our docs. Steve Dower later proposed concrete points on how to make "python" the default command: * our docs should say "python" consistently * we should recommend that distributors use the same workaround * our docs should describe the recommended workaround in any places people are likely to first encounter it (tutorial, sys.executable, etc.) Chris Barker added that "python3" should still be available, even if "python" is default. Barry Warsaw gave a +1 to making "python" default, noting that there were plans to change this when Python 2 is officially deprecated. But distros need to make decisions about 2020 now. Chris Barker noted that users won't see any discontinuity in 2020; that's just the date when support from the CPython devs ends. Victor pointed to discussions on 4.0 vs. 3.10.
(I'll ignore discussions on 4.0 in this summary.) Victor also posted some interesting info and links on Fedora and RHEL. There was a discussion on the PSF survey about how many people use Python 3. (I'll ignore this sub-thread, it's not really about the "python" command.) Steve noted that the Windows Store package of Python 3 provides "python", but he is still thinking how to make this reasonable/reliable in the full installer. Several people think "py" on Unix would be a good thing. Neil Schemenauer supposes we would encourage people to use it over "python"/"python2"/"python3", so "python" would be less of an issue. Neil Schemenauer is not opposed to making "python" configurable or eventually pointing it to Python 3. Jason Swails shared experience from running software with a "#!/usr/bin/env python" shebang on a system that didn't have Python 2 (and followed the PEP, so no "python" either). The workaround was ugly. ------------- Since this list is public, I'd like to remind all readers that it is full of people who work extensively with Python 3, and tend to drive it forward at any opportunity. (Myself included, but on this thread I'll leave the arguments to someone else; they're covered adequately.) Thoughts of Python developers are important, but we're not hearing any other voices. Perhaps everyone with a different opinion has already self-selected out. I don't know of a good place for this discussion, and I'm not a good person to give arguments to support the original "python" should be Python 2 direction. (But if I did, I imagine posting them here would feel a bit scary.) But I would not be surprised, or annoyed, if the Council had a private discussion and pronounced "No, sorry, not yet". Anyway, how should this be decided? Where should it be discussed?
From ronaldoussoren at mac.com Thu Feb 14 04:46:43 2019
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Thu, 14 Feb 2019 10:46:43 +0100
Subject: [Python-Dev] Adding test.support.safe_rmpath()
In-Reply-To: References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com>
Message-ID: <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com>

Twitter: @ronaldoussoren
Blog: https://blog.ronaldoussoren.net/

> On 13 Feb 2019, at 16:10, Giampaolo Rodola' wrote:
> On Wed, Feb 13, 2019 at 2:27 PM Ronald Oussoren wrote:
>> On 13 Feb 2019, at 13:24, Giampaolo Rodola' wrote:
>>
>> Hello,
>> after discovering os.makedirs() has no unit-tests (https://bugs.python.org/issue35982 ) I was thinking about working on a PR to increase the test coverage of fs-related os.* functions. In order to do so I think it would be useful to add a convenience function to "just delete something if it exists", regardless if it's a file, directory, directory tree, etc., and include it into test.support module.
>
> Something like shutil.rmtree() with ignore_errors=True?
>
> shutil.rmtree() is about directories and can't be used against files. support.rmpath() would take a path (meaning anything) and try to remove it.

You're right. I usually use shutil.rmtree for tests that need to create temporary files, and create a temporary directory for those files (that is, use tempfile.mkdtemp in setUp() and use shutil.rmtree in tearDown()). That way I don't have to adjust house-keeping code when I make changes to test code.

Ronald
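The mkdtemp-in-setUp() / rmtree-in-tearDown() pattern Ronald describes can be sketched as below. This is an illustration with made-up names, not code from the thread:

```python
import os
import shutil
import tempfile
import unittest


class TempDirTestCase(unittest.TestCase):
    """Each test gets a fresh directory; house-keeping code never changes."""

    def setUp(self):
        self.workdir = tempfile.mkdtemp()

    def tearDown(self):
        # ignore_errors=True: a half-removed tree must not fail the run
        shutil.rmtree(self.workdir, ignore_errors=True)

    def test_write_file(self):
        path = os.path.join(self.workdir, "example.txt")
        with open(path, "w") as f:
            f.write("data")
        self.assertTrue(os.path.exists(path))
```

Because the whole tree is removed in tearDown(), individual tests never need their own cleanup calls, which is the point Ronald makes.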
From stephane at wirtel.be Thu Feb 14 05:00:46 2019
From: stephane at wirtel.be (Stephane Wirtel)
Date: Thu, 14 Feb 2019 11:00:46 +0100
Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems
In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com>
Message-ID: <20190214100046.GA29072@xps>

Hi Petr,

I would like to add this issue from the devguide where I ask if we need to use python or python3 in the documentation. https://github.com/python/devguide/issues/208

-- Stéphane Wirtel - https://wirtel.be - @matrixise

From encukou at gmail.com Thu Feb 14 07:30:26 2019
From: encukou at gmail.com (Petr Viktorin)
Date: Thu, 14 Feb 2019 13:30:26 +0100
Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems
In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com>
Message-ID:

On 2/13/19 5:45 PM, Victor Stinner wrote:
> Some more context about Petr's change, Fedora, RHEL and Red Hat. [...]

Fedora could switch "python" to Python 3 now*, if the PEP changes to allow it.

* "now" has a release date of around October 2019. The next release after that should then be around May 2020.

From jason.swails at gmail.com Thu Feb 14 07:35:49 2019
From: jason.swails at gmail.com (Jason Swails)
Date: Thu, 14 Feb 2019 07:35:49 -0500
Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems
In-Reply-To: <20190214094417.26f6a2e7@fsol> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190214094417.26f6a2e7@fsol>
Message-ID:

> On Feb 14, 2019, at 3:44 AM, Antoine Pitrou wrote:
> On Thu, 14 Feb 2019 00:57:36 -0500 Jason Swails wrote:
>>
>> I literally just ran into this problem now. Part of a software suite I've
>> written uses Python to fetch updates during the installation process.
>> Due to the target audience, it needs to access the system Python (only), and
>> support systems as old as RHEL 5 (Python 2.4 and later, including Python
>> 3.x in the same code base, using nothing but the stdlib). The shebang line
>> was "#!/usr/bin/env python"
>>
>> It's been working for years, but was only now reported broken by a user
>> that upgraded their Ubuntu distribution and suddenly had no "python"
>> executable anywhere. But they had python3.
>>
>> I suspect suddenly not having any "python" executable in a Linux system
>> will screw up a lot more people than just me. The workaround was ugly.
>
> I'm not sure what you mean. Isn't the workaround to install Python 2
> in this case?

I release the software, so the problem is not my machine, it's others'. The installation process also fetches a local miniconda distribution for the Python utilities that are part of the program suite (and the python programs are optional and typically not installed when this suite is deployed on a supercomputer, for instance). But the software needs to check for updates before it does any of that (hence my concern: this script needs to be able to run before the user does *anything* else, including installing dependencies).

This would also be the first time we'd have to give different installation instructions for different versions of the same Linux distro. The workaround from a user's perspective is simple for me, but I can't make that same assumption for all of my users. This is an impediment to keeping the user experience as simple as possible.

Thanks,
Jason
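A version-agnostic bootstrap script like the one Jason describes has to locate an interpreter itself. A minimal, hypothetical helper (not code from his suite) might look like this; note that shutil.which() itself needs Python 3.3+, so this only illustrates the lookup order, and a script with Jason's Python 2.4 constraint would have to do the probing in shell before Python starts:

```python
import shutil


def find_python(candidates=("python3", "python", "python2"),
                which=shutil.which):
    """Return the path of the first interpreter found on PATH, or None.

    ``which`` is injectable so the lookup order can be tested without
    depending on what happens to be installed on the current machine.
    """
    for name in candidates:
        path = which(name)
        if path:
            return path
    return None
```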
From doko at ubuntu.com Thu Feb 14 08:11:05 2019
From: doko at ubuntu.com (Matthias Klose)
Date: Thu, 14 Feb 2019 14:11:05 +0100
Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems
In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol>
Message-ID:

On 13.02.19 17:20, Victor Stinner wrote:
> Hi,
>
> I'm a (strong) supporter of providing a "python" command which would
> be the latest Python version!

This very much depends on what is working with the latest Python version, and on how many third-party packages your distro has to support. It doesn't have to be the newest version.

> As php does nowadays (after previous issues with "php4" vs "php5".) I
> don't recall that perl had "perl4" vs "perl5", the command was always
> "perl", no? Same for Ruby: it was still "ruby" for Ruby 2, no?
> Only Python and PHP used different program names depending on the
> language version, no? And PHP now moved back to a single "php"
> program.

it's not only upstreams using that kind of versioned naming; distros are doing that to ease the pain for larger transitions.

> In the container and virtualenv era, it's now easy to get your
> favorite Python version for the "python" command.
>
> On my Windows VM, "python" is Python 3.7 :-) In virtual environments,
> "python" can also be Python 3 as well.

maybe the PEP should recommend to have python3 in virtual environments as well?
Matthias

From doko at ubuntu.com Thu Feb 14 08:35:33 2019
From: doko at ubuntu.com (Matthias Klose)
Date: Thu, 14 Feb 2019 14:35:33 +0100
Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems
In-Reply-To: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com>
Message-ID:

On 13.02.19 16:24, Petr Viktorin wrote:
> PEP 394 says:
>
>> This recommendation will be periodically reviewed over the next few
>> years, and updated when the core development team judges it
>> appropriate. As a point of reference, regular maintenance releases
>> for the Python 2.7 series will continue until at least 2020.
>
> I think it's time for another review.
> I'm especially worried about the implication of these:
>
> - If the `python` command is installed, it should invoke the same
>   version of Python as the `python2` command
> - scripts that are deliberately written to be source compatible
>   with both Python 2.x and 3.x [...] may continue to use `python` on
>   their shebang line.
>
> So, to support scripts that adhere to the recommendation, Python 2
> needs to be installed :(

Debian's concern about pointing python to python3 is that it will break software after an upgrade. The current state still seems to be that Debian doesn't want to ship a python symlink after the Python2 removal.

For Ubuntu, I'm not sure if I want a python executable at all, because there is not much progress in handling more than one python installation, so just using python3 for the distro sounds fine. pypi.org now recommends unconditionally installing with pip, and pip is still happy to modify system-installed packages when asked, messing around with the distro packages. But those users probably install their own python symlink anyway.
For the Ubuntu 20.04 LTS release and the Debian bullseye release (maybe 2021), I am trying to make sure that the python shebang isn't used by distro packages anymore (either by removing python2 altogether, or by using the python2/python2.7 shebangs).

Matthias

From paul at ganssle.io Thu Feb 14 09:04:20 2019
From: paul at ganssle.io (Paul Ganssle)
Date: Thu, 14 Feb 2019 09:04:20 -0500
Subject: [Python-Dev] datetime.timedelta total_microseconds
In-Reply-To: References: Message-ID: <1cfc2984-216c-fdc7-7ea2-692662d93971@ganssle.io>

I don't think it's totally unreasonable to have other total_X() methods, where X would be days, hours, minutes and microseconds, but it also doesn't seem like a pressing need to me.

I think the biggest argument against it is that they are all trivial to implement as necessary, because they're just unit conversions that involve multiplication or division by constants, which is nowhere near as complicated to implement as the original `total_seconds` method. Here's the issue where total_seconds() was implemented; it doesn't seem like there was any discussion of other total methods until after the issue was closed: https://bugs.python.org/issue5788

I think the main issue is how "thick" we want the timedelta class to be. With separate methods for every unit, we have to maintain and document 5 methods instead of 1, though the methods are trivial and the documentation could maybe be shared.

If I had a time machine, I'd probably recommend an interface something like this:

    def total_duration(self, units='seconds'):
        return self._total_seconds() / _SECONDS_PER_UNIT[units]

I suppose it would be possible to move to that interface today, though I think it would be mildly confusing to have two functions that do the same thing (total_seconds and total_duration), which may not be worth it considering that these functions are a pretty minor convenience.
Best,
Paul

On 2/14/19 12:05 AM, Richard Belleville via Python-Dev wrote:
> In a recent code review, the following snippet was called out as reinventing the
> wheel:
>
> _MICROSECONDS_PER_SECOND = 1000000
>
> def _timedelta_to_microseconds(delta):
>     return int(delta.total_seconds() * _MICROSECONDS_PER_SECOND)
>
> The reviewer thought that there must already exist a standard library function
> that fulfills this functionality. After we had both satisfied ourselves that we
> hadn't simply missed something in the documentation, we decided that we had
> better raise the issue with a wider audience.
>
> Does this functionality already exist within the standard library? If not, would
> a datetime.timedelta.total_microseconds function be a reasonable addition? I
> would be happy to submit a patch for such a thing.
>
> Richard Belleville
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/paul%40ganssle.io
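Paul's hypothetical total_duration() interface could equally be built on timedelta division rather than a constants table. A self-contained sketch (a module-level function instead of a method; names are illustrative, not a proposed stdlib API):

```python
from datetime import timedelta

# One-unit deltas to divide by; dividing two timedeltas yields a float.
_UNIT = {
    "days": timedelta(days=1),
    "hours": timedelta(hours=1),
    "minutes": timedelta(minutes=1),
    "seconds": timedelta(seconds=1),
    "microseconds": timedelta(microseconds=1),
}


def total_duration(delta, units="seconds"):
    """Express ``delta`` as a float number of ``units``."""
    return delta / _UNIT[units]
```

Unsupported unit names fail with a KeyError, which keeps the sketch honest about what it can convert.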
From ericsnowcurrently at gmail.com Thu Feb 14 09:25:42 2019
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Thu, 14 Feb 2019 07:25:42 -0700
Subject: [Python-Dev] Adding test.support.safe_rmpath()
In-Reply-To: <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com> References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com>
Message-ID:

On Thu, Feb 14, 2019, 02:47 Ronald Oussoren via Python-Dev <python-dev at python.org> wrote:
> I usually use shutil.rmtree for tests that need to create temporary files,
> and create a temporary directory for those files (that is, use
> tempfile.mkdtemp in setUp() and use shutil.rmtree in tearDown()). That way
> I don't have to adjust house-keeping code when I make changes to test code.

Same here.

-eric

From g.rodola at gmail.com Thu Feb 14 09:56:52 2019
From: g.rodola at gmail.com (Giampaolo Rodola')
Date: Thu, 14 Feb 2019 15:56:52 +0100
Subject: [Python-Dev] Adding test.support.safe_rmpath()
In-Reply-To: References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com>
Message-ID:

On Thu, Feb 14, 2019 at 3:25 PM Eric Snow wrote:
> On Thu, Feb 14, 2019, 02:47 Ronald Oussoren via Python-Dev <python-dev at python.org> wrote:
>> I usually use shutil.rmtree for tests that need to create temporary
>> files, and create a temporary directory for those files (that is, use
>> tempfile.mkdtemp in setUp() and use shutil.rmtree in tearDown()). That way
>> I don't have to adjust house-keeping code when I make changes to test code.
>
> Same here.
>
> -eric

What I generally do is avoid relying on tempfile.mkdtemp() and always use TESTFN instead.
I think it's cleaner as a paradigm because it's an incentive to not pollute the single unit tests with `self.addCleanup()` instructions (the whole cleanup logic is always supposed to occur in setUp/tearDown):

    TESTFN = support.TESTFN
    TESTFN2 = TESTFN + '2'

    class FilesystemTest(unittest.TestCase):

        def setUp(self):
            remove_file_or_dir(TESTFN)
            remove_file_or_dir(TESTFN2)

        tearDown = setUp

        def test_mkdir(self):
            ...

        def test_listdir(self):
            ...

        def test_rename(self):
            ...

-- Giampaolo - http://grodola.blogspot.com

From mail at timgolden.me.uk Thu Feb 14 10:03:47 2019
From: mail at timgolden.me.uk (Tim Golden)
Date: Thu, 14 Feb 2019 15:03:47 +0000
Subject: [Python-Dev] Adding test.support.safe_rmpath()
In-Reply-To: References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com>
Message-ID: <6083f67a-8413-fc40-0118-cfab2284ae2a@timgolden.me.uk>

On 14/02/2019 14:56, Giampaolo Rodola' wrote:
> On Thu, Feb 14, 2019 at 3:25 PM Eric Snow wrote:
> > On Thu, Feb 14, 2019, 02:47 Ronald Oussoren via Python-Dev wrote:
> > > I usually use shutil.rmtree for tests that need to create
> > > temporary files, and create a temporary directory for those
> > > files (that is, use tempfile.mkdtemp in setUp() and use
> > > shutil.rmtree in tearDown()). That way I don't have to adjust
> > > house-keeping code when I make changes to test code.
> >
> > Same here.
> >
> > -eric
>
> What I generally do is avoid relying on tempfile.mkdtemp() and always
> use TESTFN instead. I think it's cleaner as a paradigm because it's an
> incentive to not pollute the single unit tests with `self.addCleanup()`
> instructions (the whole cleanup logic is always supposed to occur in
> setUp/tearDown):

Must chime in here because I've been pushing (variously months & years ago) to move *away* from TESTFN because it generates numerous intermittent errors on my Windows setup.
I've had several goes at starting to do that but a combination of my own lack of time plus some people's reluctance to go that route altogether has stalled the thing.

I'm not sure I understand the difference in cleanup/teardown terms between using tempfile and using TESTFN. The objections I've seen from people (apart, obviously, from test churn) are to do with building up testing temp artefacts on a possibly low-sized disk.

TJG

From g.rodola at gmail.com Thu Feb 14 10:24:41 2019
From: g.rodola at gmail.com (Giampaolo Rodola')
Date: Thu, 14 Feb 2019 16:24:41 +0100
Subject: [Python-Dev] Adding test.support.safe_rmpath()
In-Reply-To: <6083f67a-8413-fc40-0118-cfab2284ae2a@timgolden.me.uk> References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com> <6083f67a-8413-fc40-0118-cfab2284ae2a@timgolden.me.uk>
Message-ID:

On Thu, Feb 14, 2019 at 4:03 PM Tim Golden wrote:
> On 14/02/2019 14:56, Giampaolo Rodola' wrote:
> > On Thu, Feb 14, 2019 at 3:25 PM Eric Snow wrote:
> > > On Thu, Feb 14, 2019, 02:47 Ronald Oussoren via Python-Dev wrote:
> > > > I usually use shutil.rmtree for tests that need to create
> > > > temporary files, and create a temporary directory for those
> > > > files (that is, use tempfile.mkdtemp in setUp() and use
> > > > shutil.rmtree in tearDown()). That way I don't have to adjust
> > > > house-keeping code when I make changes to test code.
> > >
> > > Same here.
> > >
> > > -eric
> >
> > What I generally do is avoid relying on tempfile.mkdtemp() and always
> > use TESTFN instead. I think it's cleaner as a paradigm because it's an
> > incentive to not pollute the single unit tests with `self.addCleanup()`
> > instructions (the whole cleanup logic is always supposed to occur in
> > setUp/tearDown):
>
> Must chime in here because I've been pushing (variously months & years
> ago) to move *away* from TESTFN because it generates numerous
> intermittent errors on my Windows setup.
I've had several goes at > starting to do that but a combination of my own lack of time plus some > people's reluctance to go that route altogether has stalled the thing. > > I'm not sure I understand the difference in cleanup/teardown terms > between using tempfile and using TESTFN. The objections I've seen from > people (apart, obviously, from test churn) are to do with building up > testing temp artefacts on a possibly low-sized disk. > > TJG > I suppose you mean the intermittent failures are usually due to "file is already in use by another process" correct? test.support's unlink(), rmdir() and rmtree() functions already implement a retry-with-timeout logic in order to prevent this issue. I suppose when this issue may still occur, though, is when the file/handle is held by another process, meaning that the unit-test probably forgot to terminate()/wait() a subprocess or should have used support.read_children(). In summary, my approach is more "strict" because it implies that unit-tests always do a proper cleanup. tempfile.mkdtemp() may prevent failures but it may hide a unit-test which doesn't do a proper file/dir cleanup and should have been fixed instead. The drawback in practical terms is that orphaned test files are left behind. Extra: an argument in favor of using tempfile.mkdtemp() instead of TESTFN is parallel testing, but I think we're not using it. -- Giampaolo - http://grodola.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed... 
From mail at timgolden.me.uk Thu Feb 14 10:31:42 2019
From: mail at timgolden.me.uk (Tim Golden)
Date: Thu, 14 Feb 2019 15:31:42 +0000
Subject: [Python-Dev] Adding test.support.safe_rmpath()
In-Reply-To: References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com> <6083f67a-8413-fc40-0118-cfab2284ae2a@timgolden.me.uk>
Message-ID: <55edeb6e-8a22-200f-22de-538da20ba55c@timgolden.me.uk>

On 14/02/2019 15:24, Giampaolo Rodola' wrote:
> On Thu, Feb 14, 2019 at 4:03 PM Tim Golden wrote:
> > On 14/02/2019 14:56, Giampaolo Rodola' wrote:
> > > On Thu, Feb 14, 2019 at 3:25 PM Eric Snow wrote:
> > > > On Thu, Feb 14, 2019, 02:47 Ronald Oussoren via Python-Dev wrote:
> > > > > I usually use shutil.rmtree for tests that need to create
> > > > > temporary files, and create a temporary directory for those
> > > > > files (that is, use tempfile.mkdtemp in setUp() and use
> > > > > shutil.rmtree in tearDown()). That way I don't have to adjust
> > > > > house-keeping code when I make changes to test code.
> > > >
> > > > Same here.
> > > >
> > > > -eric
> > >
> > > What I generally do is avoid relying on tempfile.mkdtemp() and always
> > > use TESTFN instead. I think it's cleaner as a paradigm because it's an
> > > incentive to not pollute the single unit tests with `self.addCleanup()`
> > > instructions (the whole cleanup logic is always supposed to occur in
> > > setUp/tearDown):
> >
> > Must chime in here because I've been pushing (variously months & years
> > ago) to move *away* from TESTFN because it generates numerous
> > intermittent errors on my Windows setup. I've had several goes at
> > starting to do that but a combination of my own lack of time plus some
> > people's reluctance to go that route altogether has stalled the thing.
> >
> > I'm not sure I understand the difference in cleanup/teardown terms
> > between using tempfile and using TESTFN.
> > The objections I've seen from
> > people (apart, obviously, from test churn) are to do with building up
> > testing temp artefacts on a possibly low-sized disk.
> >
> > TJG
>
> I suppose you mean the intermittent failures are usually due to "file is
> already in use by another process" correct? test.support's unlink(),

Occasionally (and those are usually down to a poorly-handled cleanup). More commonly it's due to residual share-delete handles on those files, probably from indexing & virus checkers or TortoiseXXX cache handlers. Obviously I can (and to some extent do) try to mitigate those issues.

In short: reusing the same filepath over and over for tests which are run in quick succession doesn't seem like a good idea usually. That's commonly what TESTFN-based tests do (some do; some don't).

I'm 100% with you on strict clean-up, not leaving testing turds behind, not over-complicating simple tests with lots of framework. All that. But -- however it's done -- I'd prefer to move away from the test-global TESTFN approach. I'm not at my dev box atm so can't pick out examples but I definitely have some :)

I have no issue with your proposal here: better and simpler cleanup is A Good Thing. But it won't solve the problem of re-using the same test filepath again and again.

TJG

From j.orponen at 4teamwork.ch Thu Feb 14 10:39:30 2019
From: j.orponen at 4teamwork.ch (Joni Orponen)
Date: Thu, 14 Feb 2019 16:39:30 +0100
Subject: [Python-Dev] Adding test.support.safe_rmpath()
In-Reply-To: <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com> References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com>
Message-ID:

On Thu, Feb 14, 2019 at 10:49 AM Ronald Oussoren via Python-Dev <python-dev at python.org> wrote:
> I usually use shutil.rmtree for tests that need to create temporary files,
> and create a temporary directory for those files (that is, use
> tempfile.mkdtemp in setUp() and use shutil.rmtree in tearDown()).
> That way
> I don't have to adjust house-keeping code when I make changes to test code.

As tempfile provides context managers, should these be used internally for something like this? Provide a decorator which passes in the temp file / directory.

-- Joni Orponen

From vstinner at redhat.com Thu Feb 14 10:49:52 2019
From: vstinner at redhat.com (Victor Stinner)
Date: Thu, 14 Feb 2019 16:49:52 +0100
Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems
In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com>
Message-ID:

On Thu, Feb 14, 2019 at 14:38, Matthias Klose wrote:
> Debian's concern about pointing python to python3 is that it will break software
> after an upgrade. The current state still seems to be that Debian doesn't
> want to ship a python symlink after the Python2 removal.

The other, safer alternative is to start providing a "py" launcher on Unix as well. Since it's something new, it's perfectly fine to decide from the start to make it point to the latest Python version by default.

Victor
-- Night gathers, and now my watch begins. It shall not end until my death.

From sorin.sbarnea at gmail.com Thu Feb 14 07:46:35 2019
From: sorin.sbarnea at gmail.com (Sorin Sbarnea)
Date: Thu, 14 Feb 2019 12:46:35 +0000
Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems
In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol>
Message-ID:

I am glad this resurfaced, as back in September I proposed updating that very old PEP but I got rejected. https://github.com/python/peps/pull/785

The main issue is that most distros will not fix it until the PEP is refreshed, because most of them do want to follow PEPs.

There is still hope.
Cheers Sorin > On 13 Feb 2019, at 16:20, Victor Stinner wrote: > > Hi, > > I'm a (strong) supporter of providing a "python" command which would > be the latest Python version! > > As php does nowadays (after previous issues with "php4" vs "php5".) I > don't recall that perl had "perl4" vs "perl5", the command was always > "perl", no? Same for Ruby: it was still "ruby" after for Ruby 2, no? > Only Python and PHP used different program names depending on the > language version, no? And PHP now moved back to a single "php" > program. > > In the container and virtualenv era, it's now easy to get your > favorite Python version for the "python" command. > > On my Windows VM, "python" is Python 3.7 :-) In virtual environments, > "python" can also be Python 3 as well. > > I recall that I saw commands using "python" rather than "python3" in > the *official* Python 3 documentation: see examples below (*). > Problem: On Windows, "python" is the right command. "python3" doesn't > work (doesn't exist) on Windows. Should we write the doc for Windows > or for Unix? Oooops. > > There was an interesting discussion about the Python version following > Python 3.9: Python 3.10 or Python 4? And what are the issues which > would make us prefer 3.10 rather than 4.0? > https://mail.python.org/pipermail/python-committers/2018-September/006152.html > > One practical issue is that right now, six.PY3 is defined by "PY3 = > sys.version_info[0] == 3" and so "if six.PY3:" will be false on Python > 4. 
> Another interesting thing to mention is the Unix Python launcher
> ("py") written by Brett Cannon in Rust:
> https://github.com/brettcannon/python-launcher
>
> (*) A few examples of "python" commands in the Python official documentation
>
> "$ python prog.py -h"
> https://docs.python.org/dev/library/argparse.html
>
> "$ python logctx.py"
> https://docs.python.org/dev/howto/logging-cookbook.html
>
> "python setup.py install"
> https://docs.python.org/dev/install/index.html
>
> "python --help"
> https://docs.python.org/dev/howto/argparse.html
>
> "python setup.py build"
> https://docs.python.org/dev/extending/building.html
>
> "exec python $0 ${1+"$@"}"
> https://docs.python.org/dev/faq/library.html
>
> "python setup.py --help build_ext"
> https://docs.python.org/dev/distutils/configfile.html
>
> Victor
>
> On Wed, Feb 13, 2019 at 16:49, Antoine Pitrou wrote:
>>
>> On Wed, 13 Feb 2019 16:24:48 +0100 Petr Viktorin wrote:
>>> PEP 394 says:
>>>
>>>> This recommendation will be periodically reviewed over the next few
>>>> years, and updated when the core development team judges it
>>>> appropriate. As a point of reference, regular maintenance releases
>>>> for the Python 2.7 series will continue until at least 2020.
>>>
>>> I think it's time for another review.
>>> I'm especially worried about the implication of these:
>>>
>>> - If the `python` command is installed, it should invoke the same
>>>   version of Python as the `python2` command
>>> - scripts that are deliberately written to be source compatible
>>>   with both Python 2.x and 3.x [...] may continue to use `python` on
>>>   their shebang line.
>>>
>>> So, to support scripts that adhere to the recommendation, Python 2
>>> needs to be installed :(
>>
>> I think PEP 394 should acknowledge that there are now years of
>> established usage of `python` as Python 3 for many conda users.
>>
>> Regards
>>
>> Antoine.
> -- Night gathers, and now my watch begins. It shall not end until my death.

From barry at python.org Thu Feb 14 12:28:48 2019
From: barry at python.org (Barry Warsaw)
Date: Thu, 14 Feb 2019 09:28:48 -0800
Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems
In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol>
Message-ID: <22F2AD71-5694-4EB5-9C21-8B965654A481@python.org>

On Feb 13, 2019, at 23:08, Matěj Cepl wrote:
> Is this relevant to the discussion at hand? We are talking about
> the binary /usr/bin/python3 which will surely be provided
> even by Python 4, won't it?

Why would it be? Since this is all hypothetical anyway, I'd more likely expect to only ship /usr/bin/python.

-Barry
From brett at python.org Thu Feb 14 12:48:40 2019
From: brett at python.org (Brett Cannon)
Date: Thu, 14 Feb 2019 09:48:40 -0800
Subject: [Python-Dev] Adding test.support.safe_rmpath()
In-Reply-To: References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com> <6083f67a-8413-fc40-0118-cfab2284ae2a@timgolden.me.uk>
Message-ID:

On Thu, Feb 14, 2019 at 7:26 AM Giampaolo Rodola' wrote:
> On Thu, Feb 14, 2019 at 4:03 PM Tim Golden wrote:
>> On 14/02/2019 14:56, Giampaolo Rodola' wrote:
>> > On Thu, Feb 14, 2019 at 3:25 PM Eric Snow wrote:
>> > > On Thu, Feb 14, 2019, 02:47 Ronald Oussoren via Python-Dev wrote:
>> > > > I usually use shutil.rmtree for tests that need to create
>> > > > temporary files, and create a temporary directory for those
>> > > > files (that is, use tempfile.mkdtemp in setUp() and use
>> > > > shutil.rmtree in tearDown()). That way I don't have to adjust
>> > > > house-keeping code when I make changes to test code.
>> > >
>> > > Same here.
>> > >
>> > > -eric
>> >
>> > What I generally do is avoid relying on tempfile.mkdtemp() and always
>> > use TESTFN instead. I think it's cleaner as a paradigm because it's an
>> > incentive to not pollute the single unit tests with `self.addCleanup()`
>> > instructions (the whole cleanup logic is always supposed to occur in
>> > setUp/tearDown):
>>
>> Must chime in here because I've been pushing (variously months & years
>> ago) to move *away* from TESTFN because it generates numerous
>> intermittent errors on my Windows setup. I've had several goes at
>> starting to do that but a combination of my own lack of time plus some
>> people's reluctance to go that route altogether has stalled the thing.
>>
>> I'm not sure I understand the difference in cleanup/teardown terms
>> between using tempfile and using TESTFN.
>> The objections I've seen from
>> people (apart, obviously, from test churn) are to do with building up
>> testing temp artefacts on a possibly low-sized disk.
>>
>> TJG
>
> I suppose you mean the intermittent failures are usually due to "file is
> already in use by another process", correct? test.support's unlink(),
> rmdir() and rmtree() functions already implement a retry-with-timeout logic
> in order to prevent this issue. I suppose this issue may still occur,
> though, when the file/handle is held by another process, meaning that
> the unit-test probably forgot to terminate()/wait() a subprocess or should
> have used support.reap_children(). In summary, my approach is more "strict"
> because it implies that unit-tests always do a proper cleanup.
> tempfile.mkdtemp() may prevent failures but it may hide a unit-test which
> doesn't do a proper file/dir cleanup and should have been fixed instead.
> The drawback in practical terms is that orphaned test files are left behind.
>
> Extra: an argument in favor of using tempfile.mkdtemp() instead of TESTFN
> is parallel testing, but I think we're not using it.

With -j you can do parallel testing and I know I always run with that on. But TESTFN does *attempt* to account for that by using the PID in the name.

Having said that, I do use tempfile like Eric, Ronald, and Tim when I write tests, as I have real-world experience with tempfile, so I usually remember to clean up; TESTFN is internal to the Python stdlib only, and I have to remember that it won't clean itself up.
URL: From brett at python.org Thu Feb 14 12:59:06 2019 From: brett at python.org (Brett Cannon) Date: Thu, 14 Feb 2019 09:59:06 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> Message-ID: On Thu, Feb 14, 2019 at 7:50 AM Victor Stinner wrote: > Le jeu. 14 févr. 2019 à 14:38, Matthias Klose a écrit : > > Debian's concern about pointing python to python3 is that it will break > software > > after an upgrade. The current state seems to still be the same: Debian > doesn't > > want to ship a python symlink after the Python2 removal. > > The other safer alternative is to start to provide "py" launcher on > Unix as well. Since it's something new, it's perfectly fine to decide > from the start to make it point to the latest Python version by > default. > Since it has come up a couple of times and in case people are curious, the Python Launcher for UNIX is currently available at https://crates.io/crates/python-launcher and the basics are there. I have one more key feature to implement -- `py --list` -- before I view it as having all the basics in place. Once I have --list done I will be trying to tackle the hard issue of how to tie in things like PyPy or non-PATH-installed interpreters into the launcher (which, since it is configuration, people will bikeshed on forever about, so maybe I should ignore people and solve it quickly ;) . -------------- next part -------------- An HTML attachment was scrubbed...
URL: From alexander.belopolsky at gmail.com Thu Feb 14 13:12:00 2019 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 14 Feb 2019 13:12:00 -0500 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: <1cfc2984-216c-fdc7-7ea2-692662d93971@ganssle.io> References: <1cfc2984-216c-fdc7-7ea2-692662d93971@ganssle.io> Message-ID: On Thu, Feb 14, 2019 at 9:07 AM Paul Ganssle wrote: > I don't think it's totally unreasonable to have other total_X() methods, > where X would be days, hours, minutes and microseconds > I do. I was against adding the total_seconds() method to begin with because the same effect can be achieved with

delta / timedelta(seconds=1)

this is easily generalized to

delta / timedelta(X=1)

where X can be days, hours, minutes or microseconds. -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at ganssle.io Thu Feb 14 13:17:26 2019 From: paul at ganssle.io (Paul Ganssle) Date: Thu, 14 Feb 2019 13:17:26 -0500 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <1cfc2984-216c-fdc7-7ea2-692662d93971@ganssle.io> Message-ID: <3e46f177-bc42-71d3-01f1-5e1868d4a155@ganssle.io> Ah yes, good point, I forgot about this because IIRC it's not supported in Python 2.7, so it's not a particularly common idiom in polyglot library code. Obviously any new methods would be Python 3-only, so there's no benefit to adding them. Best, Paul On 2/14/19 1:12 PM, Alexander Belopolsky wrote: > > > On Thu, Feb 14, 2019 at 9:07 AM Paul Ganssle > wrote: > > I don't think it's totally unreasonable to have other total_X() > methods, where X would be days, hours, minutes and microseconds > > I do. I was against adding the total_seconds() method to begin with > because the same effect can be achieved with > > delta / timedelta(seconds=1) > > this is easily generalized to > > delta / timedelta(X=1) > > where X can be days, hours, minutes or microseconds.
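The division idiom described in this exchange can be sketched as follows. True division of one timedelta by another returns a float, while floor division by a "unit" timedelta yields an exact integer count of that unit; this is a minimal illustration only, with variable names that are not from the thread:

```python
from datetime import timedelta

delta = timedelta(days=1, seconds=5, microseconds=42)

# True division by a "unit" timedelta generalizes total_seconds()
# to any unit (seconds, minutes, hours, days, microseconds).
assert delta / timedelta(seconds=1) == delta.total_seconds()

# Floor division gives an exact integer count of whole units, with
# no floating-point round-off even for very large deltas.
micros = delta // timedelta(microseconds=1)
assert micros == 86_405_000_042  # 1 day + 5 s + 42 us, exactly
```

The floor-division form is attractive precisely because it never round-trips through a float, so a hypothetical total_microseconds() built on it would stay exact.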
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From gjcarneiro at gmail.com Thu Feb 14 13:20:11 2019 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Thu, 14 Feb 2019 18:20:11 +0000 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> Message-ID: On Thu, 14 Feb 2019 at 15:52, Victor Stinner wrote: > Le jeu. 14 févr. 2019 à 14:38, Matthias Klose a écrit : > > Debian's concern about pointing python to python3 is that it will break > software > > after an upgrade. The current state seems to still be the same: Debian > doesn't > > want to ship a python symlink after the Python2 removal. > > The other safer alternative is to start to provide "py" launcher on > Unix as well. Since it's something new, it's perfectly fine to decide > from the start to make it point to the latest Python version by > default. > While I like very much the idea of having `py` as a command, does it really need to be a wrapper command? Why can't it simply be a symlink? /usr/bin/py -> /usr/bin/python3 I worry about (1) startup time overhead of starting another process, (2) added complexity of learning about py's additional command-line options; we don't really need them, imho. > Victor > -- > Night gathers, and now my watch begins. It shall not end until my death. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/gjcarneiro%40gmail.com > -- Gustavo J. A. M. Carneiro Gambit Research "The universe is always one step beyond logic."
-- Frank Herbert -------------- next part -------------- An HTML attachment was scrubbed... URL: From eelizondo at fb.com Thu Feb 14 14:01:36 2019 From: eelizondo at fb.com (Eddie Elizondo) Date: Thu, 14 Feb 2019 19:01:36 +0000 Subject: [Python-Dev] An update on heap-allocated types Message-ID: <94D97A63-B1AD-4E4B-A451-2ECB7DBF8D3A@fb.com> I'll be adding a change to move the Py_INCREF of heap allocated types from PyType_GenericAlloc to PyObject_Init. You can follow the discussion and/or add comments to: https://bugs.python.org/issue35810. This change will make types created through PyType_FromSpec behave like classes in managed code, thus making CPython much safer. Without this change, there are a couple of edge cases where the use of PyObject_{,GC}_New{,Var} does not correctly increase the refcount. This leads to weird behavior, especially when migrating types from PyType_Ready to PyType_FromSpec. For example, consider a static type with tp_new = NULL and tp_dealloc = NULL. This type initializes instances through PyObject_New and never increases the type's refcount. tp_dealloc will be a no-op since it's NULL and it's a static type. When this type is migrated to PyType_FromSpec, tp_dealloc will now inherit subtype_dealloc which decrefs the type. This leads to a crash. For the vast majority of existing code this should not have a visible side effect. And, at worst, this will only cause some type to become immortal. I've added instructions in the "Porting to Python 3.8" section to correctly account for this new incref along with examples. In general, there are only two cases that would require any modification: 1. If the type creates instances through PyObject_{,GC}_New{,Var} and the type manually increfs afterwards. The fix here is to remove that manual incref. 2. If the type has a custom tp_dealloc and it's not decrefing the type. The fix here is that a custom tp_dealloc should ALWAYS decref the type.
Open to feedback/discussion so feel free to reply if you have any questions! - Eddie Elizondo -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Thu Feb 14 14:38:25 2019 From: steve.dower at python.org (Steve Dower) Date: Thu, 14 Feb 2019 11:38:25 -0800 Subject: [Python-Dev] Is distutils.util.get_platform() the "current" or the "target" platform Message-ID: <7f0f615b-fab5-0504-b1bf-20b3c9cd0402@python.org> As part of adding ARM32 support for Windows, we need to enable cross-compilation in distutils. This is easy enough, though it requires somehow getting the target platform as well as the current platform. Right now, the change at https://github.com/python/cpython/pull/11774 adds a get_target_platform() function for this and updates (as far as I can tell) all uses of get_platform() to use this instead. I would rather just change get_platform() to return the target platform. The current docs are somewhat vague on exactly what this function does, and I suspect have been mostly written from an "always build from source" mentality that may have implied, but not explicitly considered, cross-compilation. https://docs.python.org/3/distutils/apiref.html#distutils.util.get_platform "Return a string that identifies the current platform. This is used mainly to distinguish platform-specific build directories and platform-specific built distributions." So it says "current" platform, explicitly says that "os.uname()" is the source, but then goes on to say: "For non-POSIX platforms, currently just returns sys.platform." Which is incorrect, as sys.platform is always "win32", but get_platform() already returns "win-amd64" for 64-bit builds. And also: "For Mac OS X systems the OS version reflects the minimal version on which binaries will run (that is, the value of MACOSX_DEPLOYMENT_TARGET during the build of Python), not the OS version of the current system."
So it seems like this function is already returning "the default platform that should be used when building extensions" - ignoring bugs in libraries that monkeypatch distutils, the "--plat-name" option should entirely override the return value of this function. Given this, does it seem to be okay to have it determine and return the target platform rather than the host platform? Right now, that would only affect the new target of building for win-arm32, but I would also like to update the documentation to make it more about how this value should be used rather than where it comes from. Any objections or concerns? Cheers, Steve From greg at krypto.org Thu Feb 14 14:47:38 2019 From: greg at krypto.org (Gregory P. Smith) Date: Thu, 14 Feb 2019 11:47:38 -0800 Subject: [Python-Dev] Is distutils.util.get_platform() the "current" or the "target" platform In-Reply-To: <7f0f615b-fab5-0504-b1bf-20b3c9cd0402@python.org> References: <7f0f615b-fab5-0504-b1bf-20b3c9cd0402@python.org> Message-ID: On Thu, Feb 14, 2019 at 11:38 AM Steve Dower wrote: > As part of adding ARM32 support for Windows, we need to enable > cross-compilation in distutils. This is easy enough, though it requires > somehow getting the target platform as well as the current platform. > > Right now, the change at https://github.com/python/cpython/pull/11774 > adds a get_target_platform() function for this and updates (as far as I > can tell) all uses of get_platform() to use this instead. I would rather > just change get_platform() to return the target platform. > > The current docs are somewhat vague on exactly what this function does, > and I suspect have been mostly written from an "always build from > source" mentality that may have implied, but not explicitly considered > cross-compilation. > > https://docs.python.org/3/distutils/apiref.html#distutils.util.get_platform > > "Return a string that identifies the current platform. 
This is used > mainly to distinguish platform-specific build directories and > platform-specific built distributions." > > So it says "current" platform, explicitly says that "os.uname()" is the > source, but then goes on to say: > > "For non-POSIX platforms, currently just returns sys.platform." > > Which is incorrect, as sys.platform is always "win32", but > get_platform() already returns "win-amd64" for 64-bit builds. > > And also: > > "For Mac OS X systems the OS version reflects the minimal version on > which binaries will run (that is, the value of MACOSX_DEPLOYMENT_TARGET > during the build of Python), not the OS version of the current system." > > So it seems like this function is already returning "the default > platform that should be used when building extensions" - ignoring bugs > in libraries that monkeypatch distutils, the "--plat-name" option should > entirely override the return value of this function. > > Given this, does it seem to be okay to have it determine and return the > target platform rather than the host platform? Right now, that would > only affect the new target of building for win-arm32, but I would also > like to update the documentation to make it more about how this value > should be used rather than where it comes from. > > Any objections or concerns? > To alleviate confusion long term I'd love it if we could deprecate the unqualified get_platform() API and point people towards always being explicit about get_target_platform() vs get_current_platform(). There are valid reasons for people to be expecting either target or current return values from get_platform(), but I agree with you, having it return the target platform *feels* more likely to be what people want. It'd be worth auditing a random sample of people's calls of this API in open source projects to confirm that intuition. -gps -------------- next part -------------- An HTML attachment was scrubbed...
URL: From larry at hastings.org Thu Feb 14 21:29:32 2019 From: larry at hastings.org (Larry Hastings) Date: Thu, 14 Feb 2019 18:29:32 -0800 Subject: [Python-Dev] Proposed dates for Python 3.4.10 and Python 3.5.7 Message-ID: Howdy howdy! It's time to make the next bugfix release of 3.5--and the /final/ release /ever/ of Python 3.4. Here's the schedule I propose: 3.4.10rc1 and 3.5.7rc1 - Saturday March 2 2019 3.4.10 final and 3.5.7 final - Saturday March 16 2019 What's going in these releases? Not much. I have two outstanding PRs against 3.5: bpo-33127 GH-10994: Compatibility patch for LibreSSL 2.7.0 bpo-34623 GH-9933: XML_SetHashSalt in _elementtree and one PR against 3.4: bpo-34623 GH-9953: Use XML_SetHashSalt in _elementtree I expect to merge all three of those, I just need to get around to it. There's one more recent security fix (bpo-35746, GH-11569) that I want in these releases that still needs backporting. And that's the entire list. bpo-34623 is the only current release blocker for either 3.4 or 3.5--I'm not aware of anything else in the pipeline. If you have anything you think needs to go into the next 3.5, or the final 3.4, and it's /not/ listed above, please either file a GitHub PR, file a release-blocker bug on bpo, or email me directly. Good night sweet Python 3.4, and flights of angels sing thee to thy rest! //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhroncok at redhat.com Fri Feb 15 06:01:15 2019 From: mhroncok at redhat.com (=?UTF-8?Q?Miro_Hron=c4=8dok?=) Date: Fri, 15 Feb 2019 12:01:15 +0100 Subject: [Python-Dev] Proposed dates for Python 3.4.10 and Python 3.5.7 In-Reply-To: References: Message-ID: <4e9b1cc3-0ed1-d86d-fcf6-84f4abc18066@redhat.com> On 15. 02. 19 3:29, Larry Hastings wrote: > If you have > anything you think needs to go into the next 3.5, or the final 3.4, and it's > /not/ listed above, please either file a GitHub PR, file a release-blocker bug > on bpo, or email me directly.
I've checked Fedora CVE bugs against python 3.4 and 3.5. Here is one missing fix I found: CVE-2018-20406 https://bugs.python.org/issue34656 memory exhaustion in Modules/_pickle.c:1393 Marked as resolved, but I don't see it fixed on 3.5 or 3.4. Should we get it fixed? openSUSE AFAIK has backported the patch. -- Miro Hrončok -- Phone: +420777974800 IRC: mhroncok From vstinner at redhat.com Fri Feb 15 06:28:45 2019 From: vstinner at redhat.com (Victor Stinner) Date: Fri, 15 Feb 2019 12:28:45 +0100 Subject: [Python-Dev] Proposed dates for Python 3.4.10 and Python 3.5.7 In-Reply-To: <4e9b1cc3-0ed1-d86d-fcf6-84f4abc18066@redhat.com> References: <4e9b1cc3-0ed1-d86d-fcf6-84f4abc18066@redhat.com> Message-ID: Hi, Le ven. 15 févr. 2019 à 12:07, Miro Hrončok a écrit : > I've checked Fedora CVE bugs against python 3.4 and 3.5. Here is one missing fix I > found: > > CVE-2018-20406 https://bugs.python.org/issue34656 > memory exhaustion in Modules/_pickle.c:1393 > Marked as resolved, but I don't see it fixed on 3.5 or 3.4. > > Should we get it fixed? openSUSE AFAIK has backported the patch. I'm working on fixes :-) I had a draft email but you were faster than me to post yours. Le ven. 15 févr. 2019 à 03:29, Larry Hastings a écrit : > What's going in these releases? Not much.
I have two outstanding PRs against 3.5: > > bpo-33127 GH-10994: Compatibility patch for LibreSSL 2.7.0 > bpo-34623 GH-9933: XML_SetHashSalt in _elementtree According to my tool tracking security fixes, 3.5 lacks fixes for: https://python-security.readthedocs.io/vuln/ssl-crl-dps-dos.html https://python-security.readthedocs.io/vuln/pickle-load-dos.html https://python-security.readthedocs.io/vuln/xml-pakage-ignore-environment.html > and one PR against 3.4: > > bpo-34623 GH-9953: Use XML_SetHashSalt in _elementtree and 3.4 lacks fixes for: https://python-security.readthedocs.io/vuln/ssl-crl-dps-dos.html https://python-security.readthedocs.io/vuln/pickle-load-dos.html => Matej Cepl backported the change to 3.4, but the patch should be converted into a PR https://python-security.readthedocs.io/vuln/xml-pakage-ignore-environment.html Victor -- Night gathers, and now my watch begins. It shall not end until my death. From vstinner at redhat.com Fri Feb 15 07:05:26 2019 From: vstinner at redhat.com (Victor Stinner) Date: Fri, 15 Feb 2019 13:05:26 +0100 Subject: [Python-Dev] Proposed dates for Python 3.4.10 and Python 3.5.7 In-Reply-To: References: <4e9b1cc3-0ed1-d86d-fcf6-84f4abc18066@redhat.com> Message-ID: I wrote fixes: Le ven. 15 févr. 2019 à
12:28, Victor Stinner a écrit : > https://python-security.readthedocs.io/vuln/ssl-crl-dps-dos.html 3.5: https://github.com/python/cpython/pull/11867 3.4: https://github.com/python/cpython/pull/11868 > https://python-security.readthedocs.io/vuln/pickle-load-dos.html 3.5: https://github.com/python/cpython/pull/11869 3.4: https://github.com/python/cpython/pull/11870 > https://python-security.readthedocs.io/vuln/xml-pakage-ignore-environment.html 3.5: https://github.com/python/cpython/pull/11871 3.4: https://github.com/python/cpython/pull/11872 It would be nice if someone could review these PRs to help Larry ;-) Victor From g.rodola at gmail.com Fri Feb 15 07:44:07 2019 From: g.rodola at gmail.com (Giampaolo Rodola') Date: Fri, 15 Feb 2019 13:44:07 +0100 Subject: [Python-Dev] Adding test.support.safe_rmpath() In-Reply-To: References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com> <6083f67a-8413-fc40-0118-cfab2284ae2a@timgolden.me.uk> Message-ID: On Thu, Feb 14, 2019 at 6:48 PM Brett Cannon wrote: > > With -j you can do parallel testing and I know I always run with that on. > But TESTFN does *attempt* to account for that by using the PID in the name. > Good to know, thanks. TESTFN aside, I was more interested in knowing if there's interest in landing something like this in test.support:

def rmpath(path):
    "Try to remove a path regardless of its type."
    if os.path.isdir(path):
        test.support.rmtree(path)
    elif os.path.exists(path):
        test.support.unlink(path)

-- Giampaolo - http://grodola.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stephane at wirtel.be Fri Feb 15 11:13:30 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Fri, 15 Feb 2019 17:13:30 +0100 Subject: [Python-Dev] Python 3.8.0a1 with sqlite 3.27.1 -> OK Message-ID: <20190215161330.GA17161@xps> Hi all, I wanted to test with the new version of SQLite3 3.27.1 and there is no issue (compiled with a debian:latest docker image and the last version of 3.27.1). Sorry, it's not a bug, I wanted to inform you there is no issue with the last stable version of SQLite3. Have a nice week-end, Stéphane root at 3b7c342683ff:/src# ./python -m test test_sqlite -v == CPython 3.8.0a1+ (default, Feb 15 2019, 16:05:54) [GCC 6.3.0 20170516] == Linux-4.20.7-200.fc29.x86_64-x86_64-with-glibc2.17 little-endian == cwd: /src/build/test_python_15721 == CPU count: 4 == encodings: locale=UTF-8, FS=utf-8 Run tests sequentially 0:00:00 load avg: 1.99 [1/1] test_sqlite test_sqlite: testing with version '2.6.0', sqlite_version '3.27.1' ... Ran 288 tests in 0.553s OK (skipped=3) == Tests result: SUCCESS == 1 test OK. Total duration: 654 ms Tests result: SUCCESS -- Stéphane Wirtel - https://wirtel.be - @matrixise From xdegaye at gmail.com Fri Feb 15 11:57:38 2019 From: xdegaye at gmail.com (Xavier de Gaye) Date: Fri, 15 Feb 2019 17:57:38 +0100 Subject: [Python-Dev] buildbottest on Android emulator with docker Message-ID: The following command runs the buildbottest on an Android emulator with docker (it will use a little bit more than 11 GB): $ docker run -it --privileged xdegaye/abifa:r14b-24-x86_64-master This command does: * pull an image from the Docker hub (only the first time that the command is run, note that this is a 2 GB download !) and start a container * pull the latest changes from the GitHub cpython repository and cross-compile python * start an Android emulator and install python on it * run the buildbottest target of the cpython Makefile The image is built from a Dockerfile [2].
This same image can also be used with the 'bash' command line argument to enter bash in the container and run python interactively on the emulator [1]. If the 'docker run' command also sets a bind mount to a local cpython repository, then it is possible to develop/debug/fix python on the emulator running in this container using one's own clone of cpython. [1] documentation at https://xdegaye.gitlab.io/abifa/docker.html [2] Dockerfile at https://gitlab.com/xdegaye/abifa/blob/master/docker/Dockerfile.r14b-24-x86_64-master Xavier From steve.dower at python.org Fri Feb 15 12:43:13 2019 From: steve.dower at python.org (Steve Dower) Date: Fri, 15 Feb 2019 09:43:13 -0800 Subject: [Python-Dev] Adding test.support.safe_rmpath() In-Reply-To: References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com> <6083f67a-8413-fc40-0118-cfab2284ae2a@timgolden.me.uk> Message-ID: <1c094bac-a1d1-aaaf-4e3a-0f110c0dfd4f@python.org> On 14Feb.2019 0948, Brett Cannon wrote: > On Thu, Feb 14, 2019 at 7:26 AM Giampaolo Rodola' > wrote: > Extra: an argument in favor of using tempfile.mkdtemp() instead of > TESTFN is parallel testing, but I think we're not using it. > > > With -j you can do parallel testing and I know I always run with that > on. But TESTFN does /attempt /to account for that > > by using the PID in the name. > > Having said that, I do use tempfile like Eric, Ronald, and Tim when I > write tests as I have real-world experience using tempfile so I usually > remember to clean up versus TESTFN which is Python stdlib internals only > and I have to remember that it won't clean up itself. I spend a decent amount of time rewriting tests to use TESTFN, since that enables us to keep all test files constrained to either a default directory or the one specified by --tempdir (which is a relatively recent addition, I'll grant, but it's been useful for improving test performance and stability when the default TEMP locations are not reliable - e.g. 
if $tmp is a mount point it breaks some tests, if it's a symlink it breaks others, if it's got particular permissions it breaks others again, etc.). That said, I'd love to have a context manager that we can use to make this easier. Really, none of us should be having to decide "how am I going to use a temporary location on the file system in my test", because we should have one obvious (and easy!) way to do it. But please, don't keep reinventing the functions we already have in test.support for doing this (unless you're putting better versions in test.support!) Cheers, Steve From vstinner at redhat.com Fri Feb 15 12:51:58 2019 From: vstinner at redhat.com (Victor Stinner) Date: Fri, 15 Feb 2019 18:51:58 +0100 Subject: [Python-Dev] OpenSSL 1.1.1 fixes merged into Python 2.7 Message-ID: Hi, I reviewed and merged pull requests written by my colleague Charalampos Stratakis to backport OpenSSL 1.1.1 fixes into the future Python 2.7.16. Benjamin Peterson (Python 2.7 release manager) wrote me: "I would very much like to see 1.1.1 support in a Python 2.7 release." These changes are backports of Python 3.6 changes written by Christian Heimes. With these changes, Python 2.7 becomes more secure and should be closer to Python 3.6 security. I apologize for merging these changes late in 2.7.16 devcycle, but we were very busy with higher priority issues :-( I hope that 2.7.16 release candidate will provide enough time to test properly these changes (and fix regressions if any). So far, I'm only aware of one issue on one specific buildbot worker, but I'm not sure that the test failures are regressions caused by merged ssl changes (the worker was offline for 1 month for an unknown reason): https://bugs.python.org/issue33570 Summary of the ssl changes: (*) ssl.SSLContext is now created with secure default values. 
The options OP_NO_COMPRESSION, OP_CIPHER_SERVER_PREFERENCE, OP_SINGLE_DH_USE, OP_SINGLE_ECDH_USE, OP_NO_SSLv2 (except for PROTOCOL_SSLv2), OP_NO_SSLv3 (except for PROTOCOL_SSLv3) are set by default. The initial cipher suite list contains only "HIGH" ciphers, no "NULL" ciphers and no "MD5" ciphers (except for PROTOCOL_SSLv2). (*) OpenSSL 1.1.1 has TLS 1.3 cipher suites enabled by default. The suites cannot be disabled with SSLContext.set_ciphers(). (*) Add a new ssl.OP_ENABLE_MIDDLEBOX_COMPAT constant (*) Tools/ssl/multissltests.py has been updated for OpenSSL 1.1.1. I merged 4 changes into 2.7: commit c49f63c1761ce03df7850b9e0b31a18c432dac64 Author: stratakis Date: Fri Feb 15 14:17:12 2019 +0100 [2.7] bpo-33570: TLS 1.3 ciphers for OpenSSL 1.1.1 (GH-6976) (GH-8760) (GH-10607) Change TLS 1.3 cipher suite settings for compatibility with OpenSSL 1.1.1-pre6 and newer. OpenSSL 1.1.1 will have TLS 1.3 cipers enabled by default. Also update multissltests to test with latest OpenSSL. Signed-off-by: Christian Heimes . (cherry picked from commit 3e630c541b35c96bfe5619165255e559f577ee71) Co-authored-by: Christian Heimes commit b8eaec697a2b5d9d2def2950a0aa50e8ffcf1059 Author: stratakis Date: Fri Feb 15 15:24:11 2019 +0100 [2.7] bpo-28043: improved default settings for SSLContext (GH-10608) The options OP_NO_COMPRESSION, OP_CIPHER_SERVER_PREFERENCE, OP_SINGLE_DH_USE, OP_SINGLE_ECDH_USE, OP_NO_SSLv2 (except for PROTOCOL_SSLv2), and OP_NO_SSLv3 (except for PROTOCOL_SSLv3) are set by default. The initial cipher suite list contains only HIGH ciphers, no NULL ciphers and MD5 ciphers (except for PROTOCOL_SSLv2). 
(cherry picked from commit 358cfd426ccc0fcd6a7940d306602138e76420ae) commit 28eb87f4f558952f259fada7be1ab5b31b8a91ef (upstream/2.7, 2.7) Author: stratakis Date: Fri Feb 15 17:18:58 2019 +0100 Fixup from test_ssl test_default_ecdh_curve (GH-11877) Partial backport from cb5b68abdeb1b1d56c581d5b4d647018703d61e3 Co-authored-by: Christian Heimes commit 2149a9ad7a9d39d7d680ec0fb602042c91057484 (HEAD -> 2.7, upstream/2.7) Author: stratakis Date: Fri Feb 15 18:27:44 2019 +0100 [2.7] bpo-32947: Fixes for TLS 1.3 and OpenSSL 1.1.1 (GH-8761) (GH-11876) Backport of TLS 1.3 related fixes from 3.7. Misc fixes and workarounds for compatibility with OpenSSL 1.1.1 from git master and TLS 1.3 support. With OpenSSL 1.1.1, Python negotiates TLS 1.3 by default. Some test cases only apply to TLS 1.2. OpenSSL 1.1.1 has added a new option OP_ENABLE_MIDDLEBOX_COMPAT for TLS 1.3. The feature is enabled by default for maximum compatibility with broken middle boxes. Users should be able to disable the hack and CPython's test suite needs it to verify default options Signed-off-by: Christian Heimes (cherry picked from commit 2a4ee8aa01d61b6a9c8e9c65c211e61bdb471826) And there is a minor multissltests update that is going to be merged as well: https://github.com/python/cpython/pull/11879 Victor -- Night gathers, and now my watch begins. It shall not end until my death. From zachary.ware+pydev at gmail.com Fri Feb 15 13:01:40 2019 From: zachary.ware+pydev at gmail.com (Zachary Ware) Date: Fri, 15 Feb 2019 12:01:40 -0600 Subject: [Python-Dev] Adding test.support.safe_rmpath() In-Reply-To: <1c094bac-a1d1-aaaf-4e3a-0f110c0dfd4f@python.org> References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com> <6083f67a-8413-fc40-0118-cfab2284ae2a@timgolden.me.uk> <1c094bac-a1d1-aaaf-4e3a-0f110c0dfd4f@python.org> Message-ID: On Fri, Feb 15, 2019 at 11:44 AM Steve Dower wrote: > That said, I'd love to have a context manager that we can use to make > this easier.
Really, none of us should be having to decide "how am I > going to use a temporary location on the file system in my test", > because we should have one obvious (and easy!) way to do it. I found an old rejected issue [1] for adding a `tmpdir` method to unittest.TestCase, which is actually a solution that we've independently developed and use frequently for work. It basically works by registering a cleanup function before returning the path to the temporary directory, so you just call `self.tmpdir()`, use the path, forget about cleanup, and don't lose a level of indentation to a context manager. I think it would be worthwhile to reconsider this addition to unittest, or add it as a standard base test class in test.support (though either way it would need a cleaner and more robust implementation than is offered in that issue). [1] https://bugs.python.org/issue2156 -- Zach From rob.cliffe at btinternet.com Fri Feb 15 14:44:39 2019 From: rob.cliffe at btinternet.com (Rob Cliffe) Date: Fri, 15 Feb 2019 19:44:39 +0000 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: Message-ID: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> A function with "microseconds" in the name IMO misleadingly suggests that it has something closer to microsecond accuracy than a 1-second granularity. Rob Cliffe On 14/02/2019 05:05:54, Richard Belleville via Python-Dev wrote: > In a recent code review, the following snippet was called out as > reinventing the wheel:
>
> _MICROSECONDS_PER_SECOND = 1000000
>
> def _timedelta_to_microseconds(delta):
>     return int(delta.total_seconds() * _MICROSECONDS_PER_SECOND)
>
> The reviewer thought that there must already exist a standard library > function > that fulfills this functionality. After we had both satisfied > ourselves that we > hadn't simply missed something in the documentation, we decided that > we had > better raise the issue with a wider audience.
> > Does this functionality already exist within the standard library? If > not, would > a datetime.timedelta.total_microseconds function be a reasonable > addition? I > would be happy to submit a patch for such a thing. > > Richard Belleville > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/rob.cliffe%40btinternet.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Feb 15 16:48:57 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 15 Feb 2019 13:48:57 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> References: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> Message-ID: On Fri, Feb 15, 2019 at 11:58 AM Rob Cliffe via Python-Dev < python-dev at python.org> wrote: > A function with "microseconds" in the name IMO misleadingly suggests that > it has something closer to microsecond accuracy than a 1-second granularity. > It sure does, but `delta.total_seconds()` is a float, so microsecond accuracy is preserved. However, if you DO want a "timedelta_to_microseconds" function, it really should use the microseconds field in the timedelta object. I haven't thought it through, but it makes me nervous to convert to floating point, and then back again -- for some large values of timedelta some precision may be lost. Also: _MICROSECONDS_PER_SECOND = 1000000 really? why in the world would you define a constant for something that simple that can never change?
(and probably isn't used in more than one place anyway). As Alexander pointed out the canonical way to spell this would be: delta / timedelta(microseconds=1) but I think that is less than obvious to the usual user, so I think a: delta.total_microseconds() would be a reasonable addition. I know I use .total_seconds() quite a bit, and would not want to have to spell it: delta / timedelta(seconds=1) (and can't do that in py2 anyway) -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Fri Feb 15 17:02:43 2019 From: steve.dower at python.org (Steve Dower) Date: Fri, 15 Feb 2019 14:02:43 -0800 Subject: [Python-Dev] Is distutils.util.get_platform() the "current" or the "target" platform In-Reply-To: References: <7f0f615b-fab5-0504-b1bf-20b3c9cd0402@python.org> Message-ID: On 14Feb2019 1147, Gregory P. Smith wrote: > To alleviate confusion long term I'd love it if we could deprecate the > unqualified get_platform() API and point people towards always being > explicit about get_target_platform() vs get_current_platform(). This is an option too, though it doesn't reduce the code churn. I personally want to consider distutils deprecated as a whole anyway, and only maintained for the sake of our core needs. > There are valid reasons for people to be expecting either target or > current return values from get_platform(), but I agree with you, having > it return the target platform /feels/ more likely to be what people > want. It'd be worth auditing a random sample of people's calls of this > API in open source projects to confirm that intuition.
I took a random sample of about 50 uses from GitHub and 100% of them were copies of our distutils/tests/test_util.py (not even kidding: https://github.com/search?q=distutils+get_platform&type=Code) If you go far enough down the results, they're all copies of wheel's (or pip's) pep425tags.py, which import distutils.util but don't seem to use get_platform I'm inclined to say that nobody but us uses this API :) Does that make it seem more okay to "clarify" that it's returning target platform? Cheers, Steve From greg at krypto.org Fri Feb 15 17:23:09 2019 From: greg at krypto.org (Gregory P. Smith) Date: Fri, 15 Feb 2019 14:23:09 -0800 Subject: [Python-Dev] Is distutils.util.get_platform() the "current" or the "target" platform In-Reply-To: References: <7f0f615b-fab5-0504-b1bf-20b3c9cd0402@python.org> Message-ID: On Fri, Feb 15, 2019 at 2:02 PM Steve Dower wrote: > On 14Feb2019 1147, Gregory P. Smith wrote: > > To alleviate confusion long term I'd love it if we could deprecate the > > unqualified get_platform() API and point people towards always being > > explicit about get_target_platform() vs get_current_platform(). > > This is an option too, though it doesn't reduce the code churn. I > personally want to consider distutils deprecated as a whole anyway, and > only maintained for the sake of our core needs. > > > There are valid reasons for people to be expecting either target or > > current return values from get_platform(), but I agree with you, having > > it return the target platform /feels/ more likely to be what people > > want. It'd be worth auditing a random sample of people's calls of this > > API in open source projects to confirm that intuition. 
> > I took a random sample of about 50 uses from GitHub and 100% of them > were copies of our distutils/tests/test_util.py (not even kidding: > https://github.com/search?q=distutils+get_platform&type=Code) > > If you go far enough down the results, they're all copies of wheel's (or > pip's) pep425tags.py, which import distutils.util but don't seem to use > get_platform > > I'm inclined to say that nobody but us uses this API :) Does that make > it seem more okay to "clarify" that it's returning target platform? > All of the instances of its use that I can find in a quick search (excluding copies/clones/forks of other code) are using it to mean target platform as well. So yeah, I'd just go with that assumption. -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at ganssle.io Fri Feb 15 17:23:43 2019 From: paul at ganssle.io (Paul Ganssle) Date: Fri, 15 Feb 2019 17:23:43 -0500 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> Message-ID: <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io> I'm still with Alexander on this. I see functions like total_X as basically putting one of the arguments directly in the function name - it should be `total_duration(units)`, not `total_units()`, because all of those functions do the same thing and only differ in the units they use. But Alexander's approach of "divide it by the base unit" is /even more general/ than this, because it allows you to use non-traditional units like weeks (timedelta(days=7)) or "two-day periods" or whatever you want. If you use this idiom a lot and want a simple "calculate the total" function, this should suffice: def total_duration(td, *args, **kwargs): return td / timedelta(*args, **kwargs) Then you can spell "x.total_microseconds()" as: total_duration(x, microseconds=1) Or you can write it like this: def total_duration(td, units='seconds'):
return td / timedelta(**{units: 1}) In which case it would be spelled: total_duration(x, units='microseconds') I don't see there being any compelling reason to add a bunch of methods for a marginal (and I'd say arguable) gain in aesthetics. On 2/15/19 4:48 PM, Chris Barker via Python-Dev wrote: > On Fri, Feb 15, 2019 at 11:58 AM Rob Cliffe via Python-Dev > > wrote: > > A function with "microseconds" in the name IMO misleadingly > suggests that it has something closer to microsecond accuracy than > a 1-second granularity. > > > it sure does, but `delta.total_seconds()` is a float, so ms accuracy > is preserved. > > However, if you DO want a "timedelta_to_microseconds" function, it > really should use the microseconds field in the timedelta object. I > haven't thought it through, but it makes me nervous to convert to > floating point, and then back again -- for some large values of > timedelta some precision may be lost. > > Also: > >> _MICROSECONDS_PER_SECOND = 1000000 > > really? why in the world would you define a constant for something > that simple that can never change? (and probably isn't used in more > than one place anyway) > > As Alexander pointed out the canonical way to spell this would be: > > delta / timedelta(microseconds=1) > > but I think that is less than obvious to the usual user, so I think a: > > delta.total_microseconds() > > would be a reasonable addition. > > I know I use .total_seconds() quite a bit, and would not want to have > to spell it: > > delta / timedelta(seconds=1) > > (and can't do that in py2 anyway) > > -CHB > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317
main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/paul%40ganssle.io -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From brett at python.org Fri Feb 15 17:35:44 2019 From: brett at python.org (Brett Cannon) Date: Fri, 15 Feb 2019 14:35:44 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> Message-ID: On Thu, Feb 14, 2019 at 10:21 AM Gustavo Carneiro wrote: > > > On Thu, 14 Feb 2019 at 15:52, Victor Stinner wrote: > >> Le jeu. 14 févr. 2019 à 14:38, Matthias Klose a écrit : >> > Debian's concern about pointing python to python3 is that it will break >> software >> > after an upgrade. The current state still seems to be the same: >> Debian doesn't >> > want to ship a python symlink after the Python2 removal. >> >> The other safer alternative is to start to provide "py" launcher on >> Unix as well. Since it's something new, it's perfectly fine to decide >> from the start to make it point to the latest Python version by >> default. >> > > While I like very much the idea of having `py` as command, does it really > need to be a wrapper command? Why can't it simply be a symlink? > > /usr/bin/py -> /usr/bin/python3 > Because that is not guaranteed to be the *latest* version of Python 3, just the *last* version installed or the *first* one that happens to be on PATH. > > I worry about (1) startup time overhead of starting another process, > It's being implemented in Rust, uses execv(), etc.
The initial design is such that it is meant to minimize overhead such that you should worry more about what you import at startup than using the Python launcher if you're that concerned with startup performance. :) But honestly, you don't *have* to use the launcher; it's just for convenience. > (2) added complexity of learning about py's additional command-line > options, we don't really need them, imho. > There's only 2 more and they only work in the first position, so the cognitive overhead is extremely low. In my experience after using 'py' on Windows I consistently miss it on UNIX now, so to me there is enough of a benefit that I will continue to chip away at the project until it's done regardless of whether anyone else uses it. :) -Brett > > >> Victor >> -- >> Night gathers, and now my watch begins. It shall not end until my death. >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/gjcarneiro%40gmail.com >> > > > -- > Gustavo J. A. M. Carneiro > Gambit Research > "The universe is always one step beyond logic." -- Frank Herbert > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tahafut at gmail.com Fri Feb 15 17:50:42 2019 From: tahafut at gmail.com (Henry Chen) Date: Fri, 15 Feb 2019 14:50:42 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io> References: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io> Message-ID: Indeed there is a potential loss of precision: _timedelta_to_microseconds(timedelta(0, 1, 1)) returns 1000000 where conversion function is defined according to the initial message in this thread On Fri, Feb 15, 2019 at 2:29 PM Paul Ganssle wrote: > I'm still with Alexander on this. I see functions like total_X as > basically putting one of the arguments directly in the function name - it > should be `total_duration(units)`, not `total_units()`, because all of > those functions do the same thing and only differ in the units they use. > > But Alexander's approach of "divide it by the base unit" is *even more > general* than this, because it allows you to use non-traditional units > like weeks (timedelta(days=7)) or "two-day periods" or whatever you want. > If you use this idiom a lot and want a simple "calculate the total" > function, this should suffice: > > def total_duration(td, *args, **kwargs): > return td / timedelta(*args, **kwargs) > > Then you can spell "x.total_microseconds()" as: > > total_duration(x, microseconds=1) > > Or you can write it like this: > > def total_duration(td, units='seconds'): > return td / timedelta(**{units: 1}) > > In which case it would be spelled: > > total_duration(x, units='microseconds') > > I don't see there being any compelling reason to add a bunch of methods > for a marginal (and I'd say arguable) gain in aesthetics. 
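[The precision loss reported at the top of this message is reproducible; a quick sketch contrasting the float-based helper from the start of the thread with exact integer floor division:]

```python
from datetime import timedelta

_MICROSECONDS_PER_SECOND = 1000000

def _timedelta_to_microseconds(delta):
    # The float-based conversion from the start of the thread.
    return int(delta.total_seconds() * _MICROSECONDS_PER_SECOND)

delta = timedelta(0, 1, 1)  # one second and one microsecond

lossy = _timedelta_to_microseconds(delta)   # 1000000: int() truncates the float product
exact = delta // timedelta(microseconds=1)  # 1000001: computed entirely in integers
```

[The float path loses the last microsecond because the product 1.000001 * 10**6 rounds to just under 1000001 in binary floating point.]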
> On 2/15/19 4:48 PM, Chris Barker via Python-Dev wrote: > > On Fri, Feb 15, 2019 at 11:58 AM Rob Cliffe via Python-Dev < > python-dev at python.org> wrote: > >> A function with "microseconds" in the name IMO misleadingly suggests that >> it has something closer to microsecond accuracy than a 1-second granularity. >> > > it sure does, but `delta.total_seconds()` is a float, so ms accuracy is > preserved. > > However, if you DO want a "timedelta_to_microseconds" function, it really > should use the microseconds field in the timedelta object. I haven't > thought it through, but it makes me nervous to convert to floating point, > and then back again -- for some large values of timedelta some precision > may be lost. > > Also: > > _MICROSECONDS_PER_SECOND = 1000000 > > > really? why in the world would you define a constant for something that > simple that can never change? (and probably isn't used in more than one > place anyway > > As Alexander pointed out the canonical way to spell this would be: > > delta / timedelta(microseconds=1) > > but I think that is less than obvious to the usual user, so I think a: > > delta.total_microseconds() > > would be a reasonable addition. > > I know I use .totalseconds() quite a bit, and would not want to have to > spell it: > > delta / timedelta(seconds=1) > > (and can't do that in py2 anyway) > > -CHB > > -- > > Christopher Barker, Ph.D. 
> Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > Python-Dev mailing listPython-Dev at python.orghttps://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/paul%40ganssle.io > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/tahafut%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Feb 15 18:13:35 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 15 Feb 2019 15:13:35 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> Message-ID: On Fri, Feb 15, 2019 at 2:39 PM Brett Cannon wrote: > In my experience after using 'py' on Windows I consistently miss it on > UNIX now, so to me there is enough of a benefit that I will continue to > chip away at the project until it's done regardless of whether anyone else > uses it. :) > And I would REALLY like it if as much was the same as possible on all platforms... -CHB > -Brett > > >> >> >>> Victor >>> -- >>> Night gathers, and now my watch begins. It shall not end until my death. >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/gjcarneiro%40gmail.com >>> >> >> >> -- >> Gustavo J. A. M. Carneiro >> Gambit Research >> "The universe is always one step beyond logic." 
-- Frank Herbert >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/brett%40python.org >> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/chris.barker%40noaa.gov > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Fri Feb 15 18:15:25 2019 From: greg at krypto.org (Gregory P. Smith) Date: Fri, 15 Feb 2019 15:15:25 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <22F2AD71-5694-4EB5-9C21-8B965654A481@python.org> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <22F2AD71-5694-4EB5-9C21-8B965654A481@python.org> Message-ID: On Thu, Feb 14, 2019 at 9:29 AM Barry Warsaw wrote: > On Feb 13, 2019, at 23:08, Matěj Cepl wrote: > > > Is this relevant to the discussion at hand? We are talking about > > the binary /usr/bin/python3 which will surely be provided > > even by Python 4, won't it? > > Why would it be? Since this is all hypothetical anyway, I'd more > likely expect to only ship /usr/bin/python. > Because nobody can use 'python' and expect that to be anything but a 2and3 compatible interpreter for the next 5+ years given we live in a world where people routinely have a very real need to write #! lines that work with obsolete distributions.
python3 implies >=3.x, thus python 4, 5, 6, 2069, 3001, and 90210 should all have python3 point to them. realistically people will stop referring to python3 by 2069 so we could consider removing the recommendation at that point. 2020 is not the end of use or end of importance for Python 2. It is merely the end of bugfixes applied by python-dev. A thing I want to make sure we _don't_ do in the future is allow future pythonN binaries. python4, python90210, etc. those should never exist. python, python3, and pythonX.Y only. If we were ever to go back on our promise and create another world-breaking python version, it could get its own canonical binary name. But we're specifically planning _not_ to make that mistake again. I suspect most of my opining below will be contentious to multiple people because I describe a state of the world that is at conflict with decisions multiple independent distros have already made. Accept their mistakes and move on past it to the hack in that case: A new "py" launcher isn't going to solve this problem - it is separate and should be its own PEP as it has its own set of requirements and reasons to be considered (especially on platforms with no concept of a #!). Recommend "py" today-ish and nobody can rely on it for at least 10+ years in a wide support cross platform scripting type of situation because it won't be present on the obsolete or long term supported things that people have a need for such #!s to run on. Not our problem? Well, actually, it is. Matthias speaking for Debian suggesting they don't want to have "python" at all if it isn't a synonym for "python2" because it'll break software is... wrong. If software is not 3 compatible and uses "python", it'll also break when python is python3. Just in a different manner. "python" should point to python3 when a distro does not require python2 for its core. 
It should not _vary_ as to which of 2.7 or 3.7 it will point to within a given stable distribution (installing python2.7 should never suddenly redirect it back to python2). But "python" itself should always exist when any python interpreter is core to the OS. That means if a distro no longer requires python2 as part of its base/core but does require python3... "python" must point to "python3". If a posixy OS no longer requires python at all (surely there are some by now?) the question of what python should point to when an OS distro supplied optional python package gets installed is likely either "nothing at all" or ">=3.x" but should never be decided as "2.7" (which sadly may be what macOS does). Do we already have LTS _stable_ distributions making that mistake today? If so they've done something undesirable for the world at large and we're already screwed if that distro release is deemed important by masses of users: There is no way to write a *direct* #! line that works out of the box to launch a working latest version Python interpreter across all platforms. The hack to make that work otherwise involves:

```sh
#!/bin/sh
# (or bash if that much is guaranteed)
... some shell logic to find _an_ acceptable interpreter ...
exec "${DISCOVERED_PYTHON}" - <<
```

1.5.2 for eons at the same time as shipping 2.x on the system. The entire world wanted to be writing 2.0-2.4 code but there was no simple "python2" binary on most systems with 2.x installed yet. We all survived despite ourselves. -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Fri Feb 15 18:37:53 2019 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Fri, 15 Feb 2019 16:37:53 -0700 Subject: [Python-Dev] Making PyInterpreterState an opaque type Message-ID: Hi all, I've been working on the runtime lately, particularly focused on my multi-core Python project. One thing that would help simplify changes in this area is if PyInterpreterState were defined in
One thing that would help simplify changes in this area is if PyInterpreterState were defined in Include/internal. This would mean the type would be opaque unless Py_BUILD_CORE were defined. The docs [1] already say none of the struct's fields are public. Furthermore, Victor already moved it into Include/cpython (i.e. not in the stable ABI) in November. Overall, the benefit of making internal types like this opaque is realized in our ability to change the internal details without further breaking C-API users. Realistically, there may be extension modules out there that use PyInterpreterState fields directly. They would break. I expect there to be few such modules and fixing them would not involve great effort. We'd have to add any missing accessor functions to the public C-API, which I see as a good thing. I have an issue [2] open for the change and a PR open. My PR already adds an entry to the porting section of the 3.8 What's New doc about dealing with PyInterpreterState. Anyway, I just wanted to see if there are any objections to making PyInterpreterState an opaque type outside of core use. -eric p.s. I'd like to do the same with PyThreadState, but that's a bit trickier [3] and not one of my immediate needs. :) [1] https://docs.python.org/3/c-api/init.html#c.PyInterpreterState [2] https://bugs.python.org/issue35886 [3] https://bugs.python.org/issue35949 From J.Demeyer at UGent.be Sat Feb 16 05:15:44 2019 From: J.Demeyer at UGent.be (Jeroen Demeyer) Date: Sat, 16 Feb 2019 11:15:44 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: Message-ID: <5C67E2D0.7020906@UGent.be> On 2019-02-16 00:37, Eric Snow wrote: > One thing that would help simplify changes > in this area is if PyInterpreterState were defined in > Include/internal. How would that help anything? 
I don't like the idea (in general, I'm not talking about PyInterpreterState specifically) that external modules should be second-class citizens compared to modules inside CPython. If you want to break the undocumented API, just break it. I don't mind. But I don't see why it's required to move the include to Include/internal for that. From ncoghlan at gmail.com Sat Feb 16 11:31:18 2019 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 17 Feb 2019 02:31:18 +1000 Subject: [Python-Dev] Is distutils.util.get_platform() the "current" or the "target" platform In-Reply-To: References: <7f0f615b-fab5-0504-b1bf-20b3c9cd0402@python.org> Message-ID: On Sat, 16 Feb 2019 at 08:06, Steve Dower wrote: > I'm inclined to say that nobody but us uses this API :) Does that make > it seem more okay to "clarify" that it's returning target platform? I've always treated the situation as "Cross-compilation doesn't work, build on the target platform, using a VM if you have to", and I suspect a lot of folks have approached the status quo the same way. So if there are functions you can change to make cross-compilation actually work without requiring changes to a lot of other projects, that seems like a good thing to me. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Feb 16 11:59:36 2019 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 17 Feb 2019 02:59:36 +1000 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <1cfc2984-216c-fdc7-7ea2-692662d93971@ganssle.io> Message-ID: On Fri, 15 Feb 2019 at 04:15, Alexander Belopolsky wrote: > > > > On Thu, Feb 14, 2019 at 9:07 AM Paul Ganssle wrote: >> >> I don't think it's totally unreasonable to have other total_X() methods, where X would be days, hours, minutes and microseconds > > I do. 
I was against adding the total_seconds() method to begin with because the same effect can be achieved with > > delta / timedelta(seconds=1) > > this is easily generalized to > > delta / timedelta(X=1) > > where X can be days, hours, minutes or microseconds. As someone who reads date/time manipulation code far more often than he writes it, it's immediately obvious to me what "delta.total_seconds()" is doing, while "some_var / some_other_var" could be doing anything. So for the sake of those of us that aren't as well versed in how time delta division works, it seems to me that adding: def total_duration(td, interval=timedelta(seconds=1)): return td / interval as a module level helper function would make a lot of sense. (This is a variant on Paul's helper function that accepts the divisor as a specifically named argument with a default value, rather than creating it on every call) Cheers, Nick. P.S. Why a function rather than a method? Mostly because this feels like "len() for timedelta objects" to me, but also because as a helper function, the docs can easily describe how to add it as a utility function for older versions. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From paul at ganssle.io Sat Feb 16 12:16:32 2019 From: paul at ganssle.io (Paul Ganssle) Date: Sat, 16 Feb 2019 12:16:32 -0500 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <1cfc2984-216c-fdc7-7ea2-692662d93971@ganssle.io> Message-ID: I am definitely sympathetic to the idea of it being more readable, but I feel like this adds some unnecessary bloat to the interface when "divide the value by the units" is not at all uncommon. Plus, if you add a total_duration that by default does the same thing as total_seconds, you now have three functions that do exactly the same thing: - td / timedelta(seconds=1) - td.total_seconds() - total_duration(td) If it's just for the purposes of readability, you can also do this:
from operator import truediv as total_duration  # (timedelta, interval) I think if we add such a function, it will essentially be just a slower version of something that already exists. I suspect the main reason the "divide the timedelta by the interval" thing isn't a common enough idiom that people see it all the time is that it's only supported in Python 3. As more code drops Python 2, I think the "td / interval" idiom will hopefully become common enough that it will obviate the need for a total_duration function. That said, if people feel very strongly that a total_duration function would be useful, maybe the best thing to do would be for me to add it to dateutil.utils? In that case it would at least be available in Python 2, so people who find it more readable /and/ people still writing polyglot code would be able to use it, without the standard library unnecessarily providing two ways to do the exact same thing. On 2/16/19 11:59 AM, Nick Coghlan wrote: > On Fri, 15 Feb 2019 at 04:15, Alexander Belopolsky > wrote: >> >> >> On Thu, Feb 14, 2019 at 9:07 AM Paul Ganssle wrote: >>> I don't think it's totally unreasonable to have other total_X() methods, where X would be days, hours, minutes and microseconds >> I do. I was against adding the total_seconds() method to begin with because the same effect can be achieved with >> >> delta / timedelta(seconds=1) >> >> this is easily generalized to >> >> delta / timedelta(X=1) >> >> where X can be days, hours, minutes or microseconds. > As someone who reads date/time manipulation code far more often than > he writes it, it's immediately obvious to me what > "delta.total_seconds()" is doing, while "some_var / some_other_var" > could be doing anything. > > So for the sake of those of us that aren't as well versed in how time > delta division works, it seems to me that adding: > > def total_duration(td, interval=timedelta(seconds=1)): > return td / interval > > as a module level helper function would make a lot of sense.
(This is > a variant on Paul's helper function that accepts the divisor as a > specifically named argument with a default value, rather than creating > it on every call) > > Cheers, > Nick. > > P.S. Why a function rather than a method? Mostly because this feels > like "len() for timedelta objects" to me, but also because as a helper > function, the docs can easily describe how to add it as a utility > function for older versions. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From ncoghlan at gmail.com Sat Feb 16 12:25:04 2019 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 17 Feb 2019 03:25:04 +1000 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <22F2AD71-5694-4EB5-9C21-8B965654A481@python.org> Message-ID: On Sat, 16 Feb 2019 at 09:19, Gregory P. Smith wrote: > Not our problem? Well, actually, it is. Matthias speaking for Debian suggesting they don't want to have "python" at all if it isn't a synonym for "python2" because it'll break software is... wrong. If software is not 3 compatible and uses "python", it'll also break when python is python3. Just in a different manner. "python" should point to python3 when a distro does not require python2 for its core. It should not _vary_ as to which of 2.7 or 3.7 it will point to within a given stable distribution (installing python2.7 should never suddenly redirect it back to python2). But "python" itself should always exist when any python interpreter is core to the OS. That means if a distro no longer requires python2 as part of its base/core but does require python3... "python" must point to "python3". 
If a posixy OS no longer requires python at all (surely there are some by now?) the question of what python should point to when an OS distro supplied optional python package gets installed is likely either "nothing at all" or ">=3.x" but should never be decided as "2.7" (which sadly may be what macOS does). > > Do we already have LTS _stable_ distributions making that mistake today? If so they've done something undesirable for the world at large and we're already screwed if that distro release is deemed important by masses of users: There is no way to write a direct #! line that works out of the box to launch a working latest version Python interpreter across all platforms. This is exactly why we want to change Fedora et al to have /usr/bin/python aliased to /usr/bin/python3 by default, and yes, having /usr/bin/python missing by default does indeed break the world (search for Fedora 28 and Ubuntu 16.04 Ansible Python issues for more). While Matthias is still personally reluctant to add the alias for Debian/Ubuntu, the *only* thing preventing aliasing /usr/bin/python to /usr/bin/python3 right now on the Fedora & RHEL side of things is PEP 394, and Guido objected strongly when Petr last tried to get the PEP to even acknowledge that it was reasonable for distros to make that setting configurable on a system-wide basis: https://github.com/python/peps/pull/630 For RHEL 8, the resolution was "Well, we'll ignore the upstream PEP, then" and make the setting configurable anyway, but Fedora tries to work more closely with upstream than that - if we think upstream are giving people bad or outdated advice, then we'll aim to get the advice changed rather than ignoring it. 
In this case, the advice is outdated: there have been a couple of years of releases with /usr/bin/python missing, so it's time to move to the "/usr/bin/python3" side of source compatibility, and go back to having "just run python" be the way you start a modern Python interpreter, even when you're using the system Python on a Linux distro. Cheers, Nick. P.S. Note that we're not asking for the PEP to say "You should do this..." - just for the PEP to acknowledge it as a reasonable choice for distros to make given the looming Python 2 End of Life. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From steve.dower at python.org Sat Feb 16 12:29:15 2019 From: steve.dower at python.org (Steve Dower) Date: Sat, 16 Feb 2019 09:29:15 -0800 Subject: [Python-Dev] Is distutils.util.get_platform() the "current" or the "target" platform In-Reply-To: References: <7f0f615b-fab5-0504-b1bf-20b3c9cd0402@python.org> Message-ID: <03fa7e4a-6cb9-58b8-709c-44a589fdb706@python.org> On 16Feb.2019 0831, Nick Coghlan wrote: > On Sat, 16 Feb 2019 at 08:06, Steve Dower wrote: >> I'm inclined to say that nobody but us uses this API :) Does that make >> it seem more okay to "clarify" that it's returning target platform? > > I've always treated the situation as "Cross-compilation doesn't work, > build on the target platform, using a VM if you have to", and I > suspect a lot of folks have approached the status quo the same way. For platforms where pyconfig.h is generated, this is still going to be true, at least until the compiler classes learn to add a platform-specific include path. On Windows, we have a static pyconfig.h that changes behaviour based on compiler and Windows SDK provided preprocessor directives, so we can quite comfortably use the same file. > So if there are functions you can change to make cross-compilation > actually work without requiring changes to a lot of other projects, > that seems like a good thing to me. 
Okay Paul (Monson), that's your cue to update the PR :) Cheers, Steve From ncoghlan at gmail.com Sat Feb 16 12:38:01 2019 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 17 Feb 2019 03:38:01 +1000 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <1cfc2984-216c-fdc7-7ea2-692662d93971@ganssle.io> Message-ID: On Sun, 17 Feb 2019 at 03:20, Paul Ganssle wrote: > I think if we add such a function, it will essentially be just a slower version of something that already exists. I suspect the main reason the "divide the timedelta by the interval" thing isn't a common enough idiom that people see it all the time is that it's only supported in Python 3. As more code drops Python 2, I think the "td / interval" idiom will hopefully become common enough that it will obviate the need for a total_duration function. And personally, the total_seconds() case has always been enough for me. > That said, if people feel very strongly that a total_duration function would be useful, maybe the best thing to do would be for me to add it to dateutil.utils? In that case it would at least be available in Python 2, so people who find it more readable and people still writing polyglot code would be able to use it, without the standard library unnecessarily providing two ways to do the exact same thing. I'm now thinking a slight documentation improvement would have addressed my own confusion (and I suspect the OPs as well): * In the "Supported Operations" section of https://docs.python.org/3/library/datetime.html#timedelta-objects, change "Division (3) of t2 by t3." to "Division (3) of overall duration t2 by interval unit t3." * In the total_seconds() documentation, add a sentence "For interval units other than seconds, use the division form directly (e.g. `td / timedelta(microseconds=1)`)" Cheers, Nick. 
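[Editorial note: the division idiom and helper discussed in this thread can be shown concretely. A hedged sketch follows; `total_duration` is the helper name floated in the thread (it was never added to the standard library), and the keyword-only `interval` argument is an assumption modeled on Nick's description of Paul's helper, not an actual API.]

```python
from datetime import timedelta

# The Python 3 idiom the thread recommends: dividing a timedelta by a
# "unit" timedelta yields the total duration measured in that unit.
td = timedelta(hours=1, minutes=30)

print(td / timedelta(minutes=1))       # 90.0
print(td / timedelta(microseconds=1))  # 5400000000.0
print(td.total_seconds())              # 5400.0 -- the one existing shortcut

# Sketch of the hypothetical helper the thread discusses; never added to
# the stdlib, names and defaults are illustrative only.
def total_duration(td, *, interval=timedelta(seconds=1)):
    """Return the total duration of *td* measured in units of *interval*."""
    return td / interval

print(total_duration(td))                                 # 5400.0
print(total_duration(td, interval=timedelta(minutes=1)))  # 90.0
```

As the proposed doc sentence suggests, any "total_X" is just division by the matching unit timedelta, which is why the thread leans toward documentation rather than new API.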
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From tahafut at gmail.com Sat Feb 16 13:47:34 2019 From: tahafut at gmail.com (Henry Chen) Date: Sat, 16 Feb 2019 10:47:34 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <1cfc2984-216c-fdc7-7ea2-692662d93971@ganssle.io> Message-ID: +1 on the improved docs solution: no new code to maintain and big return on investment in preventing future bugs / confusion :) On Sat, Feb 16, 2019 at 9:40 AM Nick Coghlan wrote: > On Sun, 17 Feb 2019 at 03:20, Paul Ganssle wrote: > > I think if we add such a function, it will essentially be just a slower > version of something that already exists. I suspect the main reason the > "divide the timedelta by the interval" thing isn't a common enough idiom > that people see it all the time is that it's only supported in Python 3. As > more code drops Python 2, I think the "td / interval" idiom will hopefully > become common enough that it will obviate the need for a total_duration > function. > > And personally, the total_seconds() case has always been enough for me. > > > That said, if people feel very strongly that a total_duration function > would be useful, maybe the best thing to do would be for me to add it to > dateutil.utils? In that case it would at least be available in Python 2, so > people who find it more readable and people still writing polyglot code > would be able to use it, without the standard library unnecessarily > providing two ways to do the exact same thing. > > I'm now thinking a slight documentation improvement would have > addressed my own confusion (and I suspect the OPs as well): > > * In the "Supported Operations" section of > https://docs.python.org/3/library/datetime.html#timedelta-objects, > change "Division (3) of t2 by t3." to "Division (3) of overall > duration t2 by interval unit t3." 
> * In the total_seconds() documentation, add a sentence "For interval > units other than seconds, use the division form directly (e.g. `td / > timedelta(microseconds=1)`)" > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/tahafut%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Sat Feb 16 14:13:03 2019 From: barry at python.org (Barry Warsaw) Date: Sat, 16 Feb 2019 11:13:03 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <22F2AD71-5694-4EB5-9C21-8B965654A481@python.org> Message-ID: <2DBD97CF-855A-45DA-968B-102FA785B9CF@python.org> On Feb 16, 2019, at 09:25, Nick Coghlan wrote: > While Matthias is still personally reluctant to add the alias for > Debian/Ubuntu, the *only* thing preventing aliasing /usr/bin/python to > /usr/bin/python3 right now on the Fedora & RHEL side of things is PEP > 394, and Guido objected strongly when Petr last tried to get the PEP > to even acknowledge that it was reasonable for distros to make that > setting configurable on a system-wide basis: > https://github.com/python/peps/pull/630 > P.S. Note that we're not asking for the PEP to say "You should do > this..." - just for the PEP to acknowledge it as a reasonable choice > for distros to make given the looming Python 2 End of Life. I think this is a reasonable ask. PEP 394 shouldn't *prevent* distros from doing what they believe is in the best interest of their users.
While we do want consistency in the user experience across Linux distros (and more broadly, across all supported platforms), I think we also have to acknowledge that we're still in a time of transition (maybe more so right now), so we should find ways to allow for experimentation within that context. I'm not sure that I agree with all the proposed changes to PEP 394, but those are the guidelines I think I'll re-evaluate the PR by. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From solipsis at pitrou.net Sat Feb 16 16:32:10 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 16 Feb 2019 22:32:10 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type References: <5C67E2D0.7020906@UGent.be> Message-ID: <20190216223210.09ef3944@fsol> On Sat, 16 Feb 2019 11:15:44 +0100 Jeroen Demeyer wrote: > On 2019-02-16 00:37, Eric Snow wrote: > > One thing that would help simplify changes > > in this area is if PyInterpreterState were defined in > > Include/internal. > > How would that help anything? I don't like the idea (in general, I'm not > talking about PyInterpreterState specifically) that external modules > should be second-class citizens compared to modules inside CPython. > > If you want to break the undocumented API, just break it. I don't mind. > But I don't see why it's required to move the include to > Include/internal for that. This sounds like a reasonable design principle: decree the API non-stable and prone to breakage (it already is, anyway), don't hide it. It's true that in the PyInterpreterState case specifically, there doesn't seem much worthy of use by third-party libraries. Regards Antoine.
From steve.dower at python.org Sat Feb 16 17:34:46 2019 From: steve.dower at python.org (Steve Dower) Date: Sat, 16 Feb 2019 14:34:46 -0800 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <20190216223210.09ef3944@fsol> References: <5C67E2D0.7020906@UGent.be> <20190216223210.09ef3944@fsol> Message-ID: On 16Feb.2019 1332, Antoine Pitrou wrote: > On Sat, 16 Feb 2019 11:15:44 +0100 > Jeroen Demeyer wrote: >> On 2019-02-16 00:37, Eric Snow wrote: >>> One thing that would help simplify changes >>> in this area is if PyInterpreterState were defined in >>> Include/internal. >> >> How would that help anything? I don't like the idea (in general, I'm not >> talking about PyInterpreterState specifically) that external modules >> should be second-class citizens compared to modules inside CPython. >> >> If you want to break the undocumented API, just break it. I don't mind. >> But I don't see why it's required to move the include to >> Include/internal for that. > > This sounds like a reasonable design principle: decree the API > non-stable and prone to breakage (it already is, anyway), don't hide it. As I was chatting with Eric shortly before he posted this, I assume the idea would be to expose anything useful/necessary via a function. That at least removes the struct layout from the ABI, without removing functionality. > It's true that in the PyInterpreterState case specifically, there > doesn't seem much worthy of use by third-party libraries. Which seems to suggest that the answer to "which members are important to expose?" is "probably none". And that's possibly why Eric didn't mention it in his email :) This is mostly about being able to assign blame when things break, so I'm totally okay with extension modules that want to play with internals declaring Py_BUILD_CORE to get access to them (though I suspect that won't work out of the box - maybe we should have a Py_I_TOO_LIKE_TO_LIVE_DANGEROUSLY?). 
I like that we're taking (small) steps to reduce the size of our API. It helps balance out the growth and leaves us with a chance of one day being able to have an extension model that isn't as tied to C's ABI. Cheers, Steve From solipsis at pitrou.net Sat Feb 16 17:47:31 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 16 Feb 2019 23:47:31 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type References: <5C67E2D0.7020906@UGent.be> <20190216223210.09ef3944@fsol> Message-ID: <20190216234731.4ea34101@fsol> On Sat, 16 Feb 2019 14:34:46 -0800 Steve Dower wrote: > On 16Feb.2019 1332, Antoine Pitrou wrote: > > On Sat, 16 Feb 2019 11:15:44 +0100 > > Jeroen Demeyer wrote: > >> On 2019-02-16 00:37, Eric Snow wrote: > >>> One thing that would help simplify changes > >>> in this area is if PyInterpreterState were defined in > >>> Include/internal. > >> > >> How would that help anything? I don't like the idea (in general, I'm not > >> talking about PyInterpreterState specifically) that external modules > >> should be second-class citizens compared to modules inside CPython. > >> > >> If you want to break the undocumented API, just break it. I don't mind. > >> But I don't see why it's required to move the include to > >> Include/internal for that. > > > > This sounds like a reasonable design principle: decree the API > > non-stable and prone to breakage (it already is, anyway), don't hide it. > > As I was chatting with Eric shortly before he posted this, I assume the > idea would be to expose anything useful/necessary via a function. That > at least removes the struct layout from the ABI, without removing > functionality. Well, the ABI is allowed to break at each feature version (except for the "stable ABI" subset, which PyInterpreterState isn't part of), so I'm not sure that would change anything ;-) > > It's true that in the PyInterpreterState case specifically, there > > doesn't seem much worthy of use by third-party libraries. 
> > Which seems to suggest that the answer to "which members are important > to expose?" is "probably none". That sounds intuitive. But we don't know what kind of hacks some extension authors might do, for legitimate reasons... (perhaps some gevent-like framework needs access to the interpreter state?) Regards Antoine. From richardlev at gmail.com Sat Feb 16 18:23:38 2019 From: richardlev at gmail.com (Richard Levasseur) Date: Sat, 16 Feb 2019 15:23:38 -0800 Subject: [Python-Dev] Adding test.support.safe_rmpath() In-Reply-To: References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com> <6083f67a-8413-fc40-0118-cfab2284ae2a@timgolden.me.uk> <1c094bac-a1d1-aaaf-4e3a-0f110c0dfd4f@python.org> Message-ID: On Fri, Feb 15, 2019 at 10:02 AM Zachary Ware wrote: > On Fri, Feb 15, 2019 at 11:44 AM Steve Dower > wrote: > > That said, I'd love to have a context manager that we can use to make > > this easier. Really, none of us should be having to decide "how am I > > going to use a temporary location on the file system in my test", > > because we should have one obvious (and easy!) way to do it. > > I found an old rejected issue [1] for adding a `tmpdir` method to > unittest.TestCase, which is actually a solution that we've > independently developed and use frequently for work. It basically > works by registering a cleanup function before returning the path to > the temporary directory, so you just call `self.tmpdir()`, use the > path, forget about cleanup, and don't lose a level of indentation to a > context manager. I think it would be worthwhile to reconsider this > addition to unittest, or add it as a standard base test class in > test.support (though either way it would need a cleaner and more > robust implementation than is offered in that issue). > (Sorry if this starts to veer off the original topic a bit) I added something similar (though more robust) in the absl testing framework . 
Tests can just call self.create_tempfile() or self.create_tempdir() and not have to worry about cleanup, prior state, or the low-level details of where and how the file gets created. I have re-implemented the same logic quite a few times, and seen it in code reviews even more times, but *rarely* have I seen it done *correctly* -- it turns out it's not easy to do entirely right. tl;dr: I agree: it would be nice if unittest provided some help here. I apologize for the length here. I've had to answer "just use tempfile, what's wrong with that?" a few times, so I've got a whole enumerated list of points :). While adding this conceptually simple feature to absl, I unexpectedly found it to be kinda complicated. I'll try to keep it short and to the point. To be clear, this is all about needing a named file on disk that can be used with e.g. open() by the code-under-test. I wouldn't call this incredibly common overall, but it's not uncommon. There's basically 3 problems that have a bit of overlap.

First: The tempfile module is a poor fit for testing (don't get me wrong, it works, but it's not *nice for use in tests*). This is because:

1. Using it as a context manager is distracting. The indentation signifies a conceptual scope the reader needs to be aware of, but in a test context, it's usually not useful. At worst, it covers most of the test. At best, it's constrained to a block at the start.
2. tempfile defaults to binary mode instead of text; just another thing to bite you.
3. On Windows, you can't reopen the file, so for cross-platform stuff, you can't even use it for this case.
4. You pretty much always have to pass delete=False, which *kinda* defeats the point of e.g. using a context manager.

Second: The various file/test APIs to do setup and cleanup are awkward. This is because:

1. Are you deleting a file, directory tree, just a directory (is it empty?)? Make sure to call the proper function, otherwise you'll get an error.
2. Creating a directory tree?
Make sure to call makedirs() with the right parameters, otherwise you'll get an error.
3. Using tearDown? Make sure to handle errors, lest other tearDown logic not run and leave a dirty post-test state that might inconsistently break a following test.
4. Using setUp? Make sure to not assume a clean state because of (3).
5. Did you write a helper function to, e.g., make creating "foo/bar/baz.txt" easy? Now you have to implement logic to split up the path, create dirs, etc. Not hard, admittedly, but it's the ~9th thing in this list so far -- "I just want to create a temp file for testing".
6. Are you using mkstemp? Remember to close the FD it returns, even though it's "just a test".
7. Are you using tempfile.gettempdir (or some other manual scheme)? Make sure to give each test a unique location within it, otherwise collisions can happen.

Third: This is a bit more opinion: I'm *really* into optimizing my edit-debug cycle latency, so random file/dir names are really annoying because they slow me down. They're absolutely necessary for running a finished test suite, but get in the way for debugging specific tests (i.e. where a dev spends the majority of their time when dealing with a failing test). This is because:

1. Command history is lost. I can't up-arrow-enter to re-run e.g. a grep over the file I'm interested in.
2. The only way to inspect a file is to set_trace() before it gets deleted, but after the logic I need to check has run. Then run some expression that'll print the filename.

> [1] https://bugs.python.org/issue2156 > > -- > Zach > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/richardlev%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From benjamin at python.org Sat Feb 16 20:10:48 2019 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 16 Feb 2019 20:10:48 -0500 Subject: [Python-Dev] [RELEASE] Python 2.7.16 release candidate 1 Message-ID: <715ded96-a728-42ec-8bb7-72c3f7e1695b@www.fastmail.com> I'm pleased to announce the immediate availability of Python 2.7.16 release candidate 1. This is a prerelease for yet another bug fix release in the Python 2.7.x series. It includes over 100 fixes over Python 2.7.15. See the changelog at https://raw.githubusercontent.com/python/cpython/baacaac06f93dd624c9d7b3bac0e13fbe34f2d8c/Misc/NEWS.d/2.7.16rc1.rst for full details. Downloads are at: https://www.python.org/downloads/release/python-2716rc1/ Please test your software against the new release and report any issues to https://bugs.python.org/ If all goes according to plan, Python 2.7.16 final will be released on March 2. All the best, Benjamin From cedric.krier at b2ck.com Sun Feb 17 11:58:50 2019 From: cedric.krier at b2ck.com (=?utf-8?Q?C=C3=A9dric?= Krier) Date: Sun, 17 Feb 2019 17:58:50 +0100 Subject: [Python-Dev] Request review for bpo-35153 Message-ID: <20190217165850.GB11909@kei> Hi, A few months ago, I submitted bpo-35153 with a PR to allow to set headers from xmlrpc.client.ServerProxy. Is there a core developer willing to review it? It will be great to have it for Python 3.8. 
https://bugs.python.org/issue35153 https://github.com/python/cpython/pull/10308 Thanks, -- Cédric Krier - B2CK SPRL Email/Jabber: cedric.krier at b2ck.com Tel: +32 472 54 46 59 Website: http://www.b2ck.com/ From doko at ubuntu.com Mon Feb 18 10:34:33 2019 From: doko at ubuntu.com (Matthias Klose) Date: Mon, 18 Feb 2019 16:34:33 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <22F2AD71-5694-4EB5-9C21-8B965654A481@python.org> Message-ID: <1632d0c7-4958-4641-a7fb-db6ebfc0fbca@ubuntu.com> On 16.02.19 00:15, Gregory P. Smith wrote: > On Thu, Feb 14, 2019 at 9:29 AM Barry Warsaw wrote: > >> On Feb 13, 2019, at 23:08, Matěj Cepl wrote: > >> >>> Is this relevant to the discussion at hand? We are talking about >>> the binary /usr/bin/python3 which will surely be provided >>> even by Python 4, won't it? >> >> Why would it be? Since this is all hypothetical anyway, I'd more >> likely expect to only ship /usr/bin/python. >> > > Because nobody can use 'python' and expect that to be anything but a 2and3 > compatible interpreter for the next 5+ years given we live in a world where > people routinely have a very real need to write #! lines that work with > obsolete distributions. python3 implies >=3.x, thus python 4, 5, 6, 2069, > 3001, and 90210 should all have python3 point to them. Realistically > people will stop referring to python3 by 2069 so we could consider removing > the recommendation at that point.
If we were ever to go back on our > promise and create another world-breaking python version, it could get its > own canonical binary name. But we're specifically planning _not_ to make > that mistake again. > > I suspect most of my opining below will be contentious to multiple people > because I describe a state of the world that is at conflict with decisions > multiple independent distros have already made. Accept their mistakes and > move on past it to the hack in that case: > > A new "py" launcher isn't going to solve this problem - it is separate and > should be its own PEP as it has its own set of requirements and reasons to > be considered (especially on platforms with no concept of a #!). Recommend > "py" today-ish and nobody can rely on it for at least 10+ years in a wide > support cross platform scripting type of situation because it won't be > present on the obsolete or long term supported things that people have a > need for such #!s to run on. > > Not our problem? Well, actually, it is. Matthias speaking for Debian > suggesting they don't want to have "python" at all if it isn't a synonym > for "python2" because it'll break software is... wrong. If software is not > 3 compatible and uses "python", it'll also break when python is python3. > Just in a different manner. "python" should point to python3 when a distro > does not require python2 for its core. It should not _vary_ as to which of > 2.7 or 3.7 it will point to within a given stable distribution (installing > python2.7 should never suddenly redirect it back to python2). But "python" > itself should always exist when any python interpreter is core to the OS. > That means if a distro no longer requires python2 as part of its base/core > but does require python3... "python" must point to "python3". If a posixy > OS no longer requires python at all (surely there are some by now?) 
the > question of what python should point to when an OS distro supplied optional > python package gets installed is likely either "nothing at all" or ">=3.x" > but should never be decided as "2.7" (which sadly may be what macOS does). There is no notion of a "core" for Debian. So "core" applies to the whole distro, as long as there are python shebangs found. For Ubuntu, you don't have a python command on the default desktop install, just python3. Trying to invoke python, command-not-found tells you: $ python Command 'python' not found, but can be installed with: [...] You also have python3 installed, you can run 'python3' instead. That tells you which way to go. > Do we already have LTS _stable_ distributions making that mistake today? > If so they've done something undesirable for the world at large and we're > already screwed if that distro release is deemed important by masses of > users: There is no way to write a *direct* #! line that works out of the > box to launch a working latest version Python interpreter across all > platforms. If you count the above example towards this "mistake", probably yes. But there is *no* way to have a sane way to have what you want. 
Matthias From doko at ubuntu.com Mon Feb 18 10:38:49 2019 From: doko at ubuntu.com (Matthias Klose) Date: Mon, 18 Feb 2019 16:38:49 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <22F2AD71-5694-4EB5-9C21-8B965654A481@python.org> Message-ID: On 16.02.19 18:25, Nick Coghlan wrote: > While Matthias is still personally reluctant to add the alias for > Debian/Ubuntu, the *only* thing preventing aliasing /usr/bin/python to > /usr/bin/python3 right now on the Fedora & RHEL side of things is PEP > 394, and Guido objected strongly when Petr last tried to get the PEP > to even acknowledge that it was reasonable for distros to make that > setting configurable on a system-wide basis: > https://github.com/python/peps/pull/630 No, I'm not "personally reluctant" about this, it's the current majority view of people on the debian-python ML. Barry stepped back as a Debian maintainer, so there are not many people supporting your view. Matthias From remi.lapeyre at henki.fr Mon Feb 18 11:16:36 2019 From: remi.lapeyre at henki.fr (=?UTF-8?Q?R=C3=A9mi_Lapeyre?=) Date: Mon, 18 Feb 2019 08:16:36 -0800 Subject: [Python-Dev] int() and math.trunc don't accept objects that only define __index__ Message-ID: Hi, I open this thread to discuss the proposal by Nick Coghlan in https://bugs.python.org/issue33039 to add __int__ and __trunc__ to a type when __index__ is defined. Currently __int__ does not default to __index__ during class initialisation so both must be defined to get a coherent behavior:

(cpython-venv) ? cpython git:(add-key-argument-to-bisect) ? python3
Python 3.8.0a1+ (heads/add-key-argument-to-bisect:b7aaa1adad, Feb 18 2019, 16:10:22)
[Clang 10.0.0 (clang-1000.10.44.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import math
>>> class MyInt:
...     def __index__(self):
...         return 4
...
>>> int(MyInt())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: int() argument must be a string, a bytes-like object or a number, not 'MyInt'
>>> math.trunc(MyInt())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type MyInt doesn't define __trunc__ method
>>> hex(MyInt())
'0x4'
>>> len("a"*MyInt())
4
>>> MyInt.__int__ = MyInt.__index__
>>> int(MyInt())
4

The difference in behavior is especially weird in builtins like int() and hex(). The documentation mentions at https://docs.python.org/3/reference/datamodel.html#object.__index__ the need to always define both __index__ and __int__: Note: In order to have a coherent integer type class, when __index__() is defined __int__() should also be defined, and both should return the same value. Nick Coghlan proposes to make __int__ default to __index__ when only __index__ is defined and asked to open a discussion on python-dev before making any change "as the closest equivalent we have to this right now is the "negative" derivation, where overriding __eq__ without overriding __hash__ implicitly marks the derived class as unhashable (look for "type->tp_hash = PyObject_HashNotImplemented;").". I think the change proposed makes more sense than the current behavior and I volunteer to implement it if it is accepted. What do you think about this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Mon Feb 18 12:28:21 2019 From: greg at krypto.org (Gregory P. Smith) Date: Mon, 18 Feb 2019 09:28:21 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <1632d0c7-4958-4641-a7fb-db6ebfc0fbca@ubuntu.com> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <20190213164603.3894316f@fsol> <22F2AD71-5694-4EB5-9C21-8B965654A481@python.org> <1632d0c7-4958-4641-a7fb-db6ebfc0fbca@ubuntu.com> Message-ID: On Mon, Feb 18, 2019, 7:34 AM Matthias Klose On 16.02.19 00:15, Gregory P.
Smith wrote: > > On Thu, Feb 14, 2019 at 9:29 AM Barry Warsaw wrote: > > > >> On Feb 13, 2019, at 23:08, Matěj Cepl wrote: > >> > >>> Is this relevant to the discussion at hand? We are talking about > >>> the binary /usr/bin/python3 which will surely be provided > >>> even by Python 4, won't it? > >> > >> Why would it be? Since this is all hypothetical anyway, I'd more > >> likely expect to only ship /usr/bin/python. > >> > > > > Because nobody can use 'python' and expect that to be anything but a > 2and3 > > compatible interpreter for the next 5+ years given we live in a world > where > > people routinely have a very real need to write #! lines that work with > > obsolete distributions. python3 implies >=3.x, thus python 4, 5, 6, > 2069, > > 3001, and 90210 should all have python3 point to them. Realistically > > people will stop referring to python3 by 2069 so we could consider > removing > > the recommendation at that point. > > > > 2020 is not the end of use or end of importance for Python 2. It is > merely > > the end of bugfixes applied by python-dev. > > > > A thing I want to make sure we _don't_ do in the future is allow future > > pythonN binaries. python4, python90210, etc. those should never exist. > > python, python3, and pythonX.Y only. If we were ever to go back on our > > promise and create another world-breaking python version, it could get > its > > own canonical binary name. But we're specifically planning _not_ to make > > that mistake again. > > > > I suspect most of my opining below will be contentious to multiple people > > because I describe a state of the world that is at conflict with > decisions > > multiple independent distros have already made.
Accept their mistakes > and > > move on past it to the hack in that case: > > > > A new "py" launcher isn't going to solve this problem - it is separate > and > > should be its own PEP as it has its own set of requirements and reasons > to > > be considered (especially on platforms with no concept of a #!). > Recommend > > "py" today-ish and nobody can rely on it for at least 10+ years in a wide > > support cross platform scripting type of situation because it won't be > > present on the obsolete or long term supported things that people have a > > need for such #!s to run on. > > > > Not our problem? Well, actually, it is. Matthias speaking for Debian > > suggesting they don't want to have "python" at all if it isn't a synonym > > for "python2" because it'll break software is... wrong. If software is > not > > 3 compatible and uses "python", it'll also break when python is python3. > > Just in a different manner. "python" should point to python3 when a > distro > > does not require python2 for its core. It should not _vary_ as to which > of > > 2.7 or 3.7 it will point to within a given stable distribution > (installing > > python2.7 should never suddenly redirect it back to python2). But > "python" > > itself should always exist when any python interpreter is core to the OS. > > That means if a distro no longer requires python2 as part of its > base/core > > but does require python3... "python" must point to "python3". If a > posixy > > OS no longer requires python at all (surely there are some by now?) the > > question of what python should point to when an OS distro supplied > optional > > python package gets installed is likely either "nothing at all" or > ">=3.x" > > but should never be decided as "2.7" (which sadly may be what macOS > does). > > There is no notion of a "core" for Debian. So "core" applies to the whole > distro, as long as there are python shebangs found. 
> > For Ubuntu, you don't have a python command on the default desktop > install, just > python3. Trying to invoke python, command-not-found tells you: > > $ python > > Command 'python' not found, but can be installed with: > > [...] > > You also have python3 installed, you can run 'python3' instead. > > That tells you which way to go. > "Core" just means part of the minimal install, needed by startup scripts and the package manager perhaps. This would be a default install with no package groups selected or perhaps the netinst image for Debian. If packages in that set don't need a python interpreter, Debian is in great shape! :) > > Do we already have LTS _stable_ distributions making that mistake today? > > If so they've done something undesirable for the world at large and we're > > already screwed if that distro release is deemed important by masses of > > users: There is no way to write a *direct* #! line that works out of the > > box to launch a working latest version Python interpreter across all > > platforms. > > If you count the above example towards this "mistake", probably yes. But > there > is *no* way to have a sane way to have what you want. > Agreed. In the long run, expecting python 2 to exist is not sane. But given our pep394 text of "for the time being, all distributions *should* ensure that python, if installed, refers to the same target as python2," What Debian has done is still unfortunately encouraged by us. We've created a world where #! lines cannot be used to invoke an intentionally compatible script across a wide variety of platforms over time. But our decision to do that was the decision to have an incompatible release in the first place. Too late now. :) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ericsnowcurrently at gmail.com Mon Feb 18 14:25:40 2019 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Mon, 18 Feb 2019 12:25:40 -0700 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <20190216223210.09ef3944@fsol> Message-ID: On Sat, Feb 16, 2019 at 3:34 PM Steve Dower wrote: > On 16Feb.2019 1332, Antoine Pitrou wrote: > > This sounds like a reasonable design principle: decree the API > > non-stable and prone to breakage (it already is, anyway), don't hide it. > > As I was chatting with Eric shortly before he posted this, I assume the > idea would be to expose anything useful/necessary via a function. That > at least removes the struct layout from the ABI, without removing > functionality. > > > It's true that in the PyInterpreterState case specifically, there > > doesn't seem much worthy of use by third-party libraries. > > Which seems to suggest that the answer to "which members are important > to expose?" is "probably none". And that's possibly why Eric didn't > mention it in his email :) > > This is mostly about being able to assign blame when things break, so > I'm totally okay with extension modules that want to play with internals > declaring Py_BUILD_CORE to get access to them (though I suspect that > won't work out of the box - maybe we should have a > Py_I_TOO_LIKE_TO_LIVE_DANGEROUSLY?). > > I like that we're taking (small) steps to reduce the size of our API. It > helps balance out the growth and leaves us with a chance of one day > being able to have an extension model that isn't as tied to C's ABI. Yeah, what Steve said. 
:) -eric From ericsnowcurrently at gmail.com Mon Feb 18 14:27:55 2019 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Mon, 18 Feb 2019 12:27:55 -0700 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <20190216234731.4ea34101@fsol> References: <5C67E2D0.7020906@UGent.be> <20190216223210.09ef3944@fsol> <20190216234731.4ea34101@fsol> Message-ID: On Sat, Feb 16, 2019 at 3:47 PM Antoine Pitrou wrote: > On Sat, 16 Feb 2019 14:34:46 -0800 > Steve Dower wrote: > > Which seems to suggest that the answer to "which members are important > > to expose?" is "probably none". > > That sounds intuitive. But we don't know what kind of hacks some > extension authors might do, for legitimate reasons... > > (perhaps some gevent-like framework needs access to the interpreter > state?) In those cases either we will expose accessor functions in the C-API or they can define Py_BUILD_CORE. -eric From ericsnowcurrently at gmail.com Mon Feb 18 15:17:50 2019 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Mon, 18 Feb 2019 13:17:50 -0700 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <5C67E2D0.7020906@UGent.be> References: <5C67E2D0.7020906@UGent.be> Message-ID: On Sat, Feb 16, 2019 at 3:16 AM Jeroen Demeyer wrote: > On 2019-02-16 00:37, Eric Snow wrote: > > One thing that would help simplify changes > > in this area is if PyInterpreterState were defined in > > Include/internal. > > How would that help anything? I'm talking just about changes in the runtime implementation. A lot of of the runtime-related API is defined in Include/internal. Relying on the public headers (i.e. Include/*) for internal runtime API can complicate changes there. I've run into this recently. Moving more internal API into the internal headers helps with that problem. Having distinct header files for the internal API is a relatively new thing (i.e. in the last year), which is why some of the internal API is still defined in the public header files. 
> I don't like the idea (in general, I'm not > talking about PyInterpreterState specifically) that external modules > should be second-class citizens compared to modules inside CPython. > > If you want to break the undocumented API, just break it. I don't mind. > But I don't see why it's required to move the include to > Include/internal for that. Keep in mind that the "internal" (or "private") API is intended for use exclusively in the runtime and in the builtin modules. Historically our approach to keeping API private was to use underscore prefixes and to leave them out of the documentation (along with guarding with "#ifndef Py_LIMITED_API"). However, this has led to occasional confusion and breakage, and even to leaking things into the stable ABI that shouldn't have been. Lately we've been working on making the distinction between internal and public API (and stable ABI) more clear and less prone to accidental exposure. Victor has done a lot of work in this area. So I'd like to understand your objection. Is it with exposing some things only under Py_BUILD_CORE (i.e. when building Python itself)? Is it to having "private" C-API in general? Is it just to having separate include directories? -eric From J.Demeyer at UGent.be Mon Feb 18 16:24:00 2019 From: J.Demeyer at UGent.be (Jeroen Demeyer) Date: Mon, 18 Feb 2019 22:24:00 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> Message-ID: <5C6B2270.1080203@UGent.be> On 2019-02-18 21:17, Eric Snow wrote: > Historically our approach to keeping API private was to use underscore > prefixes and to leave them out of the documentation (along with > guarding with "#ifndef Py_LIMITED_API"). However, this has led to > occasional confusion and breakage, and even to leaking things into the > stable ABI that shouldn't have been.
Lately we've been working on > making the distinction between internal and public API (and stable > ABI) more clear and less prone to accidental exposure. Victor has > done a lot of work in this area. > > So I'd like to understand your objection. First of all, if everybody can actually #define Py_BUILD_CORE and get access to the complete API, I don't mind so much. But then it's important that this actually keeps working (i.e. that those headers will always be installed). Still, do we really need so many levels of API: (1) stable API (with #define Py_LIMITED_API) (2) public documented API (3) private undocumented API (the default exposed API) (4) internal API (with #define Py_BUILD_CORE) I would argue to fold (4) into (3). Applications using (3) already know that they are living dangerously by using private API. I'm afraid of hiding actually useful private macros under Py_BUILD_CORE. For example, Modules/_functoolsmodule.c and Modules/_json.c use API functions from (4). But if an API function is useful for implementing functools or json, then it's probably also useful for external extension modules: what if I want to implement something similar to functools or json, why shouldn't I be allowed to use those same API functions? For a very concrete example, was it really necessary to put _PyTuple_ITEMS in (4)? That's used in _functoolsmodule.c. Especially given that the very similar PySequence_Fast_ITEMS is in (2), that seems like a strange and arbitrary limiting choice. Jeroen. 
From steve.dower at python.org Mon Feb 18 22:04:31 2019 From: steve.dower at python.org (Steve Dower) Date: Mon, 18 Feb 2019 19:04:31 -0800 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <5C6B2270.1080203@UGent.be> References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> Message-ID: On 18Feb.2019 1324, Jeroen Demeyer wrote: > Still, do we really need so many levels of API: > (1) stable API (with #define Py_LIMITED_API) > (2) public documented API > (3) private undocumented API (the default exposed API) > (4) internal API (with #define Py_BUILD_CORE) > > I would argue to fold (4) into (3). Applications using (3) already know > that they are living dangerously by using private API. I agree completely. It's unfortunate we ended up in a state where the stable API was opt-in, but that's where we are now and we have to transition carefully. The ideal would be: * default to cross-version supported APIs (i.e. stable for all 3.*) * opt-in to current-version stable APIs (i.e. stable for all 3.7.*) * extra opt-in to unstable APIs (i.e. you are guaranteed to break one day without warning) > I'm afraid of hiding actually useful private macros under Py_BUILD_CORE. > For example, Modules/_functoolsmodule.c and Modules/_json.c use API > functions from (4). But if an API function is useful for implementing > functools or json, then it's probably also useful for external extension > modules: what if I want to implement something similar to functools or > json, why shouldn't I be allowed to use those same API functions? > > For a very concrete example, was it really necessary to put > _PyTuple_ITEMS in (4)? That's used in _functoolsmodule.c. Especially > given that the very similar PySequence_Fast_ITEMS is in (2), that seems > like a strange and arbitrary limiting choice. The reason to do this is that we can "guarantee" that we've fixed all users when we change the internal representation. 
Otherwise, the internal memory layout becomes part of the public ABI, which is what we want to fix. (PyTuple_GET_ITEM is just as problematic, FWIW.) If you always rebuild your extension for every micro version (3.x.y) of CPython, then sure, go ahead and use this. But you're by far into the minority of users/developers, and so we really don't want to optimise for this case when it's going to break the 90%+ of people who don't recompile everything all the time. Cheers, Steve From J.Demeyer at UGent.be Tue Feb 19 05:17:43 2019 From: J.Demeyer at UGent.be (Jeroen Demeyer) Date: Tue, 19 Feb 2019 11:17:43 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> Message-ID: <5C6BD7C7.3050901@UGent.be> On 2019-02-19 04:04, Steve Dower wrote: > Otherwise, the > internal memory layout becomes part of the public ABI Of course, the ABI (not API) depends on the internal memory layout. Why is this considered a problem? If you want a fixed ABI, use API level (1) from my last post. If you want a fixed API but not ABI, use level (2). If you really want stuff to be broken at any time, use (3) or (4). This is why I don't see the need to make a difference between (3) and (4): neither of them makes any guarantees about stability. From J.Demeyer at UGent.be Tue Feb 19 05:29:08 2019 From: J.Demeyer at UGent.be (Jeroen Demeyer) Date: Tue, 19 Feb 2019 11:29:08 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> Message-ID: <5C6BDA74.8060800@UGent.be> On 2019-02-19 04:04, Steve Dower wrote: > On 18Feb.2019 1324, Jeroen Demeyer wrote: >> For a very concrete example, was it really necessary to put >> _PyTuple_ITEMS in (4)? That's used in _functoolsmodule.c. Especially >> given that the very similar PySequence_Fast_ITEMS is in (2), that seems >> like a strange and arbitrary limiting choice. 
> > The reason to do this is that we can "guarantee" that we've fixed all > users when we change the internal representation. I think that CPython should then at least "eat its own dog food" and don't use any of the internal functions/macros when implementing the stdlib. As I said before: if a function/macro is useful for implementing stdlib functionality like "functools" or "json", it's probably useful for external modules too. From solipsis at pitrou.net Tue Feb 19 05:37:41 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 19 Feb 2019 11:37:41 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> Message-ID: <20190219113741.4514750e@fsol> On Mon, 18 Feb 2019 19:04:31 -0800 Steve Dower wrote: > > > I'm afraid of hiding actually useful private macros under Py_BUILD_CORE. > > For example, Modules/_functoolsmodule.c and Modules/_json.c use API > > functions from (4). But if an API function is useful for implementing > > functools or json, then it's probably also useful for external extension > > modules: what if I want to implement something similar to functools or > > json, why shouldn't I be allowed to use those same API functions? > > > > For a very concrete example, was it really necessary to put > > _PyTuple_ITEMS in (4)? That's used in _functoolsmodule.c. Especially > > given that the very similar PySequence_Fast_ITEMS is in (2), that seems > > like a strange and arbitrary limiting choice. > > The reason to do this is that we can "guarantee" that we've fixed all > users when we change the internal representation. Otherwise, the > internal memory layout becomes part of the public ABI, which is what we > want to fix. (PyTuple_GET_ITEM is just as problematic, FWIW.) But PyTuple_GET_ITEM and PyList_GET_ITEM are important for performance, as are other performance-oriented macros. 
> If you always rebuild your extension for every micro version (3.x.y) of > CPython, then sure, go ahead and use this. Usually we would guarantee that API details don't change in bugfix versions (i.e. the 3.x.y -> 3.x.(y + 1) transition). Has that changed? That may turn out to be a big problem for several third-party extensions... Regards Antoine. From songofacandy at gmail.com Tue Feb 19 05:53:28 2019 From: songofacandy at gmail.com (INADA Naoki) Date: Tue, 19 Feb 2019 19:53:28 +0900 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <5C6BDA74.8060800@UGent.be> References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <5C6BDA74.8060800@UGent.be> Message-ID: On Tue, Feb 19, 2019 at 7:32 PM Jeroen Demeyer wrote: > > On 2019-02-19 04:04, Steve Dower wrote: > > On 18Feb.2019 1324, Jeroen Demeyer wrote: > >> For a very concrete example, was it really necessary to put > >> _PyTuple_ITEMS in (4)? That's used in _functoolsmodule.c. Especially > >> given that the very similar PySequence_Fast_ITEMS is in (2), that seems > >> like a strange and arbitrary limiting choice. > > > > The reason to do this is that we can "guarantee" that we've fixed all > > users when we change the internal representation. > > I think that CPython should then at least "eat its own dog food" and > don't use any of the internal functions/macros when implementing the > stdlib. As I said before: if a function/macro is useful for implementing > stdlib functionality like "functools" or "json", it's probably useful > for external modules too. If we were perfect and could design perfect APIs from the start, I would agree with you. But we should be able to fix design mistakes in new APIs. The stdlib is updated with Python itself, so changing internal APIs in a micro version is OK. If Cython starts using such internal APIs, external modules from PyPI will be broken when Python is upgraded. That feels like a nightmare to me. So having experimental APIs only for the stdlib makes sense to me.
On the other hand, it makes sense to move _PyTuple_ITEMS to (3) or even (2). PyTuple_ITEMS(t) seems better than &PyTuple_GET_ITEM(t, 0). Regards, -- INADA Naoki From vstinner at redhat.com Tue Feb 19 07:08:18 2019 From: vstinner at redhat.com (Victor Stinner) Date: Tue, 19 Feb 2019 13:08:18 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <5C6B2270.1080203@UGent.be> References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> Message-ID: Le lun. 18 févr. 2019 à 22:34, Jeroen Demeyer a écrit : > First of all, if everybody can actually #define Py_BUILD_CORE and get > access to the complete API, I don't mind so much. But then it's > important that this actually keeps working (i.e. that those headers will > always be installed). > > Still, do we really need so many levels of API: > (1) stable API (with #define Py_LIMITED_API) > (2) public documented API > (3) private undocumented API (the default exposed API) > (4) internal API (with #define Py_BUILD_CORE) It's not a matter of documentation. It's a matter of the guarantees provided to users. I would like to move towards (1) by default: only provide a stable API by default. IMHO most users will be just fine with this subset of the API. The borders between (2), (3) and (4) are unclear. I created Include/cpython/ which is not really a "private API" but more "CPython implementation details". A better definition of (1) would be "portable stable API" whereas (2)+(3) would be "CPython stable API". And (4) would be the unstable API. Summary: (1) Portable stable API (2) Portable CPython API (3) Unstable API ... The border between (2) and (3) is a "work-in-progress". I modified "make install" to install (3) as well: there are users of this API. Performance can be a good motivation: Cython for example. Debuggers and profilers really need access to the lowest level of the API, usually because they can only *inspect* (structure fields) but not *execute* code (call functions).
> I'm afraid of hiding actually useful private macros under Py_BUILD_CORE. Again, it's not a matter of usefulness. It's a matter of the backward compatibility announced to users. I would like to make it more explicit that if you *opt in* for an unstable API, you are on your own. My intent is that in 5 years or 10 years, slowly, most C extensions will use (1), which will allow Python to experiment with new optimizations, and should help PyPy (cpyext module) to become even more efficient. Victor -- Night gathers, and now my watch begins. It shall not end until my death. From vstinner at redhat.com Tue Feb 19 07:09:53 2019 From: vstinner at redhat.com (Victor Stinner) Date: Tue, 19 Feb 2019 13:09:53 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <5C6BDA74.8060800@UGent.be> Message-ID: Le mar. 19 févr. 2019 à 11:57, INADA Naoki a écrit : > On the other hand, it makes sense to move _PyTuple_ITEMS to (3) or even (2). > PyTuple_ITEMS(t) seems better than &PyTuple_GET_ITEM(t, 0). Please don't use &PyTuple_GET_ITEM() or _PyTuple_ITEMS(). It prevents using a more efficient storage for tuples. Something like: https://pythoncapi.readthedocs.io/optimization_ideas.html#specialized-list-for-small-integers PyPy already has the issue right now. Victor -- Night gathers, and now my watch begins. It shall not end until my death. From vstinner at redhat.com Tue Feb 19 07:10:49 2019 From: vstinner at redhat.com (Victor Stinner) Date: Tue, 19 Feb 2019 13:10:49 +0100 Subject: [Python-Dev] buildbottest on Android emulator with docker In-Reply-To: References: Message-ID: This is cool! Sadly, I don't have the bandwidth right now to play with it. I may have a look later. Victor Le ven. 15 févr. 2019 à
18:27, Xavier de Gaye a écrit : > > The following command runs the buildbottest on an Android emulator with docker (it will use a little bit more than 11 GB): > > $ docker run -it --privileged xdegaye/abifa:r14b-24-x86_64-master > > This command does: > * pull an image from the Docker hub (only the first time that the command is run; note that this is a 2 GB download!) and start a container > * pull the latest changes from the GitHub cpython repository and cross-compile python > * start an Android emulator and install python on it > * run the buildbottest target of the cpython Makefile > > The image is built from a Dockerfile [2]. > > This same image can also be used with the 'bash' command line argument to enter bash in the container and run python interactively on the emulator [1]. If the 'docker run' command also sets a bind mount to a local cpython repository, then it is possible to develop/debug/fix python on the emulator running in this container using one's own clone of cpython. > > [1] documentation at https://xdegaye.gitlab.io/abifa/docker.html > [2] Dockerfile at https://gitlab.com/xdegaye/abifa/blob/master/docker/Dockerfile.r14b-24-x86_64-master > > Xavier > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com -- Night gathers, and now my watch begins. It shall not end until my death. From stephane at wirtel.be Tue Feb 19 07:26:56 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Tue, 19 Feb 2019 13:26:56 +0100 Subject: [Python-Dev] buildbottest on Android emulator with docker In-Reply-To: References: Message-ID: <20190219122656.GA7124@xps> Nice, we could start to fix the Android issues with this docker image, maybe modified with the local repository, but really useful. Thank you so much for your work.
-- Stéphane Wirtel - https://wirtel.be - @matrixise From eryksun at gmail.com Tue Feb 19 07:48:25 2019 From: eryksun at gmail.com (eryk sun) Date: Tue, 19 Feb 2019 06:48:25 -0600 Subject: [Python-Dev] Adding test.support.safe_rmpath() In-Reply-To: References: <7AF16DF0-A237-44B7-B272-7427CB5AD5B0@mac.com> <3C12F8E9-E825-4F28-9CFE-A81FB35694A6@mac.com> <6083f67a-8413-fc40-0118-cfab2284ae2a@timgolden.me.uk> <1c094bac-a1d1-aaaf-4e3a-0f110c0dfd4f@python.org> Message-ID: On 2/16/19, Richard Levasseur wrote: > > First: The tempfile module is a poor fit for testing (don't get me wrong, > it works, but it's not *nice for use in tests*). This is because: > 1. Using it as a context manager is distracting. The indentation signifies > a conceptual scope the reader needs to be aware of, but in a test context, > it's usually not useful. At worst, it covers most of the test. At best, it's > constrained to a block at the start. > 2. tempfile defaults to binary mode instead of text; just another thing to > bite you. > 3. On Windows, you can't reopen the file, so for cross-platform stuff, you > can't even use it for this case. Python opens files with at least read and write sharing in Windows, so typically there's no problem with opening a file multiple times. The problem is with deleting and renaming open files. Typically delete access is not shared, and, even if it is, a normal delete just sets a disposition. A deleted file is unlinked only after all handles have been closed. Similarly, replacing an open file via os.replace will fail because it can't be unlinked. In Windows 10 we can delete and rename files with POSIX-like semantics. To do this, open a handle with delete access and call SetFileInformationByHandle to set the FileDispositionInfoEx or FileRenameInfoEx information. Thus far this is supported by NTFS, and I think it's only NTFS. It's still not completely like POSIX, since it requires delete-access sharing.
But it does provide immediate unlinking, which avoids the race condition when trying to remove a directory that has watched files. Programs that have open files that have been unlinked can continue to access them normally. From ncoghlan at gmail.com Tue Feb 19 08:39:54 2019 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Feb 2019 23:39:54 +1000 Subject: [Python-Dev] int() and math.trunc don't accept objects that only define __index__ In-Reply-To: References: Message-ID: On Tue, 19 Feb 2019 at 03:31, Rémi Lapeyre wrote: > Nick Coghlan proposes to make __int__ default to __index__ when only the second > is defined and asked to open a discussion on python-dev before making any change > "as the closest equivalent we have to this right now is the "negative" derivation, > where overriding __eq__ without overriding __hash__ implicitly marks the derived > class as unhashable (look for "type->tp_hash = PyObject_HashNotImplemented;").". Reading this again now, it occurs to me that there's another developer experience improvement we already made along these lines in Python 3: "By default, __ne__() delegates to __eq__() and inverts the result unless it is NotImplemented." [1] By contrast, the corresponding (and annoying) Python 2 behaviour was: "The truth of x==y does not imply that x!=y is false. Accordingly, when defining __eq__(), one should also define __ne__() so that the operators will behave as expected." [2] The only difference is that whereas the new `__ne__` delegation behaviour could just be defined directly in `object.__ne__()`, `object` doesn't implement `__int__` by default, so the delegating function would need to be injected into the type when it is defined (and that's the part that's similar to the `__hash__ = None` negative derivation). So +1 from me. Cheers, Nick.
[1] https://docs.python.org/3/reference/datamodel.html#object.__ne__ [2] https://docs.python.org/2/reference/datamodel.html#object.__ne__ -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Feb 19 08:55:17 2019 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 19 Feb 2019 23:55:17 +1000 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <20190216223210.09ef3944@fsol> <20190216234731.4ea34101@fsol> Message-ID: On Tue, 19 Feb 2019 at 05:33, Eric Snow wrote: > > On Sat, Feb 16, 2019 at 3:47 PM Antoine Pitrou wrote: > > On Sat, 16 Feb 2019 14:34:46 -0800 > > Steve Dower wrote: > > > Which seems to suggest that the answer to "which members are important > > > to expose?" is "probably none". > > > > That sounds intuitive. But we don't know what kind of hacks some > > extension authors might do, for legitimate reasons... > > > > (perhaps some gevent-like framework needs access to the interpreter > > state?) > > In those cases either we will expose accessor functions in the C-API > or they can define Py_BUILD_CORE. I really don't want us to ever get into a situation where we're actively encouraging third party projects to define Py_BUILD_CORE. If we decide we do want to go down a path like that, I'd instead prefer to see us define something more like "Py_FRAGILE_API" to make it clear that folks using those extra interfaces are tightly coupling themselves to a specific version of CPython, and are likely going to need to make changes when new versions are released. Even though we would probably *implement* that by having this snippet in one of our header files: #ifdef Py_FRAGILE_API #define Py_BUILD_CORE #endif I still think it would convey the concerns we have more clearly than simply telling people to define Py_BUILD_CORE would. Cheers, Nick. 
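The `__ne__` delegation Nick quotes in the `__index__` thread above, along with the `__hash__ = None` "negative derivation", is easy to check directly. The `Point` class below is an invented illustration, not code from the thread:

```python
class Point:
    """Defines __eq__ only; Python 3 derives __ne__ and blocks hashing."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        if not isinstance(other, Point):
            return NotImplemented  # let the other operand have a try
        return (self.x, self.y) == (other.x, other.y)

# object.__ne__ delegates to __eq__ and inverts the result:
print(Point(1, 2) != Point(1, 2))  # False
print(Point(1, 2) != Point(3, 4))  # True

# The "negative derivation": overriding __eq__ without overriding
# __hash__ implicitly marks the class as unhashable.
print(Point.__hash__ is None)      # True
```

In Python 2, the first expression would have been True, because `!=` fell back to identity comparison unless `__ne__` was written by hand.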
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From remi.lapeyre at henki.fr Tue Feb 19 08:55:38 2019 From: remi.lapeyre at henki.fr (Rémi Lapeyre) Date: Tue, 19 Feb 2019 05:55:38 -0800 Subject: [Python-Dev] int() and math.trunc don't accept objects that only define __index__ In-Reply-To: References: Message-ID: Another point in favor of the change I just noticed is that int() accepts objects defining __index__ as its `base` argument:

    Python 3.7.2 (default, Jan 13 2019, 12:50:01)
    [Clang 10.0.0 (clang-1000.11.45.5)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> class MyInt:
    ...     def __index__(self):
    ...         return 4
    ...
    >>> int("3", base=MyInt())
    3
    >>> int(MyInt())
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: int() argument must be a string, a bytes-like object or a number, not 'MyInt'

From ncoghlan at gmail.com Tue Feb 19 09:00:16 2019 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 20 Feb 2019 00:00:16 +1000 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <20190219113741.4514750e@fsol> References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> Message-ID: On Tue, 19 Feb 2019 at 20:41, Antoine Pitrou wrote: > On Mon, 18 Feb 2019 19:04:31 -0800 > Steve Dower wrote: > > If you always rebuild your extension for every micro version (3.x.y) of > > CPython, then sure, go ahead and use this. > > Usually we would guarantee that API details don't change in bugfix > versions (i.e. the 3.x.y -> 3.x.(y + 1) transition). Has that changed? > That may turn out to be a big problem for several third-party extensions... This is the genuine technical difference between the three levels:

* Py_BUILD_CORE -> no ABI stability guarantees at all
* standard -> stable within a maintenance branch
* Py_LIMITED_API -> stable across feature releases

Cheers, Nick.
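For context on the `int()`/`__index__` exchange above: the fallback under discussion was in fact adopted, so on Python 3.8 and newer `int()` accepts objects that define only `__index__` (on 3.7, as the session in the thread shows, it raised TypeError). A minimal sketch of the resulting behaviour:

```python
class MyInt:
    """Defines only __index__; no __int__ or __trunc__."""

    def __index__(self):
        return 4

# Python 3.8+: int() falls back to __index__ ...
print(int(MyInt()))             # 4
# ... and the base argument has accepted __index__ objects all along:
print(int("11", base=MyInt()))  # 5, i.e. "11" read in base 4
```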
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stephane at wirtel.be Tue Feb 19 10:52:49 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Tue, 19 Feb 2019 16:52:49 +0100 Subject: [Python-Dev] buildbottest on Android emulator with docker In-Reply-To: <20190219122656.GA7124@xps> References: <20190219122656.GA7124@xps> Message-ID: <20190219155249.GA30353@xps> Hi Xavier, I get this exception:

Timeout (360 seconds) reached; failed to start emulator
---> Device ready.
---> Install Python on the emulator.
/home/pydev/build/python-native/python -B /home/pydev/abifa/Android/tools/install.py
error: device 'emulator-5556' not found
unzip -q /home/pydev/dist/python3.8-android-24-x86_64-stdlib.zip -d /tmp/tmpt_v_gdbo
/home/pydev/android/android-sdk/platform-tools/adb -s emulator-5556 shell mkdir -p /data/local/tmp/python
Command "('/home/pydev/android/android-sdk/platform-tools/adb', '-s', 'emulator-5556', 'shell', 'mkdir -p /data/local/tmp/python')" returned non-zero exit status 1
/home/pydev/abifa/Android/emulator.mk:57: recipe for target '_install' failed
make: *** [_install] Error 1

Is there a repository with a bug tracker because I would not want to pollute this ML with that. Thank you, Stéphane -- Stéphane Wirtel - https://wirtel.be - @matrixise From lyssdod at gmail.com Tue Feb 19 11:13:16 2019 From: lyssdod at gmail.com (Alexander Revin) Date: Tue, 19 Feb 2019 17:13:16 +0100 Subject: [Python-Dev] new binary wheels PEP idea Message-ID: Hi all, I have an idea regarding Python binary wheels on non-glibc platforms, and it seems that initially I've posted it to the wrong list ([1]) Long story short, the proposal is to use platform tuples (like compiler ones) for wheel names, which will allow much broader platform support, for example: package-1.0-cp36-cp36m-amd64_linux_gnu.whl package-1.0-cp36-cp36m-amd64_linux_musl.whl So eventually only the {platform tag} part will be modified.
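To make the proposed naming concrete, here is a rough sketch of how such a filename would decompose into tags. The `parse_wheel_name` helper is hypothetical (real tooling lives in pip/`packaging`), and it assumes a five-part name with no optional build-tag component:

```python
def parse_wheel_name(filename):
    """Split 'name-version-python-abi-platform.whl' into its parts.

    Simplified sketch: assumes no build tag, so exactly five components.
    """
    stem = filename[: -len(".whl")]
    name, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return name, version, python_tag, abi_tag, platform_tag

# Under the proposal, only the platform tag varies with the libc:
for fn in ("package-1.0-cp36-cp36m-amd64_linux_gnu.whl",
           "package-1.0-cp36-cp36m-amd64_linux_musl.whl"):
    *_, platform_tag = parse_wheel_name(fn)
    arch, kernel, libc = platform_tag.split("_", 2)
    print(arch, kernel, libc)  # amd64 linux gnu / amd64 linux musl
```

The platform tuple thus carries the architecture, kernel, and libc as separate fields, which is what lets an installer distinguish a glibc build from a musl build of the same wheel.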
Glibc/musl detection is quite trivial and eventually will be based on existing one in PEP 513 [2]. Let me know what you think. Best regards, Alex [1] https://mail.python.org/pipermail/python-list/2019-February/739524.html [2] https://www.python.org/dev/peps/pep-0513/#id49 From steve.dower at python.org Tue Feb 19 11:43:54 2019 From: steve.dower at python.org (Steve Dower) Date: Tue, 19 Feb 2019 08:43:54 -0800 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <5C6BDA74.8060800@UGent.be> References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <5C6BDA74.8060800@UGent.be> Message-ID: <61551538-3059-9833-e704-48c7f958652d@python.org> On 19Feb2019 0229, Jeroen Demeyer wrote: > On 2019-02-19 04:04, Steve Dower wrote: >> On 18Feb.2019 1324, Jeroen Demeyer wrote: >>> For a very concrete example, was it really necessary to put >>> _PyTuple_ITEMS in (4)? That's used in _functoolsmodule.c. Especially >>> given that the very similar PySequence_Fast_ITEMS is in (2), that seems >>> like a strange and arbitrary limiting choice. >> >> The reason to do this is that we can "guarantee" that we've fixed all >> users when we change the internal representation. > > I think that CPython should then at least "eat its own dog food" and > don't use any of the internal functions/macros when implementing the > stdlib. As I said before: if a function/macro is useful for implementing > stdlib functionality like "functools" or "json", it's probably useful > for external modules too. I'm inclined to agree, but then I'm also one of the advocates for breaking out as much as possible of the stdlib into pip-installable modules, which would necessitate this :) There are certainly parts of the stdlib that are there to _enable_ the use of these features without exposing them - asyncio has some good examples of this. But the rest probably don't. 
If they really do, then we would have to define stable ways to get the same functionality (one example - numpy currently relies on being able to check the refcount to see if it equals 1, but we could easily provide a "Py_HasNoOtherReferences" function that would do the same thing and also allow us to one day move or remove reference counts without breaking numpy). That said, one of the criteria for "are you eligible to use the internal API" is "will users always have matched builds of this module" - for the standard library, the answer is yes, and so they can use that API. For third-party extension modules, the answer _might_ be yes, which means they _might_ be able to use the API. But both of those "mights" are outside of the control of the core development team, so we can't take that responsibility for you. The best we can do is make it easy to use the stable APIs, and make using the unstable APIs a deliberate choice so that those who do so are aware that they are now more responsible for their user's success than they thought they were. Cheers, Steve From steve.dower at python.org Tue Feb 19 11:46:17 2019 From: steve.dower at python.org (Steve Dower) Date: Tue, 19 Feb 2019 08:46:17 -0800 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <20190216223210.09ef3944@fsol> <20190216234731.4ea34101@fsol> Message-ID: <1b51f3dd-d0b6-0078-98ed-0ac52ebcc723@python.org> On 19Feb2019 0555, Nick Coghlan wrote: > I really don't want us to ever get into a situation where we're > actively encouraging third party projects to define Py_BUILD_CORE. > > If we decide we do want to go down a path like that, I'd instead > prefer to see us define something more like "Py_FRAGILE_API" to make > it clear that folks using those extra interfaces are tightly coupling > themselves to a specific version of CPython, and are likely going to > need to make changes when new versions are released. 
I mean, my suggestion of "Py_I_TOO_LIKE_TO_LIVE_DANGEROUSLY" was only a little bit tongue-in-cheek :) Maybe there's a good Monty Python reference we can use here? "Py_ITS_JUST_A_FLESH_WOUND" or "Py_THEN_WE_JUMP_OUT_OF_THE_RABBIT" Cheers, Steve From xdegaye at gmail.com Tue Feb 19 14:52:04 2019 From: xdegaye at gmail.com (Xavier de Gaye) Date: Tue, 19 Feb 2019 20:52:04 +0100 Subject: [Python-Dev] buildbottest on Android emulator with docker In-Reply-To: <20190219155249.GA30353@xps> References: <20190219122656.GA7124@xps> <20190219155249.GA30353@xps> Message-ID: https://gitlab.com/xdegaye/abifa The table of contents of the first URL in my initial post also gives a link to that repository, under the name 'Repository'. Xavier On Tue, Feb 19, 2019 at 4:54 PM Stephane Wirtel wrote: > > Hi Xavier, > > I get this exception > > Timeout (360 seconds) reached; failed to start emulator > ---> Device ready. > ---> Install Python on the emulator. > /home/pydev/build/python-native/python -B /home/pydev/abifa/Android/tools/install.py > error: device 'emulator-5556' not found > unzip -q /home/pydev/dist/python3.8-android-24-x86_64-stdlib.zip -d /tmp/tmpt_v_gdbo > /home/pydev/android/android-sdk/platform-tools/adb -s emulator-5556 shell mkdir -p /data/local/tmp/python > Command "('/home/pydev/android/android-sdk/platform-tools/adb', '-s', 'emulator-5556', 'shell', 'mkdir -p /data/local/tmp/python')" returned non-zero exit status 1 > /home/pydev/abifa/Android/emulator.mk:57: recipe for target '_install' failed > make: *** [_install] Error 1 > > > Is there a repository with a bug tracker because I would not want to > pollute this ML with that.
> > Thank you, > > Stéphane > > > -- > Stéphane Wirtel - https://wirtel.be - @matrixise > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/xdegaye%40gmail.com From barry at python.org Tue Feb 19 14:41:56 2019 From: barry at python.org (Barry Warsaw) Date: Tue, 19 Feb 2019 11:41:56 -0800 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <20190216223210.09ef3944@fsol> Message-ID: Steve Dower wrote on 2/16/19 14:34:> > This is mostly about being able to assign blame when things break, so > I'm totally okay with extension modules that want to play with internals > declaring Py_BUILD_CORE to get access to them (though I suspect that > won't work out of the box - maybe we should have a > Py_I_TOO_LIKE_TO_LIVE_DANGEROUSLY?). Let's call it Py_POINTED_STICK of course! http://www.montypython.net/scripts/fruit.php -Barry -------------- next part -------------- An HTML attachment was scrubbed... URL: From xdegaye at gmail.com Tue Feb 19 15:05:36 2019 From: xdegaye at gmail.com (Xavier de Gaye) Date: Tue, 19 Feb 2019 21:05:36 +0100 Subject: [Python-Dev] Is distutils.util.get_platform() the "current" or the "target" platform In-Reply-To: References: <7f0f615b-fab5-0504-b1bf-20b3c9cd0402@python.org> <03fa7e4a-6cb9-58b8-709c-44a589fdb706@python.org> Message-ID: <414ae4ee-5a56-b137-6efe-570070cc9e9a@gmail.com> > [Any reason for dropping python-dev?] Sorry, I just clicked the wrong button. > And the answer is a resounding "yes, it returns the target platform"? It seems you're saying this, but the wording of your email sounds just enough of a question that I'm not sure whether you are definitively answering it or not. The answer is yes indeed, it returns the target platform. This is a listing of the non-obvious steps that lead to this conclusion.
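[Condensed into code, the conclusion reads roughly like this; a simplified sketch of what Lib/distutils/util.py does on posix, with an illustrative native-build fallback:]

```python
import os

def get_platform_sketch():
    """Simplified model of distutils.util.get_platform() on posix.

    When configure is run with --host=..., the generated Makefile exports
    _PYTHON_HOST_PLATFORM, so the build-machine interpreter reports the
    target ('host' in autoconf terms) platform instead of its own.
    """
    if "_PYTHON_HOST_PLATFORM" in os.environ:
        # Cross-compiling: return the value computed by configure.
        return os.environ["_PYTHON_HOST_PLATFORM"]
    # Native build: derive a tag from the running system (illustrative only;
    # the real function does considerably more normalization).
    uname = os.uname()
    return "{}-{}".format(uname.sysname.lower(), uname.machine)

os.environ["_PYTHON_HOST_PLATFORM"] = "linux-aarch64"  # simulate a cross build
print(get_platform_sketch())  # linux-aarch64
```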
Xavier On 2/19/19 8:45 PM, Steve Dower wrote: > [Any reason for dropping python-dev?] > > On 19Feb2019 1139, Xavier de Gaye wrote: >> Is distutils.util.get_platform() the "current" or the "target" platform ? > > I *think* you're answering this below, yes? > >> * When cross-compiling on posix platforms using autoconf, configure is >> run with the '--host=host-type' [1] command line option to specify >> the target platform. >> >> * The AC_CANONICAL_HOST macro is used by configure.ac to get the >> canonical variables `host' and `host_cpu' [2]. >> Those variables are used to compute _PYTHON_HOST_PLATFORM. >> >> * The Makefile generated by configure runs setup.py, >> generate-posix-vars, etc... on the build platform using >> PYTHON_FOR_BUILD, a native python interpreter that is set >> to run with the _PYTHON_HOST_PLATFORM environment variable. >> >> * get_platform() in setup.py and in Lib/distutils/util.py returns the >> value of _PYTHON_HOST_PLATFORM when cross-compiling. >> >> So the process of cross-compilation on posix platforms has >> get_platform() return the target ('host' in autoconf terminology) platform. >> >> [1] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.69/html_node/Specifying-Target-Triplets.html >> [2] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.69/html_node/Canonicalizing.html > > And the answer is a resounding "yes, it returns the target platform"? It seems you're saying this, but the wording of your email sounds just enough of a question that I'm not sure whether you are definitively answering it or not.
> > Cheers, > Steve From stefan_ml at behnel.de Tue Feb 19 15:12:05 2019 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 19 Feb 2019 21:12:05 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> Message-ID: Nick Coghlan wrote on 19.02.19 at 15:00: > On Tue, 19 Feb 2019 at 20:41, Antoine Pitrou wrote: >> On Mon, 18 Feb 2019 19:04:31 -0800 Steve Dower wrote: >>> If you always rebuild your extension for every micro version (3.x.y) of >>> CPython, then sure, go ahead and use this. >> >> Usually we would guarantee that API details don't change in bugfix >> versions (i.e. the 3.x.y -> 3.x.(y + 1) transition). Has that changed? >> That may turn out a big problem for several third-party extensions... > > This is the genuine technical difference between the three levels: > > * Py_BUILD_CORE -> no ABI stability guarantees at all > * standard -> stable within a maintenance branch > * Py_LIMITED_API -> stable across feature releases I'm happy with this split, and I think this is how it should be. There is no reason (notwithstanding critical bugs) to break the C-API within a maintenance (3.x) release series. Apart from the 3.5.[12] slip, CPython has proven very reliable in these guarantees. We can (or at least could) easily take care in Cython to enable version specific features and optimisations only from CPython alpha/beta releases on, and not when they should become available in later point releases, so that users can compile their code in, say, CPython 3.7.5 and it will work correctly in 3.7.1. We never cared about Py_BUILD_CORE (because that's obviously internal), and it's also not very likely that we will have a Py_LIMITED_API backend anywhere in the near future (although we would consider PRs for it that implement the support as an optional C compile time feature).
What I would ask, though, and I think that's also Jeroen's request, is to be careful what you lock up behind Py_BUILD_CORE. Any new functionality should be available to extension modules by default, unless there is a good reason why it should remain internal. Usually, there is a reason why this functionality was added, and I doubt that there are many cases where these reasons are entirely internal to CPython. One thing that is not mentioned above is underscore-private C-API functions. I imagine that they are a bit annoying for CPython itself because promoting them to public means renaming them, which is already a breaking change. But they are a clear marker for potential future breakage, which is good. Still, my experience so far suggests that they also fall under the "keep stable in maintenance branch" rule, which is even better. So, yeah, I'm happy with the status quo, and a bit worried about all the moving around of declarations and that scent of a sword of Damocles hanging over their potential confinement. IMHO, things should just be public and potentially marked as "unstable" to advertise a risk of breakage in future CPython X.Y feature releases. Then it's up to the users to decide how much work they want to invest into keeping up with C-API changes vs. potentially sub-optimal but stable C-API usage. Stefan From steve.dower at python.org Tue Feb 19 15:40:42 2019 From: steve.dower at python.org (Steve Dower) Date: Tue, 19 Feb 2019 12:40:42 -0800 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> Message-ID: On 19Feb2019 1212, Stefan Behnel wrote: > So, yeah, I'm happy with the status quo, and a bit worried about all the > moving around of declarations and that scent of a sword of Damocles hanging > over their potential confinement.
IMHO, things should just be public and > potentially marked as "unstable" to advertise a risk of breakage in a > future CPython X.Y feature releases. Then it's up to the users to decide > how much work they want to invest into keeping up with C-API changes vs. > potentially sub-optimal but stable C-API usage. Unfortunately, advertising a risk of breakage doesn't make the break less painful when it happens. We'd rather avoid that pain by preemptively breaking (at a major version update, e.g. 3.8) so that minor breaks (e.g. between 3.8.2 and 3.8.3) don't cause any problems at all. And if we preemptively break, then we can also preemptively add functions to cover what direct memory accesses previously used to do. And it's not up to the users - it's up to the package developers. Most of whom optimise for their own ease of life (as someone who supports Windows users, I'm well aware of where package developers cut painful corners ;) ). The only choice users get in the matter is whether they ever update Python, or if they switch to a language that is more respectful toward them. For what it's worth, the users I've been speaking to recently are *far* more concerned about being able to update Python without things breaking than they are about runtime performance. 
Cheers, Steve From steve.dower at python.org Tue Feb 19 15:41:31 2019 From: steve.dower at python.org (Steve Dower) Date: Tue, 19 Feb 2019 12:41:31 -0800 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <20190216223210.09ef3944@fsol> Message-ID: <9dd3d0cb-14b2-aaf7-661a-94e4276805f1@python.org> On 19Feb2019 1141, Barry Warsaw wrote: > Steve Dower wrote on 2/16/19 14:34:> >> This is mostly about being able to assign blame when things break, so >> I'm totally okay with extension modules that want to play with internals >> declaring Py_BUILD_CORE to get access to them (though I suspect that >> won't work out of the box - maybe we should have a >> Py_I_TOO_LIKE_TO_LIVE_DANGEROUSLY?). > > Let's call it Py_POINTED_STICK of course! > > http://www.montypython.net/scripts/fruit.php +1, and instead of "using the internal API" we can call it "coming at CPython with a banana" :D Cheers, Steve From stefan_ml at behnel.de Tue Feb 19 16:05:10 2019 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 19 Feb 2019 22:05:10 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> Message-ID: Steve Dower wrote on 19.02.19 at 21:40: > On 19Feb2019 1212, Stefan Behnel wrote: >> Then it's up to the users to decide >> how much work they want to invest into keeping up with C-API changes vs. >> potentially sub-optimal but stable C-API usage. > [...] > And it's not up to the users - it's up to the package developers. I meant "users" as in "users of the C-API", i.e. package developers. Stefan From brett at python.org Tue Feb 19 16:41:00 2019 From: brett at python.org (Brett Cannon) Date: Tue, 19 Feb 2019 13:41:00 -0800 Subject: [Python-Dev] new binary wheels PEP idea In-Reply-To: References: Message-ID: Unfortunately you're still posted to the wrong list, Alexander.
You want to mail distutils-sig at python.org where packaging discussions occur. On Tue, Feb 19, 2019 at 8:19 AM Alexander Revin wrote: > Hi all, > > I have an idea regarding Python binary wheels on non-glibc platforms, > and it seems that initially I've posted it to the wrong list ([1]) > > Long story short, the proposal is to use platform tuples (like > compiler ones) for wheel names, which will allow much broader platform > support, for example: > > package-1.0-cp36-cp36m-amd64_linux_gnu.whl > package-1.0-cp36-cp36m-amd64_linux_musl.whl > > So eventually only {platform tag} part will be modified. Glibc/musl > detection is quite trivial and eventually will be based on existing > one in PEP 513 [2]. > > Let me know what you think. > > Best regards, > Alex > > [1] > https://mail.python.org/pipermail/python-list/2019-February/739524.html > [2] https://www.python.org/dev/peps/pep-0513/#id49 > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Tue Feb 19 16:57:23 2019 From: steve.dower at python.org (Steve Dower) Date: Tue, 19 Feb 2019 13:57:23 -0800 Subject: [Python-Dev] new binary wheels PEP idea In-Reply-To: References: Message-ID: And for what it's worth, most of the really active contributors from distutils-sig seem to prefer the "Packaging" category at https://discuss.python.org/ If you'd prefer to use Discourse, I'd suggest posting there first and also email distutils-sig with a link to the discussion. Otherwise, go straight to distutils-sig (just don't be too surprised if you don't seem to get much traction there or if someone restarts the discussion on Discourse for you). 
Cheers, Steve On 19Feb2019 1341, Brett Cannon wrote: > Unfortunately you're still posted to the wrong list, Alexander. You want > to mail distutils-sig at python.org where > packaging discussions occur. > > On Tue, Feb 19, 2019 at 8:19 AM Alexander Revin > wrote: > > Hi all, > > I have an idea regarding Python binary wheels on non-glibc platforms, > and it seems that initially I've posted it to the wrong list ([1]) > > Long story short, the proposal is to use platform tuples (like > compiler ones) for wheel names, which will allow much broader platform > support, for example: > > package-1.0-cp36-cp36m-amd64_linux_gnu.whl > package-1.0-cp36-cp36m-amd64_linux_musl.whl > > So eventually only {platform tag} part will be modified. Glibc/musl > detection is quite trivial and eventually will be based on existing > one in PEP 513 [2]. > > Let me know what you think. > > Best regards, > Alex > > [1] > https://mail.python.org/pipermail/python-list/2019-February/739524.html > [2] https://www.python.org/dev/peps/pep-0513/#id49 From lyssdod at gmail.com Tue Feb 19 17:32:27 2019 From: lyssdod at gmail.com (Alexander Revin) Date: Tue, 19 Feb 2019 23:32:27 +0100 Subject: [Python-Dev] new binary wheels PEP idea In-Reply-To: References: Message-ID: Thank you guys! Will try it that way. Best, Alex On Tue, Feb 19, 2019 at 10:57 PM Steve Dower wrote: > > And for what it's worth, most of the really active contributors from > distutils-sig seem to prefer the "Packaging" category at > https://discuss.python.org/ > > If you'd prefer to use Discourse, I'd suggest posting there first and > also email distutils-sig with a link to the discussion. Otherwise, go > straight to distutils-sig (just don't be too surprised if you don't seem > to get much traction there or if someone restarts the discussion on > Discourse for you). > > Cheers, > Steve > > On 19Feb2019 1341, Brett Cannon wrote: > > Unfortunately you're still posted to the wrong list, Alexander. 
You want > > to mail distutils-sig at python.org where > > packaging discussions occur. > > > > On Tue, Feb 19, 2019 at 8:19 AM Alexander Revin > > wrote: > > > > Hi all, > > > > I have an idea regarding Python binary wheels on non-glibc platforms, > > and it seems that initially I've posted it to the wrong list ([1]) > > > > Long story short, the proposal is to use platform tuples (like > > compiler ones) for wheel names, which will allow much broader platform > > support, for example: > > > > package-1.0-cp36-cp36m-amd64_linux_gnu.whl > > package-1.0-cp36-cp36m-amd64_linux_musl.whl > > > > So eventually only {platform tag} part will be modified. Glibc/musl > > detection is quite trivial and eventually will be based on existing > > one in PEP 513 [2]. > > > > Let me know what you think. > > > > Best regards, > > Alex > > > > [1] > > https://mail.python.org/pipermail/python-list/2019-February/739524.html > > [2] https://www.python.org/dev/peps/pep-0513/#id49 > From xdegaye at gmail.com Wed Feb 20 04:38:02 2019 From: xdegaye at gmail.com (Xavier de Gaye) Date: Wed, 20 Feb 2019 10:38:02 +0100 Subject: [Python-Dev] buildbottest on Android emulator with docker In-Reply-To: <20190219155249.GA30353@xps> References: <20190219122656.GA7124@xps> <20190219155249.GA30353@xps> Message-ID: <13650274-52bc-afad-c905-ff51542df662@gmail.com> > Timeout (360 seconds) reached; failed to start emulator > ---> Device ready. > ---> Install Python on the emulator. > /home/pydev/build/python-native/python -B /home/pydev/abifa/Android/tools/install.py > error: device 'emulator-5556' not found I can reproduce this error after removing the kvm kernel modules (I am on ArchLinux, and ArchLinux kernels provide the required kernel modules to support KVM virtualization). So my fault: I forgot to explain that KVM is required to run this docker image. KVM is not required when the docker image is built for other (non-x86_64) architectures, but it is then very, very slow.
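[For anyone hitting the same emulator timeout, a quick host-side diagnostic along these lines can confirm whether KVM is actually usable; a sketch, with the usual Linux paths assumed:]

```shell
#!/bin/sh
# kvm_status: report whether this host can provide KVM acceleration.
kvm_status() {
    # Hardware virtualization shows up as vmx (Intel) or svm (AMD) CPU flags.
    if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
        echo "cpu: virtualization flags present"
    else
        echo "cpu: no vmx/svm flags (virtualization disabled in the BIOS?)"
    fi
    # The emulator needs /dev/kvm, provided by the kvm_intel/kvm_amd modules.
    if [ -e /dev/kvm ]; then
        echo "kvm: /dev/kvm present, the emulator can use KVM"
    else
        echo "kvm: /dev/kvm missing (load the kvm modules or install the KVM packages)"
    fi
}

kvm_status
```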
To fix the problem, one must install the Linux distribution packages that provide KVM virtualization. Whatever the Linux distribution, it may also be necessary to enable virtualization support in the BIOS. I will enter an issue and update the documentation at https://gitlab.com/xdegaye/abifa. Thanks Stephane for the report. Xavier On 2/19/19 4:52 PM, Stephane Wirtel wrote: > Hi Xavier, > > I get this exception > > Timeout (360 seconds) reached; failed to start emulator > ---> Device ready. > ---> Install Python on the emulator. > /home/pydev/build/python-native/python -B /home/pydev/abifa/Android/tools/install.py > error: device 'emulator-5556' not found > unzip -q /home/pydev/dist/python3.8-android-24-x86_64-stdlib.zip -d /tmp/tmpt_v_gdbo > /home/pydev/android/android-sdk/platform-tools/adb -s emulator-5556 shell mkdir -p /data/local/tmp/python > Command "('/home/pydev/android/android-sdk/platform-tools/adb', '-s', 'emulator-5556', 'shell', 'mkdir -p /data/local/tmp/python')" returned non-zero exit status 1 > /home/pydev/abifa/Android/emulator.mk:57: recipe for target '_install' failed > make: *** [_install] Error 1 From stephane at wirtel.be Wed Feb 20 10:01:20 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Wed, 20 Feb 2019 16:01:20 +0100 Subject: [Python-Dev] Question - Bug Triage for 3.4 & 3.5 Message-ID: <20190220150120.GA26898@xps> Hi, As you know, Python 3.4 and 3.5 are in security mode and the EOL for these versions are respectively 2019-03-16 and 2020-09-13. Number of issues 3.4: 1530 issues 3.5: 1901 issues But some issues are not related to the security. Could we update these issues (non-security) to 3.6/3.7 & 3.8?
Cheers, Stéphane -- Stéphane Wirtel - https://wirtel.be - @matrixise From stephane at wirtel.be Wed Feb 20 10:11:20 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Wed, 20 Feb 2019 16:11:20 +0100 Subject: [Python-Dev] Question - Bug Triage for 3.4 & 3.5 In-Reply-To: <20190220150120.GA26898@xps> References: <20190220150120.GA26898@xps> Message-ID: <20190220151120.GA27904@xps> After discussion with Victor, my proposal will generate noise with the ML, maybe for nothing. On 02/20, Stephane Wirtel wrote: >Hi, > >As you know, Python 3.4 and 3.5 are in security mode and the EOL for >these versions are respectively 2019-03-16 and 2020-09-13.
> >Number of issues > >3.4: 1530 issues >3.5: 1901 issues > >But some issues are not related to the security. > >Could we update these issues (non-security) to 3.6/3.7 & 3.8? > >Cheers, > >Stéphane > >-- >Stéphane Wirtel - https://wirtel.be - @matrixise >_______________________________________________ >Python-Dev mailing list >Python-Dev at python.org >https://mail.python.org/mailman/listinfo/python-dev >Unsubscribe: https://mail.python.org/mailman/options/python-dev/stephane%40wirtel.be -- Stéphane Wirtel - https://wirtel.be - @matrixise From steve.dower at python.org Wed Feb 20 10:22:11 2019 From: steve.dower at python.org (Steve Dower) Date: Wed, 20 Feb 2019 07:22:11 -0800 Subject: [Python-Dev] Question - Bug Triage for 3.4 & 3.5 In-Reply-To: <20190220151120.GA27904@xps> References: <20190220150120.GA26898@xps> <20190220151120.GA27904@xps> Message-ID: <94b593ba-47e1-8489-5ee4-6ddeb24cf4b8@python.org> On 20Feb.2019 0711, Stephane Wirtel wrote: > After discussion with Victor, my proposal will generate noise with the > ML, maybe for nothing. > > On 02/20, Stephane Wirtel wrote: >> Hi, >> >> As you know, Python 3.4 and 3.5 are in security mode and the EOL for >> these versions are respectively 2019-03-16 and 2020-09-13.
But you are >free to update issues one by one if you can to do some progress with >them. Sure, I don't want to update them massively, just one by one. But I prefer to ask before this kind of operation. -- St?phane Wirtel - https://wirtel.be - @matrixise From stephane at wirtel.be Wed Feb 20 11:02:53 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Wed, 20 Feb 2019 17:02:53 +0100 Subject: [Python-Dev] Question - Bug Triage for 3.4 & 3.5 In-Reply-To: <94b593ba-47e1-8489-5ee4-6ddeb24cf4b8@python.org> References: <20190220150120.GA26898@xps> <20190220151120.GA27904@xps> <94b593ba-47e1-8489-5ee4-6ddeb24cf4b8@python.org> Message-ID: <20190220160253.GB1700@xps> Hi Steve, I reply on the mailing list On 02/20, Steve Dower wrote: >It'll make same noise, sure, but if we schedule a bulk close of issues >that have not been touched since 3.6 was released then at least it'll be >easy to "mark all as read". And searching for current issues will become >easier. As Serhyi proposed, a one by one could be interesting, to be sure we "migrate" the right issue to the right version. > >I'm always in favor of cleaning up inactionable bugs (as much as I'm in >favor of keeping actionable-but-low-priority bugs open, which causes >quite a few conflicts at work...) > >That said, maybe it makes sense to wait until 2.7's EOL and do them all >at once? I have already started with some issues. -- St?phane Wirtel - https://wirtel.be - @matrixise From vstinner at redhat.com Wed Feb 20 12:58:18 2019 From: vstinner at redhat.com (Victor Stinner) Date: Wed, 20 Feb 2019 18:58:18 +0100 Subject: [Python-Dev] Question - Bug Triage for 3.4 & 3.5 In-Reply-To: <20190220150120.GA26898@xps> References: <20190220150120.GA26898@xps> Message-ID: Hi, If Python 3.4 was the current version when a bug was reported, I would expect the version field of the bug set to Python 3.4. Maybe the bug has been fixed in the meanwhile, maybe not. 
Closing all bugs affected to 3.4 is a risk of loosing useful information on real bugs: closed bugs are ignored by default in "Search" operation. Changing the version field: well, I don't think that it's useful. I usually ignore this field. And it would send like 3000 emails... I don't see the point. It's not uncommon that I fix bugs which 5 years old if not longer. Sometimes, I decide to look at all bugs of a specific module. And most of old bugs are still relevant nowadays. Sometimes, closing the bug as WONTFIX is the right answer, but it can only be done on a case by case basis. Note: Same rationale for Python 3.5, Python 2.6, or another other old Python version ;-) Bug triage is hard and requires plenty of time :-) Victor Le mer. 20 f?vr. 2019 ? 16:05, Stephane Wirtel a ?crit : > > Hi, > > As you know, Python 3.4 and 3.5 are in security mode and the EOL for > these versions are respectively 2019-03-16 and 2020-09-13. > > Number of issues > > 3.4: 1530 issues > 3.5: 1901 issues > > But some issues are not related to the security. > > Could we update these issues (non-security) to 3.6/3.7 & 3.8? > > Cheers, > > St?phane > > -- > St?phane Wirtel - https://wirtel.be - @matrixise > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com -- Night gathers, and now my watch begins. It shall not end until my death. 
From brett at python.org Wed Feb 20 14:17:30 2019 From: brett at python.org (Brett Cannon) Date: Wed, 20 Feb 2019 11:17:30 -0800 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <9dd3d0cb-14b2-aaf7-661a-94e4276805f1@python.org> References: <5C67E2D0.7020906@UGent.be> <20190216223210.09ef3944@fsol> <9dd3d0cb-14b2-aaf7-661a-94e4276805f1@python.org> Message-ID: On Tue, Feb 19, 2019 at 12:45 PM Steve Dower wrote: > On 19Feb2019 1141, Barry Warsaw wrote: > > Steve Dower wrote on 2/16/19 14:34:> > >> This is mostly about being able to assign blame when things break, so > >> I'm totally okay with extension modules that want to play with internals > >> declaring Py_BUILD_CORE to get access to them (though I suspect that > >> won't work out of the box - maybe we should have a > >> Py_I_TOO_LIKE_TO_LIVE_DANGEROUSLY?). > > > > Let's call it Py_POINTED_STICK of course! > > > > http://www.montypython.net/scripts/fruit.php > > +1, and instead of "using the internal API" we can call it "coming at > CPython with a banana" :D > I don't think now is the exact time to do this since how to even handle PEPs isn't really settled in this new steering council world, but eventually maybe a PEP to outlining the re-org, how to opt into what seems to be the 3 different layers of the C API for users, guidelines on how to decide what APIs go where, etc. might be warranted? Stuff is starting to get strewn about without a centralized thing to focus discussions around so probably having a PEP to focus around might help. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexander.belopolsky at gmail.com Wed Feb 20 15:59:27 2019 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Wed, 20 Feb 2019 15:59:27 -0500 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io> References: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io> Message-ID: On Fri, Feb 15, 2019 at 5:29 PM Paul Ganssle wrote: > it allows you to use non-traditional units like weeks (timedelta(days=7)) > Weeks are traditional: >>> timedelta(weeks=1) datetime.timedelta(7) :-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Paul.Monson at microsoft.com Wed Feb 20 15:15:44 2019 From: Paul.Monson at microsoft.com (Paul Monson) Date: Wed, 20 Feb 2019 20:15:44 +0000 Subject: [Python-Dev] Is distutils.util.get_platform() the "current" or the "target" platform In-Reply-To: <414ae4ee-5a56-b137-6efe-570070cc9e9a@gmail.com> References: <7f0f615b-fab5-0504-b1bf-20b3c9cd0402@python.org> <03fa7e4a-6cb9-58b8-709c-44a589fdb706@python.org> <414ae4ee-5a56-b137-6efe-570070cc9e9a@gmail.com> Message-ID: Thanks for the feedback. I updated the PR to use get_platform and get_host_platform. More testing is still needed before it's ready to merge to make sure it still does what it was intended to do. -----Original Message----- From: Python-Dev On Behalf Of Xavier de Gaye Sent: Tuesday, February 19, 2019 12:06 PM To: Steve Dower ; python-dev at python.org Subject: Re: [Python-Dev] Is distutils.util.get_platform() the "current" or the "target" platform > [Any reason for dropping python-dev?] Sorry. just clicked the wrong button. > And the answer is a resounding "yes, it returns the target platform"? It seems you're saying this, but the wording of your email sounds just enough of a question that I'm not sure whether you are definitively answering it or not. The answer is yes indeed, it returns the target platform. 
This is a listing of the non-obvious steps that lead to this conclusion. Xavier On 2/19/19 8:45 PM, Steve Dower wrote: > [Any reason for dropping python-dev?] > > On 19Feb2019 1139, Xavier de Gaye wrote: >> Is distutils.util.get_platform() the "current" or the "target" platform ? > > I *think* you're answering this below, yes? > >> * When cross-compiling on posix platforms using autoconf, configure is >> run with the '--host=host-type' [1] command line option to specify >> the target platform. >> >> * The AC_CANONICAL_HOST macro is used by configure.ac to get the >> canonical variables `host' and `host_cpu' [2]. >> Those variables are used to compute _PYTHON_HOST_PLATFORM. >> >> * The Makefile generated by configure runs setup.py, >> generate-posix-vars, etc... on the build platform using >> PYTHON_FOR_BUILD, a native python interpreter that is set >> to run with the _PYTHON_HOST_PLATFORM environment variable. >> >> * get_platform() in setup.py and in Lib/distutils/util.py returns the >> value of _PYTHON_HOST_PLATFORM when cross-compiling. >> >> So the process of cross-compilation on posix platforms has >> get_platform() return the target ('host' in autoconf terminology) platform.
>> >> [1] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.69/html_node/Specifying-Target-Triplets.html >> [2] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/autoconf-2.69/html_node/Canonicalizing.html > > And the answer is a resounding "yes, it returns the target platform"? It seems you're saying this, but the wording of your email sounds just enough of a question that I'm not sure whether you are definitively answering it or not.
> > Cheers, > Steve _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/paulmon%40microsoft.com From aixtools at felt.demon.nl Thu Feb 21 04:44:46 2019 From: aixtools at felt.demon.nl (Michael) Date: Thu, 21 Feb 2019 10:44:46 +0100 Subject: [Python-Dev] Question - Bug Triage for 3.4 & 3.5 In-Reply-To: References: <20190220150120.GA26898@xps> Message-ID: On 20/02/2019 18:58, Victor Stinner wrote: > If Python 3.4 was the current version when a bug was reported, I would > expect the version field of the bug set to Python 3.4. Maybe the bug > has been fixed in the meanwhile, maybe not. Closing all bugs affected > to 3.4 is a risk of losing useful information on real bugs: closed > bugs are ignored by default in "Search" operation. Short: add version 3.X when it is discovered in "latest branch" versus issues reported against a "binary packaged" version. Maybe the instructions re: setting version (for new issues) should be to leave it blank (especially if it is still valid on the latest (e.g., 3.X rather than official numbered branch) or only indicate the branches that will be considered for a fix). Where "version" could be useful would be when someone finds something in a "binary" release at say level 3.6, while testing shows it works fine on 3.7 (or 3.8-alpha).
In other words, I see little value in a bug/issue reported when, e.g., 3.4 was fully supported (or better becoming the latest branch comparable to labeling as 3.8 today). Maybe having a label "3.X" that just goes with the flow - in addition to 3.4 - (I am thinking maybe it is not bad to know it was first reported against 3.4, but does that also mean it wasn't there at 3.3?) > > Changing the version field: well, I don't think that it's useful. I > usually ignore this field. And it would send like 3000 emails... I > don't see the point. > > It's not uncommon that I fix bugs which are 5 years old if not older. > Sometimes, I decide to look at all bugs of a specific module. And most > of old bugs are still relevant nowadays. Sometimes, closing the bug as > WONTFIX is the right answer, but it can only be done on a case by case > basis. > > Note: Same rationale for Python 3.5, Python 2.6, or any other old > Python version ;-) > > Bug triage is hard and requires plenty of time :-) Again, if early on, an issue could (also) be flagged as 3.X - this may make it easier to track 'ancient' bugs - and automate keeping them in sight. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From aixtools at felt.demon.nl Thu Feb 21 05:26:07 2019 From: aixtools at felt.demon.nl (Michael) Date: Thu, 21 Feb 2019 11:26:07 +0100 Subject: [Python-Dev] before I open an issue re: posix.stat and/or os.stat Message-ID: My focus is AIX - and I believe I found a bug in AIX include files in 64-bit mode. I'll take that up with IBM and AIX support. However, this issue might also be valid in Python3. The following is from CentOS, not AIX: Python 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.maxsize
9223372036854775807
>>> import posix
>>> posix.stat("/tmp/xxx")
posix.stat_result(st_mode=33188, st_ino=33925869, st_dev=64768L, st_nlink=1, st_uid=0, st_gid=0, st_size=0, st_atime=1550742595, st_mtime=1550742595, st_ctime=1550742595)
>>> st=posix.stat("/tmp/xxx")
>>> dev=st.st_dev
>>> min=posix.minor(dev)
>>> maj=posix.major(dev)
>>> min,max
(0, <built-in function max>)
>>> min
0
>>> max
<built-in function max>
>>> maj
253
>>> posix.minor(dev)
0
>>> posix.major(655536)
2560
>>> posix.major(65536)
256
>>> posix.major(256)
1
>>> import os
>>> os.major(256)
1
>>>

In AIX - 64-bit mode:

Python 3.8.0a1+ (heads/master:e7a4bb554e, Feb 20 2019, 18:40:08) [C] on aix7
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys,os,posix
>>> sys.maxsize
9223372036854775807
>>> posix.major(256)
0
>>> posix.major(65536)
1
>>> posix.stat("/tmp/xxx")
os.stat_result(st_mode=33188, st_ino=12, st_dev=-9223371993905102841, st_nlink=1, st_uid=202, st_gid=1954, st_size=0, st_atime=1550690105, st_mtime=1550690105, st_ctime=1550690105)

AIX 32-bit:

root at x066:[/data/prj/python/git/python3-3.8.0.66]./python
Python 3.8.0a1+ (heads/master:e7a4bb554e, Feb 19 2019, 11:22:56) [C] on aix6
Type "help", "copyright", "credits" or "license" for more information.
>>> import os,sys,posix
>>> sys.maxsize
2147483647
>>> posix.major(65536)
1
>>> posix.stat("/tmp/xxx")
os.stat_result(st_mode=33188, st_ino=149, st_dev=655367, st_nlink=1, st_uid=0, st_gid=0, st_size=0, st_atime=1550743517, st_mtime=1550743517, st_ctime=1550743517)

To make it easier to view (the header names were eaten by the HTML scrubber; these are the usual ones for lstat()):

buildbot at x064:[/home/buildbot]cat osstat.c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
main() {
        dev_t dev;
        char *path = "/tmp/xxx";
        struct stat st;
        int     minor,major;
        lstat(path,&st);
        printf("size: %d\n", sizeof(st.st_dev));
        dev = st.st_dev;
        minor = minor(dev);
        major = major(dev);
        printf("%016lx %ld %ld\n",dev,dev, (unsigned) dev);
        printf("%d,%d\n",major,minor);
}

buildbot at x064:[/home/buildbot]OBJECT_MODE=32 cc osstat.c -o osstat-32 && ./osstat-32
size: 4
00000000000a0007 655367 655367
10,7

And here is the AIX behavior (and bug - major() macro!):

buildbot at x064:[/home/buildbot]OBJECT_MODE=64 cc osstat.c -o osstat-64 && ./osstat-64
size: 8
8000000a00000007 -9223371993905102841 7
0,7

The same on AIX 6 (above is AIX7) - and also with gcc:

root at x068:[/data/prj]gcc -maix64 osstat.c -o osstat-64 && ./osstat-64
size: 8
8000000a00000007 -9223371993905102841 42949672967
0,7
root at x068:[/data/prj]gcc -maix32 osstat.c -o osstat-32 && ./osstat-32
size: 4
00000000000a0007 655367 0
10,7
root at x068:[/data/prj]

So, the AIX 'bug' with the macro major() has been around for ages - but ALSO setting the MSB of the st_dev.

+++++

Now my question: Will this continue to be enough space - i.e., is the dev size going to be enough?

 +2042  #ifdef MS_WINDOWS
 +2043      PyStructSequence_SET_ITEM(v, 2, PyLong_FromUnsignedLong(st->st_dev));
 +2044  #else
 +2045      PyStructSequence_SET_ITEM(v, 2, _PyLong_FromDev(st->st_dev));
 +2046  #endif

 +711   #define _PyLong_FromDev PyLong_FromLongLong

It seems so - however, is there something such as PyUnsignedLong and is that large enough for a "long long"? And if it exists, would that make the value positive (for the first test)? posix.major and os.major will need to mask away the MSB and posix.makedev and os.makedev will need to add it back. OR - do I need to make the PyStat values "the same" in both 32-bit and 64-bit? Puzzled on what you think is the correct approach. Michael -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From vstinner at redhat.com Thu Feb 21 06:13:51 2019 From: vstinner at redhat.com (Victor Stinner) Date: Thu, 21 Feb 2019 12:13:51 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> Message-ID: On Tue, Feb 19, 2019 at 9:15 PM Stefan Behnel wrote: > What I would ask, though, and I think that's also Jeroen's request, is to > be careful what you lock up behind Py_BUILD_CORE. Any new functionality > should be available to extension modules by default, unless there is a good > reason why it should remain internal. Usually, there is a reason why this > functionality was added, and I doubt that there are many cases where these > reasons are entirely internal to CPython. I think that we should have some rules here. One rule is that we should avoid APIs which allow doing something that is not possible in Python. That's an important rule for PyPy and other Python implementations. We cannot avoid such APIs completely, but they should be the exception, not the default. Another rule is to stop adding new APIs which only exist for performance. For example, PyDict_GetItem() only exists for performance: PyObject_GetItem() can be used instead. There are multiple issues with writing "specialized code". For example, PyDict_GetItem() must only be used if the type is exactly dict (PyDict_CheckExact). Otherwise, you change the Python semantics (it doesn't respect an overridden __getitem__). Each new API means more work for other Python implementations, but also more maintenance work for CPython. Premature optimization is the root of all evil.
Most C extensions use premature optimizations, which cause us a lot of trouble nowadays when we want to make the C API evolve, and cause issues for PyPy, which has to reimplement the C API on top of a different object model with a different GC. These are just proposals. Feel free to comment :-) Victor -- Night gathers, and now my watch begins. It shall not end until my death. From vstinner at redhat.com Thu Feb 21 06:18:41 2019 From: vstinner at redhat.com (Victor Stinner) Date: Thu, 21 Feb 2019 12:18:41 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: Message-ID: Hi Eric, IMHO the main blocker issue for any C API change is that nobody is able to measure the risk of these changes. To better control the risk, I propose to select a list of popular C extensions, and build a CI to run their test suite on top of the development version of Python. Such a CI wouldn't detect all backward incompatible changes. It wouldn't prevent us from merging backward incompatible changes. Some projects are already tested on the master branch of Python. My intent is to detect issues earlier, and if something goes wrong: discuss early to decide what to do. Fixing only some popular C extensions is one option. Another option is to provide some commands / hints to help maintainers of C extensions to adapt their code to the new C API. The other obvious option is to revert the change and maybe do it differently. Right now, it's too scary to walk in the dark. What I also would like to see is the creation of a group of people who work on the C API to discuss each change and test these changes properly. Victor On Sat, Feb 16, 2019 at 12:41 AM Eric Snow wrote: > > Hi all, > > I've been working on the runtime lately, particularly focused on my > multi-core Python project. One thing that would help simplify changes > in this area is if PyInterpreterState were defined in > Include/internal.
This would mean the type would be opaque unless > Py_BUILD_CORE were defined. > > The docs [1] already say none of the struct's fields are public. > Furthermore, Victor already moved it into Include/cpython (i.e. not in > the stable ABI) in November. Overall, the benefit of making internal > types like this opaque is realized in our ability to change the > internal details without further breaking C-API users. > > Realistically, there may be extension modules out there that use > PyInterpreterState fields directly. They would break. I expect there > to be few such modules and fixing them would not involve great effort. > We'd have to add any missing accessor functions to the public C-API, > which I see as a good thing. I have an issue [2] open for the change > and a PR open. My PR already adds an entry to the porting section of > the 3.8 What's New doc about dealing with PyInterpreterState. > > Anyway, I just wanted to see if there are any objections to making > PyInterpreterState an opaque type outside of core use. > > -eric > > p.s. I'd like to do the same with PyThreadState, but that's a bit > trickier [3] and not one of my immediate needs. :) > > > [1] https://docs.python.org/3/c-api/init.html#c.PyInterpreterState > [2] https://bugs.python.org/issue35886 > [3] https://bugs.python.org/issue35949 > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com -- Night gathers, and now my watch begins. It shall not end until my death. 
From solipsis at pitrou.net Thu Feb 21 06:32:52 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 21 Feb 2019 12:32:52 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> Message-ID: <20190221123252.11bd1d31@fsol> On Thu, 21 Feb 2019 12:13:51 +0100 Victor Stinner wrote: > > Premature optimization is the root of all evil. Most C extensions use > premature optimization How do you know it's premature? Some extensions _are_ meant for speed. Regards Antoine. From aixtools at felt.demon.nl Thu Feb 21 06:35:09 2019 From: aixtools at felt.demon.nl (Michael) Date: Thu, 21 Feb 2019 12:35:09 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <20190216223210.09ef3944@fsol> Message-ID: <62a79cfb-a0ae-925a-9186-33bc6e5ee74a@felt.demon.nl> On 16/02/2019 23:34, Steve Dower wrote: > I like that we're taking (small) steps to reduce the size of our API. I consider myself - an "outsider", so an "outsider's" view is that anything that makes it more clear about what is intended aka supported as the Python API is an improvement. Without clarity there is a chance (read risk) that someone starts using something and forces a long and difficult process to make it part of the official API or get acceptance that something never should have been done "that way". Shorter: promote clarity. IMHO: it is easier to move something from the 'internal' to public than v.v. and whenever there is not a compelling reason to not put something into some form of 'internal' - do it before not doing so bites you in a nasty way. My two (outsider) bits - :) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From p.f.moore at gmail.com Thu Feb 21 06:58:00 2019 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 21 Feb 2019 11:58:00 +0000 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <20190221123252.11bd1d31@fsol> References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> <20190221123252.11bd1d31@fsol> Message-ID: On Thu, 21 Feb 2019 at 11:35, Antoine Pitrou wrote: > > On Thu, 21 Feb 2019 12:13:51 +0100 > Victor Stinner wrote: > > > > Premature optimization is the root of all evil. Most C extensions use > > premature optimization > > How do you know it's premature? Some extensions _are_ meant for speed. Extensions that need to squeeze every bit of speed out of the C API are the exception rather than the rule. Making it easier for extension authors to naturally pick portable options seems reasonable to me. Gating the "fast, but unsafe" APIs behind some sort of "opt in" setting seems like a sensible approach. However I agree, making it *impossible* to get right down to the high-speed calls (with the understanding that you're tying yourself to CPython and need to carefully track internal changes) is not something we should do. Python's built an ecosystem around high-performance C extensions, and we should support that ecosystem. Paul From antoine at python.org Thu Feb 21 07:09:38 2019 From: antoine at python.org (Antoine Pitrou) Date: Thu, 21 Feb 2019 13:09:38 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> <20190221123252.11bd1d31@fsol> Message-ID: <4a4990bf-b386-a629-3a69-171af83976f9@python.org> Le 21/02/2019 à
12:58, Paul Moore a écrit : > On Thu, 21 Feb 2019 at 11:35, Antoine Pitrou wrote: >> >> On Thu, 21 Feb 2019 12:13:51 +0100 >> Victor Stinner wrote: >>> >>> Premature optimization is the root of all evil. Most C extensions use >>> premature optimization >> >> How do you know it's premature? Some extensions _are_ meant for speed. > > Extensions that need to squeeze every bit of speed out of the C API > are the exception rather than the rule. Making it easier for extension > authors to naturally pick portable options seems reasonable to me. Actually, it would be interesting to have some kind of survey of C extensions (through random sampling? or popularity?) to find out why the developers had to write a C extension in the first place and what their concerns are. Intuitively there are three categories of C extensions: 1. extensions whose entire purpose is performance (this includes all scientific computing C extensions - including Numpy or Pandas -, but also C accelerators such as SQLAlchemy's or simplejson's) 2. extensions wrapping third-party APIs that are not performance-critical. If you are exposing a wrapper to e.g. the posix_spawn() system call, probably the wrapper performance isn't very important. 3. extensions wrapping third-party APIs that are performance-critical. For example, in a database wrapper, it's important that your native DB-to-Python and Python-to-native DB conversions are as fast as possible, because users are going to convert a *lot* of data. Note that category 2 may be taken care of by ctypes or cffi. Regards Antoine.
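For category 2, a thin non-performance-critical wrapper really can be a few lines of ctypes. The sketch below is illustrative only (not from the thread): it wraps libc's strlen(), with msvcrt as a fallback library name on Windows:

```python
import ctypes
import ctypes.util

# Locate the C runtime; the library name differs per platform, so fall
# back to msvcrt on Windows where find_library("c") returns None.
libname = ctypes.util.find_library("c") or "msvcrt"
libc = ctypes.CDLL(libname)

# Declare the prototype so ctypes converts and checks arguments for us.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"posix_spawn")
print(length)  # 11
```

No compiler, no Python.h, and nothing tied to CPython internals - which is exactly why this category is the least affected by C API changes.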
From p.f.moore at gmail.com Thu Feb 21 07:22:32 2019 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 21 Feb 2019 12:22:32 +0000 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <4a4990bf-b386-a629-3a69-171af83976f9@python.org> References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> <20190221123252.11bd1d31@fsol> <4a4990bf-b386-a629-3a69-171af83976f9@python.org> Message-ID: On Thu, 21 Feb 2019 at 12:12, Antoine Pitrou wrote: > Actually, it would be interesting to have some kind of survey of C > extensions (through random sampling? or popularity?) to find out why the > developers had to write a C extension in the first place and what their > concerns are. Indeed. There's also embedding, where I suspect there's a much higher likelihood that performance isn't key. And in your survey, I'd split out "needs the Python/C interface to be fast" from "needs internal operations to be fast, but data transfer between C and Python isn't as critical". I suspect there's a lot of people who believe they are in the former category, but are actually in the latter... Paul From vstinner at redhat.com Thu Feb 21 07:45:05 2019 From: vstinner at redhat.com (Victor Stinner) Date: Thu, 21 Feb 2019 13:45:05 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <20190221123252.11bd1d31@fsol> References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> <20190221123252.11bd1d31@fsol> Message-ID: On Thu, Feb 21, 2019 at 12:36 PM Antoine Pitrou wrote: > > On Thu, 21 Feb 2019 12:13:51 +0100 > Victor Stinner wrote: > > > > Premature optimization is the root of all evil. Most C extensions use > > premature optimization > > How do you know it's premature? Some extensions _are_ meant for speed. Sorry, I'm not asking to stop optimizing C extensions.
I'm asking to stop using low-level C APIs like PyTuple_GET_ITEM() if the modified code is not the performance bottleneck. Victor -- Night gathers, and now my watch begins. It shall not end until my death. From songofacandy at gmail.com Thu Feb 21 08:01:07 2019 From: songofacandy at gmail.com (INADA Naoki) Date: Thu, 21 Feb 2019 22:01:07 +0900 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> <20190221123252.11bd1d31@fsol> <4a4990bf-b386-a629-3a69-171af83976f9@python.org> Message-ID: On Thu, Feb 21, 2019 at 9:23 PM Paul Moore wrote: > > On Thu, 21 Feb 2019 at 12:12, Antoine Pitrou wrote: > > > Actually, it would be interesting to have some kind of survey of C > > extensions (through random sampling? or popularity?) to find out why the > > developers had to write a C extension in the first place and what their > > concerns are. > > Indeed. There's also embedding, where I suspect there's a much higher > likelihood that performance isn't key. > > And in your survey, I'd split out "needs the Python/C interface to be > fast" from "needs internal operations to be fast, but data transfer > between C and Python isn't as critical". I suspect there's a lot of > people who believe they are in the former category, but are actually > in the latter... > > Paul In my experience, I can't believe the majority of extensions are in category B. And when speed matters, PyPy support is not mandatory: PyPy is fast enough with a pure Python implementation. The Python/C API doesn't make PyPy fast when creating or reading massive objects. These are my recent Python/C API usages. # msgpack msgpack is a serialization format like JSON, but in binary format. It contains a pure Python implementation and a Cython implementation. The Cython implementation is for CPython; PyPy is fast enough with the pure Python implementation. Fast object access is important for it.
It is like marshal in Python. So this is clearly in category A. # mysqlclient It is a binding of libmysqlclient or libmariadbclient. In some cases, a query can return many small pieces of data that I need to convert to Python objects. So this library is basically in category B, but sometimes in C. # MarkupSafe It is a library which escapes strings. It should handle massive small string chunks, because it is called from a template engine. So this library is in category A. Note that MarkupSafe has a pure Python version, like msgpack. While MarkupSafe is not optimized for PyPy yet, it's possible to add an implementation for PyPy written in Python using PyPy's string builder [1]. [1] https://morepypy.blogspot.com/2011/10/speeding-up-json-encoding-in-pypy.html -- INADA Naoki From solipsis at pitrou.net Thu Feb 21 07:53:46 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 21 Feb 2019 13:53:46 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> <20190221123252.11bd1d31@fsol> Message-ID: <20190221135346.15337d59@fsol> On Thu, 21 Feb 2019 13:45:05 +0100 Victor Stinner wrote: > On Thu, Feb 21, 2019 at 12:36 PM Antoine Pitrou wrote: > > > > On Thu, 21 Feb 2019 12:13:51 +0100 > > Victor Stinner wrote: > > > > > > Premature optimization is the root of all evil. Most C extensions use > > > premature optimization > > > > How do you know it's premature? Some extensions _are_ meant for speed. > > Sorry, I'm not asking to stop optimizing C extensions. I'm asking to stop > using low-level C APIs like PyTuple_GET_ITEM() if the modified code is > not the performance bottleneck. As long as some people need that API, you'll have to maintain it anyway, even if _less_ people use it. Ingesting lists and tuples as fast as possible is important for some use cases. I have worked personally on some of them (on e.g. Numba or PyArrow). Regards Antoine.
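Victor's warning earlier in the thread - that PyDict_GetItem() silently skips an overridden __getitem__ unless the type is exactly dict - has a direct pure-Python analogue. Calling the base class slot directly plays the role of the specialized C call (an illustrative sketch, not code from the thread):

```python
class LoggingDict(dict):
    """A dict subclass overriding item access, as user code is free to do."""
    def __getitem__(self, key):
        return ("intercepted", super().__getitem__(key))

d = LoggingDict(x=1)

# PyObject_GetItem() corresponds to normal subscription: the override runs.
subscription = d["x"]

# PyDict_GetItem() reaches straight into the dict's storage, which is what
# calling the base class slot directly does: the override is skipped.
direct = dict.__getitem__(d, "x")

print(subscription)  # ('intercepted', 1)
print(direct)        # 1
```

This is why the specialized call is only safe after a PyDict_CheckExact() test: on a subclass it quietly changes Python semantics.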
From J.Demeyer at UGent.be Thu Feb 21 08:49:02 2019 From: J.Demeyer at UGent.be (Jeroen Demeyer) Date: Thu, 21 Feb 2019 14:49:02 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: Message-ID: <5C6EAC4E.1050304@UGent.be> On 2019-02-21 12:18, Victor Stinner wrote: > What I also would like to see is the creation of a group of people who > work on the C API to discuss each change and test these changes > properly. I don't think that we should "discuss each change", we should first have an overall plan. It doesn't make a lot of sense to take small steps if we have no clue where we're heading to. I am aware of https://pythoncapi.readthedocs.io/new_api.html but we should first make that into an accepted PEP. From armin.rigo at gmail.com Thu Feb 21 08:58:49 2019 From: armin.rigo at gmail.com (Armin Rigo) Date: Thu, 21 Feb 2019 14:58:49 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <5C6BDA74.8060800@UGent.be> Message-ID: Hi, On Tue, 19 Feb 2019 at 13:12, Victor Stinner wrote: > Please don't use &PyTuple_GET_ITEM() or _PyTuple_ITEMS(). It prevents > using a more efficient storage for tuples. Something like: > https://pythoncapi.readthedocs.io/optimization_ideas.html#specialized-list-for-small-integers > > PyPy already has the issue right now. Just to clarify PyPy's point of view (or at least mine): 1. No, it no longer has this issue. You can misuse ``&PyTuple_GET_ITEM()`` freely with PyPy too. 2. This whole discussion is nice but is of little help to PyPy at this point. The performance hit comes mostly from emulating reference counting and non-movable objects. If the API was half the size and did not contain anything with irregular behavior, it would have made our job easier in the past, but now it's done---and it wouldn't have improved the performance of the result. A bientôt, Armin.
From stephane at wirtel.be Thu Feb 21 09:53:40 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Thu, 21 Feb 2019 15:53:40 +0100 Subject: [Python-Dev] Add minimal information with a new issue? Message-ID: <20190221145340.GA4286@xps> Hi, What do you think if we suggest a "template" for the new bugs? For example:
* Python version (exact version)
* Operating System
  * on Linux -> Distribution (python can have some patches)
* Add a script for the reproduction of the bug
* Eventually, try with the docker images (on i386, x86_64, ...)
* etc...
We can lose a lot of time just trying to find the right information for the reproduction of the issue. Cheers, Stéphane -- Stéphane Wirtel - https://wirtel.be - @matrixise From steve.dower at python.org Thu Feb 21 10:03:38 2019 From: steve.dower at python.org (Steve Dower) Date: Thu, 21 Feb 2019 07:03:38 -0800 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <5C6BDA74.8060800@UGent.be> Message-ID: Just letting everyone know that I'm intending to restart this discussion over in capi-sig, as I feel like I've got an informational-PEP worth of "vision", "ideas" and "direction" and nomenclature for our C API (*not* talking about a rewrite, but the principles we should be following now... and would also want to follow in any rewrite ;) ). Nothing that should be extremely controversial. Well, perhaps. But let's at least get the stuff we already agree on into a PEP that we can use as a reference for guiding future work. I'll throw together an outline draft by email first, as I want to discuss the ideas right now rather than the grammar. Hopefully later this morning (next 3-4 hours). python-dev can expect (hope for) an informational PEP to return.
If you're not currently on capi-sig, you can join it at https://mail.python.org/mailman3/lists/capi-sig.python.org/ Cheers, Steve From skrah at bytereef.org Thu Feb 21 06:24:51 2019 From: skrah at bytereef.org (Stefan Krah) Date: Thu, 21 Feb 2019 12:24:51 +0100 Subject: [Python-Dev] Making PyInterpreterState an opaque type Message-ID: <20190221112451.GA4998@bytereef.org> Victor Stinner wrote: > Premature optimization is the root of all evil. Most C extensions use > premature optimization which are causing us a lot of troubles nowadays > when we want to make the C API evolve and cause issues to PyPy which > has issues to reimplement the C API on top of their different object > model with a different GC. I'm intimately familiar with several C extensions in the math/science space and this is not the impression I get at all. Most people are happy if they get things to work, because the actual problems are much harder than focusing on private vs. public API. As for PyPy, if I understood correctly, Armin Rigo was skeptical of the proposed plan and favored publishing an API as a third party package. Stefan Krah From chekat2 at gmail.com Thu Feb 21 08:57:25 2019 From: chekat2 at gmail.com (Cheryl Sabella) Date: Thu, 21 Feb 2019 08:57:25 -0500 Subject: [Python-Dev] "Good first issues" on the bug tracker Message-ID: Hello! Due to the upcoming sprints at PyCon, there will be issues marked in the bug tracker as 'Good first issues'. Please do *not* submit pull requests for these as it is difficult to find suitable issues. Instead of working on these issues, I'd like to propose a much more difficult challenge to current contributors looking for something to work on. Please find issues which can be tackled at the sprint! They are out there, but the trick is finding them. 
:-) Here's the link to the mentored sprint page to understand the audience: https://us.pycon.org/2019/hatchery/mentoredsprints/ I don't think this needs to be limited to documentation changes, but those are the obvious choice (even the devguide suggests it). The goal is really workflow, meaning the new contributor doesn't necessarily have to solve the problem themselves - if an issue says: x function in y module needs to have z applied, then there are a lot of hurdles there for a newcomer which may not involve solving the problem. If you find anything you think is suitable, please add a comment with 'good first issue' and nosy me or Mariatta on it. If you're unsure, then nosy us anyway. It would be awesome to have too many issues to choose from rather than not enough. If an issue isn't worked on at the sprint, then it would still be available to new contributors afterwards, so the exercise of flagging the issues wouldn't be wasted effort. Thanks! Cheryl -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Thu Feb 21 13:08:22 2019 From: barry at python.org (Barry Warsaw) Date: Thu, 21 Feb 2019 10:08:22 -0800 Subject: [Python-Dev] Add minimal information with a new issue? In-Reply-To: <20190221145340.GA4286@xps> References: <20190221145340.GA4286@xps> Message-ID: On Feb 21, 2019, at 06:53, Stephane Wirtel wrote: > > What do you think if we suggest a "template" for the new bugs? > > For example: > * Python version (exact version) > * Operating System * on Linux -> Distribution (python can have some patches) > * Add a script for the reproduction of the bug > * Eventually, try with the docker images (on i386,x86_64, ...) > * etc... > > We can lose a lot of time just trying to find the right information for > the reproduction of the issue. Getting reproducible cases is usually the hardest part of any bug fix. :)
that we want to capture in bug reports, there should be a `python -m bugreport` or some such that we can ask people to run and attach or paste into their issue. Maybe even cooler would be something that opens a new GitHub issue with that information pre-populated. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From vstinner at redhat.com Thu Feb 21 13:17:29 2019 From: vstinner at redhat.com (Victor Stinner) Date: Thu, 21 Feb 2019 19:17:29 +0100 Subject: [Python-Dev] Add minimal information with a new issue? In-Reply-To: References: <20190221145340.GA4286@xps> Message-ID: Hi, I wrote "python -m test.pythoninfo", but I wrote it to debug buildbot failures. It produces a large output. Only a few pieces of that info are useful, but previously they were hard to get. I wouldn't suggest starting to request it. Or if you want it, it should be attached as a file so as not to pollute the issue. Note: On Fedora at least, you have to install the python3-test package to get test.pythoninfo. Victor On Thu, Feb 21, 2019 at 19:11, Barry Warsaw wrote: > > On Feb 21, 2019, at 06:53, Stephane Wirtel wrote: > > > > What do you think if we suggest a "template" for the new bugs? > > > > For example: > > * Python version (exact version) > > * Operating System * on Linux -> Distribution (python can have some patches) > > * Add a script for the reproduction of the bug > > * Eventually, try with the docker images (on i386,x86_64, ...) > > * etc... > > > > We can lose a lot of time just trying to find the right information for > > the reproduction of the issue. > > Getting reproducible cases is usually the hardest part of any bug fix. :) > > If there is information about the platform, version of Python, build flags etc.
that we want to capture in bug reports, there should be a `python -m bugreport` or some such that we can ask people to run and attach or paste into their issue. Maybe even cooler would be something that opens a new GitHub issue with that information pre-populated. > > -Barry > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com -- Night gathers, and now my watch begins. It shall not end until my death. From raymond.hettinger at gmail.com Thu Feb 21 13:34:49 2019 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Thu, 21 Feb 2019 10:34:49 -0800 Subject: [Python-Dev] Add minimal information with a new issue? In-Reply-To: <20190221145340.GA4286@xps> References: <20190221145340.GA4286@xps> Message-ID: <90812FDD-1B6A-40D3-9E79-C9C8BEF610CD@gmail.com> On Feb 21, 2019, at 6:53 AM, Stephane Wirtel wrote: > > What do you think if we suggest a "template" for the new bugs? 99% of the time the template would not be applicable. Historically, we asked for more information when needed and that wasn't very often. I think that anything that raises the cost of filing a bug report will work to our detriment. Ideally, we want the barriers to reporting to be as low as possible. Raymond From barry at python.org Thu Feb 21 14:17:07 2019 From: barry at python.org (Barry Warsaw) Date: Thu, 21 Feb 2019 11:17:07 -0800 Subject: [Python-Dev] Add minimal information with a new issue? In-Reply-To: <90812FDD-1B6A-40D3-9E79-C9C8BEF610CD@gmail.com> References: <20190221145340.GA4286@xps> <90812FDD-1B6A-40D3-9E79-C9C8BEF610CD@gmail.com> Message-ID: On Feb 21, 2019, at 10:34, Raymond Hettinger wrote: > > I think that anything that raises the cost of filing a bug report will work to our detriment. Ideally, we want the barriers to reporting to be as low as possible.
`python -m reportbug` could make the process even easier (too easy?). -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From python at mrabarnett.plus.com Thu Feb 21 14:15:43 2019 From: python at mrabarnett.plus.com (MRAB) Date: Thu, 21 Feb 2019 19:15:43 +0000 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: <20190221135346.15337d59@fsol> References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <20190219113741.4514750e@fsol> <20190221123252.11bd1d31@fsol> <20190221135346.15337d59@fsol> Message-ID: <6c93d989-0dff-454d-ec38-b065437826c4@mrabarnett.plus.com> On 2019-02-21 12:53, Antoine Pitrou wrote: > On Thu, 21 Feb 2019 13:45:05 +0100 > Victor Stinner wrote: >> On Thu, Feb 21, 2019 at 12:36, Antoine Pitrou wrote: >> > >> > On Thu, 21 Feb 2019 12:13:51 +0100 >> > Victor Stinner wrote: >> > > >> > > Premature optimization is the root of all evil. Most C extensions use >> > > premature optimization >> > >> > How do you know it's premature? Some extensions _are_ meant for speed. >> >> Sorry, I don't ask to stop optimizing C extensions. I'm asking to stop >> to use low-level C API like PyTuple_GET_ITEM() if the modified code is >> not the performance bottleneck. > > As long as some people need that API, you'll have to maintain it > anyway, even if _less_ people use it. Ingesting lists and tuples as > fast as possible is important for some use cases. I have worked > personally on some of them (on e.g. Numba or PyArrow). > If I'm working with a dict, the first place I look is PyDict_*, and that leads me to PyDict_GetItem. The docs for PyDict_GetItem don't mention PyObject_GetItem. Perhaps, if PyObject_GetItem is recommended, it should say so, and similarly for other parts of the API.
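[Editor's sketch, picking up the `python -m bugreport` idea from the thread above: the snippet below shows the kind of environment summary such a helper could gather using only the standard library. The function name and field set are illustrative assumptions for this thread — no `bugreport` module exists in the stdlib.]

```python
import json
import platform
import sys
import sysconfig


def collect_bug_report_info():
    """Gather the basic facts a triager usually asks for first.

    Everything comes from the standard library, so it works on any
    CPython installation without extra packages.
    """
    return {
        "python_version": platform.python_version(),
        "implementation": sys.implementation.name,
        "platform": platform.platform(),
        # Configure-time flags; may be None on Windows builds.
        "build_flags": sysconfig.get_config_var("CONFIG_ARGS"),
        "executable": sys.executable,
    }


if __name__ == "__main__":
    # Pretty-print so the output can be pasted directly into an issue.
    print(json.dumps(collect_bug_report_info(), indent=2))
```

The design point Raymond raises still applies: the reporter runs one command and pastes the result, rather than answering a questionnaire by hand.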
From steve.dower at python.org Thu Feb 21 15:06:29 2019 From: steve.dower at python.org (Steve Dower) Date: Thu, 21 Feb 2019 12:06:29 -0800 Subject: [Python-Dev] Add minimal information with a new issue? In-Reply-To: References: <20190221145340.GA4286@xps> <90812FDD-1B6A-40D3-9E79-C9C8BEF610CD@gmail.com> Message-ID: <935b0f98-70ae-af79-558e-c4ede09b3021@python.org> On 21Feb2019 1117, Barry Warsaw wrote: > On Feb 21, 2019, at 10:34, Raymond Hettinger wrote: >> >> I think that anything that raises the cost of filing a bug report will work to our detriment. Ideally, we want the barriers to reporting to be as low as possible. > > `python -m reportbug` could make the process even easier (too easy?). It's spelled `python -m reportabug` ;) https://pypi.org/project/reportabug/ https://github.com/zooba/reportabug Example: https://github.com/zooba/reportabug/issues/1 Cheers, Steve From steve.dower at python.org Thu Feb 21 15:08:31 2019 From: steve.dower at python.org (Steve Dower) Date: Thu, 21 Feb 2019 12:08:31 -0800 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: On 21Feb2019 0557, Cheryl Sabella wrote: > If you find anything you think is suitable, please add a comment with > 'good first issue' and nosy me or Mariatta on it.? If you're unsure, > then nosy us anyway.? It would be awesome to have too many issues to > choose from than not enough.? If an issue isn't worked on at the sprint, > then it would still be available to new contributors afterwards, so the > exercise of flagging the issues wouldn't be wasted effort. I agree completely. We normally add the "Easy" or "Easy (C)" keywords to mark these (the latter for issues that involve C code), and these are collected under the "Easy issues" link at the left hand side of the tracker. Any reason to change from this process? 
Cheers, Steve From stephane at wirtel.be Thu Feb 21 15:31:03 2019 From: stephane at wirtel.be (Stephane Wirtel) Date: Thu, 21 Feb 2019 21:31:03 +0100 Subject: [Python-Dev] Add minimal information with a new issue? In-Reply-To: <935b0f98-70ae-af79-558e-c4ede09b3021@python.org> References: <20190221145340.GA4286@xps> <90812FDD-1B6A-40D3-9E79-C9C8BEF610CD@gmail.com> <935b0f98-70ae-af79-558e-c4ede09b3021@python.org> Message-ID: <08FC7C35-AC20-40BD-9CA9-38953196169A@wirtel.be> +1 > On Feb 21, 2019, at 21:06, Steve Dower wrote: > >> On 21Feb2019 1117, Barry Warsaw wrote: >>> On Feb 21, 2019, at 10:34, Raymond Hettinger wrote: >>> >>> I think that anything that raises the cost of filing a bug report will work to our detriment. Ideally, we want the barriers to reporting to be as low as possible. >> `python -m reportbug` could make the process even easier (too easy?). > > > It's spelled `python -m reportabug` ;) > > https://pypi.org/project/reportabug/ > https://github.com/zooba/reportabug > > Example: https://github.com/zooba/reportabug/issues/1 > > Cheers, > Steve > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/stephane%40wirtel.be From chekat2 at gmail.com Thu Feb 21 15:58:55 2019 From: chekat2 at gmail.com (Cheryl Sabella) Date: Thu, 21 Feb 2019 15:58:55 -0500 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: > > I agree completely. We normally add the "Easy" or "Easy (C)" keywords to > mark these (the latter for issues that involve C code), and these are > collected under the "Easy issues" link at the left hand side of the > tracker. > > Any reason to change from this process? > > Thanks for asking about this. The intent isn't to stop the use of the 'easy' keyword, but to try to reserve some tickets for May.
If they are just marked as 'easy', then there could be more of a risk that someone would work on it before the sprints. By assigning them to Mariatta, it will serve the dual purpose of trying to reserve these as well as making them easier to find later on. I think the equivalent would be the ability to add an additional tag to GitHub issues, such as when there's a 'good first issue', 'help wanted' and 'sprint' tag on the same ticket. But, I also don't want to complicate the current process, so I apologize if my idea isn't constructive. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lisandrosnik at gmail.com Thu Feb 21 16:26:49 2019 From: lisandrosnik at gmail.com (Lysandros Nikolaou) Date: Thu, 21 Feb 2019 22:26:49 +0100 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: On Thu, Feb 21, 2019 at 9:59 PM Cheryl Sabella wrote: > I agree completely. We normally add the "Easy" or "Easy (C)" keywords to >> mark these (the latter for issues that involve C code), and these are >> collected under the "Easy issues" link at the left hand side of the >> tracker. >> >> Any reason to change from this process? >> >> Yeah, I think some kind of separation between the two would be needed in this case. There are some of us newbies who frequently click the "Easy" button, so that we find some issue to tackle. Some kind of marking an issue as a "sprint issue" would quickly tell me that the issue is not available to take on. If that's not done, there's the risk that a good number of easy issues would be closed before the sprint. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brett at python.org Thu Feb 21 17:12:08 2019 From: brett at python.org (Brett Cannon) Date: Thu, 21 Feb 2019 14:12:08 -0800 Subject: [Python-Dev] Making PyInterpreterState an opaque type In-Reply-To: References: <5C67E2D0.7020906@UGent.be> <5C6B2270.1080203@UGent.be> <5C6BDA74.8060800@UGent.be> Message-ID: On Thu, Feb 21, 2019 at 6:01 AM Armin Rigo wrote: > Hi, > > On Tue, 19 Feb 2019 at 13:12, Victor Stinner wrote: > > Please don't use &PyTuple_GET_ITEM() or _PyTuple_ITEMS(). It prevents > > to use a more efficient storage for tuple. Something like: > > > https://pythoncapi.readthedocs.io/optimization_ideas.html#specialized-list-for-small-integers > > > > PyPy already has the issue right now. > > Just to clarify PyPy's point of view (or at least mine): > > 1. No, it no longer has this issue. You can misuse > ``&PyTuple_GET_ITEM()`` freely with PyPy too. > > 2. This whole discussion is nice but is of little help to PyPy at this > point. The performance hit comes mostly from emulating reference > counting and non-movable objects. If the API was half the size and > did not contain anything with irregular behavior, it would have made > our job easier in the past, but now it's done---and it wouldn't have > improved the performance of the result. > While it's unfortunate we didn't start this conversation back when PyPy started to suffer through this, so that we could have tried to make it easier for them, I don't want people to think trying to come up with a simpler FFI API eventually wouldn't be beneficial to other implementations (either ones that haven't reached Python 3 yet or have not even been written). -------------- next part -------------- An HTML attachment was scrubbed...
URL: From steve.dower at python.org Thu Feb 21 17:26:42 2019 From: steve.dower at python.org (Steve Dower) Date: Thu, 21 Feb 2019 14:26:42 -0800 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: On 21Feb2019 1258, Cheryl Sabella wrote: > I agree completely. We normally add the "Easy" or "Easy (C)" > keywords to > mark these (the latter for issues that involve C code), and these are > collected under the "Easy issues" link at the left hand side of the > tracker. > > Any reason to change from this process? > > > Thanks for asking about this. The intent isn't to stop the use of the > 'easy' keyword, but to try to reserve some tickets for May. If they are > just marked as 'easy', then there could be more of a risk that someone > would work on it before the sprints. By assigning them to Mariatta, it > will serve the dual purpose of trying to reserve these as well as making > them easier to find later on. I think the equivalent would be the > ability to add an additional tag to GitHub issues, such as when there's > a 'good first issue', 'help wanted' and 'sprint' tag on the same ticket. > > But, I also don't want to complicate the current process, so I apologize > if my idea isn't constructive. I'm just trying to keep things easy to search. Keywords are the bpo equivalent of GitHub tags, so if we want a "saved_for_sprint" tag then we should add a new keyword. (In my experience, "sprint" tags on GitHub are normally used on PRs to indicate that they were created at a sprint.) I'm sympathetic to wanting to have tasks for the PyCon sprints, but at the same time it feels exclusionary to "save" them from people who want to volunteer at other times. Having paid to attend PyCon shouldn't be a barrier or a privilege for contributing (though it's certainly a smoother road by having contributors there to assist, which is why other conferences/sprints are keen to have core developers attend as mentors).
I'm 100% in favor of discouraging regular contributors from fixing them - we should be *generating* easy issues by describing how to fix it and then walking away. I'm just not entirely comfortable with trying to also hide them from first time contributors. Either way, I'll keep marking issues as Easy when I think they are good first contributions. Cheers, Steve From mariatta at python.org Thu Feb 21 18:18:23 2019 From: mariatta at python.org (Mariatta) Date: Thu, 21 Feb 2019 15:18:23 -0800 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: Cheryl, thanks for starting this thread and for helping to find easy issues to be worked on. I'm sympathetic to wanting to have tasks for the PyCon sprints, but at > the same time it feels exclusionary to "save" them from people who want > to volunteer at other times. Having paid to attend PyCon shouldn't be a > barrier or a privilege for contributing (though it's certainly a > smoother road by having contributors there to assist, which is why other > conferences/sprints are keen to have core developers attend as mentors). I understand your concern, but I hope people understand that the mentored sprint is a little bit different than the regular sprint days. The target audience of the mentored sprint are folks who are underrepresented minorities, in my personal experience, they are less privileged to begin with. We want to be inclusive, and therefore we're encouraging those who are not from underrepresented group to bring along someone else from underrepresented group with them. The mentored sprint is taking place during the main conference days, and looking at the signups, most of the participants have told us they want to sprint for 4 hours on Saturday afternoon. This means they will be missing out on lots of quality talks happening at the same time which they paid for. 
Our goal is really to make contributing more accessible by pairing them up with mentors, but without straightforward issues to be worked on, they can't continue effectively. The issues that have been earmarked for the mentored sprint are so far documentation and typo fixes and not urgent issues. I don't believe that we're "stealing away" opportunity to contribute from those who weren't able to come to PyCon, but I understand that point of view. In any case, I'm appreciative of those who have helped find issues to be worked on at the mentored sprint, and I also understand that I can't (and won't) stop other people from working on these issues before the sprint at PyCon. While I'm here, I'd like to invite you all to participate in the mentored sprint if you can. Help spread the word, or sign up to mentor. Any open source project is welcome to sign up! Check the page for more details. https://us.pycon.org/2019/hatchery/mentoredsprints/ I'm a little busy with PyCascades until next Monday, but after that I'll be happy to answer any questions about the mentored sprint. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chekat2 at gmail.com Thu Feb 21 18:22:57 2019 From: chekat2 at gmail.com (Cheryl Sabella) Date: Thu, 21 Feb 2019 18:22:57 -0500 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: > I'm 100% in favor of discouraging regular contributors from fixing them > This would be nice too. Speaking from my experience, it's hard to leave the comfort of what looks to be a quick win (by submitting a PR on an easy issue) and moving on to something where you're more unsure. But...
An experienced person can do the legwork, just short of submitting the PR, and then help mentor a newcomer who wants to follow it through. This might be worth a thread on its own. > I'm just not entirely comfortable with trying to also > hide them from first time contributors. Either way, I'll keep marking > issues as Easy when I think they are good first contributions. > > Thanks! :-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From turnbull.stephen.fw at u.tsukuba.ac.jp Fri Feb 22 00:14:01 2019 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Fri, 22 Feb 2019 14:14:01 +0900 Subject: [Python-Dev] Add minimal information with a new issue? In-Reply-To: References: <20190221145340.GA4286@xps> <90812FDD-1B6A-40D3-9E79-C9C8BEF610CD@gmail.com> Message-ID: <23663.34073.970966.97431@turnbull.sk.tsukuba.ac.jp> Barry Warsaw writes: > On Feb 21, 2019, at 10:34, Raymond Hettinger wrote: > > > > I think that anything that raises the cost of filing a bug report > > will work to our detriment. Ideally, we want the barriers to > > reporting to be as low as possible. A template is probably counterproductive. A program that fills in the most of the template automatically would make the barrier lower than it currently is. > `python -m reportbug` could make the process even easier (too easy?). If badly designed it could make the process too easy (ie, fill the tracker with reports that triage to "closed as duplicate of #NNNNN"). We know a lot about this process now, though. For example, Launchpad and some other trackers ask you to input some keywords and tries to pull up related reports. The reportbug program could collect internal information (with an option it could suck up all the information Victor's program collects), ask the reporter a few simple questions, formulate the query (including both the generated information and the reporter's information), and open a browser window on the tracker. 
This would probably make a good GSoC project .... Steve -- Associate Professor Division of Policy and Planning Science http://turnbull.sk.tsukuba.ac.jp/ Faculty of Systems and Information Email: turnbull at sk.tsukuba.ac.jp University of Tsukuba Tel: 029-853-5175 Tennodai 1-1-1, Tsukuba 305-8573 JAPAN From turnbull.stephen.fw at u.tsukuba.ac.jp Fri Feb 22 00:15:35 2019 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Fri, 22 Feb 2019 14:15:35 +0900 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: <23663.34167.878718.324163@turnbull.sk.tsukuba.ac.jp> Steve Dower writes: > I'm sympathetic to wanting to have tasks for the PyCon sprints, but at > the same time it feels exclusionary to "save" them from people who want > to volunteer at other times. It's already possible to do this sub rosa. Just get a mentor to "claim" the issue, and have a separate page listing them for the mentored sprint attendees. I think it's reasonable for each mentor to claim a couple of issues. Worth a try for a year at multiple sprints, no? If people worry that once it's been done for a year, the mentors will claim it in perpetuity, they can lobby the Council to explicitly sunset the practice, so that the Council would have to act to extend the privilege of reserving issues in this way. > I'm 100% in favor of discouraging regular contributors from fixing > them - we should be *generating* easy issues by describing how to > fix it and then walking away. "Generate", of course! But creating a bug report clear enough for another to reproduce and then fix a problem is far more effort than doing it yourself in many, probably most, cases. 
Some people are already doing this, I'm sure, but to dramatically increase the number of such issues, there would need to be strong incentive for experienced developers to do the extra work.[1] One such incentive might be to require mentors to tag one (or more) easy issue(s) for each one they "claim" for sprints. ("More" would compensate for the possibility that a PR would be generated and applied significantly faster.) The point of Cheryl's proposal is specifically NOT to walk away, but to rather provide in-person mentoring at a sprint. True, this is creating a privileged class from people who can be present at PyCon. We shouldn't award such privileges lightly. But ISTM there are two kinds of mentoring: the first is "patch piloting" the contribution through the contribution process. The second is helping the contributor navigate the existing code and produce new or modified code in good core Python style. It's not clear to me why the first needs to take place at a sprint. The contribution process is entirely formal, and there is no overwhelming need to privilege a subset of potential contributors for "trivial" changes. If this is a reasonable way to look at it, the "reserved for sprint" issues should be difficult enough coding to deserve some mentoring, of a kind that is best done in person. Eg, doc issues (including message and error strings) should be excluded from the reservable class. The mentored sprint new contributors should be encouraged to work on some trivial issues before the sprint. ("Required" doesn't make sense to me, since there are probably a lot who are new to coding entirely.) > I'm just not entirely comfortable with trying to also hide them > from first time contributors. I too feel that discomfort, but there are a number of good reasons to privilege PyCon attendees. The special "mentored sprints" are intentionally diversity-enhancing. 
If restricted as I suggest, the issues themselves benefit from personal mentoring, and so this will both improve education of new contributors and be an efficient use of mentor time. Finally, these contributors have demonstrated both a financial commitment and a time commitment to Python, an interest in contributing *to Python*, and so are more likely to become consistent contributors than, say, GSoC students who just want to demonstrate that they understand the PR process to qualify for the $5500.[2] Finally, if my analysis above is correct, these issues aren't hidden in any important way. The hidden issues are the ones that get fixed "en passant" as core devs work on something else. > Either way, I'll keep marking issues as Easy when I think they are > good first contributions. Of course! Footnotes: [1] Please tell me I'm wrong about Python! But fixing minor issues "en passant" is my experience in every other project I've contributed to, including as a non-committing "new contributor" in the first few patches. [2] Not saying that GSoC students don't want to contribute *to Python*, just that their financial incentive goes "the wrong way" relative to PyCon attendees. From guido at python.org Fri Feb 22 01:27:10 2019 From: guido at python.org (Guido van Rossum) Date: Thu, 21 Feb 2019 23:27:10 -0700 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: <23663.34167.878718.324163@turnbull.sk.tsukuba.ac.jp> References: <23663.34167.878718.324163@turnbull.sk.tsukuba.ac.jp> Message-ID: In the mypy project we've definitely experienced the problem with "beginner" issues that they would be fixed by disinterested folks who just wanted to score some GitHub points, and we stopped doing that. While we've not kept track carefully, my impression is that we've not seen the quantity of contributions go down, and we've seen fewer one-time contributors. 
There's still the occasional contributor who fixes a doc typo and then is never heard from again, but those are easy to accept (but CPython's PR process is much more heavy-weight than mypy's.) On Thu, Feb 21, 2019 at 10:18 PM Stephen J. Turnbull < turnbull.stephen.fw at u.tsukuba.ac.jp> wrote: > Steve Dower writes: > > > I'm sympathetic to wanting to have tasks for the PyCon sprints, but at > > the same time it feels exclusionary to "save" them from people who want > > to volunteer at other times. > > It's already possible to do this sub rosa. Just get a mentor to > "claim" the issue, and have a separate page listing them for the > mentored sprint attendees. I think it's reasonable for each mentor to > claim a couple of issues. Worth a try for a year at multiple sprints, > no? > > If people worry that once it's been done for a year, the mentors will > claim it in perpetuity, they can lobby the Council to explicitly > sunset the practice, so that the Council would have to act to extend > the privilege of reserving issues in this way. > > > I'm 100% in favor of discouraging regular contributors from fixing > > them - we should be *generating* easy issues by describing how to > > fix it and then walking away. > > "Generate", of course! But creating a bug report clear enough for > another to reproduce and then fix a problem is far more effort than > doing it yourself in many, probably most, cases. Some people are > already doing this, I'm sure, but to dramatically increase the number > of such issues, there would need to be strong incentive for > experienced developers to do the extra work.[1] One such incentive > might be to require mentors to tag one (or more) easy issue(s) for > each one they "claim" for sprints. ("More" would compensate for the > possibility that a PR would be generated and applied significantly > faster.) > > The point of Cheryl's proposal is specifically NOT to walk away, but > to rather provide in-person mentoring at a sprint. 
True, this is > creating a privileged class from people who can be present at PyCon. > We shouldn't award such privileges lightly. But ISTM there are two > kinds of mentoring: the first is "patch piloting" the contribution > through the contribution process. The second is helping the > contributor navigate the existing code and produce new or modified > code in good core Python style. > > It's not clear to me why the first needs to take place at a sprint. > The contribution process is entirely formal, and there is no > overwhelming need to privilege a subset of potential contributors for > "trivial" changes. > > If this is a reasonable way to look at it, the "reserved for sprint" > issues should be difficult enough coding to deserve some mentoring, of > a kind that is best done in person. Eg, doc issues (including message > and error strings) should be excluded from the reservable class. The > mentored sprint new contributors should be encouraged to work on some > trivial issues before the sprint. ("Required" doesn't make sense to > me, since there are probably a lot who are new to coding entirely.) > > > I'm just not entirely comfortable with trying to also hide them > > from first time contributors. > > I too feel that discomfort, but there are a number of good reasons to > privilege PyCon attendees. The special "mentored sprints" are > intentionally diversity-enhancing. If restricted as I suggest, the > issues themselves benefit from personal mentoring, and so this will > both improve education of new contributors and be an efficient use of > mentor time. 
Finally, these contributors have demonstrated both a > financial commitment and a time commitment to Python, an interest in > contributing *to Python*, and so are more likely to become consistent > contributors than, say, GSoC students who just want to demonstrate > that they understand the PR process to qualify for the $5500.[2] > > Finally, if my analysis above is correct, these issues aren't hidden > in any important way. The hidden issues are the ones that get fixed > "en passant" as core devs work on something else. > > > Either way, I'll keep marking issues as Easy when I think they are > > good first contributions. > > Of course! > > > Footnotes: > [1] Please tell me I'm wrong about Python! But fixing minor issues > "en passant" is my experience in every other project I've contributed > to, including as a non-committing "new contributor" in the first few > patches. > > [2] Not saying that GSoC students don't want to contribute *to > Python*, just that their financial incentive goes "the wrong way" > relative to PyCon attendees. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Fri Feb 22 03:25:39 2019 From: storchaka at gmail.com (Serhiy Storchaka) Date: Fri, 22 Feb 2019 10:25:39 +0200 Subject: [Python-Dev] int() and math.trunc don't accept objects that only define __index__ In-Reply-To: References: Message-ID: On 18.02.19 18:16, Rémi Lapeyre wrote: > The documentation mentions at > https://docs.python.org/3/reference/datamodel.html#object.__index__ > the need to always define both __index__ and __int__: > >
Note: In order to have a coherent integer type class, when > __index__() is defined __int__() should also be defined, and both should > return the same value. > > Nick Coghlan proposes to make __int__ default to __index__ when only > the second > is defined and asked to open a discussion on python-dev before making > any change > "as the closest equivalent we have to this right now is the "negative" > derivation, > where overriding __eq__ without overriding __hash__ implicitly marks the > derived > class as unhashable (look for "type->tp_hash = > PyObject_HashNotImplemented;").". > > > I think the change proposed makes more sense than the current behavior and > volunteer to implement it if it is accepted. Should we add default implementations of __float__ and __complex__ when either __index__ or __int__ is defined? Currently: >>> class A: ... def __int__(self): return 42 ... >>> int(A()) 42 >>> float(A()) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: float() argument must be a string or a number, not 'A' >>> complex(A()) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: complex() first argument must be a string or a number, not 'A' Or just document that in order to have a coherent integer type class, when __index__() or __int__() are defined __float__() and __complex__() should also be defined, and all should return equal values. From vstinner at redhat.com Fri Feb 22 07:07:37 2019 From: vstinner at redhat.com (Victor Stinner) Date: Fri, 22 Feb 2019 13:07:37 +0100 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: Hi, Let me share with you (this mailing list) my experience with mentoring and what we call "easy issue". First of all, almost all "easy issues" are very hard issues: issues open for longer than one year, with many comments, and nobody has succeeded in coming up with a solution (well, otherwise the issue would be closed, no? ;-)). Does it mean that there is no easy issue?
Well, when someone creates a really easy issue, it's usually fixed in less than 2 hours(!): either by a core dev who wants to relax their brain, or by a contributor eager to get more commits under their name. It's not uncommon that even if an issue is marked as easy (easy keyword and [EASY] in the title), it's fixed by a core dev. Or at least by a "regular" contributor who wants to get more commits. For a regular contributor, I see the point of getting more commits: getting more visible. But in my criteria to promote a contributor as a core dev, it's not enough. It's not with easy issues that you learn the hard stuff: trade-offs between backward compatibility and new features which have to break it, writing complex tests, documenting a change properly, etc. Does it mean that we don't need easy issues? No. For someone who really starts from the beginning, easy issues are required. You cannot ask a newcomer to immediately write the perfect PR at the first attempt with a long list of strict requirements: complex tests, handling portability issues, etc. You need to build a stair made of *small* steps. What's the solution? I'm not sure. I decided to stop opening public issues for my mentorees and instead give them easy bugs by private email. Why? My main concern is the high pressure on an open issue: the race to be the first to produce a PR. Recently, I opened a public issue anyway but explicitly wrote that it was reserved for my mentoree. Someone ignored the message and wrote a PR... It was a first-time contributor and it was really hard for me to reject the PR :-( (My mentoree wrote a better PR with my help, at least a PR closer to what I expected, obviously since I helped.) I prefer to give unlimited time to my mentoree to dig into the code, ask questions, and write code step by step. I really hate the time pressure of the race on open easy issues :-( Now the question is: how can *you* find an easy issue? Well.
In the past, my real problem was that I didn't let anyone write the code that I would like to write myself. I considered that nobody writes code as well as I do... Sorry :-) It took me time to change my mind. Lao Tsu said that if you give a hungry man a fish, you feed him for a day, but if you teach him how to fish, you feed him for a lifetime. To better scale horizontally, I have to teach other people what I do, to be able to distribute the work. I am working hard against myself to stop writing all code myself. Instead, I'm trying to explain carefully what I would like to do, split the work into tasks, and distribute these tasks to my mentorees, one by one. But I don't put my mentorees in a competition; each mentoree has their own project which doesn't conflict with the others. Doing that takes me a lot of time: creating tasks, following tasks, reviewing PRs, etc. To be honest, right now, I'm mostly able to follow correctly only one mentoree at a time. The others are more "on their own" (sorry!). As you may know, finding a task doable by mentorees is hard. I have a lot of ideas, but many ideas will take years to be implemented and are controversial :-) I don't ask you to completely stop writing code. I ask you to not write everything yourself, and sometimes give the simplest issues to contributors who are eager to help you. Victor From tir.karthi at gmail.com Fri Feb 22 11:11:26 2019 From: tir.karthi at gmail.com (Karthikeyan) Date: Fri, 22 Feb 2019 21:41:26 +0530 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: I would also suggest cleaning up the existing set of easy issues where the issue was initially tagged as easy, but later discussion showed that the fix, though easy, has other concerns like backwards compatibility due to which it can't be merged.
It's getting hard to get more easy issues, and what could seem like an easy fix might later expand into a discussion that the beginner might not have enough context to answer; in that case the easy tag should be removed. It's even harder deciding whether to fix something myself or tag it as easy for someone else to fix, because if it ends up fixed by a regular contributor anyway, there could be a thought that I could have done it myself, since triaging is itself difficult work. I would also recommend waiting for a core dev or someone to provide some feedback or confirmation on even an easy issue's fix, since it's easy to propose a fix that is later rejected for various reasons, resulting in wasted work and disappointment. PS: Using the mailing list for the first time; apologies if I have done anything wrong. On Fri, Feb 22, 2019, 5:40 PM Victor Stinner wrote: > Hi, > > Let me share with you (this mailing list) my experience with mentoring > and what we call "easy issue". First of all, almost all "easy issues" > are very hard issues: issues open for longer than one year, with many > comments, and nobody succeeded to come up with a solution (well, > otherwise the issue would be closed, no? ;-)). Does it mean that there > is no easy issue? Well, when someone creates a really easy issue, it's > usually fixed in less than 2 hours(!): either by a core dev who want > to relax their brain, or by a contributor eager to get more commits > under their name. > > It's not uncommon that even if an issue is marked as easy (easy > keyword and [EASY] in the title), it's fixed by a core a dev. Or at > least by a "regular" contributor who wants to get more commits. For a > regular contributor, I see the point of getting more commits: getting > more visible. But in my criteria to promote a contributor as a core > dev, it's not enough.
It's not with easy issue that you learn hard > stuff: trade off between backward compatibility and new features which > has to break it, write complex test, document properly a change, etc. > > Does it mean that we don't need easy issue? No. For someone who really > starts from the beginning, easy issues are required. You cannot ask a > newcomer to immediately write the perfect PR at the first attempt with > a long list of strict requirements: complex tests, handle portability > issues, etc. You need to build a stair made of *small* steps. > > What's the solution? I'm not sure. I decided to stop opening public > issues for my mentorees and instead give them easy bugs by private > email. Why? My main concern is the high pressure on an open issue: the > race to be the first to produce a PR. Recently, I opened a public > issue anyway but explicitly wrote that it's reserved for my mentoree. > Someone ignored the message and wrote a PR... It was a first time > contributor and it was really hard to me to reject the PR :-( (My > mentoree wrote a better PR with my help, at least a PR closer to what > I expected, obviously since I helped.) > > I prefer to give unlimited time to my mentoree to dig into the code, > ask questions, write code step by step. I really hate the time > pressure of the race on open easy issues :-( > > Now the question is: how can *you* find an easy issue? Well. In the > past, my real problem was that I didn't let anyone write the code that > I would like to write myself. I considered that nobody writes code as > well as I do... Sorry :-) It took me time to change my mind. > > Lao Tsu said that if you give a hungry man a fish, > you feed him for a day, > but if you teach him how to fish, > you feed him for a lifetim > > To better scale horizontally, I have to teach other people what I do, > to be able to distribute the work. > > I am working hard against myself to stop writing all code myself. 
> Instead, I'm trying to explain carefully what I would like to do, > split the work into tasks, and distribute these tasks to my mentorees, > one by one. But I don't put my mentorees in a competition, each > mentoree has its own project which doesn't conflict with others. > > Doing that takes me a lot of time: create tasks, follow tasks, review > PRs, etc. To be honest, right now, I'm mostly able to follow correctly > only one mentoree at the same time. The others are more "on their own" > (sorry!). As you may know, finding a task doable by mentorees is hard. > I have a lot of ideas, but many ideas will take years to be > implemented and are controversial :-) > > I don't ask you to completely stop writing code. I ask you to not > write everything yourself, and sometimes give the simplest issues to > contributors who are eager to help you. > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/tir.karthi%40gmail.com > -- Regards, Karthikeyan S -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Feb 22 12:09:36 2019 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 23 Feb 2019 03:09:36 +1000 Subject: [Python-Dev] int() and math.trunc don't accept objects that only define __index__ In-Reply-To: References: Message-ID: On Fri, 22 Feb 2019 at 18:29, Serhiy Storchaka wrote: > Should we add default implementations of __float__ and __complex__ when > either __index__ or __int__ is defined? Currently: > > >>> class A: > ... def __int__(self): return 42 > ... 
> >>> int(A()) > 42 > >>> float(A()) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > TypeError: float() argument must be a string or a number, not 'A' > >>> complex(A()) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > TypeError: complex() first argument must be a string or a number, not 'A' > > Or just document that in order to have a coherent integer type class, > when __index__() or __int__() are defined __float__() and __complex__() > should also be defined, and all should return equal values. I think when __index__ is defined, it would be reasonable to have that imply the same floating point conversion rules as are applied for builtin ints, since the conversion is supposed to be lossless in that case (and if it isn't lossless, that's what `__int__` is for). However, I don't think the decision is quite as clearcut as it is for `__index__` implying `__int__`. Lossy conversions to int shouldn't imply anything about conversions to real numbers or floating point values. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From chekat2 at gmail.com Sat Feb 23 14:50:59 2019 From: chekat2 at gmail.com (Cheryl Sabella) Date: Sat, 23 Feb 2019 14:50:59 -0500 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: On Fri, Feb 22, 2019 at 11:11 AM Karthikeyan wrote: > I would also suggest cleaning up the existing set of easy issues where the > issues was tagged as easy initially followed by discussion about how the > though easy has other concerns like backwards compatibility due to which it > can't be merged. It's getting hard to get more easy issues and what could > seem as an easy fix might later expand into a discussion that the beginner > might not have enough context to answer and in that case the easy tag > should be removed.
It's even harder over realizing when to fix it by myself > or tag it as easy for someone to fix it because if it's fixed by a regular > contributor then there could be a thought that I myself could have done it > since triaging itself is a difficult work. > > I agree that many issues need to have the 'easy' tag removed. There really isn't a stage on the tracker for 'in discussion' or 'deadlocked'. I think 'languishing' was the closest status for that. When I've added 'easy' to older issues in the past, I try not to do more than one or two a day. That way, the nosy list doesn't get spammed too much and there aren't too many issues that float to the first page. Maybe something similar for removing the 'easy' tag? Only change a few a day, but also leave a comment summarizing the discussion (if it had been long), or just saying "X core dev thinks it should be this way and Y core dev thinks this, so until that is resolved, this is no longer an easy issue." A comment would help because you would have done some research, so that communicates to others what you found out about the ticket. I certainly didn't intend for my first week as a core dev to be about trying to change the workflow, so apologies in advance if anyone thinks this is too drastic. > I would also recommend waiting for a core dev or someone to provide some > feedback or confirmation on even an easy issue's fix since it's easy to > propose a fix to be later rejected due to various reasons resulting in > wasted work and disappointment. > > Agreed, but perhaps the most helpful way to do that is to propose the fix in a comment on the bug tracker and then, if a core dev or expert says it's a good idea, then move ahead with it? Although, just recently for IDLE, I made a suggestion on the tracker and then my code wasn't what Terry expected, so sometimes the clearest explanation *is* a code change. But, for any change that someone will spend some time on, then there should probably be consensus first.
> PS : Using mailing list for the first time apologies if I have done > anything wrong. > > You did great! This topic was my first post too, so I know exactly what you mean. :-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From python+python_dev at discontinuity.net Sat Feb 23 23:09:03 2019 From: python+python_dev at discontinuity.net (Davin Potts) Date: Sat, 23 Feb 2019 22:09:03 -0600 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID: I have done what I was asked to do: I added tests and docs in a new PR (GH-11816) as of Feb 10. Since that time, the API has matured thanks to thoughtful feedback from a number of active reviewers. At present, we appear to have stabilized around an API and code that deserves to be exercised further. To get that exercise and because there are currently no outstanding objections, I am merging the PR to get it into the second alpha. There will undoubtedly be further revisions and adjustments. During this effort, I have received a surprising number of personal emails expressing support and encouragement from people, most of whom I have never met. Your kindness has been wonderful. Davin On Wed, Feb 6, 2019 at 10:58 AM Giampaolo Rodola' wrote: > > > > On Wed, Feb 6, 2019 at 12:51 PM Giampaolo Rodola' wrote: >> >> >> Unless they are already there (I don't know) it would be good to have a full set of unit-tests for all the register()ed types and test them against SyncManager and SharedMemoryManager. That would give an idea on the real interchangeability of these 2 classes and would also help writing a comprehensive doc. 
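For readers following the thread, a minimal sketch of the SharedMemoryManager API under discussion (Python 3.8+; this is an illustrative toy, not the unit-test suite proposed above):

```python
from multiprocessing.managers import SharedMemoryManager

def demo():
    # The context manager starts the manager's server process and
    # cleans up the shared memory segments on exit.
    with SharedMemoryManager() as smm:
        # ShareableList is backed by shared memory, so another process
        # attached through the manager would see the same values.
        sl = smm.ShareableList([1, 2, 3])
        sl[0] = 99
        return list(sl)

if __name__ == "__main__":
    print(demo())  # [99, 2, 3]
```

By contrast, SyncManager's list() hands out a proxy whose operations round-trip through the manager process, which is the interchangeability question raised above.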
> > > In order to speed up the alpha2 inclusion process I created a PR which implements what said above: > https://github.com/python/cpython/pull/11772 > https://bugs.python.org/issue35917 > Apparently SharedMemoryManager works out of the box and presents no differences with SyncManager, but the list type is not using ShareableList. When I attempted to register it with "SharedMemoryManager.register('list', list, ShareableList)" I got the following error: > > Traceback (most recent call last): > File "foo.py", line 137, in test_list > o = self.manager.list() > File "/home/giampaolo/svn/cpython/Lib/multiprocessing/managers.py", line 702, in temp > proxy = proxytype( > TypeError: __init__() got an unexpected keyword argument 'manager' > > I am not sure how to fix that (I'll leave it up to Davin). The tests as-is are independent from PR-11772 so I suppose they can be reviewed/checked-in regardless of the changes which will affect shared_memory.py. > > -- > Giampaolo - http://grodola.blogspot.com > From tjreedy at udel.edu Sun Feb 24 02:34:06 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 24 Feb 2019 02:34:06 -0500 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: On 2/23/2019 2:50 PM, Cheryl Sabella wrote: AM Karthikeyan wrote: > I would also recommend waiting for a core dev or someone to provide > some feedback or confirmation on even an easy issue's fix since it's > easy to propose a fix to be later rejected due to various reasons > resulting in wasted work and disappointment. > > Agreed, but perhaps the most helpful way to do that is to propose the > fix in a comment on the bug tracker and then, if a core dev or expert > says it's a good idea, then move ahead with it? I agree with both of you as to what contributors, especially new contributors, *should* do. But they sometimes race to 'grab' an issue by (prematurely) submitting a PR, sometimes after ignoring coredev comments and disagreements. 
I have occasionally said on an issue that a PR was premature. What really annoys me is if I say on an issue "Maybe we should add this sentence: 'jkjsf j fsjk sjkf sjskjfjs sflsj sfjsfjljsgjgeij k fjlfjs.' What do others think?" and within an hour someone who is incapable of writing or even properly reviewing the sentence mechanically copies it into a PR. I see this as intellectual theft and have been tempted to close a couple of PRs as such. -- Terry Jan Reedy From tir.karthi at gmail.com Sun Feb 24 06:31:32 2019 From: tir.karthi at gmail.com (Karthikeyan) Date: Sun, 24 Feb 2019 17:01:32 +0530 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: On Sun, Feb 24, 2019 at 1:07 PM Terry Reedy wrote: > On 2/23/2019 2:50 PM, Cheryl Sabella wrote: > > AM Karthikeyan wrote: > > I would also recommend waiting for a core dev or someone to provide > > some feedback or confirmation on even an easy issue's fix since it's > > easy to propose a fix to be later rejected due to various reasons > > resulting in wasted work and disappointment. > > > > Agreed, but perhaps the most helpful way to do that is to propose the > > fix in a comment on the bug tracker and then, if a core dev or expert > > says it's a good idea, then move ahead with it? > > I agree with both of you as to what contributors, especially new > contributors, *should* do. But they sometimes race to 'grab' an issue > by (prematurely) submitting a PR, sometimes after ignoring coredev > comments and disagreements. I have occasionally said on an issue that a > PR was premature. > I guess it could be due to the initial excitement in contributing to a large project. I must admit I too did some mistakes in my initial set of PRs along similar lines. 
I guess it's one of the things both someone new to contributing and a regular contributor should learn over the course of time: there are cases where the solution might seem important from the perspective of the contributor in getting code merged, but provides less value amidst other factors like code maintenance, backwards compatibility, etc. There is also high interest in creating a PR and less in reviewing other PRs (1020 open PRs on GitHub), which could be a separate topic on its own. There could be some action or motivation on making sure there is a balance between incoming PRs and review bandwidth, since there might be a stage where there is a lot of interest or effort in getting new contributors to create a PR, with less bandwidth to review, potentially making them disappointed at having work go unreviewed. We should be getting new people on board, and it's not that I'm complaining, but this is something that the steering council could discuss regarding reviews; there was a recent thread on it [0] [0] https://mail.python.org/pipermail/python-committers/2019-February/006517.html -- Regards, Karthikeyan S -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.rodola at gmail.com Sun Feb 24 17:00:50 2019 From: g.rodola at gmail.com (Giampaolo Rodola') Date: Sun, 24 Feb 2019 23:00:50 +0100 Subject: [Python-Dev] Asking for reversion In-Reply-To: References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com> Message-ID:
To get that exercise and because there are currently no > outstanding objections, I am merging the PR to get it into the second > alpha. There will undoubtedly be further revisions and adjustments. Nice job! It wasn't easy to abstract such a low level interface. -- Giampaolo - http://grodola.blogspot.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From raymond.hettinger at gmail.com Sun Feb 24 23:54:02 2019 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sun, 24 Feb 2019 20:54:02 -0800 Subject: [Python-Dev] Possible performance regression Message-ID: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> I've been running benchmarks that have been stable for a while. But between today and yesterday, there has been an almost across the board performance regression. It's possible that this is a measurement error or something unique to my system (my Mac installed the 10.14.3 release today), so I'm hoping other folks can run checks as well. Raymond -- Yesterday ------------------------------------------------------------------------ $ ./python.exe Tools/scripts/var_access_benchmark.py Variable and attribute read access: 4.0 ns read_local 4.5 ns read_nonlocal 13.1 ns read_global 17.4 ns read_builtin 17.4 ns read_classvar_from_class 15.8 ns read_classvar_from_instance 24.6 ns read_instancevar 19.7 ns read_instancevar_slots 18.5 ns read_namedtuple 26.3 ns read_boundmethod Variable and attribute write access: 4.6 ns write_local 4.8 ns write_nonlocal 17.5 ns write_global 39.1 ns write_classvar 34.4 ns write_instancevar 25.3 ns write_instancevar_slots Data structure read access: 17.5 ns read_list 18.4 ns read_deque 19.2 ns read_dict Data structure write access: 19.0 ns write_list 22.0 ns write_deque 24.4 ns write_dict Stack (or queue) operations: 55.5 ns list_append_pop 46.3 ns deque_append_pop 46.7 ns deque_append_popleft Timing loop overhead: 0.3 ns loop_overhead -- Today
--------------------------------------------------------------------------- $ ./python.exe Tools/scripts/var_access_benchmark.py Variable and attribute read access: 5.0 ns read_local 5.3 ns read_nonlocal 14.7 ns read_global 18.6 ns read_builtin 19.9 ns read_classvar_from_class 17.7 ns read_classvar_from_instance 26.1 ns read_instancevar 21.0 ns read_instancevar_slots 21.7 ns read_namedtuple 27.8 ns read_boundmethod Variable and attribute write access: 6.1 ns write_local 7.3 ns write_nonlocal 18.9 ns write_global 40.7 ns write_classvar 36.2 ns write_instancevar 26.1 ns write_instancevar_slots Data structure read access: 19.1 ns read_list 19.6 ns read_deque 20.6 ns read_dict Data structure write access: 22.8 ns write_list 23.5 ns write_deque 27.8 ns write_dict Stack (or queue) operations: 54.8 ns list_append_pop 49.5 ns deque_append_pop 49.4 ns deque_append_popleft Timing loop overhead: 0.3 ns loop_overhead From ericsnowcurrently at gmail.com Mon Feb 25 00:04:40 2019 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sun, 24 Feb 2019 22:04:40 -0700 Subject: [Python-Dev] Possible performance regression In-Reply-To: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> Message-ID: I'll take a look tonight. -eric On Sun, Feb 24, 2019, 21:54 Raymond Hettinger wrote: > I'll been running benchmarks that have been stable for a while. But > between today and yesterday, there has been an almost across the board > performance regression. > > It's possible that this is a measurement error or something unique to my > system (my Mac installed the 10.14.3 release today), so I'm hoping other > folks can run checks as well.
> > > Raymond > > > -- Yesterday > ------------------------------------------------------------------------ > > $ ./python.exe Tools/scripts/var_access_benchmark.py > Variable and attribute read access: > 4.0 ns read_local > 4.5 ns read_nonlocal > 13.1 ns read_global > 17.4 ns read_builtin > 17.4 ns read_classvar_from_class > 15.8 ns read_classvar_from_instance > 24.6 ns read_instancevar > 19.7 ns read_instancevar_slots > 18.5 ns read_namedtuple > 26.3 ns read_boundmethod > > Variable and attribute write access: > 4.6 ns write_local > 4.8 ns write_nonlocal > 17.5 ns write_global > 39.1 ns write_classvar > 34.4 ns write_instancevar > 25.3 ns write_instancevar_slots > > Data structure read access: > 17.5 ns read_list > 18.4 ns read_deque > 19.2 ns read_dict > > Data structure write access: > 19.0 ns write_list > 22.0 ns write_deque > 24.4 ns write_dict > > Stack (or queue) operations: > 55.5 ns list_append_pop > 46.3 ns deque_append_pop > 46.7 ns deque_append_popleft > > Timing loop overhead: > 0.3 ns loop_overhead > > > -- Today > --------------------------------------------------------------------------- > > $ ./python.exe py Tools/scripts/var_access_benchmark.py > > Variable and attribute read access: > 5.0 ns read_local > 5.3 ns read_nonlocal > 14.7 ns read_global > 18.6 ns read_builtin > 19.9 ns read_classvar_from_class > 17.7 ns read_classvar_from_instance > 26.1 ns read_instancevar > 21.0 ns read_instancevar_slots > 21.7 ns read_namedtuple > 27.8 ns read_boundmethod > > Variable and attribute write access: > 6.1 ns write_local > 7.3 ns write_nonlocal > 18.9 ns write_global > 40.7 ns write_classvar > 36.2 ns write_instancevar > 26.1 ns write_instancevar_slots > > Data structure read access: > 19.1 ns read_list > 19.6 ns read_deque > 20.6 ns read_dict > > Data structure write access: > 22.8 ns write_list > 23.5 ns write_deque > 27.8 ns write_dict > > Stack (or queue) operations: > 54.8 ns list_append_pop > 49.5 ns deque_append_pop > 49.4 ns deque_append_popleft 
> > Timing loop overhead: > 0.3 ns loop_overhead > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/ericsnowcurrently%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Mon Feb 25 01:06:08 2019 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sun, 24 Feb 2019 23:06:08 -0700 Subject: [Python-Dev] Possible performance regression In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> Message-ID: On Sun, Feb 24, 2019 at 10:04 PM Eric Snow wrote: > I'll take a look tonight. I made 2 successive runs of the script (on my laptop) for a commit from early Saturday, and 2 runs from a commit this afternoon (close to master). The output is below, with the earlier commit first. That one is a little faster in places and a little slower in others. However, I also saw quite a bit of variability in the results for the same commit. So I'm not sure what to make of it. I'll look into it in more depth tomorrow. FWIW, I have a few commits in the range you described, so I want to make sure I didn't slow things down for us. 
:) -eric * commit 175421b58cc97a2555e474f479f30a6c5d2250b0 (HEAD) | Author: Pablo Galindo | Date: Sat Feb 23 03:02:06 2019 +0000 | | bpo-36016: Add generation option to gc.getobjects() (GH-11909) $ ./python Tools/scripts/var_access_benchmark.py Variable and attribute read access: 18.1 ns read_local 19.4 ns read_nonlocal 48.3 ns read_global 52.4 ns read_builtin 55.7 ns read_classvar_from_class 56.1 ns read_classvar_from_instance 78.6 ns read_instancevar 67.6 ns read_instancevar_slots 65.9 ns read_namedtuple 106.1 ns read_boundmethod Variable and attribute write access: 25.1 ns write_local 26.9 ns write_nonlocal 78.0 ns write_global 154.1 ns write_classvar 132.0 ns write_instancevar 88.2 ns write_instancevar_slots Data structure read access: 69.6 ns read_list 69.0 ns read_deque 68.4 ns read_dict Data structure write access: 73.2 ns write_list 79.0 ns write_deque 103.5 ns write_dict Stack (or queue) operations: 348.3 ns list_append_pop 169.0 ns deque_append_pop 170.8 ns deque_append_popleft Timing loop overhead: 1.3 ns loop_overhead $ ./python Tools/scripts/var_access_benchmark.py Variable and attribute read access: 17.7 ns read_local 19.2 ns read_nonlocal 39.9 ns read_global 50.3 ns read_builtin 54.4 ns read_classvar_from_class 55.8 ns read_classvar_from_instance 80.3 ns read_instancevar 70.7 ns read_instancevar_slots 66.1 ns read_namedtuple 108.9 ns read_boundmethod Variable and attribute write access: 25.1 ns write_local 25.6 ns write_nonlocal 70.0 ns write_global 151.5 ns write_classvar 133.9 ns write_instancevar 90.7 ns write_instancevar_slots Data structure read access: 140.7 ns read_list 89.6 ns read_deque 86.6 ns read_dict Data structure write access: 97.9 ns write_list 100.5 ns write_deque 120.0 ns write_dict Stack (or queue) operations: 375.9 ns list_append_pop 179.3 ns deque_append_pop 179.4 ns deque_append_popleft Timing loop overhead: 1.5 ns loop_overhead * commit 3b0abb019662e42070f1d6f7e74440afb1808f03 (HEAD) | Author: Giampaolo Rodola | Date: Sun
Feb 24 15:46:40 2019 -0800 | | bpo-33671: allow setting shutil.copyfile() bufsize globally (GH-12016) $ ./python Tools/scripts/var_access_benchmark.py Variable and attribute read access: 20.2 ns read_local 20.0 ns read_nonlocal 41.9 ns read_global 52.9 ns read_builtin 56.3 ns read_classvar_from_class 56.9 ns read_classvar_from_instance 80.2 ns read_instancevar 70.6 ns read_instancevar_slots 69.5 ns read_namedtuple 114.5 ns read_boundmethod Variable and attribute write access: 23.4 ns write_local 25.0 ns write_nonlocal 74.5 ns write_global 152.0 ns write_classvar 131.7 ns write_instancevar 90.1 ns write_instancevar_slots Data structure read access: 69.9 ns read_list 73.4 ns read_deque 77.8 ns read_dict Data structure write access: 83.3 ns write_list 94.9 ns write_deque 120.6 ns write_dict Stack (or queue) operations: 383.4 ns list_append_pop 187.1 ns deque_append_pop 182.2 ns deque_append_popleft Timing loop overhead: 1.4 ns loop_overhead $ ./python Tools/scripts/var_access_benchmark.py Variable and attribute read access: 19.1 ns read_local 20.9 ns read_nonlocal 43.8 ns read_global 57.8 ns read_builtin 58.4 ns read_classvar_from_class 61.3 ns read_classvar_from_instance 84.7 ns read_instancevar 72.9 ns read_instancevar_slots 69.7 ns read_namedtuple 109.9 ns read_boundmethod Variable and attribute write access: 23.1 ns write_local 23.7 ns write_nonlocal 72.8 ns write_global 149.9 ns write_classvar 133.3 ns write_instancevar 89.4 ns write_instancevar_slots Data structure read access: 69.0 ns read_list 69.6 ns read_deque 69.1 ns read_dict Data structure write access: 74.5 ns write_list 80.9 ns write_deque 105.4 ns write_dict Stack (or queue) operations: 338.2 ns list_append_pop 165.6 ns deque_append_pop 164.7 ns deque_append_popleft Timing loop overhead: 1.3 ns loop_overhead From turnbull.stephen.fw at u.tsukuba.ac.jp Mon Feb 25 03:59:01 2019 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. 
Turnbull) Date: Mon, 25 Feb 2019 17:59:01 +0900 Subject: [Python-Dev] "Good first issues" on the bug tracker In-Reply-To: References: Message-ID: <23667.44629.584875.66275@turnbull.sk.tsukuba.ac.jp> Karthikeyan writes: > I would also recommend waiting for a core dev or someone to provide some > feedback or confirmation on even an easy issue's fix FWIW, I don't think waiting on core devs is a very good idea, because we just don't have enough free core dev time, and I don't think we (or any project!) ever will -- if core devs have enough free time to do lots of triage and commenting, they're the kind of developer who also has plenty of own projects on the back burner. OTOH, new developers aren't going to know who the core devs are, and it's probably true that an issue with comments on it is likely to be easier to get your head wrapped around than one without. (I don't know that non-core devs are any more likely to make comments, though.) > since it's easy to propose a fix to be later rejected due to > various reasons This is certainly true, but: > resulting in wasted work I have to disagree. Learning is hard work, and at least you get to spend that effort on Python doing it the way you think is right. If you got it wrong in the opinion of a committer, you learn something, because they're usually right (for Python), that's how they get to be core developers. The work that I consider wasted is when I tell the boss that the idea sucks and why, they say we're going to do it so STFU and write, and then when they see the product they say "Ohhhhhh." > and disappointment. Yes. It is disappointing when something you think is useful, even important, gets tabled just before the feature freeze. Especially when it gets postponed because somebody has decided that something unrelated that your code touches that's not broke needs fixing[1] but they haven't decided what that means. 
My answer to that is to have lots of little projects pending time to work on them, even though I'm not a core developer. FWIW, YMMV as they say.

Footnotes:
[1] Good luck parsing that, but I'm sure you know the feeling.

--
Associate Professor              Division of Policy and Planning Science
http://turnbull.sk.tsukuba.ac.jp/     Faculty of Systems and Information
Email: turnbull at sk.tsukuba.ac.jp          University of Tsukuba
Tel: 029-853-5175          Tennodai 1-1-1, Tsukuba 305-8573 JAPAN

From raymond.hettinger at gmail.com  Mon Feb 25 04:22:51 2019
From: raymond.hettinger at gmail.com (Raymond Hettinger)
Date: Mon, 25 Feb 2019 01:22:51 -0800
Subject: [Python-Dev] Possible performance regression
In-Reply-To:
References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com>
Message-ID:

> On Feb 24, 2019, at 10:06 PM, Eric Snow wrote:
>
> I'll look into it in more depth tomorrow. FWIW, I have a few commits
> in the range you described, so I want to make sure I didn't slow
> things down for us. :)

Thanks for looking into it. FWIW, I can consistently reproduce the results several times in a row.
Here's the bash script I'm using:

#!/bin/bash
make clean
./configure
make             # Apple LLVM version 10.0.0 (clang-1000.11.45.5)
for i in `seq 1 3`; do
  git checkout d610116a2e48b55788b62e11f2e6956af06b3de0   # Go back to 2/23
  make                                                    # Rebuild
  sleep 30                            # Let the system get quiet and cool
  echo '---- baseline ---' >> results.txt                 # Label output
  ./python.exe Tools/scripts/var_access_benchmark.py >> results.txt   # Run benchmark
  git checkout 16323cb2c3d315e02637cebebdc5ff46be32ecdf   # Go to end-of-day 2/24
  make                                                    # Rebuild
  sleep 30                            # Let the system get quiet and cool
  echo '---- end of day ---' >> results.txt               # Label output
  ./python.exe Tools/scripts/var_access_benchmark.py >> results.txt   # Run benchmark
done

> > -eric
> >
> > * commit 175421b58cc97a2555e474f479f30a6c5d2250b0 (HEAD)
> | Author: Pablo Galindo
> | Date: Sat Feb 23 03:02:06 2019 +0000
> |
> |     bpo-36016: Add generation option to gc.getobjects() (GH-11909)
>
> $ ./python Tools/scripts/var_access_benchmark.py
> Variable and attribute read access:
>     18.1 ns   read_local
>     19.4 ns   read_nonlocal

These timings are several times larger than they should be. Perhaps you're running a debug build? Or perhaps 32-bit? Or on a VM or some such. Something looks way off because I'm getting 4 and 5 ns on my 2013 Haswell laptop.

Raymond

From vstinner at redhat.com  Mon Feb 25 04:42:22 2019
From: vstinner at redhat.com (Victor Stinner)
Date: Mon, 25 Feb 2019 10:42:22 +0100
Subject: [Python-Dev] Possible performance regression
In-Reply-To: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com>
References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com>
Message-ID:

Hi,

Le lun. 25 févr. 2019 à 05:57, Raymond Hettinger a écrit :
> I've been running benchmarks that have been stable for a while. But between today and yesterday, there has been an almost across the board performance regression.

How do you run your benchmarks? If you use Linux, are you using CPU isolation?
> It's possible that this is a measurement error or something unique to my system (my Mac installed the 10.14.3 release today), so I'm hoping other folks can run checks as well.

Getting reproducible benchmark results on timings smaller than 1 ms is really hard. I wrote some advice on how to get more stable results:
https://perf.readthedocs.io/en/latest/run_benchmark.html#how-to-get-reproductible-benchmark-results

> Variable and attribute read access:
>     4.0 ns   read_local

In my experience, for timings less than 100 ns, *everything* impacts the benchmark, and the result is useless without the standard deviation.

On such microbenchmarks, the hash function has a significant impact on performance. So you should run your benchmark on multiple different *processes* to get multiple different hash functions. Some people prefer to use PYTHONHASHSEED=0 (or another value), but I dislike using that since it's less representative of performance "in production" (with a randomized hash function). For example, using 20 processes to test 20 randomized hash functions is enough to compute the average cost of the hash function.

My remark was more general; I didn't look at the specific case of var_access_benchmark.py. Maybe benchmarks written in C depend on the hash function.

For example, 4.0 ns +/- 10 ns and 4.0 ns +/- 0.1 ns are completely different when deciding whether "5.0 ns" is slower or faster.
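The repeat-and-report-spread approach Victor describes can be sketched with nothing but the stdlib. This is illustrative only (the `bench` helper and its parameters are made up for the example; it is not the perf module's API):

```python
import statistics
import timeit

def bench(stmt, setup="pass", repeat=20, number=100_000):
    """Run a microbenchmark several times and report mean +- std dev.

    Reporting the spread makes it possible to tell whether two runs
    actually differ, or whether the difference is inside the noise.
    """
    # timeit.repeat returns one total time per repetition; divide by
    # `number` to get the per-iteration cost in seconds.
    times = [t / number
             for t in timeit.repeat(stmt, setup, repeat=repeat, number=number)]
    return statistics.mean(times), statistics.stdev(times)

mean, stdev = bench("x", setup="x = 1")   # a local variable read
print(f"read_local: {mean * 1e9:.1f} ns +- {stdev * 1e9:.1f} ns")
```

For cross-process hash-randomization effects, the same loop would be run in several fresh interpreter processes and the results pooled.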
The "perf compare" command of my perf module "determines whether two samples differ significantly using a Student's two-sample, two-tailed t-test with alpha equals to 0.95.":
https://en.wikipedia.org/wiki/Student's_t-test

I don't understand how these things work, I just copied the code from the old Python benchmark suite :-)

See also my articles in my journey to stable benchmarks:

* https://vstinner.github.io/journey-to-stable-benchmark-system.html   # nosy applications / CPU isolation
* https://vstinner.github.io/journey-to-stable-benchmark-deadcode.html # PGO
* https://vstinner.github.io/journey-to-stable-benchmark-average.html  # randomized hash function

There are likely other parameters which impact benchmarks; that's why the std dev and how the benchmark is run matter so much.

Victor
--
Night gathers, and now my watch begins. It shall not end until my death.

From solipsis at pitrou.net  Mon Feb 25 05:52:39 2019
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 25 Feb 2019 11:52:39 +0100
Subject: [Python-Dev] Asking for reversion
References: <20190203220340.3158b236@fsol> <8933B3A4-DE0D-47AD-8A5A-10E7B54023D7@gmail.com>
Message-ID: <20190225115239.5b4ee7b2@fsol>

On Sat, 23 Feb 2019 22:09:03 -0600 Davin Potts wrote:
> I have done what I was asked to do: I added tests and docs in a new
> PR (GH-11816) as of Feb 10.
>
> Since that time, the API has matured thanks to thoughtful feedback
> from a number of active reviewers. At present, we appear to have
> stabilized around an API and code that deserves to be exercised
> further. To get that exercise and because there are currently no
> outstanding objections, I am merging the PR to get it into the second
> alpha. There will undoubtedly be further revisions and adjustments.

I agree the overall API looks reasonable. Thanks for doing this in time.

Regards
Antoine.
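The two-sample t-test that "perf compare" applies, mentioned a few messages up, can be sketched by hand. The following is an illustrative Welch's t statistic on two fake timing samples, not perf's actual implementation:

```python
import math
import statistics

def t_statistic(sample_a, sample_b):
    """Two-sample t statistic (Welch's form, which does not assume
    equal variances). Roughly, |t| above ~2 suggests the difference
    between the two samples is unlikely to be pure noise."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a = statistics.variance(sample_a)
    var_b = statistics.variance(sample_b)
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Two fake timing samples (arbitrary units); the second is ~5% slower.
before = [10.0, 10.1, 9.9, 10.2, 9.8]
after = [10.5, 10.6, 10.4, 10.7, 10.3]
print(t_statistic(before, after))
```

With these made-up numbers the statistic is about -5, well past the significance threshold; a real tool would also consult the t distribution for the exact p-value.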
From solipsis at pitrou.net Mon Feb 25 05:54:52 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 25 Feb 2019 11:54:52 +0100 Subject: [Python-Dev] Possible performance regression References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> Message-ID: <20190225115452.30ed11bb@fsol> On Sun, 24 Feb 2019 20:54:02 -0800 Raymond Hettinger wrote: > I'll been running benchmarks that have been stable for a while. But between today and yesterday, there has been an almost across the board performance regression. Have you tried bisecting to find out the offending changeset, if there any? Regards Antoine. From raymond.hettinger at gmail.com Mon Feb 25 12:31:32 2019 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Mon, 25 Feb 2019 09:31:32 -0800 Subject: [Python-Dev] Possible performance regression In-Reply-To: <20190225115452.30ed11bb@fsol> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> Message-ID: <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> > On Feb 25, 2019, at 2:54 AM, Antoine Pitrou wrote: > > Have you tried bisecting to find out the offending changeset, if there > any? I got it down to two checkins before running out of time: Between git checkout 463572c8beb59fd9d6850440af48a5c5f4c0c0c9 And: git checkout 3b0abb019662e42070f1d6f7e74440afb1808f03 So the subinterpreter patch was likely the trigger. I can reproduce it over and over again on Clang, but not for a GCC-8 build, so it is compiler specific (and possibly macOS specific). Will look at it more after work this evening. I posted here to try to solicit independent confirmation. 
Raymond

From ericsnowcurrently at gmail.com  Mon Feb 25 12:42:17 2019
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Mon, 25 Feb 2019 10:42:17 -0700
Subject: [Python-Dev] Possible performance regression
In-Reply-To: <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com>
References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com>
Message-ID:

On Mon, Feb 25, 2019 at 10:32 AM Raymond Hettinger wrote:
> I got it down to two checkins before running out of time:
>
> Between
> git checkout 463572c8beb59fd9d6850440af48a5c5f4c0c0c9
>
> And:
> git checkout 3b0abb019662e42070f1d6f7e74440afb1808f03
>
> So the subinterpreter patch was likely the trigger.
>
> I can reproduce it over and over again on Clang, but not for a GCC-8 build, so it is compiler specific (and possibly macOS specific).
>
> Will look at it more after work this evening. I posted here to try to solicit independent confirmation.

I'll look into it around then too. See https://bugs.python.org/issue33608.

-eric

From francismb at email.de  Mon Feb 25 12:57:40 2019
From: francismb at email.de (francismb)
Date: Mon, 25 Feb 2019 18:57:40 +0100
Subject: [Python-Dev] OT?: Re: Possible performance regression
In-Reply-To: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com>
References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com>
Message-ID: <8e236537-93cc-ef77-337e-22ae97a8462b@email.de>

Hi,
just curious on this,

On 2/25/19 5:54 AM, Raymond Hettinger wrote:
> I've been running benchmarks that have been stable for a while. But between today and yesterday, there has been an almost across the board performance regression.
>
> It's possible that this is a measurement error or something unique to my system (my Mac installed the 10.14.3 release today), so I'm hoping other folks can run checks as well.

Aren't the buildbots caching/measuring those regressions? Or what are the current impediments here?

Thanks in advance!
--francis

From lukasz at langa.pl  Mon Feb 25 15:05:41 2019
From: lukasz at langa.pl (Łukasz Langa)
Date: Mon, 25 Feb 2019 21:05:41 +0100
Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing
Message-ID: <8AFF29B7-D3DB-4DE1-BAF7-CAE6F4017378@langa.pl>

I packaged another release. Go get it here:

https://www.python.org/downloads/release/python-380a2/

Python 3.8.0a2 is the second of four planned alpha releases of Python 3.8, the next feature release of Python. During the alpha phase, Python 3.8 remains under heavy development: additional features will be added and existing features may be modified or deleted. Please keep in mind that this is a preview release and its use is not recommended for production environments. The next preview release, 3.8.0a3, is planned for 2019-03-25.

This time around the stable buildbots were a bit less green than they should have been. This early in the cycle, I didn't postpone the release and I didn't use the revert hammer. But soon enough, I will. Let's make sure future changes keep the buildbots happy.

- Ł

From larry at hastings.org  Mon Feb 25 18:30:54 2019
From: larry at hastings.org (Larry Hastings)
Date: Mon, 25 Feb 2019 15:30:54 -0800
Subject: [Python-Dev] before I open an issue re: posix.stat and/or os.stat
In-Reply-To:
References:
Message-ID: <18818b3a-7546-8dc0-428f-49b59b10257e@hastings.org>

On 2/21/19 2:26 AM, Michael wrote:
> Will this continue to be enough space - i.e., is the Dev size going to
> be enough?
>
> +2042  #ifdef MS_WINDOWS
> +2043      PyStructSequence_SET_ITEM(v, 2,
> PyLong_FromUnsignedLong(st->st_dev));
> +2044  #else
> +2045      PyStructSequence_SET_ITEM(v, 2, _PyLong_FromDev(st->st_dev));
> +2046  #endif
>
> +711
#define _PyLong_FromDev PyLong_FromLongLong

> It seems so - however, is there something such as PyUnsignedLong and is
> that large enough for a "long long"? And if it exists, would that make
> the value positive (for the first test).

Surely you can answer this second question yourself? You do have full source to the CPython interpreter.

To answer your question: there is no PyUnsignedLong. Python 2 has a "native int" PyIntObject, which is for most integers you see in a program, and a "long" PyLongObject which is those integers that end in L. Python 3 only has the PyLongObject, which is used for all integers.

The PyLongObject is an "arbitrary-precision integer":

https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic

and can internally expand as needed to accommodate any size integer, assuming you have enough heap space. Any "long long", unsigned or not, on any extant AIX platform, would be no problem to represent in a PyLongObject. You should use PyLong_FromLongLong or PyLong_FromUnsignedLongLong to create your PyLong object and populate the st_dev field of the os.stat() structsequence.

//arry/

From ericsnowcurrently at gmail.com  Mon Feb 25 23:23:57 2019
From: ericsnowcurrently at gmail.com (Eric Snow)
Date: Mon, 25 Feb 2019 21:23:57 -0700
Subject: [Python-Dev] Possible performance regression
In-Reply-To:
References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com>
Message-ID:

On Mon, Feb 25, 2019 at 10:42 AM Eric Snow wrote:
> I'll look into it around then too. See https://bugs.python.org/issue33608.

I ran the "performance" suite (https://github.com/python/performance), which has 57 different benchmarks. In the results, 9 were marked as "significantly" different between the two commits. 2 of the benchmarks showed a marginal slowdown and 7 showed a marginal speedup:
2 of the benchmarks showed a marginal slowdown and 7 showed a marginal speedup: +-------------------------+--------------+-------------+--------------+-----------------------+ | Benchmark | speed.before | speed.after | Change | Significance | +=========================+==============+=============+==============+=======================+ | django_template | 177 ms | 172 ms | 1.03x faster | Significant (t=3.66) | +-------------------------+--------------+-------------+--------------+-----------------------+ | html5lib | 126 ms | 122 ms | 1.03x faster | Significant (t=3.46) | +-------------------------+--------------+-------------+--------------+-----------------------+ | json_dumps | 17.6 ms | 17.2 ms | 1.02x faster | Significant (t=2.65) | +-------------------------+--------------+-------------+--------------+-----------------------+ | nbody | 157 ms | 161 ms | 1.03x slower | Significant (t=-3.85) | +-------------------------+--------------+-------------+--------------+-----------------------+ | pickle_dict | 29.5 us | 30.5 us | 1.03x slower | Significant (t=-6.37) | +-------------------------+--------------+-------------+--------------+-----------------------+ | scimark_monte_carlo | 144 ms | 139 ms | 1.04x faster | Significant (t=3.61) | +-------------------------+--------------+-------------+--------------+-----------------------+ | scimark_sparse_mat_mult | 5.41 ms | 5.25 ms | 1.03x faster | Significant (t=4.26) | +-------------------------+--------------+-------------+--------------+-----------------------+ | sqlite_synth | 3.99 us | 3.91 us | 1.02x faster | Significant (t=2.49) | +-------------------------+--------------+-------------+--------------+-----------------------+ | unpickle_pure_python | 497 us | 481 us | 1.03x faster | Significant (t=5.04) | +-------------------------+--------------+-------------+--------------+-----------------------+ (Issue #33608 has more detail.) So it looks like commit ef4ac967 is not responsible for a performance regression. 
-eric From vstinner at redhat.com Tue Feb 26 05:51:34 2019 From: vstinner at redhat.com (Victor Stinner) Date: Tue, 26 Feb 2019 11:51:34 +0100 Subject: [Python-Dev] Possible performance regression In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> Message-ID: Hi, Le mar. 26 f?vr. 2019 ? 05:27, Eric Snow a ?crit : > I ran the "performance" suite (https://github.com/python/performance), > which has 57 different benchmarks. Ah yes, by the way: I also ran manually performance on speed.python.org yesterday: it added a new dot at Feb 25. > In the results, 9 were marked as > "significantly" different between the two commits.. 2 of the > benchmarks showed a marginal slowdown and 7 showed a marginal speedup: I'm not surprised :-) Noise on micro-benchmark is usually "ignored by the std dev" (delta included in the std dev). At speed.python.org, you can see that basically the performances are stable since last summer. I let you have a look at https://speed.python.org/timeline/ > | Benchmark | speed.before | speed.after | Change > | Significance | > +=========================+==============+=============+==============+=======================+ > | django_template | 177 ms | 172 ms | 1.03x faster > | Significant (t=3.66) | > +-------------------------+--------------+-------------+--------------+-----------------------+ > | html5lib | 126 ms | 122 ms | 1.03x faster > | Significant (t=3.46) | > +-------------------------+--------------+-------------+--------------+-----------------------+ > | json_dumps | 17.6 ms | 17.2 ms | 1.02x faster > | Significant (t=2.65) | > +-------------------------+--------------+-------------+--------------+-----------------------+ > | nbody | 157 ms | 161 ms | 1.03x slower > | Significant (t=-3.85) | (...) Usually, I just ignore changes which are smaller than 5% ;-) Victor -- Night gathers, and now my watch begins. It shall not end until my death. 
From songofacandy at gmail.com  Tue Feb 26 06:30:08 2019
From: songofacandy at gmail.com (INADA Naoki)
Date: Tue, 26 Feb 2019 20:30:08 +0900
Subject: [Python-Dev] Compact ordered set
Message-ID:

Hello, folks.

I'm working on a compact and ordered set implementation. It has an internal data structure similar to the new dict from Python 3.6.

It is still a work in progress. Comments, tests, and documents should be updated. But it passes the existing tests, excluding test_sys and test_gdb (both tests check implementation details):

https://github.com/methane/cpython/pull/16

Before completing this work, I want to evaluate it. Following are my current thoughts about the compact ordered set.

## Preserving insertion order

Order is not fundamental for sets. There is no order for sets in the math world. But it is sometimes convenient in the real world.

For example, it makes doctests easy. When writing sets to logs, we can use the "grep" command if the print order is stable. pyc files are stable without the PYTHONHASHSEED=0 hack.

Additionally, consistency with dict is desirable. It removes one pitfall for new Python users: the "remove duplicated items from a list" idiom becomes `list(set(duplicated))` instead of `list(dict.fromkeys(duplicated))`.

## Memory efficiency

Hash tables have a dilemma. To reduce the collision rate, a hash table should be sparse. But that wastes memory. Since the current set is optimized for both hit and miss cases, it is more sparse than dict. (It is a bit surprising that a set typically uses more memory than a same-sized dict!)

The new implementation partially solves this dilemma. It has a sparse "index table" whose items are small (1 byte when the table size <= 256, 2 bytes when the table size <= 65536), and a dense entry table (each item has a key and hash, which is 16 bytes on a 64-bit system).

I use 1/2 as the capacity rate for now. So the new implementation is memory efficient when len(s) <= 32768. But memory efficiency is roughly equal to the current implementation when 32768 < len(s) <= 2**31, and worse than the current implementation when len(s) > 2**31.
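The layout INADA describes can be modeled in a toy pure-Python sketch: a sparse index table of small integers pointing into a dense, insertion-ordered entry table. The real implementation is C and handles deletion and probing differently; this sketch uses naive linear probing and omits deletion entirely:

```python
class CompactSet:
    """Toy model of a compact ordered set: a sparse index table of
    small ints pointing into a dense, insertion-ordered entry list."""

    FREE = -1

    def __init__(self):
        self.indices = [self.FREE] * 8   # sparse table (1-2 bytes/slot in C)
        self.entries = []                # dense list of (hash, key) pairs

    def _probe(self, key):
        """Yield candidate slots for `key` (simplified linear probing)."""
        mask = len(self.indices) - 1
        i = hash(key) & mask
        while True:
            yield i
            i = (i + 1) & mask

    def add(self, key):
        for slot in self._probe(key):
            idx = self.indices[slot]
            if idx == self.FREE:
                self.indices[slot] = len(self.entries)
                self.entries.append((hash(key), key))
                # Keep the index table at most half full (1/2 capacity rate).
                if len(self.entries) * 2 >= len(self.indices):
                    self._grow()
                return
            if self.entries[idx][1] == key:
                return                   # already present

    def _grow(self):
        """Double the sparse table and re-insert the dense entries."""
        entries = self.entries
        self.indices = [self.FREE] * (len(self.indices) * 2)
        self.entries = []
        for _, key in entries:
            self.add(key)

    def __contains__(self, key):
        for slot in self._probe(key):
            idx = self.indices[slot]
            if idx == self.FREE:
                return False
            if self.entries[idx][1] == key:
                return True

    def __iter__(self):                  # insertion order falls out for free
        return (key for _, key in self.entries)
```

Iteration walks only the dense entry list, which is where the cache-friendly sequential iteration comes from.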
Here is a quick test of memory usage:

https://gist.github.com/methane/98b7f43fc00a84964f66241695112e91

# Performance

pyperformance result:

$ ./python -m perf compare_to master.json oset2.json -G --min-speed=2

Slower (3):
- unpickle_list: 8.48 us +- 0.09 us -> 12.8 us +- 0.5 us: 1.52x slower (+52%)
- unpickle: 29.6 us +- 2.5 us -> 44.1 us +- 2.5 us: 1.49x slower (+49%)
- regex_dna: 448 ms +- 3 ms -> 462 ms +- 2 ms: 1.03x slower (+3%)

Faster (4):
- meteor_contest: 189 ms +- 1 ms -> 165 ms +- 1 ms: 1.15x faster (-13%)
- telco: 15.8 ms +- 0.2 ms -> 15.3 ms +- 0.2 ms: 1.03x faster (-3%)
- django_template: 266 ms +- 6 ms -> 259 ms +- 3 ms: 1.03x faster (-3%)
- unpickle_pure_python: 818 us +- 6 us -> 801 us +- 9 us: 1.02x faster (-2%)

Benchmark hidden because not significant (49)

unpickle and unpickle_list show a massive slowdown. I suspect this slowdown is not caused by the set change. Linux perf shows many page faults happening in pymalloc_malloc. I think the memory usage changes hit a weak point of pymalloc accidentally. I will try to investigate it.

On the other hand, meteor_contest shows a 13% speedup. It uses sets. The others don't show significant performance changes.

I need to write more benchmarks for various set workloads. I expect the new set to be faster on simple creation, iteration, and destruction. Especially, sequential iteration and deletion will reduce cache misses. (e.g. https://bugs.python.org/issue32846 )

On the other hand, the new implementation will be slow in complex (heavy random add & del) cases.

-----

Any comments are welcome. And any benchmarks for set workloads are very welcome.
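The deduplication idiom mentioned above can be tried directly; with a compact ordered set, the shorter set-based spelling would gain the same ordering guarantee the dict-based one already has:

```python
items = ["b", "a", "b", "c", "a"]

# Order-preserving dedup: dict keys keep insertion order (guaranteed
# since Python 3.7), and duplicates collapse onto the first occurrence.
print(list(dict.fromkeys(items)))   # → ['b', 'a', 'c']

# The set-based dedup is shorter, but today its order is arbitrary;
# sorted() is used here only to make the output deterministic.
print(sorted(set(items)))           # → ['a', 'b', 'c']
```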
Regards,
--
INADA Naoki

From stephane at wirtel.be  Tue Feb 26 07:59:03 2019
From: stephane at wirtel.be (Stephane Wirtel)
Date: Tue, 26 Feb 2019 13:59:03 +0100
Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing
In-Reply-To: <8AFF29B7-D3DB-4DE1-BAF7-CAE6F4017378@langa.pl>
References: <8AFF29B7-D3DB-4DE1-BAF7-CAE6F4017378@langa.pl>
Message-ID: <20190226125903.GA4712@xps>

Hi Łukasz,

Thank you for your job.

I have created a Merge Request for the docker image of Barry [1].

I also filed an issue [2] for brotlipy (used by httpbin and requests). The problem is with PyInterpreterState.

Via Twitter, I have proposed to the community to fix the issue [2].

[1]: https://gitlab.com/python-devs/ci-images/merge_requests/7
[2]: https://github.com/python-hyper/brotlipy/issues/147

Thanks again for your job.

Cheers,

Stéphane

--
Stéphane Wirtel - https://wirtel.be - @matrixise

From encukou at gmail.com  Tue Feb 26 10:32:59 2019
From: encukou at gmail.com (Petr Viktorin)
Date: Tue, 26 Feb 2019 16:32:59 +0100
Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems
In-Reply-To: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com>
References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com>
Message-ID: <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com>

On 2/14/19 9:56 AM, Petr Viktorin wrote:
> On 2/13/19 4:24 PM, Petr Viktorin wrote:
>> I think it's time for another review.
> [...]
>> Please see this PR for details and a suggested change:
>> https://github.com/python/peps/pull/893
>
> Summary of the thread so far.
>
> Antoine Pitrou noted that the PEP should acknowledge that there are now
> years of established usage of `python` as Python 3 for many conda users,
> often as the "main" Python.
>
> Victor Stinner expressed support for "python" being the latest Python
> version, citing PHP, Ruby, Perl; containers; mentions of "python" in our
> docs.
>
> Steve Dower later proposed concrete points how to make "python" the
> default command:
>   * our docs should say "python" consistently
>   * we should recommend that distributors use the same workaround
>   * our docs should describe the recommended workaround in any places
> people are likely to first encounter it (tutorial, sys.executable, etc.)
>
> Chris Barker added that "python3" should still be available, even if
> "python" is default.
>
> Barry Warsaw gave a +1 to making "python" default, noting that there
> were plans to change this when Python 2 is officially deprecated. But
> distros need to make decisions about 2020 now.
>
> Chris Barker noted that users won't see any discontinuity in 2020.
> That's just the date support from CPython devs ends.
>
> Victor pointed to discussions on 4.0 vs. 3.10. (I'll ignore discussions
> on 4.0 in this summary.)
> Victor also posted some interesting info and links on Fedora and RHEL.
> There was a discussion on the PSF survey about how many people use
> Python 3. (I'll ignore this sub-thread, it's not really about the
> "python" command.)
>
> Steve noted that the Windows Store package of Python 3 provides
> "python", but he is still thinking how to make this reasonable/reliable
> in the full installer.
>
> Several people think "py" on Unix would be a good thing. Neil
> Schemenauer supposes we would encourage people to use it over
> "python"/"python2"/"python3", so "python" would be less of an issue.
>
> Neil Schemenauer is not opposed to making "python" configurable or
> eventually pointing it to Python 3.
>
> Jason Swails shared experience from running software with a
> "#!/usr/bin/env python" shebang on a system that didn't have Python 2
> (and followed the PEP, so no "python" either). The workaround was ugly.

There haven't been many new ideas since this summary -- mostly it was explaining and re-hashing what's been mentioned before.

Matthias Klose pointed out some Debian/Ubuntu points, to which I'll add the situation in other distros I know of.

*Debian* is concerned that python →
python3 will break software after an upgrade. Debian appears to not want to ship the unversioned command after py2 removal.

For *Ubuntu*, Matthias is not sure if he wants a python executable at all. He notes that pypi.org recommends pip, and pip still breaks system-installed packages when asked to.

For both Ubuntu 20.04 LTS and Debian bullseye, the goal is that distro packages don't use the unversioned shebang.

*Fedora* packages don't use the unversioned shebang. If it was up to me, the unversioned command would be removed in F31 (released in the first half of 2019) and then pointed to python3 in F32 (second half). But we'd be happy to follow upstream consensus. (And the PEP, if it reflects the consensus.)

In *RHEL*, the unversioned command is missing by default. Sysadmins can change it, but are advised to use python2/python3 instead. RHEL decision makers don't give the PEP much weight.

*Arch* did the switch to python3 a long time ago (and the resulting fire wasn't all that bright).

With *Homebrew*, `python` points to Homebrew's Python 2.7.x (if installed) otherwise the macOS system Python. That's exactly according to the PEP. They tried to switch python to python3 before, and got rather nasty backlash citing PEP 394. I assume they will follow the PEP quite strictly from now on.

The *macOS* system Python is out of our hands; Apple ignores upstream recommendations.

From vstinner at redhat.com  Tue Feb 26 10:37:05 2019
From: vstinner at redhat.com (Victor Stinner)
Date: Tue, 26 Feb 2019 16:37:05 +0100
Subject: [Python-Dev] Compact ordered set
In-Reply-To:
References:
Message-ID:

Le mar. 26 févr. 2019 à 12:33, INADA Naoki a écrit :
> - unpickle_list: 8.48 us +- 0.09 us -> 12.8 us +- 0.5 us: 1.52x slower (+52%)
> ...
> unpickle and unpickle_list shows massive slowdown. I suspect this slowdown
> is not caused from set change. Linux perf shows many pagefault is happened
> in pymalloc_malloc. I think memory usage changes hit weak point of pymalloc
> accidentally.
I will try to investigate it. Please contact me to get access to speed.python.org server. *Maybe* your process to run benchmarks is not reliable and you are getting "noise" in results. > On the other hand, meteor_contest shows 13% speedup. It uses set. > Other doesn't show significant performance changes. I recall that some benchmarks are unstable and depend a lot on how you run the benchmark, how Python is compiled (ex: PGO or not). IMHO it's fine if the overall performance result is "no significant change", as soon as we reduce the memory footprint. Victor -- Night gathers, and now my watch begins. It shall not end until my death. From christian at python.org Tue Feb 26 10:40:08 2019 From: christian at python.org (Christian Heimes) Date: Tue, 26 Feb 2019 16:40:08 +0100 Subject: [Python-Dev] OpenSSL 1.1.1 update for 3.7/3.8 Message-ID: Hi, today's OpenSSL release of 1.0.2r and 1.1.1b reminded me of OpenSSL's release strategy [1]. OpenSSL 1.0.2 will reach EOL on 2019-12-31, 1.1.0 will reach EOL on 2019-09-11 (one year after release of OpenSSL 1.1.1). First the good news: There is no need to take any action for 2.7 to 3.6. As of today, Python 2.7, 3.5, and 3.6 are using OpenSSL 1.0.2. Python 3.6.8 (2018-12-24) and 3.5.5 (2018-02-05) were the last regular update with binary packages. 3.5.6 is a source-only security release. 3.6.9 will be the first source-only security release of the 3.6 series. Python 2.7 will reach EOL just a day after OpenSSL 1.0.2 reaches EOL. IMHO it's fine to ship the last 2.7 build with an OpenSSL version that was EOLed just 24h earlier. Python 3.7 and master (3.8) are affected. As of now, both branches use OpenSSL 1.1.0 and must be updated to 1.1.1 soonish. Ned has scheduled 3.7.3 release for 2019-03-25. That's still well within the release schedule for 1.1.0. I suggest that we update to 1.1.1 directly after the release of Python 3.7.3 and target 3.7.4 as first builds with TLS 1.3 support. 
That gives Victor, Steve, and me enough time to sort out the remaining issues. In the worst case we could revert the update and postpone it to 3.7.5. Or we could disable TLS 1.3 support by default in Mac and Windows builds.

Christian

[1] https://www.openssl.org/policies/releasestrat.html

From vstinner at redhat.com  Tue Feb 26 10:43:42 2019
From: vstinner at redhat.com (Victor Stinner)
Date: Tue, 26 Feb 2019 16:43:42 +0100
Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing
In-Reply-To: <20190226125903.GA4712@xps>
References: <8AFF29B7-D3DB-4DE1-BAF7-CAE6F4017378@langa.pl> <20190226125903.GA4712@xps>
Message-ID:

Armin Rigo released https://pypi.org/project/cffi/1.12.2/ which is compatible with Python 3.8.0a2. The issue was related to the PyInterpreterState change:
https://bugs.python.org/issue35886#msg336501

Note: "[RELEASE] Python 3.8.0a1 is now available for testing": the correct version is 3.8.0a2 :-)

Victor

Le mar. 26 févr. 2019 à 14:02, Stephane Wirtel a écrit :
>
> Hi Łukasz,
>
> Thank you for your job.
>
> I have created a Merge Request for the docker image of Barry [1].
>
> I also filed an issue [2] for brotlipy (used by httpbin and requests).
> The problem is with PyInterpreterState.
>
> Via Twitter, I have proposed to the community to fix the issue [2].
>
> [1]: https://gitlab.com/python-devs/ci-images/merge_requests/7
> [2]: https://github.com/python-hyper/brotlipy/issues/147
>
> Thanks again for your job.
>
> Cheers,
>
> Stéphane
>
> --
> Stéphane Wirtel - https://wirtel.be - @matrixise

--
Night gathers, and now my watch begins. It shall not end until my death.
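For the OpenSSL discussion above: which OpenSSL an interpreter was linked against, and whether TLS 1.3 is available, can be checked from the stdlib `ssl` module (the printed values naturally vary by build):

```python
import ssl

# Version string and tuple of the OpenSSL the interpreter was built with.
print(ssl.OPENSSL_VERSION)        # e.g. "OpenSSL 1.1.1b  26 Feb 2019"
print(ssl.OPENSSL_VERSION_INFO)

# True when the linked OpenSSL exposes TLS 1.3 (OpenSSL 1.1.1 or newer).
print(ssl.HAS_TLSv1_3)
```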
From aratzml at opendeusto.es  Tue Feb 26 08:12:37 2019
From: aratzml at opendeusto.es (Aratz Manterola Lasa)
Date: Tue, 26 Feb 2019 14:12:37 +0100
Subject: [Python-Dev] New member
Message-ID:

I do not know if I should be doing this, maybe I have been cheated, but this is my introduction:

Hello, my name is Aratz. I am a Computer Engineering Bachelor's (no, not science) student in my 3rd year. I love Python's grammatical efficiency, but most of all, how much I learn thanks to its community. I want to learn more, and if it is possible, from the best ones. So... I ended up here.

Thanks for your attention.

From songofacandy at gmail.com  Tue Feb 26 11:33:28 2019
From: songofacandy at gmail.com (INADA Naoki)
Date: Wed, 27 Feb 2019 01:33:28 +0900
Subject: [Python-Dev] Compact ordered set
In-Reply-To:
References:
Message-ID:

On Wed, Feb 27, 2019 at 12:37 AM Victor Stinner wrote:
>
> Le mar. 26 févr. 2019 à 12:33, INADA Naoki a écrit :
> > - unpickle_list: 8.48 us +- 0.09 us -> 12.8 us +- 0.5 us: 1.52x slower (+52%)
> > ...
> > unpickle and unpickle_list shows massive slowdown. I suspect this slowdown
> > is not caused from set change. Linux perf shows many pagefault is happened
> > in pymalloc_malloc. I think memory usage changes hit weak point of pymalloc
> > accidentally. I will try to investigate it.
>
> Please contact me to get access to speed.python.org server. *Maybe*
> your process to run benchmarks is not reliable and you are getting
> "noise" in results.

My company gives me a dedicated Linux machine with a Core(TM) i7-6700, so I think it's not an issue with my machine.

perf shows this line caused many page faults:
https://github.com/python/cpython/blob/c606a9cbd48f69d3f4a09204c781dda9864218b7/Objects/obmalloc.c#L1513

This line is executed when pymalloc can't reuse an existing pool and uses a new pool. So I suspect there is some weak point in pymalloc, and adding more hysteresis may help it.
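Page-fault counts like the ones perf reported can also be read from inside the process on POSIX systems via `getrusage`. A small sketch (Linux/macOS only; the allocation sizes are arbitrary):

```python
import resource

def minor_faults():
    """Minor page faults incurred by this process so far (POSIX only)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_minflt

before = minor_faults()
# Touch several MB of fresh memory so the allocator has to map new pages.
junk = [bytearray(4096) for _ in range(2048)]
after = minor_faults()
print(f"page faults while allocating: {after - before}")
```

Comparing these counters around a workload, as perf does externally, helps confirm whether a slowdown is allocator page mapping rather than the code change itself.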
But I'm not sure yet. I'll investigate it later. If you want to reproduce it, try this commit. https://github.com/methane/cpython/pull/16/commits/3178dc96305435c691af83515b2e4725ab6eb826 Ah, another interesting point: this huge slowdown happens only when bm_pickle.py is executed through pyperformance. When I run it directly, the slowdown is not so large. So I think this issue is tightly coupled with how pages are mapped. $ ./python -m performance.benchmarks.bm_pickle --compare-to ./py-master unpickle py-master: ..................... 27.7 us +- 1.8 us python: ..................... 28.7 us +- 2.5 us Mean +- std dev: [py-master] 27.7 us +- 1.8 us -> [python] 28.7 us +- 2.5 us: 1.04x slower (+4%) > > > On the other hand, meteor_contest shows a 13% speedup. It uses set. > > The others don't show significant performance changes. > > I recall that some benchmarks are unstable and depend a lot on how you > run the benchmark, how Python is compiled (ex: PGO or not). From reading the bm_meteor_contest.py source, it uses frozenset heavily. So I think this is a real performance gain. Anyway, pyperformance is not perfect and doesn't cover all set workloads. I need to write more benchmarks. -- INADA Naoki From songofacandy at gmail.com Tue Feb 26 11:59:16 2019 From: songofacandy at gmail.com (INADA Naoki) Date: Wed, 27 Feb 2019 01:59:16 +0900 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> Message-ID: > > With *Homebrew*, `python` points to Homebrew's Python 2.7.x (if > installed) otherwise the macOS system Python. That's exactly according > to the PEP. They tried to switch python to python3 before, and got > rather nasty backlash citing PEP 394. I assume they will follow the PEP > quite strictly from now on. > I want to add a note here.
When Homebrew switched python -> python3, node-gyp was broken. It is a very widely used tool for web developers. Since Google was very lazy about adding Python 3 support to gyp, node-gyp couldn't support Python 3 for a long time. But this situation is changing. Google added Python 3 support to gyp, and the node-gyp project is working on Python 3 support now. I think keeping PEP 394 as-is until node-gyp officially supports Python 3 would help many web developers. -- INADA Naoki From raymond.hettinger at gmail.com Tue Feb 26 12:32:24 2019 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Tue, 26 Feb 2019 09:32:24 -0800 Subject: [Python-Dev] Compact ordered set In-Reply-To: References: Message-ID: > On Feb 26, 2019, at 3:30 AM, INADA Naoki wrote: > > I'm working on a compact and ordered set implementation. > It has an internal data structure similar to the new dict from Python 3.6. I've looked at this as well. Some thoughts: * Set objects have a different and conflicting optimization that works better for a broad range of use cases. In particular, there is a linear probing search step that gives excellent cache performance (multiple entries retrieved in a single cache line) and it reduces the cost of finding the next entry to a single increment (entry++). This greatly reduces the cost of collisions and makes it cheaper to verify an item is not in a set. * The technique for compaction involves making the key/hash entry array dense and augmenting it with a sparse array of indices. This necessarily involves adding a layer of indirection for every probe. * With the cache misses, branching costs, and extra layer of indirection, collisions would stop being cheap, so we would need to work to avoid them altogether.
To get anything like the current performance for a collision of the first probe, I suspect we would have to lower the table density down from 60% to 25%. * The intersection operation has an important optimization where it loops over the smaller of its two inputs. To give a guaranteed order that preserves the order of the first input, you would have to forgo this optimization, possibly crippling any existing code that depends on it. * Maintaining order in the face of deletions adds a new workload to sets that didn't exist before. You risk thrashing the set to support a feature that hasn't been asked for and that isn't warranted mathematically (where the notion of sets is unordered). * It takes a lot of care and planning to avoid fooling yourself with benchmarks on sets. Anything done with a small tight loop will tend to hide all branch prediction costs and cache miss costs, both of which are significant in real world uses of sets. * For sets, we care much more about look-up performance than space. And unlike dicts, where we usually expect to find a key, sets are all about checking membership, which means they have to be balanced for the case where the key is not in the set. * Having and preserving order is one of the least important things a set can offer (it does have some value, but it is the least important feature, one that was never contemplated by the original set PEP). After the success of the compact dict, I can understand an almost irresistible urge to apply the same technique to sets. If it were clear that it was a win, I would have already done it long ago, even before dicts (it was much harder to get buy-in to changing the dicts). Please temper the enthusiasm with rationality and caution. The existing setobject code has been finely tuned and micro-optimized over the years, giving it excellent performance on workloads we care about. It would be easy to throw all of that away.
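The dense-entries-plus-sparse-index layout discussed in this thread can be sketched in pure Python. This is a toy illustration only, not CPython's C implementation: the class name and the fixed 8-slot table are invented, there is no resizing, and real code would compare stored hashes before keys. It does show the extra indirection Raymond mentions: every probe goes through `indices` before a key comparison can happen.

```python
# Toy "compact ordered set": a sparse table of indices pointing into a
# dense, insertion-ordered entry array (illustration only; no resizing).
class CompactSet:
    def __init__(self):
        self.indices = [None] * 8   # sparse: slot -> position in entries
        self.entries = []           # dense: (hash, key) in insertion order

    def _probe(self, key):
        # Open addressing with linear probing.  Note the extra
        # indirection: each probe reads indices[i], then entries[idx].
        mask = len(self.indices) - 1
        i = hash(key) & mask
        while True:
            idx = self.indices[i]
            if idx is None or self.entries[idx][1] == key:
                return i
            i = (i + 1) & mask      # step to the next slot

    def add(self, key):
        i = self._probe(key)
        if self.indices[i] is None:
            self.indices[i] = len(self.entries)
            self.entries.append((hash(key), key))

    def __contains__(self, key):
        return self.indices[self._probe(key)] is not None

    def __iter__(self):
        # Iteration walks the dense array, so insertion order falls out
        # almost for free -- this is the cheap part of the design.
        return (key for _, key in self.entries)
```

Iteration over the dense `entries` list is what makes ordering nearly free; the lookup path, by contrast, pays one extra memory access per probe compared to a flat hash table, which is the trade-off being debated here.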
Raymond From barry at python.org Tue Feb 26 12:54:47 2019 From: barry at python.org (Barry Warsaw) Date: Tue, 26 Feb 2019 09:54:47 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> Message-ID: <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> > There haven't been many new ideas since this summary -- mostly it was explaining and re-hashing what's been mentioned before. Thanks for the summary Petr. Here's another way to think about the problem. I know Nick and I have talked about this before, but I don't think any distros have actually done this, though I've been out of that business a while now so correct me if I'm wrong. I see this question as having several parts, and the conflation of them is part of the reason why the unversioned `python` command is so problematic. Python is used for: * OS functionality * to run applications that aren't critical to the OS but are delivered on the OS * as the entry point to the interactive interpreter * to run applications written and deployed on the OS but completely outside of it Which `python` are we trying to change? All of them? For OS functionality, there should probably be a separate command not conflated with /usr/bin/python. The OS can make any adjustments it needs, calling it `spython` (as I think Nick once suggested), or whatever. Nobody but OS maintainers cares what this is called or what version of Python it exposes. I strongly believe that (eventually) the interactive interpreter command should be /usr/bin/python and that this should point to Python 3, since this provides the best experience for beginners, dabblers, etc. So what about the other two use cases?
Well, for applications packaged within the OS that aren't critical to it, I think they should always use the versioned shebang, never the unversioned shebang. Distros can control this, so that transition should be easier. The tricky part then seems to me what to do for 3rd parties which are using the distro Python in their shebangs. Nobody sees their code but them, and changing the shebang out from under them could cause their code to break. But don't they already take lots of precautions and planning for any OS upgrade? Changing the shebang for Python 2 would be just one of the things they'd have to worry about in an OS upgrade. I don't know whether this analysis is complete or correct, but perhaps it helps inform a way forward on PEP 394. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From nas-python at arctrix.com Tue Feb 26 14:35:12 2019 From: nas-python at arctrix.com (Neil Schemenauer) Date: Tue, 26 Feb 2019 13:35:12 -0600 Subject: [Python-Dev] Possible performance regression In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> Message-ID: <20190226193512.m5wzrecaxt3yfm7l@python.ca> On 2019-02-25, Eric Snow wrote: > So it looks like commit ef4ac967 is not responsible for a performance > regression. I did a bit of exploration myself and that was my conclusion as well. Perhaps others would be interested in how to use "perf", so I did a little write-up: https://discuss.python.org/t/profiling-cpython-with-perf/940 To me, it looks like using a register-based VM could produce a pretty decent speedup. Research project for someone.
;-) Regards, Neil From wes.turner at gmail.com Tue Feb 26 15:31:12 2019 From: wes.turner at gmail.com (Wes Turner) Date: Tue, 26 Feb 2019 15:31:12 -0500 Subject: [Python-Dev] OpenSSL 1.1.1 update for 3.7/3.8 In-Reply-To: References: Message-ID: > IMHO it's fine to ship the last 2.7 build with an OpenSSL version that was EOLed just 24h earlier. Is this a time / cost issue or a branch policy issue? If someone were to backport the forthcoming 1.1.1 to 2.7 significantly before the EOL date, could that be merged? There are all sorts of e.g. legacy academic works that'll never be upgraded, etc. On Tuesday, February 26, 2019, Christian Heimes wrote: > Hi, > > today's OpenSSL release of 1.0.2r and 1.1.1b reminded me of OpenSSL's > release strategy [1]. OpenSSL 1.0.2 will reach EOL on 2019-12-31, 1.1.0 > will reach EOL on 2019-09-11 (one year after release of OpenSSL 1.1.1). > > First the good news: There is no need to take any action for 2.7 to 3.6. > As of today, Python 2.7, 3.5, and 3.6 are using OpenSSL 1.0.2. Python > 3.6.8 (2018-12-24) and 3.5.5 (2018-02-05) were the last regular updates > with binary packages. 3.5.6 is a source-only security release. 3.6.9 > will be the first source-only security release of the 3.6 series. Python > 2.7 will reach EOL just a day after OpenSSL 1.0.2 reaches EOL. IMHO it's > fine to ship the last 2.7 build with an OpenSSL version that was EOLed > just 24h earlier. > > Python 3.7 and master (3.8) are affected. As of now, both branches use > OpenSSL 1.1.0 and must be updated to 1.1.1 soonish. Ned has scheduled > the 3.7.3 release for 2019-03-25. That's still well within the release > schedule for 1.1.0. I suggest that we update to 1.1.1 directly after the > release of Python 3.7.3 and target 3.7.4 as the first builds with TLS 1.3 > support. That gives Victor, Steve, and me enough time to sort out the > remaining issues. > > In the worst case we could revert the update and postpone it to > 3.7.5.
Or we disable TLS 1.3 support by default in Mac and Windows builds. > > Christian > > [1] https://www.openssl.org/policies/releasestrat.html > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > wes.turner%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vstinner at redhat.com Tue Feb 26 15:36:16 2019 From: vstinner at redhat.com (Victor Stinner) Date: Tue, 26 Feb 2019 21:36:16 +0100 Subject: [Python-Dev] Possible performance regression In-Reply-To: <20190226193512.m5wzrecaxt3yfm7l@python.ca> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> Message-ID: I made an attempt once and it was faster: https://faster-cpython.readthedocs.io/registervm.html But I had bugs and I didn't know how to implement a compiler correctly. Victor Le mardi 26 février 2019, Neil Schemenauer a écrit : > On 2019-02-25, Eric Snow wrote: >> So it looks like commit ef4ac967 is not responsible for a performance >> regression. > > I did a bit of exploration myself and that was my conclusion as > well. Perhaps others would be interested in how to use "perf" so I > did a little write up: > > https://discuss.python.org/t/profiling-cpython-with-perf/940 > > To me, it looks like using a register based VM could produce a > pretty decent speedup. Research project for someone. ;-) > > Regards, > > Neil > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com > -- Night gathers, and now my watch begins. It shall not end until my death.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nas-python at python.ca Tue Feb 26 15:58:41 2019 From: nas-python at python.ca (Neil Schemenauer) Date: Tue, 26 Feb 2019 14:58:41 -0600 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> Message-ID: <20190226205841.wo6a6bc65igmxpk3@python.ca> On 2019-02-26, Victor Stinner wrote: > I made an attempt once and it was faster: > https://faster-cpython.readthedocs.io/registervm.html Interesting. I don't think I have seen that before. Were you aware of "Rattlesnake" before you started on that? It seems your approach is similar. Probably not, because I don't think it is easy to find. I uploaded a tarfile I had on my PC to my web site: http://python.ca/nas/python/rattlesnake20010813/ It seems his name doesn't appear in the readme or source, but I think Rattlesnake was Skip Montanaro's project. I suppose my idea of unifying the local variables and the registers could have come from Rattlesnake. Very little new in the world. ;-P Cheers, Neil From raymond.hettinger at gmail.com Tue Feb 26 16:02:36 2019 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Tue, 26 Feb 2019 13:02:36 -0800 Subject: [Python-Dev] Compact ordered set In-Reply-To: References: Message-ID: Quick summary of what I found when I last ran experiments with this idea: * To get the same lookup performance, the density of the index table would need to go down to around 25%. Otherwise, there's no way to make up for the extra indirection and the loss of cache locality. * There was a small win on iteration performance because it's cheaper to loop over a dense array than a sparse array (fewer memory accesses and elimination of the unpredictable branch).
This is nice because iteration performance matters in some key use cases. * I gave up on ordering right away. If we care about performance, keys can be stored in the order added; but no effort should be expended to maintain order if subsequent deletions occur. Likewise, to keep set-to-set operations efficient (i.e. looping over the smaller input), no order guarantee should be given for those operations. In general, we can let order happen, but we should not guarantee it, work to maintain it, or slow down essential operations to make them ordered. * Compacting does make sets a little smaller but does cost an indirection and incurs a cost for switching index sizes between 1-byte arrays, 2-byte arrays, 4-byte arrays, and 8-byte arrays. Those don't seem very expensive; however, set lookups are already very cheap when the hash values are known (when they're not, the cost of computing the hash value tends to dominate anything done by the setobject itself). * I couldn't find any existing application that would notice the benefit of making sets a bit smaller. Most applications use dictionaries (directly or indirectly) everywhere, so compaction there was an overall win. Sets tend to be used more sparsely (no pun intended) and tend to be only a small part of overall memory usage. I had to consider this when bumping the load factor down to 60%, prioritizing speed over space. Raymond From greg at krypto.org Tue Feb 26 16:05:17 2019 From: greg at krypto.org (Gregory P. Smith) Date: Tue, 26 Feb 2019 13:05:17 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> Message-ID: On Tue, Feb 26, 2019 at 8:59 AM INADA Naoki wrote: > > > > With *Homebrew*, `python` points to Homebrew's Python 2.7.x (if > > installed) otherwise the macOS system Python. That's exactly according > > to the PEP.
They tried to switch python to python3 before, and got > > rather nasty backlash citing PEP 394. I assume they will follow the PEP > > quite strictly from now on. > > > > I want to add a note here. > When Homebrew switched python -> python3, node-gyp was broken. > It is a very widely used tool for web developers. > > Since Google was very lazy about adding Python 3 support to gyp, node-gyp > couldn't support Python 3 for a long time. > > But this situation is changing. Google added Python 3 support to gyp, and > the node-gyp project is working on Python 3 support now. > > I think keeping PEP 394 as-is until node-gyp officially supports Python 3 > would help many web developers. > In practice, does what /usr/bin/python points to even matter to node-gyp? I'd *hope* that it would refer to /usr/bin/python2.7... (does anyone use something as modern as node.js on a system with python 2 but without /usr/bin/python2.7? [i probably don't want to know the answer to that...]) node-gyp's got a great issue number for this: https://github.com/nodejs/node-gyp/issues/1337 -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Tue Feb 26 16:20:30 2019 From: greg at krypto.org (Gregory P. Smith) Date: Tue, 26 Feb 2019 13:20:30 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> Message-ID: On Tue, Feb 26, 2019 at 9:55 AM Barry Warsaw wrote: > > There haven't been many new ideas since this summary -- mostly it was > explaining and re-hashing what's been mentioned before. > > Thanks for the summary Petr. > > Here's another way to think about the problem.
I know Nick and I have > talked about this before, but I don't think any distros have actually done > this, though I've been out of that business a while now so correct me if > I'm wrong. > > I see this question as having several parts, and the conflation of them is > part of the reason why the unversioned `python` command is so problematic. > Python is used for: > > * OS functionality > * to run applications that aren't critical to the OS but are delivered on > the OS > * as the entry point to the interactive interpreter > * to run applications written and deployed on the OS but completely > outside of it > > Which `python` are we trying to change? All of them? > > For OS functionality, there should probably be a separate command not > conflated with /usr/bin/python. The OS can make any adjustments it needs, > calling it `spython` (as I think Nick once suggested), or whatever. Nobody > but OS maintainers cares what this is called or what version of Python it > exposes. > > I strongly believe that (eventually) the interactive interpreter command > should be /usr/bin/python and that this should point to Python 3, since > this provides the best experience for beginners, dabblers, etc. > > So what about the other two use cases? Well, for applications packaged > within the OS that aren't critical to it, I think they should always use the > versioned shebang, never the unversioned shebang. Distros can control > this, so that transition should be easier. > > The tricky part then seems to me what to do for 3rd parties which are > using the distro Python in their shebangs. Nobody sees their code but > them, and changing the shebang out from under them could cause their code > to break. But don't they already take lots of precautions and planning for > any OS upgrade? Changing the shebang for Python 2 would be just one of the > things they'd have to worry about in an OS upgrade. > A feature that *I* find missing from posix-y OSes that support #!
lines is an ability to restrict what can use a given interpreter. For an OS distro provided interpreter, being able to restrict its use to only OS distro provided software would be ideal (so ideal that people who haven't learned the hard distro maintenance lessons may hate me for it). Such a restriction could be implemented within the interpreter itself. For example: say that only this set of fully qualified path whitelisted .py files are allowed to invoke it, with no interactive, stdin, or command line "-c" use allowed. I'm not aware of anyone actually having done that. It's hard to see how to do that in a *maintainable* manner that people using many distros wouldn't just naively work around by adding themselves to the whitelist rather than providing their own interpreter for their own software stack. It feels more doable without workarounds for something like macOS or any other distro wholly controlled and maintained as a single set of software rather than widely varying packages. Solving that is way outside the scope of PEP 394. Just food for thought that I'd like to leave as an earworm for the future for distro minded folks. I expect some people to hate this idea. -gps > > I don't know whether this analysis is complete or correct, but perhaps it > helps inform a way forward on PEP 394. > > Cheers, > -Barry > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From encukou at gmail.com Tue Feb 26 16:34:50 2019 From: encukou at gmail.com (Petr Viktorin) Date: Tue, 26 Feb 2019 22:34:50 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> Message-ID: <57a58939-21a7-286c-09bb-e5a04172d06e@gmail.com> On 2/26/19 6:54 PM, Barry Warsaw wrote: >> There haven't been many new ideas since this summary -- mostly it was explaining and re-hashing what's been mentioned before. > > Thanks for the summary Petr. > > Here's another way to think about the problem. I know Nick and I have talked about this before, but I don't think any distros have actually done this, though I've been out of that business a while now so correct me if I'm wrong. > > I see this question as having several parts, and the conflation of them is part of the reason why the unversioned `python` command is so problematic. Python is used for: > > * OS functionality > * to run applications that aren't critical to the OS but are delivered on the OS > * as the entry point to the interactive interpreter > * to run applications written and deployed on the OS but completely outside of it > > Which `python` are we trying to change? All of them? > > For OS functionality, there should probably be a separate command not conflated with /usr/bin/python. The OS can make any adjustments it needs, calling it `spython` (as I think Nick once suggested), or whatever. Nobody but OS maintainers cares what this is called or what version of Python it exposes. Yup. RHEL 8 actually has exactly that. (It's called /usr/libexec/platform-python; please don't use it!) Fedora (and most other distros) makes this the same as the interpreter for other packaged software.
For Fedora, the main reason is that we don't want to maintain two full separate Python stacks. > I strongly believe that (eventually) the interactive interpreter command should be /usr/bin/python and that this should point to Python 3, since this provides the best experience for beginners, dabblers, etc. +1 > So what about the other two use cases? Well, for applications packaged within the OS that aren't critical to it, I think they should always use the versioned shebang, never the unversioned shebang. Distros can control this, so that transition should be easier. +1 > The tricky part then seems to me what to do for 3rd parties which are using the distro Python in their shebangs. Nobody sees their code but them, and changing the shebang out from under them could cause their code to break. But don't they already take lots of precautions and planning for any OS upgrade? Changing the shebang for Python 2 would be just one of the things they'd have to worry about in an OS upgrade. Also, things will break for them anyway, it's just a matter of time. Python 2 *is* going away, eventually. (Right?) I don't think we're doing that many people a favor by keeping /usr/bin/python -> python2 around. Instead, we're *hiding* the problem from them. Too many people think python2 is still the "default". Making /usr/bin/python be missing for some time, rather than pointing it to python3 now, is the more explicit way to do the transition. > I don't know whether this analysis is complete or correct, but perhaps it helps inform a way forward on PEP 394. I have two very different questions in mind for moving this forward. Who gets to decide on PEP 394 changes? Since so many people on python-dev are in agreement, where do I go for opposing voices? From vstinner at redhat.com Tue Feb 26 16:34:46 2019 From: vstinner at redhat.com (Victor Stinner) Date: Tue, 26 Feb 2019 22:34:46 +0100 Subject: [Python-Dev] Compact ordered set In-Reply-To: References: Message-ID: Le mar. 26 févr. 2019 à
17:33, INADA Naoki a écrit : > My company gives me a dedicated Linux machine with a Core(TM) i7-6700. > So I think it's not an issue with my machine. Oh great :-) > perf shows this line caused many page faults. > https://github.com/python/cpython/blob/c606a9cbd48f69d3f4a09204c781dda9864218b7/Objects/obmalloc.c#L1513 > > This line is executed when pymalloc can't reuse an existing pool and uses a new pool. > So I suspect there is some weak point in pymalloc, and adding more hysteresis > may help it. But I'm not sure yet. I'll investigate it later. You might want to try PYTHONMALLOC=malloc to force the usage of system malloc() and so disable pymalloc. You might also try jemalloc with LD_PRELOAD and PYTHONMALLOC=malloc. Not sure if it helps :-) > Ah, another interesting point: this huge slowdown happens only when bm_pickle.py > is executed through pyperformance. When I run it directly, the slowdown is > not so large. pyperformance runs benchmarks in a virtual environment. I don't know if it has any impact on bm_pickle. Most pyperformance benchmarks can be run outside a virtual env if the required modules are installed on the system. (bm_pickle only requires the stdlib and perf.) Victor -- Night gathers, and now my watch begins. It shall not end until my death. From raymond.hettinger at gmail.com Tue Feb 26 16:38:45 2019 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Tue, 26 Feb 2019 13:38:45 -0800 Subject: [Python-Dev] Possible performance regression In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> Message-ID: <85C60B5A-A3CD-43F4-A38D-887576A01F8E@gmail.com> On Feb 25, 2019, at 8:23 PM, Eric Snow wrote: > > So it looks like commit ef4ac967 is not responsible for a performance > regression. I did narrow it down to that commit and I can consistently reproduce the timing differences.
That said, I'm only observing the effect when building with the Mac default Clang (Apple LLVM version 10.0.0, clang-1000.11.45.5). When building with GCC 8.3.0, there is no change in performance. I conclude this is only an issue for Mac builds. > I ran the "performance" suite (https://github.com/python/performance), > which has 57 different benchmarks. Many of those benchmarks don't measure eval-loop performance. Instead, they exercise json, pickle, sqlite etc. So, I would expect no change in many of those because they weren't touched. Victor said he generally doesn't care about 5% regressions. That makes sense for odd corners of Python. The reason I was concerned about this one is that it hits the eval-loop and seems to affect every single op code. The regression applies somewhat broadly, increasing the cost of reading and writing local variables by about 20%. That said, it seems to be compiler specific and only affects the Mac builds, so maybe we can decide that we don't care. Raymond From guido at python.org Tue Feb 26 16:42:29 2019 From: guido at python.org (Guido van Rossum) Date: Tue, 26 Feb 2019 13:42:29 -0800 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: <20190226205841.wo6a6bc65igmxpk3@python.ca> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> Message-ID: Yes, this should totally be attempted. All the stack manipulation opcodes could be dropped if we just made (nearly) everything use 3-address codes, e.g. ADD would take the names of three registers: left, right, and result. The compiler would keep track of which registers contain a live object (for reference counting), but that can't be much more complicated than checking for stack under- and over-flow.
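The three-address form described above can be illustrated with a toy interpreter. The opcode names and tuple encoding here are invented for the sketch and bear no relation to CPython's actual bytecode; each instruction simply names its destination and source registers, so no stack pushes or pops are needed:

```python
# Toy three-address register VM.  Each instruction is
# (opcode, dst, a, b): results go straight into a named register.
def run(code, registers):
    for op, dst, a, b in code:
        if op == "LOAD_CONST":
            registers[dst] = a                       # a is a literal
        elif op == "ADD":
            registers[dst] = registers[a] + registers[b]
        elif op == "MUL":
            registers[dst] = registers[a] * registers[b]
    return registers

# Compute r2 = r0 + r1, then r3 = r2 * r0.  Equivalent stack code
# would need extra LOADs to reuse r0, plus the pushes and pops.
regs = run(
    [("LOAD_CONST", 0, 7, None),
     ("LOAD_CONST", 1, 5, None),
     ("ADD", 2, 0, 1),
     ("MUL", 3, 2, 0)],
    [None] * 4,
)
assert regs[2] == 12 and regs[3] == 84
```

Note how the reuse of register 0 in the final MUL costs nothing, whereas a stack machine would have to re-push the value; that is the kind of saving the thread is discussing.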
Also, nothing new indeed -- my first computer (a control data cyber mainframe) had 3-address code. https://en.wikipedia.org/wiki/CDC_6600#Central_Processor_(CP) On Tue, Feb 26, 2019 at 1:01 PM Neil Schemenauer wrote: > On 2019-02-26, Victor Stinner wrote: > > I made an attempt once and it was faster: > > https://faster-cpython.readthedocs.io/registervm.html > > Interesting. I don't think I have seen that before. Were you aware > of "Rattlesnake" before you started on that? It seems your approach > is similar. Probably not because I don't think it is easy to find. > I uploaded a tarfile I had on my PC to my web site: > > http://python.ca/nas/python/rattlesnake20010813/ > > It seems his name doesn't appear in the readme or source but I think > Rattlesnake was Skip Montanaro's project. I suppose my idea of > unifying the local variables and the registers could have came from > Rattlesnake. Very little new in the world. ;-P > > Cheers, > > Neil > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From vstinner at redhat.com Tue Feb 26 16:42:52 2019 From: vstinner at redhat.com (Victor Stinner) Date: Tue, 26 Feb 2019 22:42:52 +0100 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: <20190226205841.wo6a6bc65igmxpk3@python.ca> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> Message-ID: No, I wasn't aware of this project. 
My starting point was: http://static.usenix.org/events/vee05/full_papers/p153-yunhe.pdf Yunhe Shi, David Gregg, Andrew Beatty, M. Anton Ertl, 2005 See also my email to python-dev that I sent in 2012: https://mail.python.org/pipermail/python-dev/2012-November/122777.html Ah, my main issue with my implementation was that I started without taking care of clearing registers when the stack-based bytecode implicitly cleared a reference (decref), like the "POP_TOP" operation. I added "CLEAR_REG" late in the development and it caused me trouble, and the "correct" register-based bytecode was less efficient than bytecode without CLEAR_REG. But my optimizer was very limited, too limited. Another implementation issue that I had was understanding some "implicit usage" of the stack, like try/except, which does black magic, whereas I wanted to make everything explicit for registers. I'm talking about things like "POP_BLOCK" and "SETUP_EXCEPT". In my implementation, I kept support for stack-based bytecode, and so I had some inefficient code and some corner cases. My approach was to convert stack-based bytecode to register-based bytecode on the fly. Having both in the same code allowed me to run some benchmarks. Maybe it wasn't the best approach, but I didn't feel able to write a real compiler (AST => bytecode). Victor On Tue, Feb 26, 2019 at 21:58, Neil Schemenauer wrote: > > On 2019-02-26, Victor Stinner wrote: > > I made an attempt once and it was faster: > > https://faster-cpython.readthedocs.io/registervm.html > > Interesting. I don't think I have seen that before. Were you aware > of "Rattlesnake" before you started on that? It seems your approach > is similar. Probably not, because I don't think it is easy to find. > I uploaded a tarfile I had on my PC to my web site: > > http://python.ca/nas/python/rattlesnake20010813/ > > It seems his name doesn't appear in the readme or source but I think > Rattlesnake was Skip Montanaro's project.
I suppose my idea of > unifying the local variables and the registers could have come from > Rattlesnake. Very little new in the world. ;-P > > Cheers, > > Neil -- Night gathers, and now my watch begins. It shall not end until my death. From christian at python.org Tue Feb 26 16:45:27 2019 From: christian at python.org (Christian Heimes) Date: Tue, 26 Feb 2019 22:45:27 +0100 Subject: [Python-Dev] OpenSSL 1.1.1 update for 3.7/3.8 In-Reply-To: References: Message-ID: <129ab7ef-fc1e-4f44-7a1e-e75b89f3def0@python.org> On 26/02/2019 21.31, Wes Turner wrote: >> IMHO it's > fine to ship the last 2.7 build with an OpenSSL version that was EOLed > just 24h earlier. > > Is this a time / cost issue or a branch policy issue? > > If someone were to backport the forthcoming 1.1.1 to 2.7 significantly > before the EOL date, could that be merged? My mail is about official binary Python packages for Windows and macOS. We stick to an OpenSSL version to guarantee maximum backwards compatibility within a minor release. OpenSSL 1.1.1 has TLS 1.3 support and prefers TLS 1.3 over TLS 1.2. There is a small chance that TLS 1.3 breaks some assumptions. Python 2.7 works mostly fine with OpenSSL 1.1.1. There are some minor test issues related to TLS 1.3 but nothing serious. Linux distros have been shipping Python 2.7 with OpenSSL 1.1.1 for a while. > There are all sorts of e.g. legacy academic works that'll never be > upgraded etc etc That topic is out of scope and has been discussed countless times.
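Whether a given interpreter's OpenSSL build exposes TLS 1.3 -- and hence whether the "prefers TLS 1.3" behavior can kick in -- can be probed from Python 3.7+; a sketch, with pinning `maximum_version` shown as one way code with TLS 1.2 assumptions can opt out:

```python
import ssl

# OPENSSL_VERSION reports the OpenSSL the interpreter was linked against;
# HAS_TLSv1_3 (added in Python 3.7) says whether that build supports TLS 1.3.
print(ssl.OPENSSL_VERSION)
print("TLS 1.3 supported:", ssl.HAS_TLSv1_3)

# Code whose assumptions break under TLS 1.3 can pin the ceiling to TLS 1.2
# on a context (the maximum_version setter needs OpenSSL 1.1.0g or newer).
ctx = ssl.create_default_context()
if ssl.HAS_TLSv1_3:
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
```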
From J.Demeyer at UGent.be Tue Feb 26 16:53:44 2019 From: J.Demeyer at UGent.be (Jeroen Demeyer) Date: Tue, 26 Feb 2019 22:53:44 +0100 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> Message-ID: <5C75B568.8060806@UGent.be> Let me just say that the code for METH_FASTCALL function/method calls is optimized for a stack layout: a piece of the stack is used directly for calling METH_FASTCALL functions (without copying any PyObject* pointers). So this would probably be slower with a register-based VM (which doesn't imply that it's a bad idea, it's just a single point to take into account). From vstinner at redhat.com Tue Feb 26 16:53:52 2019 From: vstinner at redhat.com (Victor Stinner) Date: Tue, 26 Feb 2019 22:53:52 +0100 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> Message-ID: Hum, I read again my old REGISTERVM.txt that I wrote a few years ago. A little bit more context. In my "registervm" fork I also tried to implement further optimizations like moving invariants out of the loop. Some optimizations could change the Python semantics, like removing "duplicated" LOAD_GLOBAL even though the global might be modified in the middle. I wanted to experiment with such optimizations. Maybe it was a bad idea to convert stack-based bytecode to register-based bytecode and to experiment with these optimizations at the same time. Victor On Tue, Feb 26, 2019 at 22:42, Victor Stinner wrote: > > No, I wasn't aware of this project.
My starting point was: > > http://static.usenix.org/events/vee05/full_papers/p153-yunhe.pdf > Yunhe Shi, David Gregg, Andrew Beatty, M. Anton Ertl, 2005 > > See also my email to python-dev that I sent in 2012: > https://mail.python.org/pipermail/python-dev/2012-November/122777.html > > Ah, my main issue with my implementation was that I started without > taking care of clearing registers when the stack-based bytecode > implicitly cleared a reference (decref), like the "POP_TOP" operation. > > I added "CLEAR_REG" late in the development and it caused me trouble, > and the "correct" register-based bytecode was less efficient than > bytecode without CLEAR_REG. But my optimizer was very limited, too > limited. > > Another implementation issue that I had was understanding some > "implicit usage" of the stack, like try/except, which does black magic, > whereas I wanted to make everything explicit for registers. I'm > talking about things like "POP_BLOCK" and "SETUP_EXCEPT". In my > implementation, I kept support for stack-based bytecode, and so I had > some inefficient code and some corner cases. > > My approach was to convert stack-based bytecode to register-based > bytecode on the fly. Having both in the same code allowed me to run > some benchmarks. Maybe it wasn't the best approach, but I didn't feel > able to write a real compiler (AST => bytecode). > > Victor > > On Tue, Feb 26, 2019 at 21:58, Neil Schemenauer wrote: > > > > On 2019-02-26, Victor Stinner wrote: > > > I made an attempt once and it was faster: > > > https://faster-cpython.readthedocs.io/registervm.html > > > > Interesting. I don't think I have seen that before. Were you aware > > of "Rattlesnake" before you started on that? It seems your approach > > is similar. Probably not, because I don't think it is easy to find.
> > I uploaded a tarfile I had on my PC to my web site: > > > > http://python.ca/nas/python/rattlesnake20010813/ > > > > It seems his name doesn't appear in the readme or source but I think > > Rattlesnake was Skip Montanaro's project. I suppose my idea of > > unifying the local variables and the registers could have come from > > Rattlesnake. Very little new in the world. ;-P > > > > Cheers, > > > > Neil > > > > -- > Night gathers, and now my watch begins. It shall not end until my death. -- Night gathers, and now my watch begins. It shall not end until my death. From vstinner at redhat.com Tue Feb 26 16:59:19 2019 From: vstinner at redhat.com (Victor Stinner) Date: Tue, 26 Feb 2019 22:59:19 +0100 Subject: [Python-Dev] Possible performance regression In-Reply-To: <85C60B5A-A3CD-43F4-A38D-887576A01F8E@gmail.com> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <85C60B5A-A3CD-43F4-A38D-887576A01F8E@gmail.com> Message-ID: On Tue, Feb 26, 2019 at 22:45, Raymond Hettinger wrote: > Victor said he generally doesn't care about 5% regressions. That makes sense for odd corners of Python. The reason I was concerned about this one is that it hits the eval-loop and seems to affect every single opcode. The regression applies somewhat broadly, increasing the cost of reading and writing local variables by about 20%. I ignore changes smaller than 5% because they are usually what I call the "noise" of the benchmark. It means that testing 3 commits gives 3 different timings, even if the commits don't touch anything used in the benchmark. There are multiple explanations: PGO compilation is not deterministic, some benchmarks are too close to the performance of the CPU L1-instruction cache and so are heavily impacted by "code locality" (the exact address in memory), and many other things.
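That run-to-run noise is easy to reproduce: time identical code several times and the samples differ even though nothing changed. A toy demonstration (real comparisons should use a harness such as pyperf with many processes, not this):

```python
import time
import statistics

def bench(n=200_000):
    """Time one run of a fixed workload."""
    t0 = time.perf_counter()
    x = 0
    for i in range(n):
        x += i
    return time.perf_counter() - t0

# Even with identical code, repeated runs differ; that spread is the
# "noise" a reported regression must clearly exceed to be believable.
samples = [bench() for _ in range(5)]
spread = (max(samples) - min(samples)) / statistics.median(samples)
print(f"median={statistics.median(samples):.6f}s spread={spread:.1%}")
```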
Hum, sometimes running the same benchmark on the same code on the same hardware with the same strict procedure gives different timings at each attempt. At some point, I decided to give up on these 5% to not lose my mind :-) Victor From nas-python at arctrix.com Tue Feb 26 17:04:18 2019 From: nas-python at arctrix.com (Neil Schemenauer) Date: Tue, 26 Feb 2019 16:04:18 -0600 Subject: [Python-Dev] Compile-time resolution of packages [Was: Another update for PEP 394...] In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> Message-ID: <20190226220418.b36jw33qthdv5i5l@python.ca> On 2019-02-26, Gregory P. Smith wrote: > On Tue, Feb 26, 2019 at 9:55 AM Barry Warsaw wrote: > For an OS distro provided interpreter, being able to restrict its use to > only OS distro provided software would be ideal (so ideal that people who > haven't learned the hard distro maintenance lessons may hate me for it). Interesting idea. I remember when I was helping develop Debian packaging guides for Python software. I had to fight with people to convince them that Debian packages should use #!/usr/bin/pythonX.Y rather than #!/usr/bin/env python The situation is much better now, but I still sometimes have packaged software fail because it picks up my version of /usr/local/bin/python. I don't understand how people can believe grabbing /usr/local/bin/python is going to be a way to build a reliable system. > Such a restriction could be implemented within the interpreter itself. For > example: Say that only this set of fully qualified path whitelisted .py > files are allowed to invoke it, with no interactive, stdin, or command line > "-c" use allowed. I think this is related to an idea I was tinkering with on the weekend. Why shouldn't we do more compile-time linkage of Python packages? At the least, I think we should give people the option to do it.
Obviously you still also need to support run-time import search (interactive REPL, support __import__(unknown_at_compiletime)). Here is the sketch of the idea (probably half-baked, as most of my ideas are): - add a PYTHONPACKAGES envvar and a -p option to 'python' - the argument for these options would be a colon-separated list of Python package archives (crates, bales, bundles?). The -p option could be a colon-separated list or provided multiple times to specify more packages. - the modules/packages contained in those archives become the preferred bytecode source when those names are imported. We look there first. The crawling-around behavior (dynamic import based on sys.path) happens only if a module is not found and could be turned off. - the linking of the modules could be computed when the code is compiled and the package archive created, rather than when the 'import' statement gets executed. This would provide a number of advantages. It would be faster. Code analysis tools could statically determine which module imported code corresponds to. E.g. if your code calls module.foo, assuming no monkey patching, you know what code 'foo' actually is. - to get extra fancy, the package archives could be dynamic link libraries containing "frozen modules" like this FB experiment: https://github.com/python/cpython/pull/9320 That way, you avoid the unmarshal step and just execute the module bytecode directly. On startup, Python would dlopen all of the package archives specified by PYTHONPACKAGES. On init, it would build an index of the package tree and it would have the memory location of the code object for each module. That would seem like quite a useful thing. For an application like Mercurial, they could build all the modules/packages required into a single package archive. Or, there would be a small number of archives (one for the standard Python library, one for everything else that Mercurial needs).
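The "look in the archive first, fall back to the sys.path crawl" lookup can be prototyped today with a meta path finder. Here the "archive" is just an in-memory dict of pre-compiled code objects -- a stand-in for the proposed (hypothetical) PYTHONPACKAGES bundles, which a real implementation would build at compile time and dlopen/mmap at startup:

```python
import sys
import importlib.abc
import importlib.util

# Hypothetical "package archive": module name -> pre-compiled code object.
ARCHIVE = {
    "demo_pkg_mod": compile("ANSWER = 42", "<archive:demo_pkg_mod>", "exec"),
}

class ArchiveLoader(importlib.abc.Loader):
    def create_module(self, spec):
        return None  # use the default module creation

    def exec_module(self, module):
        # Execute the already-compiled bytecode; no source parsing at import.
        exec(ARCHIVE[module.__name__], module.__dict__)

class ArchiveFinder(importlib.abc.MetaPathFinder):
    """Consulted before the path-based finders: 'we look there first'."""
    def find_spec(self, fullname, path, target=None):
        if fullname in ARCHIVE:
            return importlib.util.spec_from_loader(fullname, ArchiveLoader())
        return None  # fall back to the normal sys.path search

sys.meta_path.insert(0, ArchiveFinder())

import demo_pkg_mod
print(demo_pkg_mod.ANSWER)
```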
Now that I write this, it sounds a lot like the debate between static linking and dynamic linking. Golang does static linking and people seem to like the single-executable distribution. Regards, Neil From vstinner at redhat.com Tue Feb 26 17:08:07 2019 From: vstinner at redhat.com (Victor Stinner) Date: Tue, 26 Feb 2019 23:08:07 +0100 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: <20190226205841.wo6a6bc65igmxpk3@python.ca> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> Message-ID: On Tue, Feb 26, 2019 at 21:58, Neil Schemenauer wrote: > It seems his name doesn't appear in the readme or source but I think > Rattlesnake was Skip Montanaro's project. I suppose my idea of > unifying the local variables and the registers could have come from > Rattlesnake. Very little new in the world. ;-P In my implementation, constants, local variables and registers all live in the same array: frame.f_localsplus. Technically, there isn't much difference between a constant, a local variable or a register. It's just the disassembler which has to worry about displaying "R3" or "x" depending on the register index ;-) There was a LOAD_CONST_REG instruction in my implementation, but it was more to keep a smooth transition from the existing LOAD_CONST instruction. LOAD_CONST_REG could be avoided by passing the constant directly (ex: as a function argument).
For example, I compiled "range(2, n)" as: LOAD_CONST_REG R0, 2 (const#2) LOAD_GLOBAL_REG R1, 'range' (name#0) CALL_FUNCTION_REG 4, R1, R1, R0, 'n' Whereas it could be just: LOAD_GLOBAL_REG R1, 'range' (name#0) CALL_FUNCTION_REG 4, R1, R1, 2 (const#2), 'n' Compare it to stack-based bytecode: LOAD_GLOBAL 0 (range) LOAD_CONST 2 (const#2) LOAD_FAST 'n' CALL_FUNCTION 2 (2 positional, 0 keyword pair) Victor -- Night gathers, and now my watch begins. It shall not end until my death. From greg.ewing at canterbury.ac.nz Tue Feb 26 17:18:59 2019 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 27 Feb 2019 11:18:59 +1300 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: <5C75B568.8060806@UGent.be> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> <5C75B568.8060806@UGent.be> Message-ID: <5C75BB53.8020304@canterbury.ac.nz> Jeroen Demeyer wrote: > Let me just say that the code for METH_FASTCALL function/method calls is > optimized for a stack layout: a piece of the stack is used directly for > calling METH_FASTCALL functions We might be able to get some ideas for dealing with this kind of thing from register-window architectures such as the SPARC, where the registers containing the locals of a calling function become the input parameters to a called function. More generally, it's common to have a calling convention where the first N parameters are assumed to reside in a specific range of registers. If the compiler is smart enough, it can often arrange the evaluation of the parameter expressions so that the results end up in the right registers for making the call.
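The stack layout Jeroen refers to can be observed from pure Python: the arguments of a call are pushed contiguously right before the call opcode, so the callee can be handed a pointer into the stack with no per-argument copying. A `dis` illustration (opcode names vary across Python versions; this shows the layout, not the C fast path itself):

```python
import dis

def caller(x, y):
    return max(x, y)

ops = [ins.opname for ins in dis.get_instructions(caller)]
print(ops)

# The callable is loaded first, then both arguments are pushed one after
# another -- a contiguous slice of the value stack that a METH_FASTCALL-style
# callee can consume directly as (args, nargs).
call_index = next(i for i, name in enumerate(ops) if "CALL" in name)
loads_before_call = [name for name in ops[:call_index] if name.startswith("LOAD")]
```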
-- Greg From jjevnik at quantopian.com Tue Feb 26 17:10:34 2019 From: jjevnik at quantopian.com (Joe Jevnik) Date: Tue, 26 Feb 2019 17:10:34 -0500 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> Message-ID: METH_FASTCALL passing arguments on the stack doesn't necessarily mean it will be slow. In x86 there are calling conventions that read all the arguments from the stack, but the rest of the machine is register based. Python could also look at ABI calling conventions for inspiration, like x86-64 where arguments up to a fixed number are passed in registers and the rest are passed on the stack. One thing that I am wondering is whether Python would want to use a global set of registers and a global data stack, or continue to have a new data stack (and now registers) per call stack. If Python switched to a global stack and global registers we may be able to eliminate a lot of instructions that just shuffle data from the caller's stack to the callee's stack. On Tue, Feb 26, 2019 at 4:55 PM Victor Stinner wrote: > Hum, I read again my old REGISTERVM.txt that I wrote a few years ago. > > A little bit more context. In my "registervm" fork I also tried to > implement further optimizations like moving invariants out of the > loop. Some optimizations could change the Python semantics, like > removing "duplicated" LOAD_GLOBAL even though the global might be modified > in the middle. I wanted to experiment with such optimizations. Maybe it was > a bad idea to convert stack-based bytecode to register-based bytecode > and to experiment with these optimizations at the same time. > > Victor > > On Tue, Feb 26, 2019 at 22:42, Victor Stinner > wrote: > > > > No, I wasn't aware of this project.
My starting point was: > > > > http://static.usenix.org/events/vee05/full_papers/p153-yunhe.pdf > > Yunhe Shi, David Gregg, Andrew Beatty, M. Anton Ertl, 2005 > > > > See also my email to python-dev that I sent in 2012: > > https://mail.python.org/pipermail/python-dev/2012-November/122777.html > > > > Ah, my main issue with my implementation was that I started without > > taking care of clearing registers when the stack-based bytecode > > implicitly cleared a reference (decref), like the "POP_TOP" operation. > > > > I added "CLEAR_REG" late in the development and it caused me trouble, > > and the "correct" register-based bytecode was less efficient than > > bytecode without CLEAR_REG. But my optimizer was very limited, too > > limited. > > > > Another implementation issue that I had was understanding some > > "implicit usage" of the stack, like try/except, which does black magic, > > whereas I wanted to make everything explicit for registers. I'm > > talking about things like "POP_BLOCK" and "SETUP_EXCEPT". In my > > implementation, I kept support for stack-based bytecode, and so I had > > some inefficient code and some corner cases. > > > > My approach was to convert stack-based bytecode to register-based > > bytecode on the fly. Having both in the same code allowed me to run > > some benchmarks. Maybe it wasn't the best approach, but I didn't feel > > able to write a real compiler (AST => bytecode). > > > > Victor > > > > On Tue, Feb 26, 2019 at 21:58, Neil Schemenauer wrote: > > > > > > On 2019-02-26, Victor Stinner wrote: > > > > I made an attempt once and it was faster: > > > > https://faster-cpython.readthedocs.io/registervm.html > > > > > > Interesting. I don't think I have seen that before. Were you aware > > > of "Rattlesnake" before you started on that? It seems your approach > > > is similar. Probably not, because I don't think it is easy to find.
> > > I uploaded a tarfile I had on my PC to my web site: > > > > > > http://python.ca/nas/python/rattlesnake20010813/ > > > > > > It seems his name doesn't appear in the readme or source but I think > > > Rattlesnake was Skip Montanaro's project. I suppose my idea of > > > unifying the local variables and the registers could have come from > > > Rattlesnake. Very little new in the world. ;-P > > > > > > Cheers, > > > > > > Neil > > > > > > > > -- > > Night gathers, and now my watch begins. It shall not end until my death. > > > > -- > Night gathers, and now my watch begins. It shall not end until my death. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/joe%40quantopian.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vstinner at redhat.com Tue Feb 26 17:27:58 2019 From: vstinner at redhat.com (Victor Stinner) Date: Tue, 26 Feb 2019 23:27:58 +0100 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> Message-ID: On Tue, Feb 26, 2019 at 22:24, Gregory P. Smith wrote: > A feature that I find missing from posix-y OSes that support #! lines is an ability to restrict what can use a given interpreter. Fedora runs system tools (like "/usr/bin/semanage", the tool to manage SELinux) with "python3 -Es": $ head /usr/sbin/semanage #! /usr/bin/python3 -Es -E: ignore PYTHON* environment variables (such as PYTHONPATH) -s: don't add the user site directory to sys.path Is that what you mean? > Such a restriction could be implemented within the interpreter itself.
For example: Say that only this set of fully qualified path whitelisted .py files are allowed to invoke it, with no interactive, stdin, or command line "-c" use allowed. I'm not aware of anyone actually having done that. It's hard to see how to do that in a maintainable manner that people using many distros wouldn't just naively work around by adding themselves to the whitelist rather than providing their own interpreter for their own software stack. It feels more doable without workarounds for something like macOS or any other distro wholly controlled and maintained as a single set of software rather than widely varying packages. Technically, Python initialization is highly customizable: see _PyCoreConfig in Include/coreconfig.h. But we lack a public API for that :-) https://www.python.org/dev/peps/pep-0432/ is a work-in-progress. With a proper public API, building your own interpreter would take a few lines of C to give you fine control over what Python can and cannot do. Extract of Programs/_freeze_importlib.c (to give you an idea of what can be done): --- _PyCoreConfig config = _PyCoreConfig_INIT; config.user_site_directory = 0; config.site_import = 0; config.use_environment = 0; config.program_name = L"./_freeze_importlib"; /* Don't install importlib, since it could execute outdated bytecode. */ config._install_importlib = 0; config._frozen = 1; _PyInitError err = _Py_InitializeFromConfig(&config); --- As Petr wrote below, RHEL 8 has a private /usr/libexec/platform-python which is the Python used to run system tools (written in Python). But this Python isn't customized. I'm not sure that there is a strong need to customize the default Python configuration for this interpreter. Note: Sorry to hijack again this thread with unrelated discussions :-( Victor -- Night gathers, and now my watch begins. It shall not end until my death.
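The effect of the `-Es` switches Victor mentions can be verified from Python itself via `sys.flags` (this probes whichever interpreter runs the snippet):

```python
import subprocess
import sys

# -E sets sys.flags.ignore_environment, -s sets sys.flags.no_user_site.
out = subprocess.run(
    [sys.executable, "-Es", "-c",
     "import sys; print(sys.flags.ignore_environment, sys.flags.no_user_site)"],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())  # -> 1 1
```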
From nas-python at python.ca Tue Feb 26 17:28:14 2019 From: nas-python at python.ca (Neil Schemenauer) Date: Tue, 26 Feb 2019 16:28:14 -0600 Subject: [Python-Dev] Possible performance regression In-Reply-To: <85C60B5A-A3CD-43F4-A38D-887576A01F8E@gmail.com> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <85C60B5A-A3CD-43F4-A38D-887576A01F8E@gmail.com> Message-ID: <20190226222814.ggzf6q3nk36bfzwx@python.ca> On 2019-02-26, Raymond Hettinger wrote: > That said, I'm only observing the effect when building with the > Mac default Clang (Apple LLVM version 10.0.0, clang-1000.11.45.5). > When building with GCC 8.3.0, there is no change in performance. My guess is that the code in _PyEval_EvalFrameDefault() got changed enough that Clang started emitting a bit different machine code. If the conditional jumps are a bit different, I understand that could make a significant difference to performance. Are you compiling with --enable-optimizations (i.e. PGO)? In my experience, that is needed to get meaningful results. Victor also mentions that on his "how-to-get-stable-benchmarks" page. Building with PGO is really (really) slow, so I suspect you are not doing it when bisecting. You can speed it up greatly by using a simpler command for PROFILE_TASK in Makefile.pre.in. E.g. PROFILE_TASK=$(srcdir)/my_benchmark.py Now that you have narrowed it down to a single commit, it would be worth doing the comparison with PGO builds (assuming Clang supports that). > That said, it seems to be compiler specific and only affects the > Mac builds, so maybe we can decide that we don't care. I think the key question is whether the ceval loop got a bit slower due to logic changes or whether Clang just happened to generate a bit worse code due to source code details. A PGO build could help answer that. I suppose trying to compare machine code is going to produce too large of a diff.
Could you try hoisting the eval_breaker expression, as suggested by Antoine: https://discuss.python.org/t/profiling-cpython-with-perf/940/2 If you think the slowdown affects most opcodes, the DISPATCH change looks like the only cause. Maybe I missed something though. Also, maybe there would be some value in marking key branches as likely/unlikely if it helps Clang generate better machine code. Then, even if you compile without PGO (as many people do), you still get the better machine code. Regards, Neil From steve.dower at python.org Tue Feb 26 17:29:18 2019 From: steve.dower at python.org (Steve Dower) Date: Tue, 26 Feb 2019 14:29:18 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> Message-ID: <32efee43-34cf-9850-09e0-01b258cc2d22@python.org> On 2/26/2019 1:20 PM, Gregory P. Smith wrote: > For an OS distro provided interpreter, being able to restrict its use to > only OS distro provided software would be ideal (so ideal that people > who haven't learned the hard distro maintenance lessons may hate me for it). > > Such a restriction could be implemented within the interpreter itself. > For example: Say that only this set of fully qualified path whitelisted > .py files are allowed to invoke it, with no interactive, stdin, or > command line "-c" use allowed. I'm not aware of anyone actually having > done that. It's hard to see how to do that in a /maintainable/ manner > that people using many distros wouldn't just naively work around by > adding themselves to the whitelist rather than providing their own > interpreter for their own software stack. It feels more doable without > workarounds for something like macOS or any other distro wholly > controlled and maintained as a single set of software rather than > widely varying packages.
> > Solving that is way outside the scope of PEP 394. Just food for thought > that I'd like to leave as an earworm for the future for distro minded > folks. I expect some people to hate this idea. I haven't caught up on this thread yet, but this sounds a lot like the "Restricting the entry point" section of https://www.python.org/dev/peps/pep-0551/ (which is still a draft, so if anyone wants to help make it more like what they want, I'm happy to have contributors). So I'm in favour of making this easy (since I'm already having to deal with it being difficult ;) ), as it's extremely valuable for security-conscious deployments as well as the distro package cases mentioned by Gregory. Cheers, Steve From greg.ewing at canterbury.ac.nz Tue Feb 26 17:31:34 2019 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 27 Feb 2019 11:31:34 +1300 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> Message-ID: <5C75BE46.50905@canterbury.ac.nz> Joe Jevnik via Python-Dev wrote: > If Python switched to a global > stack and global registers we may be able to eliminate a lot of > instructions that just shuffle data from the caller's stack to the > callee's stack. That would make implementing generators more complicated.
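Greg's objection in one example: each suspended generator keeps its own frame (locals plus evaluation stack) alive while other code runs, state that a single global register file or stack would have to save and restore on every switch:

```python
def gen(label):
    # Each generator frame keeps its own locals alive across yields,
    # independently of any other frame that runs in between.
    total = 0
    for i in range(3):
        total += i
        yield label, total

a, b = gen("a"), gen("b")
# Interleave the two generators: each one resumes with its own state intact.
interleaved = [next(a), next(b), next(a), next(b)]
print(interleaved)  # [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```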
-- Greg From greg.ewing at canterbury.ac.nz Tue Feb 26 17:32:33 2019 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 27 Feb 2019 11:32:33 +1300 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> Message-ID: <5C75BE81.4060209@canterbury.ac.nz> Victor Stinner wrote: > LOAD_CONST_REG R0, 2 (const#2) > LOAD_GLOBAL_REG R1, 'range' (name#0) > CALL_FUNCTION_REG 4, R1, R1, R0, 'n' Out of curiosity, why is the function being passed twice here? -- Greg From chris.barker at noaa.gov Tue Feb 26 17:43:24 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 26 Feb 2019 14:43:24 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> Message-ID: On Tue, Feb 26, 2019 at 9:58 AM Barry Warsaw wrote: > I see this question as having several parts, and the conflation of them is > part of the reason why the unversioned `python` command is so problematic. > Python is used for: > > * OS functionality > * to run applications that aren't critical to the OS but are delivered on > the OS > * as the entry point to the interactive interpreter > * to run applications written and deployed on the OS but completely > outside of it > > For OS functionality, there should probably be a separate command not > conflated with /usr/bin/python. The OS can make any adjustments it needs, > calling it `spython` (as I think Nick once suggested), or whatever. Nobody > but OS maintainers cares what this is called or what version of Python it > exposes.
I'm not sure that's necessary at all -- the OS should simply use an unambiguous path! I was a RedHat user way back when, in the midst of the python1.5 => 2.0 transition. RedHat had a bunch of system scripts that had (I think): #!/usr/bin/env python in them (or maybe /usr/bin/python, but I'm pretty sure it was the env version). In any case, when you installed python2 (to local) your system scripts would all break (even if they were python2 compatible, there wasn't a lot of difference, but RedHat also depended on extra packages...) So what we had to do was install python 2, remove the "python" command that came with it, and use "python2" in all our scripts. This was simply broken, and it was RedHat's fault. If they had used: /usr/bin/python1.5 in their shebang lines, there would have been no problem. And users could still use the system python1.5 if they wanted, or install an update, or whatever. My point is: Any OS that ships OS scripts that expect "python" to be a specific version (or worse, a specific install) is broken. Some distros are going to ignore the PEP anyway, so there is no harm (and some good) in specifying in the PEP the way we think it SHOULD be done, and then see what happens -- we don't need to make the PEP match current practice. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From skip.montanaro at gmail.com Tue Feb 26 18:01:20 2019 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Tue, 26 Feb 2019 17:01:20 -0600 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: <20190226205841.wo6a6bc65igmxpk3@python.ca> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> Message-ID: > I uploaded a tarfile I had on my PC to my web site: > > http://python.ca/nas/python/rattlesnake20010813/ > > It seems his name doesn't appear in the readme or source but I think > Rattlesnake was Skip Montanaro's project. I suppose my idea of > unifying the local variables and the registers could have come from > Rattlesnake. Very little new in the world. ;-P Lot of water under the bridge since then. I would have to poke around a bit, but I think "from module import *" stumped me long enough that I got distracted by some other shiny thing. S From vstinner at redhat.com Tue Feb 26 18:05:41 2019 From: vstinner at redhat.com (Victor Stinner) Date: Wed, 27 Feb 2019 00:05:41 +0100 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: <5C75BE81.4060209@canterbury.ac.nz> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> <5C75BE81.4060209@canterbury.ac.nz> Message-ID: On Tue, Feb 26, 2019 at 23:40, Greg Ewing wrote: > Victor Stinner wrote: > > LOAD_CONST_REG R0, 2 (const#2) > > LOAD_GLOBAL_REG R1, 'range' (name#0) > > CALL_FUNCTION_REG 4, R1, R1, R0, 'n' > > Out of curiosity, why is the function being passed twice here?
Ah, I should have explained that :-) The first argument of CALL_FUNCTION_REG is the name of the register used to store the result. The compiler begins by using static single assignment form (SSA) but then uses a register allocator to reduce the number of used registers. Usually, at the end you have fewer than 5 registers for a whole function. Since R1 was only used to store the function before the call and isn't used after, the R1 register can be re-used. Using a different register may require an explicit "CLEAR_REG R1" (decref the reference to the builtin range function) which is less efficient. Note: The CALL_FUNCTION instruction using the stack implicitly puts the result onto the stack (and "pops" function arguments from the stack). Victor -- Night gathers, and now my watch begins. It shall not end until my death. From vstinner at redhat.com Tue Feb 26 18:17:33 2019 From: vstinner at redhat.com (Victor Stinner) Date: Wed, 27 Feb 2019 00:17:33 +0100 Subject: [Python-Dev] Possible performance regression In-Reply-To: <20190226222814.ggzf6q3nk36bfzwx@python.ca> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <85C60B5A-A3CD-43F4-A38D-887576A01F8E@gmail.com> <20190226222814.ggzf6q3nk36bfzwx@python.ca> Message-ID: Hi, PGO compilation is very slow. I tried very hard to avoid it. I started to annotate the C code with various GCC attributes like "inline", "always_inline", "hot", etc. I also experimented with likely/unlikely Linux macros which use __builtin_expect(). At the end... my efforts were worthless. I still had a *major* issue (benchmark *suddenly* 68% slower! WTF?) with code locality and I decided to give up. You can still find some macros like _Py_HOT_FUNCTION and _Py_NO_INLINE in Python ;-) (_Py_NO_INLINE is used to reduce stack memory usage, that's a different story.)
My sad story with code placement: https://vstinner.github.io/analysis-python-performance-issue.html tl;dr Use PGO. -- Since that time, I removed call_method from pyperformance to fix the root issue: don't waste your time on micro-benchmarks ;-) ... But I kept these micro-benchmarks in a different project: https://github.com/vstinner/pymicrobench For some specific needs (taking a decision on a specific optimization), sometimes micro-benchmarks are still useful ;-) Victor On Tue, Feb 26, 2019 at 23:31, Neil Schemenauer wrote: > > On 2019-02-26, Raymond Hettinger wrote: > > That said, I'm only observing the effect when building with the > > Mac default Clang (Apple LLVM version 10.0.0 (clang-1000.11.45.5)). > > When building with GCC 8.3.0, there is no change in performance. > > My guess is that the code in _PyEval_EvalFrameDefault() got changed > enough that Clang started emitting a bit different machine code. If > the conditional jumps are a bit different, I understand that could > have a significant difference on performance. > > Are you compiling with --enable-optimizations (i.e. PGO)? In my > experience, that is needed to get meaningful results. Victor also > mentions that on his "how-to-get-stable-benchmarks" page. Building > with PGO is really (really) slow so I suspect you are not doing it > when bisecting. You can speed it up greatly by using a simpler > command for PROFILE_TASK in Makefile.pre.in. E.g. > > PROFILE_TASK=$(srcdir)/my_benchmark.py > > Now that you have narrowed it down to a single commit, it would be > worth doing the comparison with PGO builds (assuming Clang supports > that). > > > That said, it seems to be compiler specific and only affects the > > Mac builds, so maybe we can decide that we don't care. > > I think the key question is if the ceval loop got a bit slower due > to logic changes or if Clang just happened to generate a bit worse > code due to source code details. A PGO build could help answer > that.
I suppose trying to compare machine code is going to produce > too large of a diff. > > Could you try hoisting the eval_breaker expression, as suggested by > Antoine: > > https://discuss.python.org/t/profiling-cpython-with-perf/940/2 > > If you think a slowdown affects most opcodes, I think the DISPATCH > change looks like the only cause. Maybe I missed something though. > > Also, maybe there would be some value in marking key branches as > likely/unlikely if it helps Clang generate better machine code. > Then, even if you compile without PGO (as many people do), you still > get the better machine code. > > Regards, > > Neil > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com -- Night gathers, and now my watch begins. It shall not end until my death. From barry at python.org Tue Feb 26 18:21:35 2019 From: barry at python.org (Barry Warsaw) Date: Tue, 26 Feb 2019 15:21:35 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <57a58939-21a7-286c-09bb-e5a04172d06e@gmail.com> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> <57a58939-21a7-286c-09bb-e5a04172d06e@gmail.com> Message-ID: On Feb 26, 2019, at 13:34, Petr Viktorin wrote: > I have two very different questions in mind for moving this forward. > > Who gets to decide on PEP 394 changes? Honestly, I think it's the active distro maintainers who need to make this decision. They have the pulse of their own communities and users, and can make the best decisions and compromises for their constituents. I personally am not part of that any more, so I have no problem having no say (despite still having opinions :).
> Since so many people on python-dev are in agreement, where do I go for opposing voices? Well, why look for more dissent? If you can align Homebrew, Fedora-et-al, Debian-et-al, and we already know what Arch has done, and the PEP authors are in agreement, isn't that enough to JFDI? It couldn't hurt to reach out to a few other distros, but do you think they will have substantially different opinions than what you've gathered already? -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From vstinner at redhat.com Tue Feb 26 18:26:03 2019 From: vstinner at redhat.com (Victor Stinner) Date: Wed, 27 Feb 2019 00:26:03 +0100 Subject: [Python-Dev] Possible performance regression In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <85C60B5A-A3CD-43F4-A38D-887576A01F8E@gmail.com> <20190226222814.ggzf6q3nk36bfzwx@python.ca> Message-ID: On Wed, Feb 27, 2019 at 00:17, Victor Stinner wrote: > My sad story with code placement: > https://vstinner.github.io/analysis-python-performance-issue.html > > tl;dr Use PGO. Hum wait, this article isn't complete. You have to see the follow-up: https://bugs.python.org/issue28618#msg286662 """ Victor: "FYI I wrote an article about this issue: https://haypo.github.io/analysis-python-performance-issue.html Sadly, it seems like I was just lucky when adding __attribute__((hot)) fixed the issue, because call_method is slow again!" I upgraded the speed-python server (running benchmarks) to Ubuntu 16.04 LTS to support PGO compilation. I removed all old benchmark results and ran benchmarks again with LTO+PGO. It seems like benchmark results are much better now. I'm not sure anymore that _Py_HOT_FUNCTION is really useful to get stable benchmarks, but it may help code placement a little bit.
I don't think that it hurts, so I suggest keeping it. Since benchmarks were still unstable with _Py_HOT_FUNCTION, I'm not interested in continuing to tag more functions with _Py_HOT_FUNCTION. I will now focus on LTO+PGO for stable benchmarks, and ignore small performance differences when PGO is not used. I close this issue now. """ Now I recall that I tried hard to avoid PGO: the server used by speed.python.org to run benchmarks didn't support PGO. I fixed the issue by upgrading Ubuntu :-) Now speed.python.org uses PGO. I stopped trying to manually help the compiler with code placement. Victor From barry at python.org Tue Feb 26 18:34:30 2019 From: barry at python.org (Barry Warsaw) Date: Tue, 26 Feb 2019 15:34:30 -0800 Subject: [Python-Dev] Compile-time resolution of packages [Was: Another update for PEP 394...] In-Reply-To: <20190226220418.b36jw33qthdv5i5l@python.ca> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> <20190226220418.b36jw33qthdv5i5l@python.ca> Message-ID: <13E7CBC6-7AE3-4DDB-AFAE-052E3A611BB5@python.org> On Feb 26, 2019, at 14:04, Neil Schemenauer wrote: > > Interesting idea. I remember when I was helping develop Debian > packaging guides for Python software. I had to fight with people > to convince them that Debian packages should use > > #!/usr/bin/pythonX.Y > > rather than > > #!/usr/bin/env python Indeed. I used to fight that battle quite a bit, although at least in my circles that lesson has by now been learned. `/usr/bin/env python` is great for development and terrible for deployment. -Barry -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From barry at python.org Tue Feb 26 18:38:03 2019 From: barry at python.org (Barry Warsaw) Date: Tue, 26 Feb 2019 15:38:03 -0800 Subject: [Python-Dev] Compact ordered set In-Reply-To: References: Message-ID: On Feb 26, 2019, at 13:02, Raymond Hettinger wrote: > * I gave up on ordering right away. If we care about performance, keys can be stored in the order added; but no effort should be expended to maintain order if subsequent deletions occur. Likewise, to keep set-to-set operations efficient (i.e. looping over the smaller input), no order guarantee should be given for those operations. In general, we can let order happen but should not guarantee it and work to maintain it or slow down essential operations to make them ordered. One thing that concerns me would be if the ordering for sets is different from dictionaries. Well, it kind of is already, but it's easier to say "dicts preserve insertion order, sets are unordered" than to say they are both ordered but with different guarantees. The behavior differences between dicts and sets are already surprising to many users, so we should be careful not to make the situation worse. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From wes.turner at gmail.com Tue Feb 26 18:40:36 2019 From: wes.turner at gmail.com (Wes Turner) Date: Tue, 26 Feb 2019 18:40:36 -0500 Subject: [Python-Dev] OpenSSL 1.1.1 update for 3.7/3.8 In-Reply-To: <129ab7ef-fc1e-4f44-7a1e-e75b89f3def0@python.org> References: <129ab7ef-fc1e-4f44-7a1e-e75b89f3def0@python.org> Message-ID: Thanks, as always On Tue, Feb 26, 2019 at 4:45 PM Christian Heimes wrote: > On 26/02/2019 21.31, Wes Turner wrote: > >> IMHO it's > > fine to ship the last 2.7 build with an OpenSSL version that was EOLed > > just 24h earlier.
> > > > Is this a time / cost issue or a branch policy issue? > > > > If someone were to backport the forthcoming 1.1.1 to 2.7 significantly > > before the EOL date, could that be merged? > > My mail is about official binary Python packages for Windows and macOS. > We stick to an OpenSSL version to guarantee maximum backwards > compatibility within a minor release. OpenSSL 1.1.1 has TLS 1.3 support > and prefers TLS 1.3 over TLS 1.2. There is a small chance that TLS 1.3 > breaks some assumptions. > > Python 2.7 works mostly fine with OpenSSL 1.1.1. There are some minor > test issues related to TLS 1.3 but nothing serious. Linux distros have > been shipping Python 2.7 with OpenSSL 1.1.1 for a while. > > > > There are all sorts of e.g. legacy academic works that'll never be > > upgraded etc etc > > That topic is out of scope and has been discussed countless times. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nas-python at arctrix.com Tue Feb 26 18:53:50 2019 From: nas-python at arctrix.com (Neil Schemenauer) Date: Tue, 26 Feb 2019 17:53:50 -0600 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: <5C75BE46.50905@canterbury.ac.nz> References: <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> <5C75BE46.50905@canterbury.ac.nz> Message-ID: <20190226235350.5c7unf6j7o5f7jwm@python.ca> On 2019-02-27, Greg Ewing wrote: > Joe Jevnik via Python-Dev wrote: > > If Python switched to a global stack and global registers we may be able > > to eliminate a lot of instructions that just shuffle data from the > > caller's stack to the callee's stack. > > That would make implementing generators more complicated. Right. I wonder though, could we avoid allocating the Python frame object until we actually need it?
Two situations when you need a heap allocated frame come to mind immediately: generators that are suspended and frames as part of a traceback. I guess sys._getframe() is another. Any more? I'm thinking that perhaps for regular Python functions and regular calls, you could defer creating the full PyFrame object and put the locals, stack, etc on the C stack. That would make calling Python functions a lot more similar to the machine calling convention and presumably could be much faster. If you do need the frame object, copy over the data from the C stack into the frame structure. I'm sure there are all kinds of reasons why this idea is not easy to implement or not possible. It seems somewhat possible though. I wonder how IronPython works in this respect? Apparently it doesn't support sys._getframe(). Regards, Neil From chris.barker at noaa.gov Tue Feb 26 19:03:12 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 26 Feb 2019 16:03:12 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io> Message-ID: This thread petered out, seemingly with a consensus that we should update the docs -- is anyone doing that? But anyway, I'd like to offer a counterpoint: From the OP, it is clear that: * Folks have a need for timedeltas to be converted to numeric values, with units other than seconds (milliseconds, at least). * If left to their own devices, they may well do it wrong (or at least not as right as they should). So: it would be good to provide a correct, simple, intuitive, and discoverable way to do that. timedelta.total_seconds() Provides that for seconds, but there is no equivalent for other units. a_time_delta / timedelta(microseconds=1) Is now possible in py3, and has been proposed as the canonical way to convert to specific time units.
However, while it does provide a correct[1] way to do it, it is: - not very simple - not very intuitive. - not the least bit discoverable Each of these in turn: simple: ===== compare duration = a_timedelta.total_seconds() to duration = a_timedelta / datetime.timedelta(seconds=1) Keep in mind that the timedelta object may have been generated by another module somehow, so the coder that gets it and wants to turn it into a number of seconds (or milliseconds, or ...) needs to import datetime and reference the timedelta object. And if they are converting to a plain number, it's probably because they don't want to be working with timedeltas at all anyway. So no, not so simple. intuitive: ====== A casual user reading the first will very likely know what it means -- a casual user reading the second line will need to think about it carefully, and probably have to go read the datetime docs, or at least do some experiments to make sure it does what they think it does. Granted, a comment: duration = a_timedelta / datetime.timedelta(seconds=1) # convert to seconds would help a lot, but if you need a comment to explain a line of code this simple, then it's not intuitive. A couple more data points: -- I am a physical scientist, I work with unitted quantities all the time (both in code and in other contexts). It never dawned on me to use this approach to convert to seconds or milliseconds, or ... Granted, I still rely on python2 for a fair bit of my work, but still, I had to scratch my head when it was proposed on this thread. -- There are a number of physical unit libraries in Python, and as far as I know, none of them let you do this to create a unitless value in a particular unit. 
"pint" for example: https://pint.readthedocs.io/en/latest/ In pint, you can create objects with units, including time: In [50]: timespan = 2 * ureg.day In [51]: print(timespan) 2 day But if you divide a value of days by a value in seconds, you don't get a unitless seconds per day: In [54]: unitless = timespan / (1 * ureg.second) In [55]: print(unitless) 2.0 day / second Though pint does know it is dimensionless: In [56]: unitless.dimensionless Out[56]: True And you can reduce it to a dimensionless object: In [57]: unitless.to_reduced_units() Out[57]: 172800.0 And there is your seconds value. But the "right" way to get a given pint object of time into particular units is to convert, and then, if you want a plain number, get the magnitude: In [53]: print(timespan.to('second').magnitude) 172800.0 So no -- dividing a datetime by another datetime with the value you want is not intuitive: not to a physical scientist, not to a user of other physical quantities libraries -- is it intuitive to anyone other than someone that was involved in python datetime development?? Discoverable: ========== It is clearly not discoverable -- the OP didn't find it, and no one other than Alexander found it on this thread (I'm pretty sure). That could be made much better with docs, but we all know no one reads docs -- I'm willing to bet that even if we better document it, folks will still be writing utility functions like the OP posted. And (this is also a doc issue) -- I wanted to know what the options were for units we could specify to the datetime constructor, so I used the nifty iPython ?, and got: In [59]: timedelta? Init signature: timedelta(self, /, *args, **kwargs) Docstring: Difference between two datetime values. Gosh, that's helpful! (we really need to fix that regardless of this thread). 
And someone earlier mentioned "weeks" without realizing that it was already supported: In [60]: timedelta(weeks=1) Out[60]: datetime.timedelta(days=7) So we have a real discoverability problem -- we really need to fix that. On the other hand, if we add a few convenience methods, we will have a way for folks to do this that is: correct simple intuitive discoverable And we really don't need to add many. Looking at the docs ('cause the docstring is useless), I see that the timedelta takes: datetime.timedelta(days=0, seconds=0, microseconds=0, milliseconds=0, minutes=0, hours=0, weeks=0) So at most, we could have: .total_microseconds() .total_seconds() .total_minutes() .total_hours() .total_days() .total_weeks() Yes, I know that that's more code to write, maintain and document, but really, it will not take long to write, will take less work to document than the doc improvements we need anyway, and hardly any extra maintenance burden. (and the utility of weeks and minutes is questionable) BTW, why are these methods, rather than properties? Another option is, of course, to add a: .to_unit() method that takes the above as strings. -- but unless we are going to support a bunch more units than those six, I'd rather see the methods. And what would those others be? fortnights? I've made my case -- and maybe we won't do this. But let's please at least update the docstring of timedelta! -CHB [1] is it the best possible for microseconds? it returns a float, which can only carry 15 or so digits, which is "only" 300 or so years. So microsecond precision is lost for timedeltas representing longer than that -- does that matter ??? -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From chris.barker at noaa.gov Tue Feb 26 19:07:03 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 26 Feb 2019 16:07:03 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> <57a58939-21a7-286c-09bb-e5a04172d06e@gmail.com> Message-ID: On Tue, Feb 26, 2019 at 3:25 PM Barry Warsaw wrote: > > Who gets to decide on PEP 394 changes? > > Honestly, I think it's the active distro maintainers who need to make this > decision. They have the pulse of their own communities and users, and can > make the best decisions and compromises for their constituents. I > personally am not part of that any more, so I have no problem having no say > (despite still having opinions :). > The PEP is what the Python community recommends. The distro maintainers can (and will) do whatever they want. IF we are going to let the distros drive it, then there is no point to the PEP. Well, why look for more dissent? If you can align Homebrew, Fedora-et-al, > Debian-et-al, and we already know what Arch has done, and the PEP authors > are in agreement, isn't that enough to JFDI? more than enough :-) -CHB ----- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dinoviehland at gmail.com Tue Feb 26 19:56:14 2019 From: dinoviehland at gmail.com (Dino Viehland) Date: Tue, 26 Feb 2019 16:56:14 -0800 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: <20190226235350.5c7unf6j7o5f7jwm@python.ca> References: <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> <5C75BE46.50905@canterbury.ac.nz> <20190226235350.5c7unf6j7o5f7jwm@python.ca> Message-ID: On Tue, Feb 26, 2019 at 3:56 PM Neil Schemenauer wrote: > Right. I wonder though, could we avoid allocating the Python frame > object until we actually need it? Two situations when you need a > heap allocated frame come to mind immediately: generators that are > suspended and frames as part of a traceback. I guess > sys._getframe() is another. Any more? > > I've been thinking about that as well... I think in some ways the easy part of this is actually the reification of the frame itself. You can have a PyFrameObject which is just declared on the stack and add a new field to it which captures the address of the PyFrameObject* f (e.g. PyFrameObject **f_stackaddr). When you need to move to a heap-allocated one you copy everything over as you say and update *f_stackaddr to point at the new heap address. It seems a little bit annoying with the various levels of indirection from the frame getting created in PyEval_EvalCodeEx and flowing down into _PyEval_EvalFrameDefault - so there may need to be some breakage there for certain low-level tools. I'm also a little bit worried about things which go looking at PyThreadState and might make nasty assumptions about the frames already being heap allocated. FYI IronPython does support sys._getframe(), you just need to run it with a special flag (and there are various levels - e.g. -X:Frames and -X:FullFrames, the latter which guarantees your locals are in the frame too).
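The cases Neil lists where a reified frame is observable can be demonstrated from pure Python in CPython; a minimal sketch (the function names here are illustrative, not from the thread):

```python
import sys

def where_am_i():
    # sys._getframe() forces the current frame to be exposed as an object
    return sys._getframe().f_code.co_name

def counter():
    yield 1
    yield 2

g = counter()
next(g)

# A suspended generator keeps its frame object alive and inspectable
assert g.gi_frame is not None
assert g.gi_frame.f_code.co_name == "counter"

assert where_am_i() == "where_am_i"

# Once the generator finishes, CPython releases the frame
g.close()
assert g.gi_frame is None
```

Any deferred-allocation scheme has to preserve exactly these observable behaviors.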
IronPython is more challenged here in that it always generates "safe" code from a CLR perspective and tracking the address of stack-allocated frame objects is therefore challenging (although maybe more possible now than before with various C# ref improvements). I'm not sure exactly how much this approach would get though... It seems like the frame caches are pretty effective, and a lot of the cost of them is initializing them / decref'ing the things which are still alive in them. But it doesn't seem like a super complicated change to try out... It's actually something I'd at least like to try prototyping at some point. -------------- next part -------------- An HTML attachment was scrubbed... URL: From raymond.hettinger at gmail.com Tue Feb 26 21:32:38 2019 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Tue, 26 Feb 2019 18:32:38 -0800 Subject: [Python-Dev] Possible performance regression In-Reply-To: <20190226222814.ggzf6q3nk36bfzwx@python.ca> References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <85C60B5A-A3CD-43F4-A38D-887576A01F8E@gmail.com> <20190226222814.ggzf6q3nk36bfzwx@python.ca> Message-ID: On Feb 26, 2019, at 2:28 PM, Neil Schemenauer wrote: > > Are you compiling with --enable-optimizations (i.e. PGO)? In my > experience, that is needed to get meaningful results. I'm not and I would worry that PGO would give less stable comparisons because it is highly sensitive to changes in its training set as well as to the actual CPython implementation (two moving targets instead of one). That said, it doesn't really matter to the world how I build *my* Python. We're trying to keep performant the ones that people actually use.
For the Mac, I think there are only four that matter: 1) The one we distribute on the python.org website at https://www.python.org/ftp/python/3.8.0/python-3.8.0a2-macosx10.9.pkg 2) The one installed by homebrew 3) The way folks typically roll their own: $ ./configure && make (or some variant of make install) 4) The one shipped by Apple and put in /usr/bin Of the four, the ones I've been timing are #1 and #3. I'm happy to drop this. I was looking for independent confirmation and didn't get it. We can't move forward unless someone else also observes a consistently measurable regression for a benchmark they care about on a build that they care about. If I'm the only one who notices then it really doesn't matter. Also, it was reassuring to not see the same effect on a GCC-8 build. Since the effect seems to be compiler specific, it may be that we knocked it out of a local minimum and that performance will return the next time someone touches the eval-loop. Raymond From tjreedy at udel.edu Tue Feb 26 22:15:17 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 26 Feb 2019 22:15:17 -0500 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io> Message-ID: On 2/26/2019 7:03 PM, Chris Barker via Python-Dev wrote: > So: it would be good to provide a correct, simple, intuitive, and > discoverable way to do that. > > timedelta.total_seconds() To me, total_x implies that there is a summation of multiple timedeltas, and there is not. So not intuitive to me. (Neither is the current obscure option.) It is also not obvious whether the answer is rounded to the nearest second or not. > > So at most, we could have: > > .total_microseconds() > .total_seconds() > .total_minutes() > .total_hours() > .total_days() > .total_weeks() I am also not enthusiastic about multiple methods doing essentially the same thing. I might prefer one method, .convert,
with an argument specifying the conversion unit, 'microseconds', 'seconds', ... . I think this is in python-ideas territory. -- Terry Jan Reedy From greg.ewing at canterbury.ac.nz Wed Feb 27 00:09:57 2019 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 27 Feb 2019 18:09:57 +1300 Subject: [Python-Dev] Register-based VM [Was: Possible performance regression] In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <20190226193512.m5wzrecaxt3yfm7l@python.ca> <20190226205841.wo6a6bc65igmxpk3@python.ca> <5C75BE81.4060209@canterbury.ac.nz> Message-ID: <5C761BA5.4050702@canterbury.ac.nz> Victor Stinner wrote: > Using a different register may require an explicit "CLEAR_REG R1" > (decref the reference to the builtin range function) which is less > efficient. Maybe the source operand fields of the bytecodes could have a flag indicating whether to clear the register after use. -- Greg From fred at fdrake.net Wed Feb 27 00:57:08 2019 From: fred at fdrake.net (Fred Drake) Date: Wed, 27 Feb 2019 00:57:08 -0500 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io> Message-ID: On Tue, Feb 26, 2019 at 10:20 PM Terry Reedy wrote: > To me, total_x implies that there is a summation of multiple timedeltas, > and there is not. Do you believe this is a particularly dominant perception? I don't, but specific backgrounds probably play into this heavily. I'd expect to total a bunch of timedelta values using sum([d0, d1, ..., dn]). Given we already have total_seconds(), it's not clear avoiding additional methods is meaningful, unless we're going to deprecate total_seconds(). Not really a win in my book. I'd rather stick with the existing pattern, if anything even needs to be done. 
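Two small wrinkles with the sum() spelling mentioned above, and with the float-precision caveat raised earlier in the thread, can be checked directly (a stdlib-only sketch):

```python
from datetime import timedelta

deltas = [timedelta(minutes=30), timedelta(minutes=45)]

# sum() starts from int 0, which cannot be added to a timedelta...
try:
    sum(deltas)
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")

# ...so an explicit timedelta start value is needed
total = sum(deltas, timedelta())
assert total == timedelta(hours=1, minutes=15)

# Precision check: floats carry ~15-16 significant digits, so scaling
# total_seconds() can drop microseconds on very long spans, while
# floor division by a unit timedelta stays exact (an int)
big = timedelta(days=300 * 365, microseconds=1)
exact = big // timedelta(microseconds=1)
assert exact == 300 * 365 * 86400 * 10**6 + 1
assert big.total_seconds() * 10**6 != exact  # the lone microsecond is lost
```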
I'm quite happy to use d.total_seconds() * 1000000 as long as the accuracy is sufficient. Someone with more floating point expertise can probably think of a reason that's not good enough, in which case... an appropriate method wouldn't be poorly named as total_microseconds. > I might prefer one method, .convert? with an argument > specifying the conversion unit, 'microseconds', 'seconds', ... . Using a function that takes a units indicator (as d.convert(units='microseconds')) seems like a poor choice; most uses will hard-code exactly one value for the units, rather than passing in a variable. Getting a more specific name seems reasonable. > It is also not obvious is answer is rounded to nearest second > or not. No, but that's a problem we have now with total_seconds(). Best handled by maintaining the pattern and documenting the behavior. While fractional microseconds aren't a thing with timedelta values now (and probably not in any near future), it seems good to keep these floats so things stay consistent if we can ever get better clocks. :-) -Fred -- Fred L. Drake, Jr. "A storm broke loose in my mind." --Albert Einstein From liu.denton at gmail.com Wed Feb 27 01:40:56 2019 From: liu.denton at gmail.com (Denton Liu) Date: Tue, 26 Feb 2019 22:40:56 -0800 Subject: [Python-Dev] [bpo-35155] Requesting a review In-Reply-To: <20190212101455.GA29427@archbookpro.localdomain> References: <20190212101455.GA29427@archbookpro.localdomain> Message-ID: <20190227064056.GA2611@archbookpro.localdomain> On Tue, Feb 12, 2019 at 02:14:55AM -0800, Denton Liu wrote: > Hello all, > > A couple months back, I reported bpo-35155[1] and I submitted a PR for > consideration[2]. After a couple of reviews, it seems like progress has > stalled. Would it be possible for someone to review this? > > Thanks, > > Denton > > [1]: https://bugs.python.org/issue35155 > [2]: https://github.com/python/cpython/pull/10313 Thanks for the comments and help on the PR! 
It seems like progress on this change has stalled again. If there aren't anymore comments, I believe that this PR is ready to be merged. Thanks, Denton From turnbull.stephen.fw at u.tsukuba.ac.jp Wed Feb 27 02:51:55 2019 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Wed, 27 Feb 2019 16:51:55 +0900 Subject: [Python-Dev] Possible performance regression In-Reply-To: References: <074113AA-3CC2-40EB-956A-26FC26852D9C@gmail.com> <20190225115452.30ed11bb@fsol> <9D28FA05-1FDA-43E0-9CD1-DCDDBCE05A03@gmail.com> <85C60B5A-A3CD-43F4-A38D-887576A01F8E@gmail.com> <20190226222814.ggzf6q3nk36bfzwx@python.ca> Message-ID: <23670.16795.261480.762802@turnbull.sk.tsukuba.ac.jp> Raymond Hettinger writes: > We're trying to keep performant the ones that people actually use. > For the Mac, I think there are only four that matter: > > 1) The one we distribute on the python.org > website at https://www.python.org/ftp/python/3.8.0/python-3.8.0a2-macosx10.9.pkg > > 2) The one installed by homebrew > > 3) The way folks typically roll their own: > $ ./configure && make (or some variant of make install) > > 4) The one shipped by Apple and put in /usr/bin I don't see the relevance of (4) since we're talking about the bleeding edge AFAICT. Not clear about Homebrew -- since I've been experimenting with it recently I use the bottled versions, which aren't bleeding edge. If prebuilt packages matter, I would add MacPorts (or substitute it for (4) since nothing seems to get Apple's attention) and Anaconda (which is what I recommend to my students). But I haven't looked at MacPorts' recent download stats, and maybe I'm just the odd one out. 
Steve -- Associate Professor Division of Policy and Planning Science http://turnbull.sk.tsukuba.ac.jp/ Faculty of Systems and Information Email: turnbull at sk.tsukuba.ac.jp University of Tsukuba Tel: 029-853-5175 Tennodai 1-1-1, Tsukuba 305-8573 JAPAN From turnbull.stephen.fw at u.tsukuba.ac.jp Wed Feb 27 02:52:14 2019 From: turnbull.stephen.fw at u.tsukuba.ac.jp (Stephen J. Turnbull) Date: Wed, 27 Feb 2019 16:52:14 +0900 Subject: [Python-Dev] Compile-time resolution of packages [Was: Another update for PEP 394...] In-Reply-To: <13E7CBC6-7AE3-4DDB-AFAE-052E3A611BB5@python.org> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> <20190226220418.b36jw33qthdv5i5l@python.ca> <13E7CBC6-7AE3-4DDB-AFAE-052E3A611BB5@python.org> Message-ID: <23670.16814.299522.808358@turnbull.sk.tsukuba.ac.jp> Barry Warsaw writes: >`/usr/bin/env python` is great for development and terrible for deployment. Developers of `six` and `2to3`, you mean? Steve From songofacandy at gmail.com Wed Feb 27 03:59:57 2019 From: songofacandy at gmail.com (INADA Naoki) Date: Wed, 27 Feb 2019 17:59:57 +0900 Subject: [Python-Dev] mmap & munmap loop (Was: Compact ordered set In-Reply-To: References: Message-ID: > > > Ah, another interesting point, this huge slowdown happens only when bm_pickle.py > > is executed through pyperformance. When run it directly, slowdown is > > not so large. > > pyperformance runs benchmarks in a virtual environment. I don't know > if it has any impact on bm_pickle. > > Most pyperformance can be run outside a virtual env if required > modules are installed on the system. (bm_pickle only requires the > stdlib and perf.) > Bingo! 
Without venv:

unpickle: Mean +- std dev: 26.9 us +- 0.0 us

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 28.78    0.000438           0      1440           read
 27.33    0.000416           1       440        25 stat
  9.72    0.000148           1       144           mmap
...
  0.79    0.000012           1        11           munmap

With venv:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 57.12    0.099023           2     61471           munmap
 41.87    0.072580           1     61618           mmap
  0.23    0.000395           1       465        27 stat

unpickle and unpickle_list create massive numbers of same-sized objects, then all the objects are removed. If all pools in the arena are freed, munmap is called.

I think we should save some arenas to reuse. On recent Linux, we may be able to use MADV_FREE instead of munmap.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From steve at holdenweb.com Wed Feb 27 04:32:25 2019
From: steve at holdenweb.com (Steve Holden)
Date: Wed, 27 Feb 2019 09:32:25 +0000
Subject: [Python-Dev] Compile-time resolution of packages [Was: Another update for PEP 394...]
In-Reply-To: <20190226220418.b36jw33qthdv5i5l@python.ca>
References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com>
 <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com>
 <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org>
 <20190226220418.b36jw33qthdv5i5l@python.ca>
Message-ID: 

While these are interesting ideas, wouldn't it be better to leave this kind of packaging to snap and similar utilities that bundle the language support and libraries to allow simple isolated installation?

Kind regards
Steve Holden

On Tue, Feb 26, 2019 at 10:05 PM Neil Schemenauer wrote:

> On 2019-02-26, Gregory P. Smith wrote:
> > On Tue, Feb 26, 2019 at 9:55 AM Barry Warsaw wrote:
> > For an OS distro provided interpreter, being able to restrict its use to
> > only OS distro provided software would be ideal (so ideal that people who
> > haven't learned the hard distro maintenance lessons may hate me for it).
>
> Interesting idea.
> I remember when I was helping develop Debian
> packaging guides for Python software.  I had to fight with people
> to convince them that Debian packages should use
>
>     #!/usr/bin/pythonX.Y
>
> rather than
>
>     #!/usr/bin/env python
>
> The situation is much better now but I still sometimes have
> packaged software fail because it picks up my version of
> /usr/local/bin/python.  I don't understand how people can believe
> grabbing /usr/local/bin/python is going to be a way to build a
> reliable system.
>
> > Such a restriction could be implemented within the interpreter itself.  For
> > example: Say that only this set of fully qualified path whitelisted .py
> > files are allowed to invoke it, with no interactive, stdin, or command line
> > "-c" use allowed.
>
> I think this is related to an idea I was tinkering with on the
> weekend.  Why shouldn't we do more compile-time linkage of Python
> packages?  At least, I think we should give people the option to do it.
> Obviously you still need to also support run-time import search
> (interactive REPL, support __import__(unknown_at_compiletime)).
>
> Here is the sketch of the idea (probably half-baked, as most of my
> ideas are):
>
> - add PYTHONPACKAGES envvar and -p options to 'python'
>
> - the argument for these options would be a colon separated list of
>   Python package archives (crates, bales, bundles?).  The -p option
>   could be a colon separated list or provided multiple times to
>   specify more packages.
>
> - the modules/packages contained in those archives become the
>   preferred bytecode source when those names are imported.  We
>   look there first.  The crawling-around behavior (dynamic import
>   based on sys.path) happens only if a module is not found and could
>   be turned off.
>
> - the linking of the modules could be computed when the code is
>   compiled and the package archive created, rather than when the
>   'import' statement gets executed.  This would provide a number of
>   advantages.  It would be faster.
> Code analysis tools could
>   statically determine which modules imported code corresponds to.
>   E.g. if your code calls module.foo, assuming no monkey patching,
>   you know what code 'foo' actually is.
>
> - to get extra fancy, the package archives could be dynamic
>   link libraries containing "frozen modules" like this FB experiment:
>   https://github.com/python/cpython/pull/9320
>   That way, you avoid the unmarshal step and just execute the module
>   bytecode directly.  On startup, Python would dlopen all of the
>   package archives specified by PYTHONPACKAGES.  On init, it would
>   build an index of the package tree and it would have the memory
>   location for the code object for each module.
>
> That would seem like quite a useful thing.  For an application like
> Mercurial, they could build all the modules/packages required into a
> single package archive.  Or, there would be a small number of
> archives (one for the standard Python library, one for everything else
> that Mercurial needs).
>
> Now that I write this, it sounds a lot like the debate between
> static linking and dynamic linking.  Golang does static linking and
> people seem to like the single executable distribution.
>
> Regards,
>
>   Neil
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/steve%40holdenweb.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vstinner at redhat.com Wed Feb 27 05:32:35 2019
From: vstinner at redhat.com (Victor Stinner)
Date: Wed, 27 Feb 2019 11:32:35 +0100
Subject: [Python-Dev] mmap & munmap loop (Was: Compact ordered set
In-Reply-To: 
References: 
Message-ID: 

Any idea why Python calls mmap+munmap more even in a venv?

Victor

Le mer. 27 févr. 2019 à
10:00, INADA Naoki a ?crit : > > > > > > Ah, another interesting point, this huge slowdown happens only when bm_pickle.py > > > is executed through pyperformance. When run it directly, slowdown is > > > not so large. > > > > pyperformance runs benchmarks in a virtual environment. I don't know > > if it has any impact on bm_pickle. > > > > Most pyperformance can be run outside a virtual env if required > > modules are installed on the system. (bm_pickle only requires the > > stdlib and perf.) > > > > Bingo! > > Without venv: > > unpickle: Mean +- std dev: 26.9 us +- 0.0 us > % time seconds usecs/call calls errors syscall > ------ ----------- ----------- --------- --------- ---------------- > 28.78 0.000438 0 1440 read > 27.33 0.000416 1 440 25 stat > 9.72 0.000148 1 144 mmap > ... > 0.79 0.000012 1 11 munmap > > With venv: > > % time seconds usecs/call calls errors syscall > ------ ----------- ----------- --------- --------- ---------------- > 57.12 0.099023 2 61471 munmap > 41.87 0.072580 1 61618 mmap > 0.23 0.000395 1 465 27 stat > > unpickle and unpickle_list creates massive same-sized objects, then all objects are > removed. If all pools in the arena is freed, munmap is called. > > I think we should save some arenas to reuse. On recent Linux, > we may be able to use MADV_FREE instead of munmap. > -- Night gathers, and now my watch begins. It shall not end until my death. From vstinner at redhat.com Wed Feb 27 05:50:59 2019 From: vstinner at redhat.com (Victor Stinner) Date: Wed, 27 Feb 2019 11:50:59 +0100 Subject: [Python-Dev] mmap & munmap loop (Was: Compact ordered set In-Reply-To: References: Message-ID: Sorry, I didn't get a coffee yet: more *often* in a venv. Le mer. 27 f?vr. 2019 ? 11:32, Victor Stinner a ?crit : > > Any idea why Python calls mmap+munmap more even in a venv? > > Victor > > Le mer. 27 f?vr. 2019 ? 
10:00, INADA Naoki a ?crit : > > > > > > > > > Ah, another interesting point, this huge slowdown happens only when bm_pickle.py > > > > is executed through pyperformance. When run it directly, slowdown is > > > > not so large. > > > > > > pyperformance runs benchmarks in a virtual environment. I don't know > > > if it has any impact on bm_pickle. > > > > > > Most pyperformance can be run outside a virtual env if required > > > modules are installed on the system. (bm_pickle only requires the > > > stdlib and perf.) > > > > > > > Bingo! > > > > Without venv: > > > > unpickle: Mean +- std dev: 26.9 us +- 0.0 us > > % time seconds usecs/call calls errors syscall > > ------ ----------- ----------- --------- --------- ---------------- > > 28.78 0.000438 0 1440 read > > 27.33 0.000416 1 440 25 stat > > 9.72 0.000148 1 144 mmap > > ... > > 0.79 0.000012 1 11 munmap > > > > With venv: > > > > % time seconds usecs/call calls errors syscall > > ------ ----------- ----------- --------- --------- ---------------- > > 57.12 0.099023 2 61471 munmap > > 41.87 0.072580 1 61618 mmap > > 0.23 0.000395 1 465 27 stat > > > > unpickle and unpickle_list creates massive same-sized objects, then all objects are > > removed. If all pools in the arena is freed, munmap is called. > > > > I think we should save some arenas to reuse. On recent Linux, > > we may be able to use MADV_FREE instead of munmap. > > > > > -- > Night gathers, and now my watch begins. It shall not end until my death. -- Night gathers, and now my watch begins. It shall not end until my death. From songofacandy at gmail.com Wed Feb 27 06:35:58 2019 From: songofacandy at gmail.com (INADA Naoki) Date: Wed, 27 Feb 2019 20:35:58 +0900 Subject: [Python-Dev] mmap & munmap loop (Was: Compact ordered set In-Reply-To: References: Message-ID: It happened very accidentally. Since venv is used, many paths in the interpreter is changed. So how memory is used are changed. Let's reproduce the accident. 
$ cat m2.py
import pickle, sys

LIST = pickle.dumps([[0]*10 for _ in range(10)], pickle.HIGHEST_PROTOCOL)

N = 1000
z = [[0]*10 for _ in range(N)]

if '-c' in sys.argv:
    sys._debugmallocstats()
    sys.exit()

for _ in range(100000):
    pickle.loads(LIST)

$ /usr/bin/time python3 m2.py
0.42user 0.00system 0:00.43elapsed 99%CPU (0avgtext+0avgdata 9100maxresident)k
0inputs+0outputs (0major+1139minor)pagefaults 0swaps

There are only 1139 faults.  It is less than 100000.

$ /usr/bin/time python3 m2.py -c
...
14 unused pools * 4096 bytes = 57,344
...

Adjust N in m2.py until it shows "0 unused pools".  In my case, N=1390.

$ /usr/bin/time python3 m2.py
0.51user 0.33system 0:00.85elapsed 99%CPU (0avgtext+0avgdata 9140maxresident)k
0inputs+0outputs (0major+201149minor)pagefaults 0swaps

200000 faults!  It seems there are two page faults per loop (2 pools are used and returned).

On Wed, Feb 27, 2019 at 7:51 PM Victor Stinner wrote:

> Sorry, I didn't get a coffee yet: more *often* in a venv.
>
> Le mer. 27 févr. 2019 à 11:32, Victor Stinner a écrit :
> >
> > Any idea why Python calls mmap+munmap more even in a venv?
> >
> > Victor
> >
> > Le mer. 27 févr. 2019 à 10:00, INADA Naoki a écrit :
> > >
> > > > > Ah, another interesting point, this huge slowdown happens only when bm_pickle.py
> > > > > is executed through pyperformance.  When run it directly, slowdown is
> > > > > not so large.
> > > >
> > > > pyperformance runs benchmarks in a virtual environment. I don't know
> > > > if it has any impact on bm_pickle.
> > > >
> > > > Most pyperformance can be run outside a virtual env if required
> > > > modules are installed on the system.  (bm_pickle only requires the
> > > > stdlib and perf.)
> > >
> > > Bingo!
> > > > > > Without venv: > > > > > > unpickle: Mean +- std dev: 26.9 us +- 0.0 us > > > % time seconds usecs/call calls errors syscall > > > ------ ----------- ----------- --------- --------- ---------------- > > > 28.78 0.000438 0 1440 read > > > 27.33 0.000416 1 440 25 stat > > > 9.72 0.000148 1 144 mmap > > > ... > > > 0.79 0.000012 1 11 munmap > > > > > > With venv: > > > > > > % time seconds usecs/call calls errors syscall > > > ------ ----------- ----------- --------- --------- ---------------- > > > 57.12 0.099023 2 61471 munmap > > > 41.87 0.072580 1 61618 mmap > > > 0.23 0.000395 1 465 27 stat > > > > > > unpickle and unpickle_list creates massive same-sized objects, then > all objects are > > > removed. If all pools in the arena is freed, munmap is called. > > > > > > I think we should save some arenas to reuse. On recent Linux, > > > we may be able to use MADV_FREE instead of munmap. > > > > > > > > > -- > > Night gathers, and now my watch begins. It shall not end until my death. > > > > -- > Night gathers, and now my watch begins. It shall not end until my death. > -- INADA Naoki -------------- next part -------------- An HTML attachment was scrubbed... URL: From vstinner at redhat.com Wed Feb 27 07:59:16 2019 From: vstinner at redhat.com (Victor Stinner) Date: Wed, 27 Feb 2019 13:59:16 +0100 Subject: [Python-Dev] mmap & munmap loop (Was: Compact ordered set In-Reply-To: References: Message-ID: Maybe pickle is inefficient in its memory management and causes a lot of memory fragmentation? It's hard to write an efficient memory allocator :-( My notes on memory: * "Excessive peak memory consumption by the Python parser" https://bugs.python.org/issue26415 * https://pythondev.readthedocs.io/memory.html * https://vstinner.readthedocs.io/heap_fragmentation.html Sometimes I would like to be able to use a separated memory allocator for one function, to not pollute the global allocator and so avoid "punching holes" in global memory pools or in the heap memory. 
The problem is to track the lifetime of objects. If the allocated objects live longer than the function, PyObject_Free() should be able to find the alllocator used by the memory block. pymalloc is already able to check if its manages a memory block using its address. If it's not allocated by pymalloc, PyObject_Free() falls back to libc free(). The Python parser already uses PyArena which is a custom memory allocator. It uses PyMem_Malloc() to allocate memory. In Python 3.7, PyMem_Malloc() uses pymalloc: https://docs.python.org/dev/c-api/memory.html#default-memory-allocators Victor Le mer. 27 f?vr. 2019 ? 12:36, INADA Naoki a ?crit : > > It happened very accidentally. Since venv is used, > many paths in the interpreter is changed. So how memory > is used are changed. > > Let's reproduce the accident. > > $ cat m2.py > import pickle, sys > > LIST = pickle.dumps([[0]*10 for _ in range(10)], pickle.HIGHEST_PROTOCOL) > > N = 1000 > z = [[0]*10 for _ in range(N)] > > if '-c' in sys.argv: > sys._debugmallocstats() > sys.exit() > > for _ in range(100000): > pickle.loads(LIST) > > $ /usr/bin/time python3 m2.py > 0.42user 0.00system 0:00.43elapsed 99%CPU (0avgtext+0avgdata 9100maxresident)k > 0inputs+0outputs (0major+1139minor)pagefaults 0swaps > > There are only 1139 faults. It is less than 100000. > > $ /usr/bin/time python3 m2.py -c > ... > 14 unused pools * 4096 bytes = 57,344 > ... > > adjust N im m2.py until it shows "0 unused pools". > In my case, N=1390. > > $ /usr/bin/time python3 m2.py > 0.51user 0.33system 0:00.85elapsed 99%CPU (0avgtext+0avgdata 9140maxresident)k > 0inputs+0outputs (0major+201149minor)pagefaults 0swaps > > 200000 faults! > It seems two page fault / loop. (2 pools are used and returned). > > > On Wed, Feb 27, 2019 at 7:51 PM Victor Stinner wrote: >> >> Sorry, I didn't get a coffee yet: more *often* in a venv. >> >> Le mer. 27 f?vr. 2019 ? 11:32, Victor Stinner a ?crit : >> > >> > Any idea why Python calls mmap+munmap more even in a venv? 
>> > >> > Victor >> > >> > Le mer. 27 f?vr. 2019 ? 10:00, INADA Naoki a ?crit : >> > > >> > > > >> > > > > Ah, another interesting point, this huge slowdown happens only when bm_pickle.py >> > > > > is executed through pyperformance. When run it directly, slowdown is >> > > > > not so large. >> > > > >> > > > pyperformance runs benchmarks in a virtual environment. I don't know >> > > > if it has any impact on bm_pickle. >> > > > >> > > > Most pyperformance can be run outside a virtual env if required >> > > > modules are installed on the system. (bm_pickle only requires the >> > > > stdlib and perf.) >> > > > >> > > >> > > Bingo! >> > > >> > > Without venv: >> > > >> > > unpickle: Mean +- std dev: 26.9 us +- 0.0 us >> > > % time seconds usecs/call calls errors syscall >> > > ------ ----------- ----------- --------- --------- ---------------- >> > > 28.78 0.000438 0 1440 read >> > > 27.33 0.000416 1 440 25 stat >> > > 9.72 0.000148 1 144 mmap >> > > ... >> > > 0.79 0.000012 1 11 munmap >> > > >> > > With venv: >> > > >> > > % time seconds usecs/call calls errors syscall >> > > ------ ----------- ----------- --------- --------- ---------------- >> > > 57.12 0.099023 2 61471 munmap >> > > 41.87 0.072580 1 61618 mmap >> > > 0.23 0.000395 1 465 27 stat >> > > >> > > unpickle and unpickle_list creates massive same-sized objects, then all objects are >> > > removed. If all pools in the arena is freed, munmap is called. >> > > >> > > I think we should save some arenas to reuse. On recent Linux, >> > > we may be able to use MADV_FREE instead of munmap. >> > > >> > >> > >> > -- >> > Night gathers, and now my watch begins. It shall not end until my death. >> >> >> >> -- >> Night gathers, and now my watch begins. It shall not end until my death. > > > > -- > INADA Naoki -- Night gathers, and now my watch begins. It shall not end until my death. 
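[Editor's note: the arena-reuse hysteresis suggested above — "save some arenas to reuse" instead of calling munmap immediately — can be modeled with a small cache in pure Python. This is only a toy illustration of the idea, not CPython's actual obmalloc implementation; the class and counter names are invented.]

```python
class ArenaCache:
    """Toy model of an allocator that keeps up to `keep` empty
    arenas around instead of unmapping them immediately."""

    def __init__(self, keep=1):
        self.keep = keep          # hysteresis: number of spare arenas to retain
        self.free_arenas = []     # empty arenas held for reuse
        self.mmap_calls = 0
        self.munmap_calls = 0

    def alloc_arena(self):
        if self.free_arenas:
            return self.free_arenas.pop()   # reuse a spare arena, no syscall
        self.mmap_calls += 1                # would be mmap(256 KiB)
        return object()

    def free_arena(self, arena):
        if len(self.free_arenas) < self.keep:
            self.free_arenas.append(arena)  # keep it around for later
        else:
            self.munmap_calls += 1          # would be munmap()


def churn(cache, loops):
    # Each iteration allocates and releases one arena, mimicking the
    # unpickle benchmark repeatedly creating and dropping same-sized objects.
    for _ in range(loops):
        arena = cache.alloc_arena()
        cache.free_arena(arena)
    return cache.mmap_calls, cache.munmap_calls


print(churn(ArenaCache(keep=0), 100_000))  # no hysteresis: 100000 mmap/munmap pairs
print(churn(ArenaCache(keep=1), 100_000))  # one spare arena: 1 mmap, 0 munmap
```

With keep=0 every loop pays a map/unmap pair (and the page faults that come with it); keeping even one spare arena removes nearly all of them, which is the point of the proposed patch.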
From lukasz at langa.pl Wed Feb 27 08:22:51 2019
From: lukasz at langa.pl (Łukasz Langa)
Date: Wed, 27 Feb 2019 14:22:51 +0100
Subject: [Python-Dev] Announcing: signups are open for the 2019 Python Language Summit
Message-ID: 

The Python Language Summit is an event for the developers of Python implementations (CPython, PyPy, Jython, and so on) to share information, discuss our shared problems, and – hopefully – solve them. These issues might be related to the language itself, the standard library, the development process, status of Python 3.8 (or plans for 3.9), the documentation, packaging, the website, et cetera. The Summit focuses on discussion more than on presentations.

If you'd like to attend **and actively participate** in the discussions during the Language Summit, please fill in this form by March 21st 2019: https://goo.gl/forms/pexfOGDjpV0BWMer2

We will be evaluating all applications and will confirm your attendance by April 15th. Note: **your attendance is not confirmed** until you have heard back from us. You don't need to be registered for PyCon in order to attend the summit.

One of the goals of the Language Summit is to speed up the discussions and decision-making process. Communication over Discourse (or mailing lists!) is generally more time-consuming.

As part of efforts to make this event more open and less mysterious, we are not requiring invitations by core developers anymore. However, please understand that we will have to be selective, as space and time are limited. In particular, we are prioritizing active core contributors, as well as those who we believe will be able to improve the quality of the discussions at the event and bring a more diverse perspective to core Python developers.

As for other changes this year, A. Jesse Jiryu Davis will be covering the event and will post a detailed write-up on the official blog of the PSF shortly after the conference.

We hope to see you at the Summit!

- Mariatta and Łukasz

PS.
If you have any questions, the Users section of our Discourse instance (https://discuss.python.org/c/users) is the best place to ask. For private communication, write to mariatta at python.org and/or lukasz at python.org.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From songofacandy at gmail.com Wed Feb 27 08:29:48 2019
From: songofacandy at gmail.com (INADA Naoki)
Date: Wed, 27 Feb 2019 22:29:48 +0900
Subject: [Python-Dev] mmap & munmap loop (Was: Compact ordered set
In-Reply-To: 
References: 
Message-ID: 

On Wed, Feb 27, 2019 at 9:59 PM Victor Stinner wrote:

> Maybe pickle is inefficient in its memory management and causes a lot
> of memory fragmentation?

No, it is not related to pickle efficiency or memory fragmentation. This problem happens because pymalloc has no hysteresis between mapping and unmapping arenas. Any workload that creates some objects and releases them soon afterwards may be affected by this problem.

When there is no free pool, pymalloc uses mmap to create a new arena (256 KiB). pymalloc then allocates new pools (= pages) from the arena. That causes minor page faults: Linux allocates real memory to the page and RSS is increased.

Then, when all objects newly created in the pool are destroyed, all pools in the arena are free, and pymalloc calls munmap() soon.

unpickle can be affected by this problem more than pure Python code because:

* unpickle creates Python objects quickly, so the fault overhead is relatively large.
* Python code may create junk memory blocks (e.g. cached frame objects, freeslots, etc.) but the C pickle code doesn't create such junk. So newly allocated pools are freed very easily.

I think this issue can be avoided easily: when an arena is empty but the arena is the head of usable_arenas, don't call munmap for it.

I confirmed m2.py can't reproduce the issue with this patch.
diff --git a/Objects/obmalloc.c b/Objects/obmalloc.c
index 1c2a32050f..a19b3aca06 100644
--- a/Objects/obmalloc.c
+++ b/Objects/obmalloc.c
@@ -1672,7 +1672,7 @@ pymalloc_free(void *ctx, void *p)
      * nfreepools.
      * 4. Else there's nothing more to do.
      */
-    if (nf == ao->ntotalpools) {
+    if (nf == ao->ntotalpools && ao != usable_arenas) {
         /* Case 1. First unlink ao from usable_arenas.
          */
         assert(ao->prevarena == NULL ||

-- 
INADA Naoki

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lukasz at langa.pl Wed Feb 27 08:37:04 2019
From: lukasz at langa.pl (Łukasz Langa)
Date: Wed, 27 Feb 2019 14:37:04 +0100
Subject: [Python-Dev] [python-committers] Announcing: signups are open for the 2019 Python Language Summit
In-Reply-To: 
References: 
Message-ID: <6EA254C2-B893-48D8-9FE7-7BF73012E392@langa.pl>

> On 27 Feb 2019, at 14:22, Łukasz Langa wrote:
>
> The Python Language Summit is an event for the developers of Python
> implementations (CPython, PyPy, Jython, and so on) to share information,
> discuss our shared problems, and – hopefully – solve them.

Oh, you'd also like to know *when* and *where* it is? Fine.

- When: Wednesday, May 1st 2019
- Where: Huntington Convention Center in Cleveland, Ohio

Sorry for missing this in the original e-mail,

- Ł

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From paul at ganssle.io Wed Feb 27 10:11:54 2019 From: paul at ganssle.io (Paul Ganssle) Date: Wed, 27 Feb 2019 10:11:54 -0500 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io> Message-ID: <84325333-8889-24a5-f6e2-abb72611f58d@ganssle.io> On 2/26/19 7:03 PM, Chris Barker via Python-Dev wrote: > This thread petered out, seemingly with a consensus that we should > update the docs -- is anyone doing that? > I don't think anyone is, I've filed a BPO bug for it: https://bugs.python.org/issue3613 > > -- I am a physical scientist, I work with unitted quantities all the > time (both in code and in other contexts). It never dawned on me to > use this approach to convert to seconds or milliseconds, or ... > Granted, I still rely on python2 for a fair bit of my work, but still, > I had to scratch my head when it was proposed on this thread. > As another data point, I also have a background in the physical sciences, and I actually do find it quite intuitive. The first time I saw this idiom I took it to heart immediately and only stopped using it because many of the libraries I maintain still support Python 2. It seemed pretty obvious that I had a `timedelta` object that represents times, and dividing it by a base value would give me the number of times the "unit" timedelta fits into the "value" timedelta. Seeing the code `timedelta(days=7) / timedelta(days=1)`, I think most people could confidently say that that should return 7, there's really no ambiguity about it. > -- There are a number of physical unit libraries in Python, and as far > as I know, none of them let you do this to create a unitless value in > a particular unit. "pint" for example: > > https://pint.readthedocs.io/en/latest/ > > ... 
> > And you can reduce it to a dimensionless object: > > In [57]: unitless.to_reduced_units()? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? > ? ? ? ? ?? > Out[57]: 172800.0 > I think the analogy with pint's unit-handling behavior is not completely appropriate, because `timedelta` values are not in /specific/ units at all, they are just in abstract duration units. It makes sense to consider "seconds / day" as a specific dimensionless unit in the same sense that "percent" and "ppb" make sense as specific dimensionless units, and so that would be the natural behavior I would expect for unit-handling code. For timedelta, we don't have it as a value in specific units, so it's not clear what the "ratio unit" would be. What we're looking at with timedelta division is really more "how many s are there in this duration". > So no -- dividing a datetime by another datetime with the value you > want is not intuitive: not to a physical scientist, not to a user of > other physical quantities libraries -- is it intuitive to anyone other > than someone that was involved in python datetime development?? > Just to clarify, I am involved in Python datetime development now, but I have only been involved in Python OSS for the last 4-5 years. I remember finding it intuitive when I (likely working as a physicist at the time) first saw it used. > Discoverable: > ========== > I agree that it is not discoverable (which is unfortunate), but you could say the same thing of /all/ operators. There's no tab-completion that will tell you that `3.4 / 1` is a valid operation or that (3,) + (4,) will work, but we don't generally recommend adding methods for those things. 
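[Editor's sketch of the division idiom under discussion, showing the distinct __truediv__ and __floordiv__ semantics; the values are made up for illustration:]

```python
from datetime import timedelta

d = timedelta(days=7, microseconds=500)

# True division returns a float ratio; nothing is silently truncated.
print(d / timedelta(days=1))           # a float slightly above 7
print(d / timedelta(microseconds=1))   # total duration in microseconds

# Floor division truncates explicitly and returns an int.
print(d // timedelta(days=1))          # 7
print(d // timedelta(microseconds=1))  # 604800000500
```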
I do think the discoverability is hindered by the existence of the total_seconds method, because the fact that total_seconds exists makes you think that it is the correct way to get the number of seconds that a timedelta represents, and that you should be looking for other analogous methods as the "correct" way to do this, when in fact we have a simpler, less ambiguous (for example, it's not obvious whether the methods would truncate or not, whereas __truediv__ and __floordiv__ give the division operation pretty clear semantics) and more general way to do things.

I think it's too late to /remove/ `total_seconds()`, but I don't think we should be compounding the problem by bloating the API with a bunch of other methods, per my earlier arguments.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From steve at holdenweb.com Wed Feb 27 14:14:10 2019
From: steve at holdenweb.com (Steve Holden)
Date: Wed, 27 Feb 2019 19:14:10 +0000
Subject: [Python-Dev] datetime.timedelta total_microseconds
In-Reply-To: <84325333-8889-24a5-f6e2-abb72611f58d@ganssle.io>
References: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com>
 <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io>
 <84325333-8889-24a5-f6e2-abb72611f58d@ganssle.io>
Message-ID: 

We should also consider that before long datetimes will frequently be stored to nanosecond resolution (two fields where this is significant are finance and physics, and the latter would probably appreciate femtoseconds as well). So maybe an external library that layers on top of datetime might be preferable?
Kind regards Steve Holden On Wed, Feb 27, 2019 at 3:13 PM Paul Ganssle wrote: > > On 2/26/19 7:03 PM, Chris Barker via Python-Dev wrote: > > This thread petered out, seemingly with a consensus that we should update > the docs -- is anyone doing that? > > I don't think anyone is, I've filed a BPO bug for it: > https://bugs.python.org/issue3613 > > > -- I am a physical scientist, I work with unitted quantities all the time > (both in code and in other contexts). It never dawned on me to use this > approach to convert to seconds or milliseconds, or ... Granted, I still > rely on python2 for a fair bit of my work, but still, I had to scratch my > head when it was proposed on this thread. > > As another data point, I also have a background in the physical sciences, > and I actually do find it quite intuitive. The first time I saw this idiom > I took it to heart immediately and only stopped using it because many of > the libraries I maintain still support Python 2. > > It seemed pretty obvious that I had a `timedelta` object that represents > times, and dividing it by a base value would give me the number of times > the "unit" timedelta fits into the "value" timedelta. Seeing the code > `timedelta(days=7) / timedelta(days=1)`, I think most people could > confidently say that that should return 7, there's really no ambiguity > about it. > > > -- There are a number of physical unit libraries in Python, and as far as > I know, none of them let you do this to create a unitless value in a > particular unit. "pint" for example: > > https://pint.readthedocs.io/en/latest/ > > ... > > And you can reduce it to a dimensionless object: > > In [57]: unitless.to_reduced_units() > > Out[57]: 172800.0 > > I think the analogy with pint's unit-handling behavior is not completely > appropriate, because `timedelta` values are not in *specific* units at > all, they are just in abstract duration units. 
It makes sense to consider > "seconds / day" as a specific dimensionless unit in the same sense that > "percent" and "ppb" make sense as specific dimensionless units, and so that > would be the natural behavior I would expect for unit-handling code. > > For timedelta, we don't have it as a value in specific units, so it's not > clear what the "ratio unit" would be. What we're looking at with timedelta > division is really more "how many s are there in this duration". > > > So no -- dividing a timedelta by another timedelta with the value you want > is not intuitive: not to a physical scientist, not to a user of other > physical quantities libraries -- is it intuitive to anyone other than > someone that was involved in python datetime development?? > > > Just to clarify, I am involved in Python datetime development now, but I > have only been involved in Python OSS for the last 4-5 years. I remember > finding it intuitive when I (likely working as a physicist at the time) > first saw it used. > > Discoverable: > ========== > > I agree that it is not discoverable (which is unfortunate), but you could > say the same thing of *all* operators. There's no tab-completion that > will tell you that `3.4 / 1` is a valid operation or that (3,) + (4,) will > work, but we don't generally recommend adding methods for those things. > > I do think the discoverability is hindered by the existence of the > total_seconds method, because the fact that total_seconds exists makes you > think that it is the correct way to get the number of seconds that a > timedelta represents, and that you should be looking for other analogous > methods as the "correct" way to do this, when in fact we have a simpler, > less ambiguous (for example, it's not obvious whether the methods would > truncate or not, whereas __truediv__ and __floordiv__ give the division > operation pretty clear semantics)
> > I think it's too late to *remove* `total_seconds()`, but I don't think we > should be compounding the problem by bloating the API with a bunch of other > methods, per my earlier arguments. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/steve%40holdenweb.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Wed Feb 27 16:12:52 2019 From: greg at krypto.org (Gregory P. Smith) Date: Wed, 27 Feb 2019 13:12:52 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> Message-ID: On Tue, Feb 26, 2019 at 2:28 PM Victor Stinner wrote: > On Tue, Feb 26, 2019 at 22:24, Gregory P. Smith wrote > : > > A feature that I find missing from posix-y OSes that support #! lines is > an ability to restrict what can use a given interpreter. > > Fedora runs system tools (like "/usr/bin/semanage", a tool to manage > SELinux) with "python3 -Es": > > $ head /usr/sbin/semanage > #! /usr/bin/python3 -Es > > -E: ignore PYTHON* environment variables (such as PYTHONPATH) > -s: don't add user site directory to sys.path > > Is that what you mean? Not quite. I meant that the python interpreter would need to decide /usr/sbin/semanage is allowed to use it as an interpreter. -gps > > > Such a restriction could be implemented within the interpreter itself. > For example: Say that only this set of fully qualified path whitelisted .py > files are allowed to invoke it, with no interactive, stdin, or command line > "-c" use allowed. I'm not aware of anyone actually having done that.
It's > hard to see how to do that in a maintainable manner that people using many > distros wouldn't just naively work around by adding themselves to the > whitelist rather than providing their own interpreter for their own > software stack. It feels more doable without workarounds for something > like macOS or any other distro wholly controlled and maintained as a single > set of software rather than widely varying packages. > > Technically, Python initialization is highly customizable: see > _PyCoreConfig in Include/coreconfig.h. > > But we lack a public API for that :-) > https://www.python.org/dev/peps/pep-0432/ is a work-in-progress. > > With a proper public API, building your own interpreter would take a > few lines of C to give you fine control over what Python can do or not. > > Extract of Programs/_freeze_importlib.c (to give you an idea of what can be > done): > --- > _PyCoreConfig config = _PyCoreConfig_INIT; > config.user_site_directory = 0; > config.site_import = 0; > config.use_environment = 0; > config.program_name = L"./_freeze_importlib"; > /* Don't install importlib, since it could execute outdated bytecode. > */ > config._install_importlib = 0; > config._frozen = 1; > > _PyInitError err = _Py_InitializeFromConfig(&config); > --- > > As Petr wrote below, RHEL 8 has a private /usr/libexec/platform-python > which is the Python used to run system tools (written in Python). But > this Python isn't customized. I'm not sure that there is a strong need > to customize Python's default configuration for this interpreter. > > Note: Sorry to hijack again this thread with unrelated discussions :-( > > Victor > -- > Night gathers, and now my watch begins. It shall not end until my death. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Wed Feb 27 16:20:17 2019 From: greg at krypto.org (Gregory P.
Smith) Date: Wed, 27 Feb 2019 13:20:17 -0800 Subject: [Python-Dev] Another update for PEP 394 -- The "python" Command on Unix-Like Systems In-Reply-To: <32efee43-34cf-9850-09e0-01b258cc2d22@python.org> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> <32efee43-34cf-9850-09e0-01b258cc2d22@python.org> Message-ID: On Tue, Feb 26, 2019 at 2:31 PM Steve Dower wrote: > On 2/26/2019 1:20 PM, Gregory P. Smith wrote: > > For an OS distro provided interpreter, being able to restrict its use to > > only OS distro provided software would be ideal (so ideal that people > > who haven't learned the hard distro maintenance lessons may hate me for > it). > > > > Such a restriction could be implemented within the interpreter itself. > > For example: Say that only this set of fully qualified path whitelisted > > .py files are allowed to invoke it, with no interactive, stdin, or > > command line "-c" use allowed. I'm not aware of anyone actually having > > done that. It's hard to see how to do that in a /maintainable/ manner > > that people using many distros wouldn't just naively work around by > > adding themselves to the whitelist rather than providing their own > > interpreter for their own software stack. It feels more doable without > > workarounds for something like macOS or any other distro wholly > > controlled and maintained as a single set of software rather than > > widely varying packages. > > > > Solving that is way outside the scope of PEP 394. Just food for thought > > that I'd like to leave as an earworm for the future for distro minded > > folks. I expect some people to hate this idea.
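[Editorial note: the whitelist Gregory describes is not an existing CPython feature. As a rough sketch of the decision logic only -- the `ALLOWED_SCRIPTS` paths and the `check_entry_point` helper are hypothetical names, and a real implementation would have to live inside the interpreter itself -- it might look something like:]

```python
# Hypothetical whitelist of fully qualified script paths permitted to
# use this interpreter; the paths here are purely illustrative.
ALLOWED_SCRIPTS = {
    "/usr/sbin/semanage",  # example of a distro-provided tool
}

def check_entry_point(argv):
    """Return True if this invocation would be permitted.

    Interactive use (empty argv[0]), stdin scripts ("-") and -c
    strings are rejected outright; only whitelisted script paths pass.
    """
    if not argv or argv[0] in ("", "-", "-c"):
        return False
    return argv[0] in ALLOWED_SCRIPTS

print(check_entry_point(["/usr/sbin/semanage"]))  # True
print(check_entry_point(["-c"]))                  # False
```

As the thread notes, the hard part is not this check but keeping the whitelist maintainable across distros.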
> > I haven't caught up on this thread yet, but this sounds a lot like the > "Restricting the entry point" section of > https://www.python.org/dev/peps/pep-0551/ (which is still a draft, so if > anyone wants to help make it more like what they want, I'm happy to have > contributors). > > So I'm in favour of making this easy (since I'm already having to deal > with it being difficult ;) ), as it's extremely valuable for > security-conscious deployments as well as the distro package cases > mentioned by Gregory. > Similar. What I'm talking about has nothing to do with _security_ (pep-0551's focus) and only to do with installed interpreter maintainability. But an implementation of the concept of deciding what is allowed to use an entry point in what manner might be the same. :) As for 551, I'm not a fan of pretending you can aid security by restricting use of an interpreter; when an adversary has the ability to trigger an exec of a new process with args or input of its choosing, their opponent lost the game several moves ago. Defense-in-depth minded people may disagree and still desire the feature. -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed Feb 27 16:54:45 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 27 Feb 2019 13:54:45 -0800 Subject: [Python-Dev] Compact ordered set In-Reply-To: References: Message-ID: On Tue, Feb 26, 2019 at 3:43 PM Barry Warsaw wrote: > The behavior differences between dicts and sets are already surprising to > many users, so we should be careful not to make the situation worse. > It's a nice-to-have, but other than the fact that we all used to use a dict when we really wanted a set before set existed, I'm not sure the connection is there to a layperson.
A mapping and a set type really don't have much to do with each other other than implementation -- anyone that isn't familiar with python C code, or hash tables in general, wouldn't likely have any expectation of them having anything to do with each other. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Wed Feb 27 17:15:53 2019 From: barry at python.org (Barry Warsaw) Date: Wed, 27 Feb 2019 14:15:53 -0800 Subject: [Python-Dev] Compact ordered set In-Reply-To: References: Message-ID: <38772BFF-2B07-467B-90D9-5806B6CE7D67@python.org> On Feb 27, 2019, at 13:54, Chris Barker via Python-Dev wrote: > > A mapping and a set type really don't have much to do with each other other than implementation -- anyone that isn't familiar with python C code, or hash tables in general, wouldn't likely have any expectation of them having anything to do with each other. I'm just relaying a data point. Some Python folks I've worked with do make the connection between dicts and sets, and have questions about the ordering guarantees of them (and how they relate). -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From tahafut at gmail.com Wed Feb 27 17:23:30 2019 From: tahafut at gmail.com (Henry Chen) Date: Wed, 27 Feb 2019 14:23:30 -0800 Subject: [Python-Dev] Compact ordered set In-Reply-To: <38772BFF-2B07-467B-90D9-5806B6CE7D67@python.org> References: <38772BFF-2B07-467B-90D9-5806B6CE7D67@python.org> Message-ID: If sets were ordered, then what ought pop() return - first, last, or still an arbitrary element?
I lean toward arbitrary because in existing code, set.pop often implies that the particular element is immaterial. On Wed, Feb 27, 2019 at 2:18 PM Barry Warsaw wrote: > On Feb 27, 2019, at 13:54, Chris Barker via Python-Dev < > python-dev at python.org> wrote: > > > > A mapping and a set type really don't have much to do with each other > other than implementation -- anyone that isn't familiar with python C code, > or hash tables in general, wouldn't likely have any expectation of them > having anything to do with each other. > > I'm just relaying a data point. Some Python folks I've worked with do > make the connection between dicts and sets, and have questions about the > ordering guarantees of them (and how they relate). > > -Barry > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/tahafut%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed Feb 27 17:28:50 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 27 Feb 2019 14:28:50 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io> Message-ID: On Tue, Feb 26, 2019 at 7:22 PM Terry Reedy wrote: > > timedelta.total_seconds() > > To me, total_x implies that there is a summation of multiple timedeltas, > and there is not. So not intuitive to me. THAT was a discussion for when it was added -- I can't say it's my favorite name either. But at least a quick glance at the docstring will address that. Hmm -- that one could use a little enhancement, too: In [3]: td.total_seconds? Docstring: Total seconds in the duration. But at least it COULD be made clear in a docstring :-) > (Neither is the current obscure
It is also not obvious is answer is rounded to nearest second > or not. > also could be made clear in a docstring -- the full docs say: """ timedelta.total_seconds() Return the total number of seconds contained in the duration. Equivalent to td / timedelta(seconds=1). Note that for very large time intervals (greater than 270 years on most platforms) this method will lose microsecond accuracy. """ That last point indicates that it is not rounded -- and a quick check will note that it returns a float -- but those docs could be improved to make it clear that a float is returned. (i.e. that is WHY it loses microsecond accuracy). But anyway, you are nevert going to know all the intricacies of a method by its name -- you need to know about the rounding and all if you are trying to decide if you are going to us the method, but if you read it, you probably have a pretty good idea what it does. Anyway -- "I don't really like the name much" is really the answer to a different question (I don't like it either, but it's what's there) > So at most, we could have: > > > > .total_microseconds() > > .total_seconds() > > .total_minutes() > > .total_hours() > > .total_days() > > .total_weeks() > > I am also not enthusiastic about multiple methods doing essentially the > same thing. I might prefer one method, .convert? with an argument > specifying the conversion unit, 'microseconds', 'seconds', ... . yup -- that's been proposed, and is still on the table as far as I'm concerned. Though I'm not sure I like "convert", but would rather, say "in_unit" or "to_unit". But any of these would address the intuitive and discoverability issue -- very few folks would have any trouble guessing what any of: a_datetime.convert("hours") a_datetime.to_unit("hours") a_datetime.total_hours() mean -- at least generally, even if they con't know if they are getting a float or an integer result. 
And as for discoverability, any of these as methods on a timedelta object would probably be checked out if you were looking for such a thing. > I think this is in python-ideas territory. > I'd rather not :-) -- have you SEEN the length of the thread on how long the line limit should be in PEP 8? Frankly, I think the arguments have been made, and the options are on the table -- if this were a PEP, it would be time for a pronouncement -- or at least one of: - not going to happen or - maybe happen but we need to iron out the details I've lost track of where we are with the governance model, so is there a way to get a clear idea for whether it's time to drop this? BTW, I don't think this is worth a PEP, but if I get a go-ahead, and it needs to be written up, I'd be willing to do so. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed Feb 27 17:52:05 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 27 Feb 2019 14:52:05 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: <84325333-8889-24a5-f6e2-abb72611f58d@ganssle.io> References: <7475b4be-800c-2477-e793-df1ce6cef114@btinternet.com> <4ca1c28b-d906-beea-875f-224ebb77cd0a@ganssle.io> <84325333-8889-24a5-f6e2-abb72611f58d@ganssle.io> Message-ID: On Wed, Feb 27, 2019 at 7:15 AM Paul Ganssle wrote: > As another data point, I also have a background in the physical sciences, > and I actually do find it quite intuitive. The first time I saw this idiom > I took it to heart immediately and only stopped using it because many of > the libraries I maintain still support Python 2. > we now have two contrasting data points -- it's settled!
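[Editorial note: for readers following the archive, the idiom under discussion uses only stock timedelta behavior -- true division returns a float, floor division an int, and divmod makes the truncation semantics explicit:]

```python
from datetime import timedelta

week = timedelta(days=7)
day = timedelta(days=1)

print(week / day)   # 7.0 -- __truediv__ returns a float
print(week // day)  # 7   -- __floordiv__ returns an int

# divmod spells out the truncation: quotient plus remainder timedelta.
d = timedelta(days=2, hours=9)
full_days, remainder = divmod(d, day)
print(full_days, remainder)  # 2 9:00:00
```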
But I'd like to know -- I too will probably remember and use this idiom now that I know it (if not take it to heart ...) -- but would you have figured it out on your own? > Seeing the code `timedelta(days=7) / timedelta(days=1)`, I think most > people could confidently say that that should return 7, there's really no > ambiguity about it. > in that example absolutely -- but this is the one I'm worried about: an_arbitrary_timedelta_returned_from_a_lib_you_didn't_write / datetime.timedelta(days=7) not so obvious. > I think the analogy with pint's unit-handling behavior is not completely > appropriate, because `timedelta` values are not in *specific* units at > all, they are just in abstract duration units. > agreed -- but it is a use-case that is worth looking into -- folks will either have no experience with units, or with a lib like that one (there are more, and I think they all handle it in a similar way) And if you have a pint object representing time that you do not know the units of, the way to get it into the units you want is to call the to() method: a_time_in_arbitrary_units.to('second') That is an obvious and familiar API.
cases with builtin (and most of the standard library) the math operators return the same type (or conceptually the same type: int / int => float -- all numbers) > I do think the discoverability is hindered by the existence of the > total_seconds method, because the fact that total_seconds exists makes you > think that it is the correct way to get the number of seconds that a > timedelta represents, and that you should be looking for other analogous > methods as the "correct" way to do this, > well, yes. I, for one, was very happy to have found it when I did :-) -- of course that was py2, when it WAS the only easy way to do it. > when in fact we have a simpler, less ambiguous (for example, it's not > obvious whether the methods would truncate or not, whereas __truediv__ and > __floordiv__ give the division > operation pretty clear semantics) > I don't think so, I was actually a touch surprised that: a_timedelta / timedelta(microseconds=1) yielded a float that may have lost precision, when the timedelta type stores microsecond precision, and python ints can handle any size int. It seems __floordiv__ does preserve precision. Which is kinda ironic -- after all, it's called "true division" and "floor division", not "float division" and "integer division". My point being -- if you care about the details, you're going to have to dig deeper no matter what method is used. I think it's too late to *remove* `total_seconds()`, but I don't think we should be compounding the problem by bloating the API with a bunch of other methods, per my earlier arguments. > I'm not a big fan of bloated APIs -- but this is not a lot of methods, with a clear naming pattern, on an API without a huge number of methods (seven non-dunder attributes by my count) already. From a practicality-beats-purity standpoint -- a few new methods (or a single conversion method) is less pure and "correct", but also a lot more usable.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed Feb 27 17:56:06 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 27 Feb 2019 14:56:06 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: Message-ID: Did we ever hear back from the OP as to whether they were using py2 or 3? If they were unable to find timedelta division in py3 -- that's a pretty good case that we need something else. The OP: """ On Wed, Feb 13, 2019 at 9:10 PM Richard Belleville via Python-Dev < python-dev at python.org> wrote: > In a recent code review, the following snippet was called out as > reinventing the > wheel: > > _MICROSECONDS_PER_SECOND = 1000000 > > > def _timedelta_to_microseconds(delta): > return int(delta.total_seconds() * _MICROSECONDS_PER_SECOND) > > > The reviewer thought that there must already exist a standard library > function > that fulfills this functionality. After we had both satisfied ourselves > that we > hadn't simply missed something in the documentation, we decided that we had > better raise the issue with a wider audience. > """ -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbellevi at google.com Wed Feb 27 18:04:45 2019 From: rbellevi at google.com (Richard Belleville) Date: Wed, 27 Feb 2019 15:04:45 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: Message-ID: Sorry for the slow response. Timedelta division is quite a nice solution to the problem. 
However, since we're maintaining a python version agnostic library at least until 2020, we need a solution that works in python 2 as well. For the moment, we've left the code as in the original snippet. Richard Belleville On Wed, Feb 27, 2019 at 2:56 PM Chris Barker wrote: > Did we ever hear back from the OP as to whether they were using py2 or 3? > > If they were unable to find timedelta division in py3 -- that's a pretty > good case that we need something else. > > > The OP: > """ > On Wed, Feb 13, 2019 at 9:10 PM Richard Belleville via Python-Dev < > python-dev at python.org> wrote: > >> In a recent code review, the following snippet was called out as >> reinventing the >> wheel: >> >> _MICROSECONDS_PER_SECOND = 1000000 >> >> >> def _timedelta_to_microseconds(delta): >> return int(delta.total_seconds() * _MICROSECONDS_PER_SECOND) >> >> >> The reviewer thought that there must already exist a standard library >> function >> that fulfills this functionality. After we had both satisfied ourselves >> that we >> hadn't simply missed something in the documentation, we decided that we >> had >> better raise the issue with a wider audience. >> > """ > > -CHB > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed Feb 27 18:18:20 2019 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 27 Feb 2019 15:18:20 -0800 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: Message-ID: On Wed, Feb 27, 2019 at 3:04 PM Richard Belleville wrote: > Timedelta division is quite a nice solution to the problem. However, since > we're maintaining a python version agnostic library at least until 2020, we > need a solution that works in python 2 as well. 
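[Editorial note: the precision concern with the original snippet's float round-trip can be avoided entirely, even on Python 2, by doing integer arithmetic on the timedelta fields. This is only a sketch of the arithmetic, not an official API:]

```python
from datetime import timedelta

def timedelta_to_microseconds(delta):
    # Exact integer arithmetic: days -> seconds -> microseconds.
    # Works on both Python 2 and 3; no float is ever involved.
    return delta.microseconds + (delta.seconds + delta.days * 86400) * 10**6

# For a large enough delta, the float round-trip through
# total_seconds() cannot represent the odd microsecond:
d = timedelta(days=146000, microseconds=1)  # roughly 400 years
exact = timedelta_to_microseconds(d)
approx = int(d.total_seconds() * 10**6)
print(exact != approx)  # True: the float path lost the microsecond
```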
> So you were limited to a py2 solution. But did you poke around in py3 before posting? (you should have, python-dev is really about active development, i.e. python 3 -- but that's not the point here) > For the moment, we've left the code as in the original snippet. > If you care about microsecond precision for large timedeltas, you may want to improve that. I *think* this is the "correct" way to do it: def timedelta_to_microseconds(td): return td.microseconds + td.seconds * 1000 + td.days * 86400000 (hardly tested) -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.badger at gmail.com Wed Feb 27 20:12:08 2019 From: a.badger at gmail.com (Toshio Kuratomi) Date: Wed, 27 Feb 2019 17:12:08 -0800 Subject: [Python-Dev] Compile-time resolution of packages [Was: Another update for PEP 394...] In-Reply-To: <20190226220418.b36jw33qthdv5i5l@python.ca> References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> <20190226220418.b36jw33qthdv5i5l@python.ca> Message-ID: On Tue, Feb 26, 2019 at 2:07 PM Neil Schemenauer wrote: > On 2019-02-26, Gregory P. Smith wrote: > > On Tue, Feb 26, 2019 at 9:55 AM Barry Warsaw wrote: > > For an OS distro provided interpreter, being able to restrict its use to > > only OS distro provided software would be ideal (so ideal that people who > > haven't learned the hard distro maintenance lessons may hate me for it). > > This idea has some definite problems. I think enforcing it via convention is about as much as would be good to do. Anything more and you make it hard for people who really need to use the vendor-provided interpreter to do so.
Why might someone need to use the distro provided interpreter? * Vendor provides some python modules in their system packages which are not installable from pip (possibly even a proprietary extension module, so not even buildable from source or copyable from the system location) which the end user needs to use to do something to their system. * End user writes a python module which is a plugin to a system tool which has to be installed into the system python from which that system tool runs. The user then wants to write a script which uses the system tool with the plugin in order to do something to their system outside of the system tool (perhaps the system tool is GUI-driven and the user wants to automate a part of it via the python module). They need their script to use the system python so that they are using the same code as the system tool itself would use. There are probably other scenarios where the costs of locking the user out of the system python outweigh the benefits, but these are the ones that I've run across lately. -Toshio -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Wed Feb 27 20:52:03 2019 From: larry at hastings.org (Larry Hastings) Date: Wed, 27 Feb 2019 17:52:03 -0800 Subject: [Python-Dev] Proposed dates for Python 3.4.10 and Python 3.5.7 In-Reply-To: References: Message-ID: My thanks to Miro and (especially!) Victor for quickly putting together those lovely PRs. I've now merged everything outstanding for 3.4 and 3.5 except this: https://github.com/python/cpython/pull/10994 It's a backport of LibreSSL 2.7.0 support for 3.5. This is something I believe Christian Heimes wanted. As it stands, the issue needs a reviewer; I've contacted Christian but received no reply. I'm happy to merge the PR as long as some security-aware core dev approves it. FWIW, there doesn't appear to be a backport of this patch for 3.4.
I don't know if 3.4 should get this backport or not, and there's no discussion of 3.4 on the bpo issue: https://bugs.python.org/issue33127 Anyway, I'm hoping either to merge or reject this PR before Saturday, so there's no huge rush. Still, I'd appreciate it if someone could at least tag themselves as a reviewer in the next day or so. Putting 3.4 to bed, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From songofacandy at gmail.com Thu Feb 28 03:51:03 2019 From: songofacandy at gmail.com (INADA Naoki) Date: Thu, 28 Feb 2019 17:51:03 +0900 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: Message-ID: > > I *think* this is the "correct" way to do it: > > def timedelta_to_microseconds(td): > return td.microseconds + td.seconds * 1000 + td.days * 86400000 > > (hardly tested) > > -CHB > 1000? milli? micro? -- INADA Naoki -------------- next part -------------- An HTML attachment was scrubbed... URL: From songofacandy at gmail.com Thu Feb 28 04:38:21 2019 From: songofacandy at gmail.com (INADA Naoki) Date: Thu, 28 Feb 2019 18:38:21 +0900 Subject: [Python-Dev] Compact ordered set In-Reply-To: References: <38772BFF-2B07-467B-90D9-5806B6CE7D67@python.org> Message-ID: On Thu, Feb 28, 2019 at 7:23 AM Henry Chen wrote: > If sets were ordered, then what ought pop() return - first, last, or > still an arbitrary element? I lean toward arbitrary because in > existing code, set.pop often implies that the particular element is > immaterial. > > dict.popitem() pops the last inserted pair. So set.pop() must remove the last element. https://docs.python.org/3/library/stdtypes.html#dict.popitem -- INADA Naoki -------------- next part -------------- An HTML attachment was scrubbed...
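[Editorial note: the contrast INADA draws can be checked directly on CPython 3.7+ -- dict.popitem() is documented LIFO on insertion order, while set.pop() promises only *some* element:]

```python
# dict.popitem() removes the most recently inserted pair (guaranteed
# LIFO since Python 3.7); set.pop() only promises some element.
d = dict.fromkeys(["red", "green", "blue"])
print(d.popitem())  # ('blue', None) -- last key in, first out

s = {"red", "green", "blue"}
popped = s.pop()    # which element comes out is unspecified
print(popped in {"red", "green", "blue"}, len(s))  # True 2
```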
URL: From steve at pearwood.info Thu Feb 28 06:43:04 2019 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 28 Feb 2019 22:43:04 +1100 Subject: [Python-Dev] Compact ordered set In-Reply-To: <38772BFF-2B07-467B-90D9-5806B6CE7D67@python.org> References: <38772BFF-2B07-467B-90D9-5806B6CE7D67@python.org> Message-ID: <20190228114303.GD4465@ando.pearwood.info> On Wed, Feb 27, 2019 at 02:15:53PM -0800, Barry Warsaw wrote: > I'm just relaying a data point. Some Python folks I've worked with do > make the connection between dicts and sets, and have questions about > the ordering guarantees of them (and how they relate). Sets and dicts are not related by inheritance (except that they're both subclasses of ``object``, but so is everything else). They don't share an implementation. They don't provide the same API. They don't do the same thing, except in the most general sense that they are both collections. What connection are these folks making? If they're old-timers, or read some Python history, they might remember back in the ancient days when sets were implemented on top of dicts, or even before then, when we didn't have a set type at all and used dicts in an ad-hoc way. But apart from that long-obsolete historical connection, I think the two types are unrelated and the behaviour of one has little or no implication for the behaviour of the other. "Cars have windows that you can open. Submarines should have the same. They're both vehicles, right?" -- Steven From solipsis at pitrou.net Thu Feb 28 06:56:55 2019 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 28 Feb 2019 12:56:55 +0100 Subject: [Python-Dev] Compact ordered set References: <38772BFF-2B07-467B-90D9-5806B6CE7D67@python.org> <20190228114303.GD4465@ando.pearwood.info> Message-ID: <20190228125655.71733861@fsol> On Thu, 28 Feb 2019 22:43:04 +1100 Steven D'Aprano wrote: > On Wed, Feb 27, 2019 at 02:15:53PM -0800, Barry Warsaw wrote: > > > I'm just relaying a data point.
Some Python folks I've worked with do > > make the connection between dicts and sets, and have questions about > > the ordering guarantees of them (and how they relate). > > Sets and dicts are not related by inheritance (except that they're both > subclasses of ``object``, but so is everything else). They don't share > an implementation. They don't provide the same API. They don't do the > same thing, except in the most general sense that they are both > collections. > > What connection are these folks making? Some of them may be coming from C++, where the respective characteristics of set and map (or unordered_set and unordered_multimap) are closely related. I'm sure other languages show similar analogies. On a more abstract level, set and dict are both content-addressed collections parameterized on hash and equality functions. For algorithmically-minded people it makes sense to see a close connection between them. Regards Antoine. From python-dev at masklinn.net Thu Feb 28 07:08:53 2019 From: python-dev at masklinn.net (Xavier Morel) Date: Thu, 28 Feb 2019 13:08:53 +0100 Subject: [Python-Dev] Compact ordered set In-Reply-To: <20190228125655.71733861@fsol> References: <38772BFF-2B07-467B-90D9-5806B6CE7D67@python.org> <20190228114303.GD4465@ando.pearwood.info> <20190228125655.71733861@fsol> Message-ID: <1F024176-F4B3-4F59-8841-B2971E2CBD0B@masklinn.net> > On 2019-02-28, at 12:56 , Antoine Pitrou wrote: > > On Thu, 28 Feb 2019 22:43:04 +1100 > Steven D'Aprano wrote: >> On Wed, Feb 27, 2019 at 02:15:53PM -0800, Barry Warsaw wrote: >> >>> I'm just relaying a data point. Some Python folks I've worked with do >>> make the connection between dicts and sets, and have questions about >>> the ordering guarantees of them (and how they relate). >> >> Sets and dicts are not related by inheritance (except that they're both >> subclasses of ``object``, but so is everything else). They don't share >> an implementation. They don't provide the same API.
They don't do the >> same thing, except in the most general sense that they are both >> collections. >> >> What connection are these folks making? > > Some of them may be coming from C++, where the respective > characteristics of set and map (or unordered_set and > unordered_multimap) are closely related. I'm sure other languages > show similar analogies. Indeed e.g. Rust's hashset is a trivial wrapper around a hashmap (with no value): https://doc.rust-lang.org/src/std/collections/hash/set.rs.html#121-123, its btreeset has the exact same relationship to btreemap: https://doc.rust-lang.org/src/alloc/collections/btree/set.rs.html#72-74 > On a more abstract level, set and dict are both content-addressed > collections parameterized on hash and equality functions. For > algorithmically-minded people it makes sense to see a close connection > between them. I can't speak for anyone else but before seeing this thread I actually assumed (without any evidence or having checked obviously) that the set builtin was built on top of dict or that they were built on the same base and that the changes to dict's implementation in 3.6 (ordering, space, ...) had affected sets in the same way. That seems intuitively straightforward, even more so with dict.keys() being a set. From rosuav at gmail.com Thu Feb 28 07:18:31 2019 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 28 Feb 2019 23:18:31 +1100 Subject: [Python-Dev] Compact ordered set In-Reply-To: <20190228125655.71733861@fsol> References: <38772BFF-2B07-467B-90D9-5806B6CE7D67@python.org> <20190228114303.GD4465@ando.pearwood.info> <20190228125655.71733861@fsol> Message-ID: On Thu, Feb 28, 2019 at 10:58 PM Antoine Pitrou wrote: > Some of them may be coming from C++, where the respective > characteristics of set and map (or unordered_set and > unordered_multimap) are closely related. I'm sure other languages > show similar analogies.
> > On a more abstract level, set and dict are both content-addressed > collections parametered on hash and equality functions. For > algorithmically-minded people it makes sense to see a close connection > between them. Looking from the opposite direction, sets and dicts can be used to solve a lot of the same problems. Want to detect cycles? Stuff things you see into a set. If the thing is in the set, you've already seen it. What if you want to track WHERE the cycle came from? Then stuff things you see into a dict, mapping them to some kind of trace information. Again, if the thing's in the collection, you've already seen it, but now, since it's a dict, you can pull up some more info. And collections.Counter can be used kinda like a multiset, but it's definitely a dictionary. I think the similarities are more pragmatic than pure, but that doesn't mean they aren't real. ChrisA From songofacandy at gmail.com Thu Feb 28 07:59:22 2019 From: songofacandy at gmail.com (INADA Naoki) Date: Thu, 28 Feb 2019 21:59:22 +0900 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> References: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> Message-ID: No stats for last week? On Sat, Feb 9, 2019 at 3:11 AM Python tracker wrote: > > ACTIVITY SUMMARY (2019-02-01 - 2019-02-08) > Python tracker at https://bugs.python.org/ > > To view or respond to any of the issues listed below, click on the issue. > Do NOT respond to this message. > > Issues counts and deltas: > open 6998 (+13) > closed 40696 (+47) > total 47694 (+60) > > Open issues with patches: 2783 > > -- INADA Naoki -------------- next part -------------- An HTML attachment was scrubbed... 
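Chris's seen-set versus seen-dict distinction earlier in this digest can be made concrete with a small sketch (hypothetical helpers, written for illustration only, not taken from any patch under discussion):

```python
def find_cycle(start, next_node):
    """Return the first revisited node, or None. A set answers only
    the membership question: "have I seen this before?"."""
    seen = set()
    node = start
    while node is not None:
        if node in seen:
            return node
        seen.add(node)
        node = next_node(node)
    return None

def find_cycle_with_trace(start, next_node):
    """Same walk, but a dict also records *when* each node was first seen."""
    seen = {}
    node, step = start, 0
    while node is not None:
        if node in seen:
            return node, seen[node], step   # node, first seen at, revisited at
        seen[node] = step
        node = next_node(node)
        step += 1
    return None

succ = {1: 2, 2: 3, 3: 2}                  # 1 -> 2 -> 3 -> 2 ... (cycle at 2)
assert find_cycle(1, succ.get) == 2
assert find_cycle_with_trace(1, succ.get) == (2, 1, 3)
```

The only difference between the two walks is the payload: the set answers "seen before?", while the dict additionally remembers where each node was first encountered.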
URL: From jcgoble3 at gmail.com Thu Feb 28 08:07:53 2019 From: jcgoble3 at gmail.com (Jonathan Goble) Date: Thu, 28 Feb 2019 08:07:53 -0500 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: References: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> Message-ID: On Thu, Feb 28, 2019, 8:02 AM INADA Naoki wrote: > No stats for last week? > Been missing for two weeks actually. I did not receive a summary on either the 15th or 22nd. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kadler at us.ibm.com Thu Feb 28 00:48:02 2019 From: kadler at us.ibm.com (Kevin Adler) Date: Thu, 28 Feb 2019 05:48:02 +0000 Subject: [Python-Dev] Can I get a review for PR 10437? Message-ID: An HTML attachment was scrubbed... URL: From brett at python.org Thu Feb 28 12:33:20 2019 From: brett at python.org (Brett Cannon) Date: Thu, 28 Feb 2019 09:33:20 -0800 Subject: [Python-Dev] Can I get a review for PR 10437? In-Reply-To: References: Message-ID: While more reviewers never hurt, Victor has left at least one comment on the PR. On Thu, Feb 28, 2019 at 7:59 AM Kevin Adler wrote: > This PR has been open for nearly 3 months without any comment. Can I > please get someone to review it? > > PR link: https://github.com/python/cpython/pull/10437 > Bug report: https://bugs.python.org/issue35198 > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Thu Feb 28 12:56:55 2019 From: greg at krypto.org (Gregory P. Smith) Date: Thu, 28 Feb 2019 09:56:55 -0800 Subject: [Python-Dev] Compile-time resolution of packages [Was: Another update for PEP 394...] 
In-Reply-To: References: <9e69c6dc-07cd-2265-b4b8-b9f7e9f81b00@gmail.com> <37ba6931-faa0-0c9c-b9e5-067eb123e313@gmail.com> <58F34E40-11B8-4F36-AF7E-C9022D4F48DF@python.org> <20190226220418.b36jw33qthdv5i5l@python.ca> Message-ID: On Wed, Feb 27, 2019 at 5:12 PM Toshio Kuratomi wrote: > > On Tue, Feb 26, 2019 at 2:07 PM Neil Schemenauer > wrote: > >> On 2019-02-26, Gregory P. Smith wrote: >> > On Tue, Feb 26, 2019 at 9:55 AM Barry Warsaw wrote: >> > For an OS distro provided interpreter, being able to restrict its use to >> > only OS distro provided software would be ideal (so ideal that people >> who >> > haven't learned the hard distro maintenance lessons may hate me for it). >> >> This idea has some definite problems. I think enforcing it via > convention is about as much as would be good to do. Anything more and you > make it hard for people who really need to use the vendor provided > interpreter from being able to do so. > > Why might someone need to use the distro provided interpreter? > > * Vendor provides some python modules in their system packages which are > not installable from pip (possibly even a proprietary extension module, so > not even buildable from source or copyable from the system location) which > the end user needs to use to do something to their system. > * End user writes a python module which is a plugin to a system tool which > has to be installed into the system python to from which that system tool > runs. The user then wants to write a script which uses the system tool > with the plugin in order to do something to their system outside of the > system tool (perhaps the system tool is GUI-driven and the user wants to > automate a part of it via the python module). They need their script to > use the system python so that they are using the same code as the system > tool itself would use. 
> > There are probably other scenarios where the benefits of locking the user > out of the system python outweigh the benefits but these are the ones that > I've run across lately. > > Agreed. The convention approach, which as someone said RHEL 8 has apparently taken with an os distro reserved interpreter (yay), is likely good enough for most situations. I'd go a *little* further than that and suggest such an os distro reserved interpreter attempt to prevent installation of packages (ie: remove pip/ensurepip/distutils) via any other means than the OS package manager (rpms, debs). Obviously that can't actually prevent someone from figuring out how to run getpip or manually installing trees of packages within its sys.path, but it acts as a deterrent suggesting that this interpreter is not intended for arbitrary software installation. -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.belopolsky at gmail.com Thu Feb 28 13:55:37 2019 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 28 Feb 2019 13:55:37 -0500 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <1cfc2984-216c-fdc7-7ea2-692662d93971@ganssle.io> Message-ID: > while "some_var / some_other_var" could be doing anything. "At an elementary level the division of two natural numbers is, among other possible interpretations, the process of calculating the number of times one number is contained within another one." -- The process of figuring out how many seconds fit into a given interval is called division by a second. I am afraid people who get confused by timedelta / timedelta division know too much about Python where / can indeed mean anything including e.g. joining filesystem paths.
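Concretely, the division idiom defended here (supported by timedelta since Python 3.2) also answers the total_microseconds question from earlier in this digest, where the quoted snippet multiplied seconds by 1000 (milliseconds) rather than 1_000_000:

```python
from datetime import timedelta

interval = timedelta(hours=1, minutes=30)

# "How many minutes fit into interval?" is division by one minute:
assert interval / timedelta(minutes=1) == 90.0      # true division -> float
assert interval // timedelta(seconds=1) == 5400     # floor division -> int

# Exact microsecond count, equivalent to a corrected explicit formula:
td = timedelta(days=1, seconds=2, microseconds=3)
assert td // timedelta(microseconds=1) == 86_402_000_003
assert td.microseconds + (td.seconds + td.days * 86_400) * 1_000_000 == 86_402_000_003
```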
URL: From wes.turner at gmail.com Thu Feb 28 14:52:33 2019 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 28 Feb 2019 14:52:33 -0500 Subject: [Python-Dev] datetime.timedelta total_microseconds In-Reply-To: References: <1cfc2984-216c-fdc7-7ea2-692662d93971@ganssle.io> Message-ID: You could specify the return value type annotations in the docstring of a convert()/to_unit() method, or for each to_unit() method. Are they all floats? div and floordiv are not going away. Without reading the docs, I, too, wouldn't have guessed that division by the desired unit is the correct way. In terms of performance, a convert()/to_unit() method must do either a hash lookup or multiple comparisons to present a more useful exception than that returned by initializing timedelta with something like nanoseconds=. FWIW, astropy.time.TimeDelta supports sub-nanosecond precision and has a .to() method for changing units/quantities. http://docs.astropy.org/en/stable/time/#time-deltas > The TimeDelta class is derived from the Time class and shares many of its properties. One difference is that the time scale has to be one for which one day is exactly 86400 seconds. Hence, the scale cannot be UTC. > > The available time formats are: > > Format Class > sec TimeDeltaSec > jd TimeDeltaJD > datetime TimeDeltaDatetime http://docs.astropy.org/en/stable/api/astropy.time.TimeDelta.html#astropy.time.TimeDelta http://docs.astropy.org/en/stable/_modules/astropy/time/core.html#TimeDelta.to On Thursday, February 28, 2019, Alexander Belopolsky < alexander.belopolsky at gmail.com> wrote: > > while "some_var / some_other_var" could be doing anything. > > "At an elementary level the division of two natural numbers is, among > other possible interpretations, the process of calculating the number of > times one number is contained within another one." > > -- > > The process of figuring out how many seconds fit into a given interval is > called division by a second.
> > I am afraid people who get confused by timedelta / timedelta division know > too much about Python where / can indeed mean anything including e.g. > joining filesystem paths. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Thu Feb 28 16:30:14 2019 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 01 Mar 2019 10:30:14 +1300 Subject: [Python-Dev] Compact ordered set In-Reply-To: <20190228125655.71733861@fsol> References: <38772BFF-2B07-467B-90D9-5806B6CE7D67@python.org> <20190228114303.GD4465@ando.pearwood.info> <20190228125655.71733861@fsol> Message-ID: <5C7852E6.80106@canterbury.ac.nz> Antoine Pitrou wrote: > On a more abstract level, set and dict are both content-addressed > collections parametered on hash and equality functions. Indeed. It's been said that a set is like "half a dict", and this is why sets were implemented using dicts in the old days. It's kind of an obvious thing to do. You can argue about how far the analogy should be taken, but you can't blame people for noticing the similarity. -- Greg From tjreedy at udel.edu Thu Feb 28 17:09:01 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 28 Feb 2019 17:09:01 -0500 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: References: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> Message-ID: On 2/28/2019 8:07 AM, Jonathan Goble wrote: > On Thu, Feb 28, 2019, 8:02 AM INADA Naoki > wrote: > > No stats for last week? > > > Been missing for two weeks actually. I did not receive a summary on > either the 15th or 22nd. Ditto for me. I get pydev via gmane. Anyone missing the same issues get pydev directly by email? 
-- Terry Jan Reedy From jcgoble3 at gmail.com Thu Feb 28 17:18:16 2019 From: jcgoble3 at gmail.com (Jonathan Goble) Date: Thu, 28 Feb 2019 17:18:16 -0500 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: References: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> Message-ID: On Thu, Feb 28, 2019, 5:11 PM Terry Reedy wrote: > On 2/28/2019 8:07 AM, Jonathan Goble wrote: > > On Thu, Feb 28, 2019, 8:02 AM INADA Naoki > > wrote: > > > > No stats for last week? > > > > > > Been missing for two weeks actually. I did not receive a summary on > > either the 15th or 22nd. > > Ditto for me. I get pydev via gmane. Anyone missing the same issues > get pydev directly by email? > I get direct emails only and stated my observation above. :-) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From v+python at g.nevcal.com Thu Feb 28 17:38:32 2019 From: v+python at g.nevcal.com (Glenn Linderman) Date: Thu, 28 Feb 2019 14:38:32 -0800 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: References: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> Message-ID: On 2/28/2019 2:18 PM, Jonathan Goble wrote: > On Thu, Feb 28, 2019, 5:11 PM Terry Reedy > wrote: > > On 2/28/2019 8:07 AM, Jonathan Goble wrote: > > On Thu, Feb 28, 2019, 8:02 AM INADA Naoki > > > >> > wrote: > > > > No stats for last week? > > > > > > Been missing for two weeks actually. I did not receive a summary on > > either the 15th or 22nd. > > Ditto for me. I get pydev via gmane. Anyone missing the same issues > get pydev directly by email? > > > I get direct emails only and stated my observation above. :-) I confirm and concur with Jonathan's observation, by searching my email archive for this group list. So it is not just something about his particular email address... except we both do use Google Mail services.
I do check my Google SPAM box for the account, and it wasn't there either, by recollection (I don't archive the SPAM, but delete it ASAP). Can someone not using a Google email address also confirm? Glenn -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu Feb 28 17:52:53 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 28 Feb 2019 17:52:53 -0500 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: References: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> Message-ID: On 2/28/2019 5:38 PM, Glenn Linderman wrote: > On 2/28/2019 2:18 PM, Jonathan Goble wrote: >> On Thu, Feb 28, 2019, 5:11 PM Terry Reedy > > wrote: >> >> On 2/28/2019 8:07 AM, Jonathan Goble wrote: >> > On Thu, Feb 28, 2019, 8:02 AM INADA Naoki >> >> > >> >> wrote: >> > >> > No stats for last week? >> > >> > >> > Been missing for two weeks actually. I did not receive a summary on >> > either the 15th or 22nd. >> >> Ditto for me. I get pydev via gmane. Anyone missing the same issues >> get pydev directly by email? >> >> >> I get direct emails only and stated my observation above. :-) > > I confirm and concur with Jonathan's observation, by searching my email > archive for this group list. So it is not just something about his > particular email address... except we both do use Google Mail services. > I do check my Google SPAM box for the account, and it wasn't there > either, by recollection (I don't archive the SPAM, but delete it ASAP). > > Can someone not using a Google email address also confirm? I effectively did when I said I access via gmane -- as a newsgroup via NNTP. I am sure that the mail server sends directly to news.gmane.org rather than through google.
-- Terry Jan Reedy From v+python at g.nevcal.com Thu Feb 28 18:54:46 2019 From: v+python at g.nevcal.com (Glenn Linderman) Date: Thu, 28 Feb 2019 15:54:46 -0800 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: References: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> Message-ID: <5d12e862-5398-5701-2503-b745c0875976@g.nevcal.com> On 2/28/2019 2:52 PM, Terry Reedy wrote: > On 2/28/2019 5:38 PM, Glenn Linderman wrote: >> On 2/28/2019 2:18 PM, Jonathan Goble wrote: >>> On Thu, Feb 28, 2019, 5:11 PM Terry Reedy >> > wrote: >>> >>> On 2/28/2019 8:07 AM, Jonathan Goble wrote: >>> > On Thu, Feb 28, 2019, 8:02 AM INADA Naoki >>> >>> > >> >>> wrote: >>> > >>> > No stats for last week? >>> > >>> > >>> > Been missing for two weeks actually. I did not receive a >>> summary on >>> > either the 15th or 22nd. >>> >>> Ditto for me. I get pydev via gmane. Anyone missing the same >>> issues >>> get pydev directly by email? >>> >>> >>> I get direct emails only and stated my observation above. :-) >> >> I confirm and concur with Jonathan's observation, by searching my >> email archive for this group list. So it is not just something about >> his particular email address... except we both do use Google Mail >> services. I do check my Google SPAM box for the account, and it >> wasn't there either, by recollection (I don't archive the SPAM, but >> delete it ASAP). >> >> Can someone not using a Google email address also confirm? > > I effectively did when I said I access via gmane -- as a newsgroup via > NNTP. I am sure that the mail server sends directly to news.gmane.org > rather than through google. That's a whole different protocol.
I don't know all the configurations for the server that sends the Summary messages, or how it is handled, but you confirm it didn't get to gmane via NNTP, and Jonathan and I confirm it didn't get to Google email servers, but neither one really confirms that it didn't get to other email servers. Google email is definitely different than other email servers. There seems to be enough evidence that something went wrong somewhere, though, and whoever maintains that process should start investigating, but it would still be nice to get confirmation from a non-Google email recipient whether they did or did not get the Summary messages. I wonder if there is a way to manually send them, and if the missing two weeks of activity can be reported... once the sending problem is understood and resolved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcgoble3 at gmail.com Thu Feb 28 19:07:14 2019 From: jcgoble3 at gmail.com (Jonathan Goble) Date: Thu, 28 Feb 2019 19:07:14 -0500 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <5d12e862-5398-5701-2503-b745c0875976@g.nevcal.com> References: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> <5d12e862-5398-5701-2503-b745c0875976@g.nevcal.com> Message-ID: On Thu, Feb 28, 2019 at 6:57 PM Glenn Linderman wrote: > On 2/28/2019 2:52 PM, Terry Reedy wrote: > > On 2/28/2019 5:38 PM, Glenn Linderman wrote: > > On 2/28/2019 2:18 PM, Jonathan Goble wrote: > > On Thu, Feb 28, 2019, 5:11 PM Terry Reedy > wrote: > > On 2/28/2019 8:07 AM, Jonathan Goble wrote: > > On Thu, Feb 28, 2019, 8:02 AM INADA Naoki > > > > > >> > wrote: > > > > No stats for last week? > > > > > > Been missing for two weeks actually. I did not receive a summary on > > either the 15th or 22nd. > > Ditto for me. I get pydev via gmane. Anyone missing the same issues > get pydev directly by email? > > > I get direct emails only and stated my observation above.
:-) > > > I confirm and concur with Jonathan's observation, by searching my email > archive for this group list. So it is not just something about his > particular email address... except we both do use Google Mail services. I > do check my Google SPAM box for the account, and it wasn't there either, by > recollection (I don't archive the SPAM, but delete it ASAP). > > Can someone not using a Google email address also confirm? > > > I effectively did when I said I access via gmane -- as a newsgroup via > NNTP. I am sure that the mail server sends directly to news.gmane.org > rather than through google. > > > That's a whole different protocol. I don't know all the configurations for > the server that sends the Summary messages, or how it is handled, but you > confirm it didn't get to gmane via NNTP, and Jonathan and I confirm it > didn't get to Google email servers, but neither one really confirms that it > didn't get to other email servers. Google email is definitely different > than other email servers. > > There seems to be enough evidence that something went wrong somewhere, > though, and whoever maintains that process should start investigating, but > it would still be nice to get confirmation from a non-Google email > recipient whether they did or did not get the Summary messages. > > I wonder if there is a way to manually send them, and if the missing two > weeks of activity can be reported... once the sending problem is understood > and resolved. > It's also possible that the fault is not in sending (we have evidence here that two entirely different protocols have not received it, and they are also not in the archives [1]), but in the generation of the report. Could there have been a subtle change to the bpo tracker itself, or something else along those lines, that is causing the script to fail silently before it ever reaches the point of attempting to send? Or perhaps a disk ran out of space somewhere?
[1] https://mail.python.org/pipermail/python-dev/2019-February/date.html I suspect so. See the "Open issues deltas (weekly)" graph in this page. It ends at 2/8. https://bugs.python.org/issue?@template=stats -- INADA Naoki From nas-python at arctrix.com Thu Feb 28 20:08:55 2019 From: nas-python at arctrix.com (Neil Schemenauer) Date: Thu, 28 Feb 2019 19:08:55 -0600 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: <20190226125903.GA4712@xps> References: <8AFF29B7-D3DB-4DE1-BAF7-CAE6F4017378@langa.pl> <20190226125903.GA4712@xps> Message-ID: <20190301010855.vilysc3oy7i3zqs2@python.ca> On 2019-02-26, Stephane Wirtel wrote: > I also filed an issue [2] for brotlipy (used by httpbin and requests). > The problem is with PyInterpreterState. I tried compiling psycopg2 today and it has a similar problem: psycopg/psycopgmodule.c: In function 'psyco_is_main_interp': psycopg/psycopgmodule.c:689:18: error: dereferencing pointer to incomplete type 'PyInterpreterState'
{aka 'struct _is'} while (interp->next) That code is inside a function: /* Return nonzero if the current one is the main interpreter */ static int psyco_is_main_interp(void) ... I believe the correct fix is to use PEP 3121 per-interpreter module state. I created a new issue: https://github.com/psycopg/psycopg2/issues/854 I think the fix is not trivial as the psycopgmodule.c source code has to change a fair bit to use the PEP 3121 APIs. Regards, Neil From python at mrabarnett.plus.com Thu Feb 28 20:09:05 2019 From: python at mrabarnett.plus.com (MRAB) Date: Fri, 1 Mar 2019 01:09:05 +0000 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <5d12e862-5398-5701-2503-b745c0875976@g.nevcal.com> References: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> <5d12e862-5398-5701-2503-b745c0875976@g.nevcal.com> Message-ID: <6b050f81-f727-f26e-32ed-120be0beb330@mrabarnett.plus.com> On 2019-02-28 23:54, Glenn Linderman wrote: > On 2/28/2019 2:52 PM, Terry Reedy wrote: >> On 2/28/2019 5:38 PM, Glenn Linderman wrote: >>> On 2/28/2019 2:18 PM, Jonathan Goble wrote: >>>> On Thu, Feb 28, 2019, 5:11 PM Terry Reedy >>> > wrote: >>>> >>>> On 2/28/2019 8:07 AM, Jonathan Goble wrote: >>>> > On Thu, Feb 28, 2019, 8:02 AM INADA Naoki >>>> >>>> > >> >>>> wrote: >>>> > >>>> > No stats for last week? >>>> > >>>> > >>>> > Been missing for two weeks actually. I did not receive a >>>> summary on >>>> > either the 15th or 22nd. >>>> >>>> Ditto for me. I get pydev via gmane. Anyone missing the same >>>> issues >>>> get pydev directly by email? >>>> >>>> >>>> I get direct emails only and stated my observation above. :-) >>> >>> I confirm and concur with Jonathan's observation, by searching my >>> email archive for this group list. So it is not just something about >>> his particular email address... except we both do use Google Mail >>> services.
I do check my Google SPAM box for the account, and it >>> wasn't there either, by recollection (I don't archive the SPAM, but >>> delete it ASAP). >>> >>> Can someone not using a Google email address also confirm? >> >> I effectively did when I said I access via gmane -- as a newsgroup via >> NNTP. I am sure that the mail server sends directly to news.gmane.org >> rather than through google. > > That's a whole different protocol. I don't know all the configurations > for the server that sends the Summary messages, or how it is handled, > but you confirm it didn't get to gmane via NNTP, and Jonathan and I > confirm it didn't get to Google email servers, but neither one really > confirms that it didn't get to other email servers. Google email is > definitely different than other email servers. > > There seems to be enough evidence that something went wrong somewhere, > though, and whoever maintains that process should start investigating, > but it would still be nice to get confirmation from a non-Google email > recipient whether they did or did not get the Summary messages. > > I wonder if there is a way to manually send them, and if the missing two > weeks of activity can be reported... once the sending problem is > understood and resolved. > I subscribed at https://mail.python.org/mailman/listinfo/python-dev and I don't use Google for email. I didn't receive them either. From vstinner at redhat.com Thu Feb 28 20:35:20 2019 From: vstinner at redhat.com (Victor Stinner) Date: Fri, 1 Mar 2019 02:35:20 +0100 Subject: [Python-Dev] [RELEASE] Python 3.8.0a1 is now available for testing In-Reply-To: <20190301010855.vilysc3oy7i3zqs2@python.ca> References: <8AFF29B7-D3DB-4DE1-BAF7-CAE6F4017378@langa.pl> <20190226125903.GA4712@xps> <20190301010855.vilysc3oy7i3zqs2@python.ca> Message-ID: Hi, Le ven. 1 mars 2019 à 02:12, Neil Schemenauer a écrit : > I believe the correct fix is to use PEP 3121 per-interpreter module > state.
I created a new issue: > > https://github.com/psycopg/psycopg2/issues/854 > > I think the fix is not trival as the psycopgmodule.c source code has > change a fair bit to use the PEP 3121 APIs. The problem is this function: /* Return nonzero if the current one is the main interpreter */ static int psyco_is_main_interp(void) { static PyInterpreterState *main_interp = NULL; /* Cached reference */ PyInterpreterState *interp; if (main_interp) { return (main_interp == PyThreadState_Get()->interp); } /* No cached value: cache the proper value and try again. */ interp = PyInterpreterState_Head(); while (interp->next) interp = interp->next; main_interp = interp; assert (main_interp); return psyco_is_main_interp(); } https://github.com/psycopg/psycopg2/blob/599432552aae4941c2b282e9251330f1357b2a45/psycopg/utils.c#L407 I'm not sure that this code is safe. In CPython, iterating on interp->next is protected by a lock: HEAD_LOCK(); ... HEAD_UNLOCK(); We already expose the main interpreter since Python 3.7: PyInterpreterState_Main(). psycopg can be modified to use directly this function rather than playing black magic with CPython internals. IMHO it's a good thing that the compilation failed: that such bug is found :-) Victor -- Night gathers, and now my watch begins. It shall not end until my death. 
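The replacement Victor suggests can be sketched as follows (an untested fragment, not psycopg2's actual fix; it assumes CPython >= 3.7, where PyInterpreterState_Main() is public, and on Python 3.9+ one could use PyThreadState_GetInterpreter() to avoid touching the struct field directly):

```c
#include <Python.h>

/* Return nonzero if the current interpreter is the main interpreter.
 * Replaces the hand-rolled interp->next walk (and its unlocked list
 * traversal) with the public API available since Python 3.7. */
static int
psyco_is_main_interp(void)
{
    return PyThreadState_Get()->interp == PyInterpreterState_Main();
}
```

Besides being shorter, this avoids both the unsynchronized traversal of the interpreter list and the cached-pointer trick in the original.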
From tjreedy at udel.edu Thu Feb 28 23:59:22 2019 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 28 Feb 2019 23:59:22 -0500 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <5d12e862-5398-5701-2503-b745c0875976@g.nevcal.com> References: <20190208180753.1.782F2F2B9AA3561B@roundup.psfhosted.org> <5d12e862-5398-5701-2503-b745c0875976@g.nevcal.com> Message-ID: On 2/28/2019 6:54 PM, Glenn Linderman wrote: > There seems to be enough evidence that something went wrong somewhere, > though, and whoever maintains that process should start investigating, > but it would still be nice to get confirmation from a non-Google email > recipient whether they did or did not get the Summary messages. > > I wonder if there is a way to manually send them, and if the missing two > weeks of activity can be reported... once the sending problem is > understood and resolved. I posted a note to the core-workflow list, but I don't know if anyone with power or knowledge still reads it. To get a listing, go to the tracker search page, put 2019-02-09 to 2019-03-01 in the date box, and change status to don't care. At the moment, this returns 204 issues. -- Terry Jan Reedy