From webhook-mailer at python.org Wed Feb 1 00:48:32 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 01 Feb 2023 05:48:32 -0000 Subject: [Python-checkins] gh-98657: [docs] `array.typecodes` is a module-level attribute (GH-98729) Message-ID: https://github.com/python/cpython/commit/85cc5d0e5dd6cfa44d3e2782cca33d1f7f58231e commit: 85cc5d0e5dd6cfa44d3e2782cca33d1f7f58231e branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-01-31T21:48:19-08:00 summary: gh-98657: [docs] `array.typecodes` is a module-level attribute (GH-98729) * gh-98657: [docs] `array.typecodes` is a module-level attribute * Update array.rst (cherry picked from commit c144e57b316e97a58ed5ad813c847fa8d2341dd7) Co-authored-by: Nikita Sobolev files: M Doc/library/array.rst diff --git a/Doc/library/array.rst b/Doc/library/array.rst index 975670cc81a2..95f1eaf401b0 100644 --- a/Doc/library/array.rst +++ b/Doc/library/array.rst @@ -62,6 +62,14 @@ The actual representation of values is determined by the machine architecture (strictly speaking, by the C implementation). The actual size can be accessed through the :attr:`itemsize` attribute. +The module defines the following item: + + +.. data:: typecodes + + A string with all available type codes. + + The module defines the following type: @@ -79,9 +87,6 @@ The module defines the following type: .. audit-event:: array.__new__ typecode,initializer array.array -.. data:: typecodes - - A string with all available type codes. Array objects support the ordinary sequence operations of indexing, slicing, concatenation, and multiplication. When using slice assignment, the assigned From webhook-mailer at python.org Wed Feb 1 00:48:57 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 01 Feb 2023 05:48:57 -0000 Subject: [Python-checkins] gh-98657: [docs] `array.typecodes` is a module-level attribute (GH-98729) Message-ID: https://github.com/python/cpython/commit/b8bb139e22c488b66efc31992d946217dda27032 commit: b8bb139e22c488b66efc31992d946217dda27032 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-01-31T21:48:51-08:00 summary: gh-98657: [docs] `array.typecodes` is a module-level attribute (GH-98729) * gh-98657: [docs] `array.typecodes` is a module-level attribute * Update array.rst (cherry picked from commit c144e57b316e97a58ed5ad813c847fa8d2341dd7) Co-authored-by: Nikita Sobolev files: M Doc/library/array.rst diff --git a/Doc/library/array.rst b/Doc/library/array.rst index 975670cc81a2..95f1eaf401b0 100644 --- a/Doc/library/array.rst +++ b/Doc/library/array.rst @@ -62,6 +62,14 @@ The actual representation of values is determined by the machine architecture (strictly speaking, by the C implementation). The actual size can be accessed through the :attr:`itemsize` attribute. +The module defines the following item: + + +.. data:: typecodes + + A string with all available type codes. + + The module defines the following type: @@ -79,9 +87,6 @@ The module defines the following type: .. audit-event:: array.__new__ typecode,initializer array.array -.. data:: typecodes - - A string with all available type codes. Array objects support the ordinary sequence operations of indexing, slicing, concatenation, and multiplication. 
When using slice assignment, the assigned From webhook-mailer at python.org Wed Feb 1 05:01:36 2023 From: webhook-mailer at python.org (abalkin) Date: Wed, 01 Feb 2023 10:01:36 -0000 Subject: [Python-checkins] datetime.rst: fix combine() signature (#101490) Message-ID: https://github.com/python/cpython/commit/75227fba1dd1683289d90ada7abba237bff55d14 commit: 75227fba1dd1683289d90ada7abba237bff55d14 branch: main author: John Belmonte committer: abalkin date: 2023-02-01T14:01:28+04:00 summary: datetime.rst: fix combine() signature (#101490) The default `tzinfo` param of the `combine()` signature pseudocode was erroneously `self.tzinfo`. `self` has no meaning in the context of a classmethod, and the datetime class itself has no `tzinfo` attribute. The correct default pseudocode is `time.tzinfo`, reflecting that the default is the `tzinfo` attribute of the `time` parameter. files: M Doc/library/datetime.rst diff --git a/Doc/library/datetime.rst b/Doc/library/datetime.rst index ebb5f319efda..2f1ab7c3dd4b 100644 --- a/Doc/library/datetime.rst +++ b/Doc/library/datetime.rst @@ -975,7 +975,7 @@ Other constructors, all class methods: microsecond of the result are all 0, and :attr:`.tzinfo` is ``None``. -.. classmethod:: datetime.combine(date, time, tzinfo=self.tzinfo) +.. classmethod:: datetime.combine(date, time, tzinfo=time.tzinfo) Return a new :class:`.datetime` object whose date components are equal to the given :class:`date` object's, and whose time components From webhook-mailer at python.org Wed Feb 1 06:04:07 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Wed, 01 Feb 2023 11:04:07 -0000 Subject: [Python-checkins] gh-101317: Add `ssl_shutdown_timeout` parameter for `asyncio.StreamWriter.start_tls` (#101335) Message-ID: https://github.com/python/cpython/commit/cc407b9de645ab7c137df8ea2409a005369169a5 commit: cc407b9de645ab7c137df8ea2409a005369169a5 branch: main author: beavailable committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-01T16:33:59+05:30 summary: gh-101317: Add `ssl_shutdown_timeout` parameter for `asyncio.StreamWriter.start_tls` (#101335) Co-authored-by: Kumar Aditya <59607654+kumaraditya303 at users.noreply.github.com> files: A Misc/NEWS.d/next/Library/2023-01-26-01-25-56.gh-issue-101317.vWaS1x.rst M Doc/library/asyncio-stream.rst M Lib/asyncio/streams.py diff --git a/Doc/library/asyncio-stream.rst b/Doc/library/asyncio-stream.rst index c1ae8abb9abc..533bdec8e039 100644 --- a/Doc/library/asyncio-stream.rst +++ b/Doc/library/asyncio-stream.rst @@ -335,7 +335,7 @@ StreamWriter returns immediately. .. coroutinemethod:: start_tls(sslcontext, \*, server_hostname=None, \ - ssl_handshake_timeout=None) + ssl_handshake_timeout=None, ssl_shutdown_timeout=None) Upgrade an existing stream-based connection to TLS. @@ -350,8 +350,16 @@ StreamWriter handshake to complete before aborting the connection. ``60.0`` seconds if ``None`` (default). + * *ssl_shutdown_timeout* is the time in seconds to wait for the SSL shutdown + to complete before aborting the connection. ``30.0`` seconds if ``None`` + (default). + .. versionadded:: 3.11 + .. versionchanged:: 3.12 + Added the *ssl_shutdown_timeout* parameter. + + .. 
method:: is_closing() Return ``True`` if the stream is closed or in the process of diff --git a/Lib/asyncio/streams.py b/Lib/asyncio/streams.py index 0f9098b41956..7d13e961bd2d 100644 --- a/Lib/asyncio/streams.py +++ b/Lib/asyncio/streams.py @@ -378,7 +378,8 @@ async def drain(self): async def start_tls(self, sslcontext, *, server_hostname=None, - ssl_handshake_timeout=None): + ssl_handshake_timeout=None, + ssl_shutdown_timeout=None): """Upgrade an existing stream-based connection to TLS.""" server_side = self._protocol._client_connected_cb is not None protocol = self._protocol @@ -386,7 +387,8 @@ async def start_tls(self, sslcontext, *, new_transport = await self._loop.start_tls( # type: ignore self._transport, protocol, sslcontext, server_side=server_side, server_hostname=server_hostname, - ssl_handshake_timeout=ssl_handshake_timeout) + ssl_handshake_timeout=ssl_handshake_timeout, + ssl_shutdown_timeout=ssl_shutdown_timeout) self._transport = new_transport protocol._replace_writer(self) diff --git a/Misc/NEWS.d/next/Library/2023-01-26-01-25-56.gh-issue-101317.vWaS1x.rst b/Misc/NEWS.d/next/Library/2023-01-26-01-25-56.gh-issue-101317.vWaS1x.rst new file mode 100644 index 000000000000..f1ce0e0a5276 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-01-26-01-25-56.gh-issue-101317.vWaS1x.rst @@ -0,0 +1,2 @@ +Add *ssl_shutdown_timeout* parameter for :meth:`asyncio.StreamWriter.start_tls`. + From webhook-mailer at python.org Wed Feb 1 06:41:36 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Wed, 01 Feb 2023 11:41:36 -0000 Subject: [Python-checkins] gh-101277: Isolate itertools, add group and _grouper types to module state (#101302) Message-ID: https://github.com/python/cpython/commit/2b3e02a705907d0db2ce5266f06ad88a6b6160db commit: 2b3e02a705907d0db2ce5266f06ad88a6b6160db branch: main author: Erlend E. 
Aasland committer: erlend-aasland date: 2023-02-01T12:41:30+01:00 summary: gh-101277: Isolate itertools, add group and _grouper types to module state (#101302) Co-authored-by: Kumar Aditya <59607654+kumaraditya303 at users.noreply.github.com> files: M Lib/test/test_itertools.py M Modules/clinic/itertoolsmodule.c.h M Modules/itertoolsmodule.c diff --git a/Lib/test/test_itertools.py b/Lib/test/test_itertools.py index b447b6cbab9c..7014bc97100c 100644 --- a/Lib/test/test_itertools.py +++ b/Lib/test/test_itertools.py @@ -1694,6 +1694,38 @@ def test_zip_longest_result_gc(self): gc.collect() self.assertTrue(gc.is_tracked(next(it))) + @support.cpython_only + def test_immutable_types(self): + from itertools import _grouper, _tee, _tee_dataobject + dataset = ( + accumulate, + batched, + chain, + combinations, + combinations_with_replacement, + compress, + count, + cycle, + dropwhile, + filterfalse, + groupby, + _grouper, + islice, + pairwise, + permutations, + product, + repeat, + starmap, + takewhile, + _tee, + _tee_dataobject, + zip_longest, + ) + for tp in dataset: + with self.subTest(tp=tp): + with self.assertRaisesRegex(TypeError, "immutable"): + tp.foobar = 1 + class TestExamples(unittest.TestCase): diff --git a/Modules/clinic/itertoolsmodule.c.h b/Modules/clinic/itertoolsmodule.c.h index 70299aceb7a0..c492c33daea5 100644 --- a/Modules/clinic/itertoolsmodule.c.h +++ b/Modules/clinic/itertoolsmodule.c.h @@ -195,7 +195,7 @@ static PyObject * itertools__grouper(PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *return_value = NULL; - PyTypeObject *base_tp = &_grouper_type; + PyTypeObject *base_tp = clinic_state()->_grouper_type; PyObject *parent; PyObject *tgtkey; @@ -206,8 +206,8 @@ itertools__grouper(PyTypeObject *type, PyObject *args, PyObject *kwargs) if (!_PyArg_CheckPositional("_grouper", PyTuple_GET_SIZE(args), 2, 2)) { goto exit; } - if (!PyObject_TypeCheck(PyTuple_GET_ITEM(args, 0), &groupby_type)) { - _PyArg_BadArgument("_grouper", "argument 1", (&groupby_type)->tp_name, PyTuple_GET_ITEM(args, 0)); + if (!PyObject_TypeCheck(PyTuple_GET_ITEM(args, 0), clinic_state_by_cls()->groupby_type)) { + _PyArg_BadArgument("_grouper", "argument 1", (clinic_state_by_cls()->groupby_type)->tp_name, PyTuple_GET_ITEM(args, 0)); goto exit; } parent = PyTuple_GET_ITEM(args, 0); @@ -913,4 +913,4 @@ itertools_count(PyTypeObject *type, PyObject *args, PyObject *kwargs) exit: return return_value; } -/*[clinic end generated code: output=47c8c8ccec8740d7 input=a9049054013a1b77]*/ +/*[clinic end generated code: output=c3069caac417e165 input=a9049054013a1b77]*/ diff --git a/Modules/itertoolsmodule.c b/Modules/itertoolsmodule.c index c1f1e7320db7..a2ee48228b58 100644 --- a/Modules/itertoolsmodule.c +++ b/Modules/itertoolsmodule.c @@ -2,6 +2,7 @@ #include "Python.h" #include "pycore_call.h" // _PyObject_CallNoArgs() #include "pycore_long.h" // _PyLong_GetZero() +#include "pycore_moduleobject.h" // _PyModule_GetState() #include "pycore_object.h" // _PyObject_GC_TRACK() #include "pycore_tuple.h" // _PyTuple_ITEMS() #include // offsetof() @@ -10,10 +11,42 @@ by Raymond D. 
Hettinger */ +typedef struct { + PyTypeObject *groupby_type; + PyTypeObject *_grouper_type; +} itertools_state; + +static inline itertools_state * +get_module_state(PyObject *mod) +{ + void *state = _PyModule_GetState(mod); + assert(state != NULL); + return (itertools_state *)state; +} + +static inline itertools_state * +get_module_state_by_cls(PyTypeObject *cls) +{ + void *state = PyType_GetModuleState(cls); + assert(state != NULL); + return (itertools_state *)state; +} + +static struct PyModuleDef itertoolsmodule; + +static inline itertools_state * +find_state_by_type(PyTypeObject *tp) +{ + PyObject *mod = PyType_GetModuleByDef(tp, &itertoolsmodule); + assert(mod != NULL); + return get_module_state(mod); +} +#define clinic_state() (find_state_by_type(type)) + /*[clinic input] module itertools -class itertools.groupby "groupbyobject *" "&groupby_type" -class itertools._grouper "_grouperobject *" "&_grouper_type" +class itertools.groupby "groupbyobject *" "clinic_state()->groupby_type" +class itertools._grouper "_grouperobject *" "clinic_state()->_grouper_type" class itertools.teedataobject "teedataobject *" "&teedataobject_type" class itertools._tee "teeobject *" "&tee_type" class itertools.batched "batchedobject *" "&batched_type" @@ -31,10 +64,8 @@ class itertools.filterfalse "filterfalseobject *" "&filterfalse_type" class itertools.count "countobject *" "&count_type" class itertools.pairwise "pairwiseobject *" "&pairwise_type" [clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=1168b274011ce21b]*/ +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=424108522584b55b]*/ -static PyTypeObject groupby_type; -static PyTypeObject _grouper_type; static PyTypeObject teedataobject_type; static PyTypeObject tee_type; static PyTypeObject batched_type; @@ -51,7 +82,10 @@ static PyTypeObject filterfalse_type; static PyTypeObject count_type; static PyTypeObject pairwise_type; +#define clinic_state_by_cls() (get_module_state_by_cls(base_tp)) #include "clinic/itertoolsmodule.c.h" +#undef clinic_state_by_cls +#undef clinic_state /* batched object ************************************************************/ @@ -372,6 +406,7 @@ typedef struct { PyObject *currkey; PyObject *currvalue; const void *currgrouper; /* borrowed reference */ + itertools_state *state; } groupbyobject; static PyObject *_grouper_create(groupbyobject *, PyObject *); @@ -408,24 +443,28 @@ itertools_groupby_impl(PyTypeObject *type, PyObject *it, PyObject *keyfunc) Py_DECREF(gbo); return NULL; } + gbo->state = find_state_by_type(type); return (PyObject *)gbo; } static void groupby_dealloc(groupbyobject *gbo) { + PyTypeObject *tp = Py_TYPE(gbo); PyObject_GC_UnTrack(gbo); Py_XDECREF(gbo->it); Py_XDECREF(gbo->keyfunc); Py_XDECREF(gbo->tgtkey); Py_XDECREF(gbo->currkey); Py_XDECREF(gbo->currvalue); - Py_TYPE(gbo)->tp_free(gbo); + tp->tp_free(gbo); + Py_DECREF(tp); } static int groupby_traverse(groupbyobject *gbo, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(gbo)); Py_VISIT(gbo->it); Py_VISIT(gbo->keyfunc); Py_VISIT(gbo->tgtkey); @@ -546,50 +585,26 @@ static PyMethodDef groupby_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject groupby_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.groupby", /* tp_name */ - sizeof(groupbyobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)groupby_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* 
tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_groupby__doc__, /* tp_doc */ - (traverseproc)groupby_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)groupby_next, /* tp_iternext */ - groupby_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_groupby, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot groupby_slots[] = { + {Py_tp_dealloc, groupby_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_groupby__doc__}, + {Py_tp_traverse, groupby_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, groupby_next}, + {Py_tp_methods, groupby_methods}, + {Py_tp_new, itertools_groupby}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, }; +static PyType_Spec groupby_spec = { + .name = "itertools.groupby", + .basicsize= sizeof(groupbyobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = groupby_slots, +}; /* _grouper object (internal) ************************************************/ @@ -603,7 +618,7 @@ typedef struct { @classmethod itertools._grouper.__new__ - parent: object(subclass_of='&groupby_type') + parent: object(subclass_of='clinic_state_by_cls()->groupby_type') tgtkey: object / [clinic start generated code]*/ @@ -611,7 +626,7 @@ itertools._grouper.__new__ static PyObject * itertools__grouper_impl(PyTypeObject *type, PyObject *parent, PyObject *tgtkey) -/*[clinic end generated code: output=462efb1cdebb5914 input=dc180d7771fc8c59]*/ +/*[clinic end generated code: output=462efb1cdebb5914 input=afe05eb477118f12]*/ { return _grouper_create((groupbyobject*) parent, tgtkey); } @@ -619,9 +634,8 @@ itertools__grouper_impl(PyTypeObject *type, PyObject *parent, static PyObject * _grouper_create(groupbyobject *parent, PyObject *tgtkey) { - _grouperobject *igo; - - igo = PyObject_GC_New(_grouperobject, &_grouper_type); + itertools_state *state = parent->state; + _grouperobject *igo = PyObject_GC_New(_grouperobject, state->_grouper_type); if (igo == NULL) return NULL; igo->parent = Py_NewRef(parent); @@ -635,15 +649,18 @@ _grouper_create(groupbyobject *parent, PyObject *tgtkey) static void _grouper_dealloc(_grouperobject *igo) { + PyTypeObject *tp = Py_TYPE(igo); PyObject_GC_UnTrack(igo); Py_DECREF(igo->parent); Py_DECREF(igo->tgtkey); PyObject_GC_Del(igo); + Py_DECREF(tp); } static int _grouper_traverse(_grouperobject *igo, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(igo)); Py_VISIT(igo->parent); Py_VISIT(igo->tgtkey); return 0; @@ -691,48 +708,24 @@ static PyMethodDef _grouper_methods[] = { {NULL, NULL} /* sentinel */ }; +static PyType_Slot _grouper_slots[] = { + {Py_tp_dealloc, _grouper_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_traverse, _grouper_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, _grouper_next}, + {Py_tp_methods, _grouper_methods}, + {Py_tp_new, itertools__grouper}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; -static PyTypeObject _grouper_type = { - PyVarObject_HEAD_INIT(NULL, 
0) - "itertools._grouper", /* tp_name */ - sizeof(_grouperobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)_grouper_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */ - 0, /* tp_doc */ - (traverseproc)_grouper_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)_grouper_next, /* tp_iternext */ - _grouper_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools__grouper, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Spec _grouper_spec = { + .name = "itertools._grouper", + .basicsize = sizeof(_grouperobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = _grouper_slots, }; @@ -4979,8 +4972,47 @@ combinations_with_replacement(p, r)\n\ "); static int -itertoolsmodule_exec(PyObject *m) +itertoolsmodule_traverse(PyObject *mod, visitproc visit, void *arg) +{ + itertools_state *state = get_module_state(mod); + Py_VISIT(state->groupby_type); + Py_VISIT(state->_grouper_type); + return 0; +} + +static int +itertoolsmodule_clear(PyObject *mod) +{ + itertools_state *state = get_module_state(mod); + Py_CLEAR(state->groupby_type); + Py_CLEAR(state->_grouper_type); + return 0; +} + +static void +itertoolsmodule_free(void *mod) +{ + (void)itertoolsmodule_clear((PyObject *)mod); +} + +#define ADD_TYPE(module, type, spec) \ +do { \ + type = (PyTypeObject *)PyType_FromModuleAndSpec(module, spec, NULL); \ + if (type == NULL) { \ + return -1; \ + } \ + if (PyModule_AddType(module, type) < 0) { \ + return -1; \ + } \ +} while (0) + +static int +itertoolsmodule_exec(PyObject *mod) { + itertools_state *state = get_module_state(mod); + ADD_TYPE(mod, state->groupby_type, &groupby_spec); + ADD_TYPE(mod, state->_grouper_type, &_grouper_spec); + PyTypeObject *typelist[] = { &accumulate_type, &batched_type, @@ -5000,8 +5032,6 @@ itertoolsmodule_exec(PyObject *m) &permutations_type, &product_type, &repeat_type, - &groupby_type, - &_grouper_type, &tee_type, &teedataobject_type }; @@ -5009,7 +5039,7 @@ itertoolsmodule_exec(PyObject *m) Py_SET_TYPE(&teedataobject_type, &PyType_Type); for (size_t i = 0; i < Py_ARRAY_LENGTH(typelist); i++) { - if (PyModule_AddType(m, typelist[i]) < 0) { + if (PyModule_AddType(mod, typelist[i]) < 0) { return -1; } } @@ -5029,15 +5059,15 @@ static PyMethodDef module_methods[] = { static struct PyModuleDef itertoolsmodule = { - PyModuleDef_HEAD_INIT, - "itertools", - module_doc, - 0, - module_methods, - itertoolsmodule_slots, - NULL, - NULL, - NULL + .m_base = PyModuleDef_HEAD_INIT, + .m_name = "itertools", + .m_doc = module_doc, + .m_size = sizeof(itertools_state), + .m_methods = module_methods, + .m_slots = itertoolsmodule_slots, + .m_traverse = itertoolsmodule_traverse, + .m_clear = itertoolsmodule_clear, + .m_free = itertoolsmodule_free, }; PyMODINIT_FUNC From webhook-mailer at python.org Wed Feb 1 07:50:28 2023 From: webhook-mailer at python.org (iritkatriel) 
Date: Wed, 01 Feb 2023 12:50:28 -0000 Subject: [Python-checkins] gh-101454: fix documentation for END_ASYNC_FOR (#101455) Message-ID: https://github.com/python/cpython/commit/62251c3da06eb4662502295697f39730565b1717 commit: 62251c3da06eb4662502295697f39730565b1717 branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-01T12:49:59Z summary: gh-101454: fix documentation for END_ASYNC_FOR (#101455) files: M Doc/library/dis.rst diff --git a/Doc/library/dis.rst b/Doc/library/dis.rst index 6a68ec4b14be..1fe2d5d6227d 100644 --- a/Doc/library/dis.rst +++ b/Doc/library/dis.rst @@ -616,10 +616,9 @@ not have to be) the original ``STACK[-2]``. .. opcode:: END_ASYNC_FOR Terminates an :keyword:`async for` loop. Handles an exception raised - when awaiting a next item. If ``STACK[-1]`` is :exc:`StopAsyncIteration` pop 3 - values from the stack and restore the exception state using the second - of them. Otherwise re-raise the exception using the value - from the stack. An exception handler block is removed from the block stack. + when awaiting a next item. The stack contains the async iterable in + ``STACK[-2]`` and the raised exception in ``STACK[-1]``. Both are popped. + If the exception is not :exc:`StopAsyncIteration`, it is re-raised. .. versionadded:: 3.8 From webhook-mailer at python.org Wed Feb 1 09:30:55 2023 From: webhook-mailer at python.org (iritkatriel) Date: Wed, 01 Feb 2023 14:30:55 -0000 Subject: [Python-checkins] [3.11] gh-101454: fix documentation for END_ASYNC_FOR (#101455) (#101493) Message-ID: https://github.com/python/cpython/commit/c796d34b2a121bcb89c6afd79160f95ece7cc945 commit: c796d34b2a121bcb89c6afd79160f95ece7cc945 branch: 3.11 author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-01T14:30:48Z summary: [3.11] gh-101454: fix documentation for END_ASYNC_FOR (#101455) (#101493) gh-101454: fix documentation for END_ASYNC_FOR (#101455) (cherry picked from commit 62251c3da06eb4662502295697f39730565b1717) files: M Doc/library/dis.rst diff --git a/Doc/library/dis.rst b/Doc/library/dis.rst index 1e323bd50066..a61dd75cbeab 100644 --- a/Doc/library/dis.rst +++ b/Doc/library/dis.rst @@ -563,10 +563,9 @@ the original TOS1. .. opcode:: END_ASYNC_FOR Terminates an :keyword:`async for` loop. Handles an exception raised - when awaiting a next item. If TOS is :exc:`StopAsyncIteration` pop 3 - values from the stack and restore the exception state using the second - of them. Otherwise re-raise the exception using the value - from the stack. An exception handler block is removed from the block stack. + when awaiting a next item. The stack contains the async iterable in + TOS1 and the raised exception in TOS. Both are popped. + If the exception is not :exc:`StopAsyncIteration`, it is re-raised. .. 
versionadded:: 3.8 From webhook-mailer at python.org Wed Feb 1 13:08:38 2023 From: webhook-mailer at python.org (gvanrossum) Date: Wed, 01 Feb 2023 18:08:38 -0000 Subject: [Python-checkins] gh-101498 : Fix asyncio.Timeout example in docs (#101499) Message-ID: https://github.com/python/cpython/commit/95fb0e02582b5673eff49694eb0dce1d7df52301 commit: 95fb0e02582b5673eff49694eb0dce1d7df52301 branch: main author: Raj <51259329+workingpayload at users.noreply.github.com> committer: gvanrossum date: 2023-02-01T10:08:31-08:00 summary: gh-101498 : Fix asyncio.Timeout example in docs (#101499) Doc/library/asyncio-task.rst#timeout files: M Doc/library/asyncio-task.rst diff --git a/Doc/library/asyncio-task.rst b/Doc/library/asyncio-task.rst index 9a42b9631676..39112580285c 100644 --- a/Doc/library/asyncio-task.rst +++ b/Doc/library/asyncio-task.rst @@ -666,7 +666,7 @@ Timeouts except TimeoutError: pass - if cm.expired: + if cm.expired(): print("Looks like we haven't finished on time.") Timeout context managers can be safely nested. From webhook-mailer at python.org Wed Feb 1 13:19:18 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 01 Feb 2023 18:19:18 -0000 Subject: [Python-checkins] gh-101498 : Fix asyncio.Timeout example in docs (GH-101499) Message-ID: https://github.com/python/cpython/commit/89442e18e1e17c0eb0eb06e5da489e1cb2d4219d commit: 89442e18e1e17c0eb0eb06e5da489e1cb2d4219d branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-01T10:19:11-08:00 summary: gh-101498 : Fix asyncio.Timeout example in docs (GH-101499) Doc/library/asyncio-task.rstGH-timeout (cherry picked from commit 95fb0e02582b5673eff49694eb0dce1d7df52301) Co-authored-by: Raj <51259329+workingpayload at users.noreply.github.com> files: M Doc/library/asyncio-task.rst diff --git a/Doc/library/asyncio-task.rst b/Doc/library/asyncio-task.rst index 24e43c3cbfcb..1fd7afad99a8 100644 --- a/Doc/library/asyncio-task.rst +++ b/Doc/library/asyncio-task.rst @@ -666,7 +666,7 @@ Timeouts except TimeoutError: pass - if cm.expired: + if cm.expired(): print("Looks like we haven't finished on time.") Timeout context managers can be safely nested. 
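For reference, the corrected snippet from Doc/library/asyncio-task.rst expands into a minimal runnable sketch along the following lines; the surrounding coroutine, the asyncio.run() call and the 10/20-second durations are illustrative additions here, not part of the documented example:

    import asyncio

    async def main():
        try:
            # asyncio.timeout() (new in 3.11) returns an asynchronous context
            # manager; once the deadline passes, the body is cancelled and
            # TimeoutError is raised on exit from the block.
            async with asyncio.timeout(10) as cm:
                await asyncio.sleep(20)  # stand-in for a long-running operation
        except TimeoutError:
            pass

        # expired() is a method, not an attribute; that is the point of the
        # documentation fix above.
        if cm.expired():
            print("Looks like we haven't finished on time.")

    asyncio.run(main())

Calling cm.expired() after the block reports whether the context manager actually cancelled its body, which is why the docs example checks it rather than the nonexistent attribute form.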
From webhook-mailer at python.org Wed Feb 1 13:57:02 2023 From: webhook-mailer at python.org (gvanrossum) Date: Wed, 01 Feb 2023 18:57:02 -0000 Subject: [Python-checkins] gh-98831: Modernize the LOAD_ATTR family (#101488) Message-ID: https://github.com/python/cpython/commit/7840ff3cdbdf64f517c9f981f57eff232e676104 commit: 7840ff3cdbdf64f517c9f981f57eff232e676104 branch: main author: Guido van Rossum committer: gvanrossum date: 2023-02-01T10:56:52-08:00 summary: gh-98831: Modernize the LOAD_ATTR family (#101488) files: M Python/bytecodes.c M Python/generated_cases.c.h M Python/opcode_metadata.h diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 336088e08197..bb1deaf3fbeb 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -436,7 +436,7 @@ dummy_func( PREDICT(JUMP_BACKWARD); } - family(store_subscr) = { + family(store_subscr, INLINE_CACHE_ENTRIES_STORE_SUBSCR) = { STORE_SUBSCR, STORE_SUBSCR_DICT, STORE_SUBSCR_LIST_INT, @@ -950,7 +950,7 @@ dummy_func( Py_DECREF(seq); } - family(store_attr) = { + family(store_attr, INLINE_CACHE_ENTRIES_STORE_ATTR) = { STORE_ATTR, STORE_ATTR_INSTANCE_VALUE, STORE_ATTR_SLOT, @@ -1436,6 +1436,20 @@ dummy_func( PREDICT(JUMP_BACKWARD); } + family(load_attr, INLINE_CACHE_ENTRIES_LOAD_ATTR) = { + LOAD_ATTR, + LOAD_ATTR_INSTANCE_VALUE, + LOAD_ATTR_MODULE, + LOAD_ATTR_WITH_HINT, + LOAD_ATTR_SLOT, + LOAD_ATTR_CLASS, + LOAD_ATTR_PROPERTY, + LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN, + LOAD_ATTR_METHOD_WITH_VALUES, + LOAD_ATTR_METHOD_NO_DICT, + LOAD_ATTR_METHOD_LAZY_DICT, + }; + inst(LOAD_ATTR, (unused/9, owner -- res2 if (oparg & 1), res)) { #if ENABLE_SPECIALIZATION _PyAttrCache *cache = (_PyAttrCache *)next_instr; @@ -1485,64 +1499,43 @@ dummy_func( } } - // error: LOAD_ATTR has irregular stack effect - inst(LOAD_ATTR_INSTANCE_VALUE) { + inst(LOAD_ATTR_INSTANCE_VALUE, (unused/1, type_version/2, index/1, unused/5, owner -- res2 if (oparg & 1), res)) { assert(cframe.use_tracing == 0); - PyObject *owner = TOP(); - PyObject *res; PyTypeObject *tp = Py_TYPE(owner); - _PyAttrCache *cache = (_PyAttrCache *)next_instr; - uint32_t type_version = read_u32(cache->version); assert(type_version != 0); DEOPT_IF(tp->tp_version_tag != type_version, LOAD_ATTR); assert(tp->tp_dictoffset < 0); assert(tp->tp_flags & Py_TPFLAGS_MANAGED_DICT); PyDictOrValues dorv = *_PyObject_DictOrValuesPointer(owner); DEOPT_IF(!_PyDictOrValues_IsValues(dorv), LOAD_ATTR); - res = _PyDictOrValues_GetValues(dorv)->values[cache->index]; + res = _PyDictOrValues_GetValues(dorv)->values[index]; DEOPT_IF(res == NULL, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); Py_INCREF(res); - SET_TOP(NULL); - STACK_GROW((oparg & 1)); - SET_TOP(res); + res2 = NULL; Py_DECREF(owner); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); } - // error: LOAD_ATTR has irregular stack effect - inst(LOAD_ATTR_MODULE) { + inst(LOAD_ATTR_MODULE, (unused/1, type_version/2, index/1, unused/5, owner -- res2 if (oparg & 1), res)) { assert(cframe.use_tracing == 0); - PyObject *owner = TOP(); - PyObject *res; - _PyAttrCache *cache = (_PyAttrCache *)next_instr; DEOPT_IF(!PyModule_CheckExact(owner), LOAD_ATTR); PyDictObject *dict = (PyDictObject *)((PyModuleObject *)owner)->md_dict; assert(dict != NULL); - DEOPT_IF(dict->ma_keys->dk_version != read_u32(cache->version), - LOAD_ATTR); + DEOPT_IF(dict->ma_keys->dk_version != type_version, LOAD_ATTR); assert(dict->ma_keys->dk_kind == DICT_KEYS_UNICODE); - assert(cache->index < dict->ma_keys->dk_nentries); - PyDictUnicodeEntry *ep = DK_UNICODE_ENTRIES(dict->ma_keys) + cache->index; + assert(index < 
dict->ma_keys->dk_nentries); + PyDictUnicodeEntry *ep = DK_UNICODE_ENTRIES(dict->ma_keys) + index; res = ep->me_value; DEOPT_IF(res == NULL, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); Py_INCREF(res); - SET_TOP(NULL); - STACK_GROW((oparg & 1)); - SET_TOP(res); + res2 = NULL; Py_DECREF(owner); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); } - // error: LOAD_ATTR has irregular stack effect - inst(LOAD_ATTR_WITH_HINT) { + inst(LOAD_ATTR_WITH_HINT, (unused/1, type_version/2, index/1, unused/5, owner -- res2 if (oparg & 1), res)) { assert(cframe.use_tracing == 0); - PyObject *owner = TOP(); - PyObject *res; PyTypeObject *tp = Py_TYPE(owner); - _PyAttrCache *cache = (_PyAttrCache *)next_instr; - uint32_t type_version = read_u32(cache->version); assert(type_version != 0); DEOPT_IF(tp->tp_version_tag != type_version, LOAD_ATTR); assert(tp->tp_flags & Py_TPFLAGS_MANAGED_DICT); @@ -1552,7 +1545,7 @@ dummy_func( DEOPT_IF(dict == NULL, LOAD_ATTR); assert(PyDict_CheckExact((PyObject *)dict)); PyObject *name = GETITEM(names, oparg>>1); - uint16_t hint = cache->index; + uint16_t hint = index; DEOPT_IF(hint >= (size_t)dict->ma_keys->dk_nentries, LOAD_ATTR); if (DK_IS_UNICODE(dict->ma_keys)) { PyDictUnicodeEntry *ep = DK_UNICODE_ENTRIES(dict->ma_keys) + hint; @@ -1567,73 +1560,49 @@ dummy_func( DEOPT_IF(res == NULL, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); Py_INCREF(res); - SET_TOP(NULL); - STACK_GROW((oparg & 1)); - SET_TOP(res); + res2 = NULL; Py_DECREF(owner); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); } - // error: LOAD_ATTR has irregular stack effect - inst(LOAD_ATTR_SLOT) { + inst(LOAD_ATTR_SLOT, (unused/1, type_version/2, index/1, unused/5, owner -- res2 if (oparg & 1), res)) { assert(cframe.use_tracing == 0); - PyObject *owner = TOP(); - PyObject *res; PyTypeObject *tp = Py_TYPE(owner); - _PyAttrCache *cache = (_PyAttrCache *)next_instr; - uint32_t type_version = read_u32(cache->version); assert(type_version != 0); DEOPT_IF(tp->tp_version_tag != type_version, LOAD_ATTR); - char *addr = (char *)owner + cache->index; + char *addr = (char *)owner + index; res = *(PyObject **)addr; DEOPT_IF(res == NULL, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); Py_INCREF(res); - SET_TOP(NULL); - STACK_GROW((oparg & 1)); - SET_TOP(res); + res2 = NULL; Py_DECREF(owner); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); } - // error: LOAD_ATTR has irregular stack effect - inst(LOAD_ATTR_CLASS) { + inst(LOAD_ATTR_CLASS, (unused/1, type_version/2, unused/2, descr/4, cls -- res2 if (oparg & 1), res)) { assert(cframe.use_tracing == 0); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - PyObject *cls = TOP(); DEOPT_IF(!PyType_Check(cls), LOAD_ATTR); - uint32_t type_version = read_u32(cache->type_version); DEOPT_IF(((PyTypeObject *)cls)->tp_version_tag != type_version, LOAD_ATTR); assert(type_version != 0); STAT_INC(LOAD_ATTR, hit); - PyObject *res = read_obj(cache->descr); + res2 = NULL; + res = descr; assert(res != NULL); Py_INCREF(res); - SET_TOP(NULL); - STACK_GROW((oparg & 1)); - SET_TOP(res); Py_DECREF(cls); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); } - // error: LOAD_ATTR has irregular stack effect - inst(LOAD_ATTR_PROPERTY) { + inst(LOAD_ATTR_PROPERTY, (unused/1, type_version/2, func_version/2, fget/4, owner -- unused if (oparg & 1), unused)) { assert(cframe.use_tracing == 0); DEOPT_IF(tstate->interp->eval_frame, LOAD_ATTR); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - PyObject *owner = TOP(); PyTypeObject *cls = Py_TYPE(owner); - uint32_t type_version = read_u32(cache->type_version); 
DEOPT_IF(cls->tp_version_tag != type_version, LOAD_ATTR); assert(type_version != 0); - PyObject *fget = read_obj(cache->descr); assert(Py_IS_TYPE(fget, &PyFunction_Type)); PyFunctionObject *f = (PyFunctionObject *)fget; - uint32_t func_version = read_u32(cache->keys_version); assert(func_version != 0); DEOPT_IF(f->func_version != func_version, LOAD_ATTR); PyCodeObject *code = (PyCodeObject *)f->func_code; @@ -1642,6 +1611,7 @@ dummy_func( STAT_INC(LOAD_ATTR, hit); Py_INCREF(fget); _PyInterpreterFrame *new_frame = _PyFrame_PushUnchecked(tstate, f, 1); + // Manipulate stack directly because we exit with DISPATCH_INLINED(). SET_TOP(NULL); int shrink_stack = !(oparg & 1); STACK_SHRINK(shrink_stack); @@ -1650,20 +1620,14 @@ dummy_func( DISPATCH_INLINED(new_frame); } - // error: LOAD_ATTR has irregular stack effect - inst(LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN) { + inst(LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN, (unused/1, type_version/2, func_version/2, getattribute/4, owner -- unused if (oparg & 1), unused)) { assert(cframe.use_tracing == 0); DEOPT_IF(tstate->interp->eval_frame, LOAD_ATTR); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - PyObject *owner = TOP(); PyTypeObject *cls = Py_TYPE(owner); - uint32_t type_version = read_u32(cache->type_version); DEOPT_IF(cls->tp_version_tag != type_version, LOAD_ATTR); assert(type_version != 0); - PyObject *getattribute = read_obj(cache->descr); assert(Py_IS_TYPE(getattribute, &PyFunction_Type)); PyFunctionObject *f = (PyFunctionObject *)getattribute; - uint32_t func_version = read_u32(cache->keys_version); assert(func_version != 0); DEOPT_IF(f->func_version != func_version, LOAD_ATTR); PyCodeObject *code = (PyCodeObject *)f->func_code; @@ -1674,6 +1638,7 @@ dummy_func( PyObject *name = GETITEM(names, oparg >> 1); Py_INCREF(f); _PyInterpreterFrame *new_frame = _PyFrame_PushUnchecked(tstate, f, 2); + // Manipulate stack directly because we exit with DISPATCH_INLINED(). SET_TOP(NULL); int shrink_stack = !(oparg & 1); STACK_SHRINK(shrink_stack); @@ -1768,6 +1733,7 @@ dummy_func( ERROR_IF(res == NULL, error); } + // No cache size here, since this is a family of super-instructions. 
family(compare_and_branch) = { COMPARE_AND_BRANCH, COMPARE_AND_BRANCH_FLOAT, @@ -2373,14 +2339,10 @@ dummy_func( } - // error: LOAD_ATTR has irregular stack effect - inst(LOAD_ATTR_METHOD_WITH_VALUES) { + inst(LOAD_ATTR_METHOD_WITH_VALUES, (unused/1, type_version/2, keys_version/2, descr/4, self -- res2 if (oparg & 1), res)) { /* Cached method object */ assert(cframe.use_tracing == 0); - PyObject *self = TOP(); PyTypeObject *self_cls = Py_TYPE(self); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - uint32_t type_version = read_u32(cache->type_version); assert(type_version != 0); DEOPT_IF(self_cls->tp_version_tag != type_version, LOAD_ATTR); assert(self_cls->tp_flags & Py_TPFLAGS_MANAGED_DICT); @@ -2388,41 +2350,31 @@ dummy_func( DEOPT_IF(!_PyDictOrValues_IsValues(dorv), LOAD_ATTR); PyHeapTypeObject *self_heap_type = (PyHeapTypeObject *)self_cls; DEOPT_IF(self_heap_type->ht_cached_keys->dk_version != - read_u32(cache->keys_version), LOAD_ATTR); + keys_version, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); - PyObject *res = read_obj(cache->descr); - assert(res != NULL); - assert(_PyType_HasFeature(Py_TYPE(res), Py_TPFLAGS_METHOD_DESCRIPTOR)); - SET_TOP(Py_NewRef(res)); - PUSH(self); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); + assert(descr != NULL); + res2 = Py_NewRef(descr); + assert(_PyType_HasFeature(Py_TYPE(res2), Py_TPFLAGS_METHOD_DESCRIPTOR)); + res = self; + assert(oparg & 1); } - // error: LOAD_ATTR has irregular stack effect - inst(LOAD_ATTR_METHOD_NO_DICT) { + inst(LOAD_ATTR_METHOD_NO_DICT, (unused/1, type_version/2, unused/2, descr/4, self -- res2 if (oparg & 1), res)) { assert(cframe.use_tracing == 0); - PyObject *self = TOP(); PyTypeObject *self_cls = Py_TYPE(self); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - uint32_t type_version = read_u32(cache->type_version); DEOPT_IF(self_cls->tp_version_tag != type_version, LOAD_ATTR); assert(self_cls->tp_dictoffset == 0); STAT_INC(LOAD_ATTR, hit); - PyObject *res = read_obj(cache->descr); - assert(res != NULL); - assert(_PyType_HasFeature(Py_TYPE(res), Py_TPFLAGS_METHOD_DESCRIPTOR)); - SET_TOP(Py_NewRef(res)); - PUSH(self); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); + assert(descr != NULL); + assert(_PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_METHOD_DESCRIPTOR)); + res2 = Py_NewRef(descr); + res = self; + assert(oparg & 1); } - // error: LOAD_ATTR has irregular stack effect - inst(LOAD_ATTR_METHOD_LAZY_DICT) { + inst(LOAD_ATTR_METHOD_LAZY_DICT, (unused/1, type_version/2, unused/2, descr/4, self -- res2 if (oparg & 1), res)) { assert(cframe.use_tracing == 0); - PyObject *self = TOP(); PyTypeObject *self_cls = Py_TYPE(self); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - uint32_t type_version = read_u32(cache->type_version); DEOPT_IF(self_cls->tp_version_tag != type_version, LOAD_ATTR); Py_ssize_t dictoffset = self_cls->tp_dictoffset; assert(dictoffset > 0); @@ -2430,12 +2382,11 @@ dummy_func( /* This object has a __dict__, just not yet created */ DEOPT_IF(dict != NULL, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); - PyObject *res = read_obj(cache->descr); - assert(res != NULL); - assert(_PyType_HasFeature(Py_TYPE(res), Py_TPFLAGS_METHOD_DESCRIPTOR)); - SET_TOP(Py_NewRef(res)); - PUSH(self); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); + assert(descr != NULL); + assert(_PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_METHOD_DESCRIPTOR)); + res2 = Py_NewRef(descr); + res = self; + assert(oparg & 1); } // stack effect: (__0, __array[oparg] -- ) @@ -3265,7 +3216,7 @@ dummy_func( // Future families go below 
this point // -family(call) = { +family(call, INLINE_CACHE_ENTRIES_CALL) = { CALL, CALL_PY_EXACT_ARGS, CALL_PY_WITH_DEFAULTS, CALL_BOUND_METHOD_EXACT_ARGS, CALL_BUILTIN_CLASS, CALL_BUILTIN_FAST_WITH_KEYWORDS, CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS, CALL_NO_KW_BUILTIN_FAST, @@ -3273,19 +3224,13 @@ family(call) = { CALL_NO_KW_LIST_APPEND, CALL_NO_KW_METHOD_DESCRIPTOR_FAST, CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS, CALL_NO_KW_METHOD_DESCRIPTOR_O, CALL_NO_KW_STR_1, CALL_NO_KW_TUPLE_1, CALL_NO_KW_TYPE_1 }; -family(for_iter) = { +family(for_iter, INLINE_CACHE_ENTRIES_FOR_ITER) = { FOR_ITER, FOR_ITER_LIST, FOR_ITER_RANGE }; -family(load_attr) = { - LOAD_ATTR, LOAD_ATTR_CLASS, - LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN, LOAD_ATTR_INSTANCE_VALUE, LOAD_ATTR_MODULE, - LOAD_ATTR_PROPERTY, LOAD_ATTR_SLOT, LOAD_ATTR_WITH_HINT, - LOAD_ATTR_METHOD_LAZY_DICT, LOAD_ATTR_METHOD_NO_DICT, - LOAD_ATTR_METHOD_WITH_VALUES }; -family(load_global) = { +family(load_global, INLINE_CACHE_ENTRIES_LOAD_GLOBAL) = { LOAD_GLOBAL, LOAD_GLOBAL_BUILTIN, LOAD_GLOBAL_MODULE }; family(store_fast) = { STORE_FAST, STORE_FAST__LOAD_FAST, STORE_FAST__STORE_FAST }; -family(unpack_sequence) = { +family(unpack_sequence, INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE) = { UNPACK_SEQUENCE, UNPACK_SEQUENCE_LIST, UNPACK_SEQUENCE_TUPLE, UNPACK_SEQUENCE_TWO_TUPLE }; diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index d70d64ecbdd5..e5c5c7e557a3 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -596,6 +596,7 @@ TARGET(STORE_SUBSCR) { PREDICTED(STORE_SUBSCR); + static_assert(INLINE_CACHE_ENTRIES_STORE_SUBSCR == 1, "incorrect cache size"); PyObject *sub = PEEK(1); PyObject *container = PEEK(2); PyObject *v = PEEK(3); @@ -1180,6 +1181,7 @@ TARGET(STORE_ATTR) { PREDICTED(STORE_ATTR); + static_assert(INLINE_CACHE_ENTRIES_STORE_ATTR == 4, "incorrect cache size"); PyObject *owner = PEEK(1); PyObject *v = PEEK(2); uint16_t counter = read_u16(&next_instr[0].cache); @@ -1748,6 +1750,7 @@ TARGET(LOAD_ATTR) { PREDICTED(LOAD_ATTR); + static_assert(INLINE_CACHE_ENTRIES_LOAD_ATTR == 9, "incorrect cache size"); PyObject *owner = PEEK(1); PyObject *res2 = NULL; PyObject *res; @@ -1805,62 +1808,67 @@ } TARGET(LOAD_ATTR_INSTANCE_VALUE) { - assert(cframe.use_tracing == 0); - PyObject *owner = TOP(); + PyObject *owner = PEEK(1); + PyObject *res2 = NULL; PyObject *res; + uint32_t type_version = read_u32(&next_instr[1].cache); + uint16_t index = read_u16(&next_instr[3].cache); + assert(cframe.use_tracing == 0); PyTypeObject *tp = Py_TYPE(owner); - _PyAttrCache *cache = (_PyAttrCache *)next_instr; - uint32_t type_version = read_u32(cache->version); assert(type_version != 0); DEOPT_IF(tp->tp_version_tag != type_version, LOAD_ATTR); assert(tp->tp_dictoffset < 0); assert(tp->tp_flags & Py_TPFLAGS_MANAGED_DICT); PyDictOrValues dorv = *_PyObject_DictOrValuesPointer(owner); DEOPT_IF(!_PyDictOrValues_IsValues(dorv), LOAD_ATTR); - res = _PyDictOrValues_GetValues(dorv)->values[cache->index]; + res = _PyDictOrValues_GetValues(dorv)->values[index]; DEOPT_IF(res == NULL, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); Py_INCREF(res); - SET_TOP(NULL); - STACK_GROW((oparg & 1)); - SET_TOP(res); + res2 = NULL; Py_DECREF(owner); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); + STACK_GROW(((oparg & 1) ? 1 : 0)); + POKE(1, res); + if (oparg & 1) { POKE(1 + ((oparg & 1) ? 
1 : 0), res2); } + JUMPBY(9); DISPATCH(); } TARGET(LOAD_ATTR_MODULE) { - assert(cframe.use_tracing == 0); - PyObject *owner = TOP(); + PyObject *owner = PEEK(1); + PyObject *res2 = NULL; PyObject *res; - _PyAttrCache *cache = (_PyAttrCache *)next_instr; + uint32_t type_version = read_u32(&next_instr[1].cache); + uint16_t index = read_u16(&next_instr[3].cache); + assert(cframe.use_tracing == 0); DEOPT_IF(!PyModule_CheckExact(owner), LOAD_ATTR); PyDictObject *dict = (PyDictObject *)((PyModuleObject *)owner)->md_dict; assert(dict != NULL); - DEOPT_IF(dict->ma_keys->dk_version != read_u32(cache->version), - LOAD_ATTR); + DEOPT_IF(dict->ma_keys->dk_version != type_version, LOAD_ATTR); assert(dict->ma_keys->dk_kind == DICT_KEYS_UNICODE); - assert(cache->index < dict->ma_keys->dk_nentries); - PyDictUnicodeEntry *ep = DK_UNICODE_ENTRIES(dict->ma_keys) + cache->index; + assert(index < dict->ma_keys->dk_nentries); + PyDictUnicodeEntry *ep = DK_UNICODE_ENTRIES(dict->ma_keys) + index; res = ep->me_value; DEOPT_IF(res == NULL, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); Py_INCREF(res); - SET_TOP(NULL); - STACK_GROW((oparg & 1)); - SET_TOP(res); + res2 = NULL; Py_DECREF(owner); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); + STACK_GROW(((oparg & 1) ? 1 : 0)); + POKE(1, res); + if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } + JUMPBY(9); DISPATCH(); } TARGET(LOAD_ATTR_WITH_HINT) { - assert(cframe.use_tracing == 0); - PyObject *owner = TOP(); + PyObject *owner = PEEK(1); + PyObject *res2 = NULL; PyObject *res; + uint32_t type_version = read_u32(&next_instr[1].cache); + uint16_t index = read_u16(&next_instr[3].cache); + assert(cframe.use_tracing == 0); PyTypeObject *tp = Py_TYPE(owner); - _PyAttrCache *cache = (_PyAttrCache *)next_instr; - uint32_t type_version = read_u32(cache->version); assert(type_version != 0); DEOPT_IF(tp->tp_version_tag != type_version, LOAD_ATTR); assert(tp->tp_flags & Py_TPFLAGS_MANAGED_DICT); @@ -1870,7 +1878,7 @@ DEOPT_IF(dict == NULL, LOAD_ATTR); assert(PyDict_CheckExact((PyObject *)dict)); PyObject *name = GETITEM(names, oparg>>1); - uint16_t hint = cache->index; + uint16_t hint = index; DEOPT_IF(hint >= (size_t)dict->ma_keys->dk_nentries, LOAD_ATTR); if (DK_IS_UNICODE(dict->ma_keys)) { PyDictUnicodeEntry *ep = DK_UNICODE_ENTRIES(dict->ma_keys) + hint; @@ -1885,73 +1893,78 @@ DEOPT_IF(res == NULL, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); Py_INCREF(res); - SET_TOP(NULL); - STACK_GROW((oparg & 1)); - SET_TOP(res); + res2 = NULL; Py_DECREF(owner); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); + STACK_GROW(((oparg & 1) ? 1 : 0)); + POKE(1, res); + if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } + JUMPBY(9); DISPATCH(); } TARGET(LOAD_ATTR_SLOT) { - assert(cframe.use_tracing == 0); - PyObject *owner = TOP(); + PyObject *owner = PEEK(1); + PyObject *res2 = NULL; PyObject *res; + uint32_t type_version = read_u32(&next_instr[1].cache); + uint16_t index = read_u16(&next_instr[3].cache); + assert(cframe.use_tracing == 0); PyTypeObject *tp = Py_TYPE(owner); - _PyAttrCache *cache = (_PyAttrCache *)next_instr; - uint32_t type_version = read_u32(cache->version); assert(type_version != 0); DEOPT_IF(tp->tp_version_tag != type_version, LOAD_ATTR); - char *addr = (char *)owner + cache->index; + char *addr = (char *)owner + index; res = *(PyObject **)addr; DEOPT_IF(res == NULL, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); Py_INCREF(res); - SET_TOP(NULL); - STACK_GROW((oparg & 1)); - SET_TOP(res); + res2 = NULL; Py_DECREF(owner); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); + STACK_GROW(((oparg & 1) ? 
1 : 0)); + POKE(1, res); + if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } + JUMPBY(9); DISPATCH(); } TARGET(LOAD_ATTR_CLASS) { + PyObject *cls = PEEK(1); + PyObject *res2 = NULL; + PyObject *res; + uint32_t type_version = read_u32(&next_instr[1].cache); + PyObject *descr = read_obj(&next_instr[5].cache); assert(cframe.use_tracing == 0); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - PyObject *cls = TOP(); DEOPT_IF(!PyType_Check(cls), LOAD_ATTR); - uint32_t type_version = read_u32(cache->type_version); DEOPT_IF(((PyTypeObject *)cls)->tp_version_tag != type_version, LOAD_ATTR); assert(type_version != 0); STAT_INC(LOAD_ATTR, hit); - PyObject *res = read_obj(cache->descr); + res2 = NULL; + res = descr; assert(res != NULL); Py_INCREF(res); - SET_TOP(NULL); - STACK_GROW((oparg & 1)); - SET_TOP(res); Py_DECREF(cls); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); + STACK_GROW(((oparg & 1) ? 1 : 0)); + POKE(1, res); + if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } + JUMPBY(9); DISPATCH(); } TARGET(LOAD_ATTR_PROPERTY) { + PyObject *owner = PEEK(1); + uint32_t type_version = read_u32(&next_instr[1].cache); + uint32_t func_version = read_u32(&next_instr[3].cache); + PyObject *fget = read_obj(&next_instr[5].cache); assert(cframe.use_tracing == 0); DEOPT_IF(tstate->interp->eval_frame, LOAD_ATTR); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - PyObject *owner = TOP(); PyTypeObject *cls = Py_TYPE(owner); - uint32_t type_version = read_u32(cache->type_version); DEOPT_IF(cls->tp_version_tag != type_version, LOAD_ATTR); assert(type_version != 0); - PyObject *fget = read_obj(cache->descr); assert(Py_IS_TYPE(fget, &PyFunction_Type)); PyFunctionObject *f = (PyFunctionObject *)fget; - uint32_t func_version = read_u32(cache->keys_version); assert(func_version != 0); DEOPT_IF(f->func_version != func_version, LOAD_ATTR); PyCodeObject *code = (PyCodeObject *)f->func_code; @@ -1960,6 +1973,7 @@ STAT_INC(LOAD_ATTR, hit); Py_INCREF(fget); _PyInterpreterFrame *new_frame = _PyFrame_PushUnchecked(tstate, f, 1); + // Manipulate stack directly because we exit with DISPATCH_INLINED(). SET_TOP(NULL); int shrink_stack = !(oparg & 1); STACK_SHRINK(shrink_stack); @@ -1969,18 +1983,17 @@ } TARGET(LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN) { + PyObject *owner = PEEK(1); + uint32_t type_version = read_u32(&next_instr[1].cache); + uint32_t func_version = read_u32(&next_instr[3].cache); + PyObject *getattribute = read_obj(&next_instr[5].cache); assert(cframe.use_tracing == 0); DEOPT_IF(tstate->interp->eval_frame, LOAD_ATTR); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - PyObject *owner = TOP(); PyTypeObject *cls = Py_TYPE(owner); - uint32_t type_version = read_u32(cache->type_version); DEOPT_IF(cls->tp_version_tag != type_version, LOAD_ATTR); assert(type_version != 0); - PyObject *getattribute = read_obj(cache->descr); assert(Py_IS_TYPE(getattribute, &PyFunction_Type)); PyFunctionObject *f = (PyFunctionObject *)getattribute; - uint32_t func_version = read_u32(cache->keys_version); assert(func_version != 0); DEOPT_IF(f->func_version != func_version, LOAD_ATTR); PyCodeObject *code = (PyCodeObject *)f->func_code; @@ -1991,6 +2004,7 @@ PyObject *name = GETITEM(names, oparg >> 1); Py_INCREF(f); _PyInterpreterFrame *new_frame = _PyFrame_PushUnchecked(tstate, f, 2); + // Manipulate stack directly because we exit with DISPATCH_INLINED(). 
SET_TOP(NULL); int shrink_stack = !(oparg & 1); STACK_SHRINK(shrink_stack); @@ -2831,12 +2845,15 @@ } TARGET(LOAD_ATTR_METHOD_WITH_VALUES) { + PyObject *self = PEEK(1); + PyObject *res2 = NULL; + PyObject *res; + uint32_t type_version = read_u32(&next_instr[1].cache); + uint32_t keys_version = read_u32(&next_instr[3].cache); + PyObject *descr = read_obj(&next_instr[5].cache); /* Cached method object */ assert(cframe.use_tracing == 0); - PyObject *self = TOP(); PyTypeObject *self_cls = Py_TYPE(self); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - uint32_t type_version = read_u32(cache->type_version); assert(type_version != 0); DEOPT_IF(self_cls->tp_version_tag != type_version, LOAD_ATTR); assert(self_cls->tp_flags & Py_TPFLAGS_MANAGED_DICT); @@ -2844,41 +2861,51 @@ DEOPT_IF(!_PyDictOrValues_IsValues(dorv), LOAD_ATTR); PyHeapTypeObject *self_heap_type = (PyHeapTypeObject *)self_cls; DEOPT_IF(self_heap_type->ht_cached_keys->dk_version != - read_u32(cache->keys_version), LOAD_ATTR); + keys_version, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); - PyObject *res = read_obj(cache->descr); - assert(res != NULL); - assert(_PyType_HasFeature(Py_TYPE(res), Py_TPFLAGS_METHOD_DESCRIPTOR)); - SET_TOP(Py_NewRef(res)); - PUSH(self); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); + assert(descr != NULL); + res2 = Py_NewRef(descr); + assert(_PyType_HasFeature(Py_TYPE(res2), Py_TPFLAGS_METHOD_DESCRIPTOR)); + res = self; + assert(oparg & 1); + STACK_GROW(((oparg & 1) ? 1 : 0)); + POKE(1, res); + if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } + JUMPBY(9); DISPATCH(); } TARGET(LOAD_ATTR_METHOD_NO_DICT) { + PyObject *self = PEEK(1); + PyObject *res2 = NULL; + PyObject *res; + uint32_t type_version = read_u32(&next_instr[1].cache); + PyObject *descr = read_obj(&next_instr[5].cache); assert(cframe.use_tracing == 0); - PyObject *self = TOP(); PyTypeObject *self_cls = Py_TYPE(self); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - uint32_t type_version = read_u32(cache->type_version); DEOPT_IF(self_cls->tp_version_tag != type_version, LOAD_ATTR); assert(self_cls->tp_dictoffset == 0); STAT_INC(LOAD_ATTR, hit); - PyObject *res = read_obj(cache->descr); - assert(res != NULL); - assert(_PyType_HasFeature(Py_TYPE(res), Py_TPFLAGS_METHOD_DESCRIPTOR)); - SET_TOP(Py_NewRef(res)); - PUSH(self); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); + assert(descr != NULL); + assert(_PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_METHOD_DESCRIPTOR)); + res2 = Py_NewRef(descr); + res = self; + assert(oparg & 1); + STACK_GROW(((oparg & 1) ? 1 : 0)); + POKE(1, res); + if (oparg & 1) { POKE(1 + ((oparg & 1) ? 
1 : 0), res2); } + JUMPBY(9); DISPATCH(); } TARGET(LOAD_ATTR_METHOD_LAZY_DICT) { + PyObject *self = PEEK(1); + PyObject *res2 = NULL; + PyObject *res; + uint32_t type_version = read_u32(&next_instr[1].cache); + PyObject *descr = read_obj(&next_instr[5].cache); assert(cframe.use_tracing == 0); - PyObject *self = TOP(); PyTypeObject *self_cls = Py_TYPE(self); - _PyLoadMethodCache *cache = (_PyLoadMethodCache *)next_instr; - uint32_t type_version = read_u32(cache->type_version); DEOPT_IF(self_cls->tp_version_tag != type_version, LOAD_ATTR); Py_ssize_t dictoffset = self_cls->tp_dictoffset; assert(dictoffset > 0); @@ -2886,12 +2913,15 @@ /* This object has a __dict__, just not yet created */ DEOPT_IF(dict != NULL, LOAD_ATTR); STAT_INC(LOAD_ATTR, hit); - PyObject *res = read_obj(cache->descr); - assert(res != NULL); - assert(_PyType_HasFeature(Py_TYPE(res), Py_TPFLAGS_METHOD_DESCRIPTOR)); - SET_TOP(Py_NewRef(res)); - PUSH(self); - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_ATTR); + assert(descr != NULL); + assert(_PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_METHOD_DESCRIPTOR)); + res2 = Py_NewRef(descr); + res = self; + assert(oparg & 1); + STACK_GROW(((oparg & 1) ? 1 : 0)); + POKE(1, res); + if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } + JUMPBY(9); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index 171fed363d3b..96e57be8cf5d 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -187,19 +187,19 @@ _PyOpcode_num_popped(int opcode, int oparg) { case LOAD_ATTR: return 1; case LOAD_ATTR_INSTANCE_VALUE: - return -1; + return 1; case LOAD_ATTR_MODULE: - return -1; + return 1; case LOAD_ATTR_WITH_HINT: - return -1; + return 1; case LOAD_ATTR_SLOT: - return -1; + return 1; case LOAD_ATTR_CLASS: - return -1; + return 1; case LOAD_ATTR_PROPERTY: - return -1; + return 1; case LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN: - return -1; + return 1; case STORE_ATTR_INSTANCE_VALUE: return 2; case STORE_ATTR_WITH_HINT: @@ -279,11 +279,11 @@ _PyOpcode_num_popped(int opcode, int oparg) { case PUSH_EXC_INFO: return -1; case LOAD_ATTR_METHOD_WITH_VALUES: - return -1; + return 1; case LOAD_ATTR_METHOD_NO_DICT: - return -1; + return 1; case LOAD_ATTR_METHOD_LAZY_DICT: - return -1; + return 1; case CALL_BOUND_METHOD_EXACT_ARGS: return -1; case KW_NAMES: @@ -533,19 +533,19 @@ _PyOpcode_num_pushed(int opcode, int oparg) { case LOAD_ATTR: return ((oparg & 1) ? 1 : 0) + 1; case LOAD_ATTR_INSTANCE_VALUE: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case LOAD_ATTR_MODULE: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case LOAD_ATTR_WITH_HINT: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case LOAD_ATTR_SLOT: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case LOAD_ATTR_CLASS: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case LOAD_ATTR_PROPERTY: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case STORE_ATTR_INSTANCE_VALUE: return 0; case STORE_ATTR_WITH_HINT: @@ -625,11 +625,11 @@ _PyOpcode_num_pushed(int opcode, int oparg) { case PUSH_EXC_INFO: return -1; case LOAD_ATTR_METHOD_WITH_VALUES: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case LOAD_ATTR_METHOD_NO_DICT: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case LOAD_ATTR_METHOD_LAZY_DICT: - return -1; + return ((oparg & 1) ? 
1 : 0) + 1; case CALL_BOUND_METHOD_EXACT_ARGS: return -1; case KW_NAMES: @@ -792,13 +792,13 @@ struct opcode_metadata { [DICT_MERGE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [MAP_ADD] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [LOAD_ATTR] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, - [LOAD_ATTR_INSTANCE_VALUE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [LOAD_ATTR_MODULE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [LOAD_ATTR_WITH_HINT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [LOAD_ATTR_SLOT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [LOAD_ATTR_CLASS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [LOAD_ATTR_PROPERTY] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, + [LOAD_ATTR_INSTANCE_VALUE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, + [LOAD_ATTR_MODULE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, + [LOAD_ATTR_WITH_HINT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, + [LOAD_ATTR_SLOT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, + [LOAD_ATTR_CLASS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, + [LOAD_ATTR_PROPERTY] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, + [LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, [STORE_ATTR_INSTANCE_VALUE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IXC000 }, [STORE_ATTR_WITH_HINT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, [STORE_ATTR_SLOT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IXC000 }, @@ -838,9 +838,9 @@ struct opcode_metadata { [BEFORE_WITH] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [WITH_EXCEPT_START] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [PUSH_EXC_INFO] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, - [LOAD_ATTR_METHOD_WITH_VALUES] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, - [LOAD_ATTR_METHOD_NO_DICT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, - [LOAD_ATTR_METHOD_LAZY_DICT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, + [LOAD_ATTR_METHOD_WITH_VALUES] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, + [LOAD_ATTR_METHOD_NO_DICT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, + [LOAD_ATTR_METHOD_LAZY_DICT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, [CALL_BOUND_METHOD_EXACT_ARGS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [KW_NAMES] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [CALL] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, From webhook-mailer at python.org Wed Feb 1 14:38:13 2023 From: webhook-mailer at python.org (iritkatriel) Date: Wed, 01 Feb 2023 19:38:13 -0000 Subject: [Python-checkins] gh-98831: rewrite PUSH_EXC_INFO and conditional jumps in the instruction definition DSL (#101481) Message-ID: https://github.com/python/cpython/commit/b91b42d236c81bd7cbe402b322c82bfcd0d883a1 commit: b91b42d236c81bd7cbe402b322c82bfcd0d883a1 branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-01T19:38:06Z summary: gh-98831: rewrite PUSH_EXC_INFO and conditional jumps in the instruction definition DSL (#101481) files: M Python/bytecodes.c M 
Python/compile.c M Python/generated_cases.c.h M Python/opcode_metadata.h M Tools/cases_generator/generate_cases.py diff --git a/Python/bytecodes.c b/Python/bytecodes.c index bb1deaf3fbeb..105bd95be0fd 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -1898,9 +1898,7 @@ dummy_func( CHECK_EVAL_BREAKER(); } - // stack effect: (__0 -- ) - inst(POP_JUMP_IF_FALSE) { - PyObject *cond = POP(); + inst(POP_JUMP_IF_FALSE, (cond -- )) { if (Py_IsTrue(cond)) { _Py_DECREF_NO_DEALLOC(cond); } @@ -1911,19 +1909,16 @@ dummy_func( else { int err = PyObject_IsTrue(cond); Py_DECREF(cond); - if (err > 0) - ; - else if (err == 0) { + if (err == 0) { JUMPBY(oparg); } - else - goto error; + else { + ERROR_IF(err < 0, error); + } } } - // stack effect: (__0 -- ) - inst(POP_JUMP_IF_TRUE) { - PyObject *cond = POP(); + inst(POP_JUMP_IF_TRUE, (cond -- )) { if (Py_IsFalse(cond)) { _Py_DECREF_NO_DEALLOC(cond); } @@ -1937,25 +1932,23 @@ dummy_func( if (err > 0) { JUMPBY(oparg); } - else if (err == 0) - ; - else - goto error; + else { + ERROR_IF(err < 0, error); + } } } - // stack effect: (__0 -- ) - inst(POP_JUMP_IF_NOT_NONE) { - PyObject *value = POP(); + inst(POP_JUMP_IF_NOT_NONE, (value -- )) { if (!Py_IsNone(value)) { + Py_DECREF(value); JUMPBY(oparg); } - Py_DECREF(value); + else { + _Py_DECREF_NO_DEALLOC(value); + } } - // stack effect: (__0 -- ) - inst(POP_JUMP_IF_NONE) { - PyObject *value = POP(); + inst(POP_JUMP_IF_NONE, (value -- )) { if (Py_IsNone(value)) { _Py_DECREF_NO_DEALLOC(value); JUMPBY(oparg); @@ -1965,25 +1958,24 @@ dummy_func( } } - // error: JUMP_IF_FALSE_OR_POP stack effect depends on jump flag - inst(JUMP_IF_FALSE_OR_POP) { - PyObject *cond = TOP(); + inst(JUMP_IF_FALSE_OR_POP, (cond -- cond if (jump))) { + bool jump = false; int err; if (Py_IsTrue(cond)) { - STACK_SHRINK(1); _Py_DECREF_NO_DEALLOC(cond); } else if (Py_IsFalse(cond)) { JUMPBY(oparg); + jump = true; } else { err = PyObject_IsTrue(cond); if (err > 0) { - STACK_SHRINK(1); Py_DECREF(cond); } else if (err == 0) { JUMPBY(oparg); + jump = true; } else { goto error; @@ -1991,24 +1983,23 @@ dummy_func( } } - // error: JUMP_IF_TRUE_OR_POP stack effect depends on jump flag - inst(JUMP_IF_TRUE_OR_POP) { - PyObject *cond = TOP(); + inst(JUMP_IF_TRUE_OR_POP, (cond -- cond if (jump))) { + bool jump = false; int err; if (Py_IsFalse(cond)) { - STACK_SHRINK(1); _Py_DECREF_NO_DEALLOC(cond); } else if (Py_IsTrue(cond)) { JUMPBY(oparg); + jump = true; } else { err = PyObject_IsTrue(cond); if (err > 0) { JUMPBY(oparg); + jump = true; } else if (err == 0) { - STACK_SHRINK(1); Py_DECREF(cond); } else { @@ -2321,22 +2312,16 @@ dummy_func( ERROR_IF(res == NULL, error); } - // stack effect: ( -- __0) - inst(PUSH_EXC_INFO) { - PyObject *value = TOP(); - + inst(PUSH_EXC_INFO, (new_exc -- prev_exc, new_exc)) { _PyErr_StackItem *exc_info = tstate->exc_info; if (exc_info->exc_value != NULL) { - SET_TOP(exc_info->exc_value); + prev_exc = exc_info->exc_value; } else { - SET_TOP(Py_NewRef(Py_None)); + prev_exc = Py_NewRef(Py_None); } - - PUSH(Py_NewRef(value)); - assert(PyExceptionInstance_Check(value)); - exc_info->exc_value = value; - + assert(PyExceptionInstance_Check(new_exc)); + exc_info->exc_value = Py_NewRef(new_exc); } inst(LOAD_ATTR_METHOD_WITH_VALUES, (unused/1, type_version/2, keys_version/2, descr/4, self -- res2 if (oparg & 1), res)) { diff --git a/Python/compile.c b/Python/compile.c index a11bcc79a6dd..d9ec68958972 100644 --- a/Python/compile.c +++ b/Python/compile.c @@ -8630,17 +8630,19 @@ opcode_metadata_is_sane(cfg_builder *g) { int opcode 
= instr->i_opcode; int oparg = instr->i_oparg; assert(opcode <= MAX_REAL_OPCODE); - int popped = _PyOpcode_num_popped(opcode, oparg); - int pushed = _PyOpcode_num_pushed(opcode, oparg); - assert((pushed < 0) == (popped < 0)); - if (pushed >= 0) { - assert(_PyOpcode_opcode_metadata[opcode].valid_entry); - int effect = stack_effect(opcode, instr->i_oparg, -1); - if (effect != pushed - popped) { - fprintf(stderr, - "op=%d: stack_effect (%d) != pushed (%d) - popped (%d)\n", - opcode, effect, pushed, popped); - result = false; + for (int jump = 0; jump <= 1; jump++) { + int popped = _PyOpcode_num_popped(opcode, oparg, jump ? true : false); + int pushed = _PyOpcode_num_pushed(opcode, oparg, jump ? true : false); + assert((pushed < 0) == (popped < 0)); + if (pushed >= 0) { + assert(_PyOpcode_opcode_metadata[opcode].valid_entry); + int effect = stack_effect(opcode, instr->i_oparg, jump); + if (effect != pushed - popped) { + fprintf(stderr, + "op=%d arg=%d jump=%d: stack_effect (%d) != pushed (%d) - popped (%d)\n", + opcode, oparg, jump, effect, pushed, popped); + result = false; + } } } } diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index e5c5c7e557a3..a02d8d79c60d 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -2348,7 +2348,7 @@ TARGET(POP_JUMP_IF_FALSE) { PREDICTED(POP_JUMP_IF_FALSE); - PyObject *cond = POP(); + PyObject *cond = PEEK(1); if (Py_IsTrue(cond)) { _Py_DECREF_NO_DEALLOC(cond); } @@ -2359,19 +2359,19 @@ else { int err = PyObject_IsTrue(cond); Py_DECREF(cond); - if (err > 0) - ; - else if (err == 0) { + if (err == 0) { JUMPBY(oparg); } - else - goto error; + else { + if (err < 0) goto pop_1_error; + } } + STACK_SHRINK(1); DISPATCH(); } TARGET(POP_JUMP_IF_TRUE) { - PyObject *cond = POP(); + PyObject *cond = PEEK(1); if (Py_IsFalse(cond)) { _Py_DECREF_NO_DEALLOC(cond); } @@ -2385,25 +2385,29 @@ if (err > 0) { JUMPBY(oparg); } - else if (err == 0) - ; - else - goto error; + else { + if (err < 0) goto pop_1_error; + } } + STACK_SHRINK(1); DISPATCH(); } TARGET(POP_JUMP_IF_NOT_NONE) { - PyObject *value = POP(); + PyObject *value = PEEK(1); if (!Py_IsNone(value)) { + Py_DECREF(value); JUMPBY(oparg); } - Py_DECREF(value); + else { + _Py_DECREF_NO_DEALLOC(value); + } + STACK_SHRINK(1); DISPATCH(); } TARGET(POP_JUMP_IF_NONE) { - PyObject *value = POP(); + PyObject *value = PEEK(1); if (Py_IsNone(value)) { _Py_DECREF_NO_DEALLOC(value); JUMPBY(oparg); @@ -2411,58 +2415,65 @@ else { Py_DECREF(value); } + STACK_SHRINK(1); DISPATCH(); } TARGET(JUMP_IF_FALSE_OR_POP) { - PyObject *cond = TOP(); + PyObject *cond = PEEK(1); + bool jump = false; int err; if (Py_IsTrue(cond)) { - STACK_SHRINK(1); _Py_DECREF_NO_DEALLOC(cond); } else if (Py_IsFalse(cond)) { JUMPBY(oparg); + jump = true; } else { err = PyObject_IsTrue(cond); if (err > 0) { - STACK_SHRINK(1); Py_DECREF(cond); } else if (err == 0) { JUMPBY(oparg); + jump = true; } else { goto error; } } + STACK_SHRINK(1); + STACK_GROW((jump ? 1 : 0)); DISPATCH(); } TARGET(JUMP_IF_TRUE_OR_POP) { - PyObject *cond = TOP(); + PyObject *cond = PEEK(1); + bool jump = false; int err; if (Py_IsFalse(cond)) { - STACK_SHRINK(1); _Py_DECREF_NO_DEALLOC(cond); } else if (Py_IsTrue(cond)) { JUMPBY(oparg); + jump = true; } else { err = PyObject_IsTrue(cond); if (err > 0) { JUMPBY(oparg); + jump = true; } else if (err == 0) { - STACK_SHRINK(1); Py_DECREF(cond); } else { goto error; } } + STACK_SHRINK(1); + STACK_GROW((jump ? 
1 : 0)); DISPATCH(); } @@ -2828,19 +2839,20 @@ } TARGET(PUSH_EXC_INFO) { - PyObject *value = TOP(); - + PyObject *new_exc = PEEK(1); + PyObject *prev_exc; _PyErr_StackItem *exc_info = tstate->exc_info; if (exc_info->exc_value != NULL) { - SET_TOP(exc_info->exc_value); + prev_exc = exc_info->exc_value; } else { - SET_TOP(Py_NewRef(Py_None)); + prev_exc = Py_NewRef(Py_None); } - - PUSH(Py_NewRef(value)); - assert(PyExceptionInstance_Check(value)); - exc_info->exc_value = value; + assert(PyExceptionInstance_Check(new_exc)); + exc_info->exc_value = Py_NewRef(new_exc); + STACK_GROW(1); + POKE(1, new_exc); + POKE(2, prev_exc); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index 96e57be8cf5d..c9f9759e16b3 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -4,7 +4,7 @@ #ifndef NDEBUG static int -_PyOpcode_num_popped(int opcode, int oparg) { +_PyOpcode_num_popped(int opcode, int oparg, bool jump) { switch(opcode) { case NOP: return 0; @@ -233,17 +233,17 @@ _PyOpcode_num_popped(int opcode, int oparg) { case JUMP_BACKWARD: return 0; case POP_JUMP_IF_FALSE: - return -1; + return 1; case POP_JUMP_IF_TRUE: - return -1; + return 1; case POP_JUMP_IF_NOT_NONE: - return -1; + return 1; case POP_JUMP_IF_NONE: - return -1; + return 1; case JUMP_IF_FALSE_OR_POP: - return -1; + return 1; case JUMP_IF_TRUE_OR_POP: - return -1; + return 1; case JUMP_BACKWARD_NO_INTERRUPT: return 0; case GET_LEN: @@ -277,7 +277,7 @@ _PyOpcode_num_popped(int opcode, int oparg) { case WITH_EXCEPT_START: return 4; case PUSH_EXC_INFO: - return -1; + return 1; case LOAD_ATTR_METHOD_WITH_VALUES: return 1; case LOAD_ATTR_METHOD_NO_DICT: @@ -350,7 +350,7 @@ _PyOpcode_num_popped(int opcode, int oparg) { #ifndef NDEBUG static int -_PyOpcode_num_pushed(int opcode, int oparg) { +_PyOpcode_num_pushed(int opcode, int oparg, bool jump) { switch(opcode) { case NOP: return 0; @@ -579,17 +579,17 @@ _PyOpcode_num_pushed(int opcode, int oparg) { case JUMP_BACKWARD: return 0; case POP_JUMP_IF_FALSE: - return -1; + return 0; case POP_JUMP_IF_TRUE: - return -1; + return 0; case POP_JUMP_IF_NOT_NONE: - return -1; + return 0; case POP_JUMP_IF_NONE: - return -1; + return 0; case JUMP_IF_FALSE_OR_POP: - return -1; + return (jump ? 1 : 0); case JUMP_IF_TRUE_OR_POP: - return -1; + return (jump ? 1 : 0); case JUMP_BACKWARD_NO_INTERRUPT: return 0; case GET_LEN: @@ -623,7 +623,7 @@ _PyOpcode_num_pushed(int opcode, int oparg) { case WITH_EXCEPT_START: return 5; case PUSH_EXC_INFO: - return -1; + return 2; case LOAD_ATTR_METHOD_WITH_VALUES: return ((oparg & 1) ? 
1 : 0) + 1; case LOAD_ATTR_METHOD_NO_DICT: diff --git a/Tools/cases_generator/generate_cases.py b/Tools/cases_generator/generate_cases.py index 43685450cc0d..3925583b40e7 100644 --- a/Tools/cases_generator/generate_cases.py +++ b/Tools/cases_generator/generate_cases.py @@ -868,7 +868,7 @@ def write_function( ) -> None: self.out.emit("\n#ifndef NDEBUG") self.out.emit("static int") - self.out.emit(f"_PyOpcode_num_{direction}(int opcode, int oparg) {{") + self.out.emit(f"_PyOpcode_num_{direction}(int opcode, int oparg, bool jump) {{") self.out.emit(" switch(opcode) {") for instr, effect in data: self.out.emit(f" case {instr.name}:") From webhook-mailer at python.org Wed Feb 1 16:07:04 2023 From: webhook-mailer at python.org (zooba) Date: Wed, 01 Feb 2023 21:07:04 -0000 Subject: [Python-checkins] gh-101467: Correct py.exe handling of prefix matches and cases when only one runtime is installed (GH-101468) Message-ID: https://github.com/python/cpython/commit/eda60916bc88f8af736790ffd52381e8bb83ae83 commit: eda60916bc88f8af736790ffd52381e8bb83ae83 branch: main author: Steve Dower committer: zooba date: 2023-02-01T21:06:56Z summary: gh-101467: Correct py.exe handling of prefix matches and cases when only one runtime is installed (GH-101468) files: A Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst M Doc/using/windows.rst M Lib/test/test_launcher.py M PC/launcher2.c diff --git a/Doc/using/windows.rst b/Doc/using/windows.rst index bd09666d5e01..1c4e41c0e0e2 100644 --- a/Doc/using/windows.rst +++ b/Doc/using/windows.rst @@ -743,22 +743,47 @@ command:: py -2 -You should find the latest version of Python 3.x starts. - If you see the following error, you do not have the launcher installed:: 'py' is not recognized as an internal or external command, operable program or batch file. -Per-user installations of Python do not add the launcher to :envvar:`PATH` -unless the option was selected on installation. - The command:: py --list displays the currently installed version(s) of Python. +The ``-x.y`` argument is the short form of the ``-V:Company/Tag`` argument, +which allows selecting a specific Python runtime, including those that may have +come from somewhere other than python.org. Any runtime registered by following +:pep:`514` will be discoverable. The ``--list`` command lists all available +runtimes using the ``-V:`` format. + +When using the ``-V:`` argument, specifying the Company will limit selection to +runtimes from that provider, while specifying only the Tag will select from all +providers. Note that omitting the slash implies a tag:: + + # Select any '3.*' tagged runtime + py -V:3 + + # Select any 'PythonCore' released runtime + py -V:PythonCore/ + + # Select PythonCore's latest Python 3 runtime + py -V:PythonCore/3 + +The short form of the argument (``-3``) only ever selects from core Python +releases, and not other distributions. However, the longer form (``-V:3``) will +select from any. + +The Company is matched on the full string, case-insenitive. The Tag is matched +oneither the full string, or a prefix, provided the next character is a dot or a +hyphen. This allows ``-V:3.1`` to match ``3.1-32``, but not ``3.10``. Tags are +sorted using numerical ordering (``3.10`` is newer than ``3.1``), but are +compared using text (``-V:3.01`` does not match ``3.1``). 
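As a rough illustration of the matching rule described above, a requested tag matches an installed one when it is either the full tag or a prefix immediately followed by a dot or a hyphen. The sketch below is illustrative Python only; the launcher's real check is the C ``_startsWithSeparated``/``_tagMatches`` code in ``PC/launcher2.c`` further down in this patch::

    # Illustrative sketch of the tag-matching rule, not the launcher's code.
    def tag_matches(installed: str, requested: str) -> bool:
        if installed == requested:
            return True
        return (installed.startswith(requested)
                and installed[len(requested)] in ".-")

    assert tag_matches("3.1-32", "3.1")    # prefix ends at a separator
    assert not tag_matches("3.10", "3.1")  # "0" is not a separator
    assert tag_matches("3.11", "3.11")     # exact match always works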
+ + Virtual environments ^^^^^^^^^^^^^^^^^^^^ diff --git a/Lib/test/test_launcher.py b/Lib/test/test_launcher.py index 351a638c1dd3..2f35eaf08a2d 100644 --- a/Lib/test/test_launcher.py +++ b/Lib/test/test_launcher.py @@ -56,7 +56,17 @@ None: sys.prefix, } }, - } + }, + "PythonTestSuite1": { + "DisplayName": "Python Test Suite Single", + "3.100": { + "DisplayName": "Single Interpreter", + "InstallPath": { + None: sys.prefix, + "ExecutablePath": sys.executable, + } + } + }, } @@ -206,6 +216,7 @@ def run_py(self, args, env=None, allow_fail=False, expect_returncode=0, argv=Non **{k.upper(): v for k, v in os.environ.items() if k.upper() not in ignore}, "PYLAUNCHER_DEBUG": "1", "PYLAUNCHER_DRYRUN": "1", + "PYLAUNCHER_LIMIT_TO_COMPANY": "", **{k.upper(): v for k, v in (env or {}).items()}, } if not argv: @@ -388,23 +399,33 @@ def test_filter_to_tag(self): self.assertEqual(company, data["env.company"]) self.assertEqual("3.100", data["env.tag"]) - data = self.run_py([f"-V:3.100-3"]) + data = self.run_py([f"-V:3.100-32"]) self.assertEqual("X.Y-32.exe", data["LaunchCommand"]) self.assertEqual(company, data["env.company"]) self.assertEqual("3.100-32", data["env.tag"]) - data = self.run_py([f"-V:3.100-a"]) + data = self.run_py([f"-V:3.100-arm64"]) self.assertEqual("X.Y-arm64.exe -X fake_arg_for_test", data["LaunchCommand"]) self.assertEqual(company, data["env.company"]) self.assertEqual("3.100-arm64", data["env.tag"]) def test_filter_to_company_and_tag(self): company = "PythonTestSuite" - data = self.run_py([f"-V:{company}/3.1"]) + data = self.run_py([f"-V:{company}/3.1"], expect_returncode=103) + + data = self.run_py([f"-V:{company}/3.100"]) self.assertEqual("X.Y.exe", data["LaunchCommand"]) self.assertEqual(company, data["env.company"]) self.assertEqual("3.100", data["env.tag"]) + def test_filter_with_single_install(self): + company = "PythonTestSuite1" + data = self.run_py( + [f"-V:Nonexistent"], + env={"PYLAUNCHER_LIMIT_TO_COMPANY": company}, + expect_returncode=103, + ) + def test_search_major_3(self): try: data = self.run_py(["-3"], allow_fail=True) diff --git a/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst b/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst new file mode 100644 index 000000000000..4d4da05afa2d --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst @@ -0,0 +1,3 @@ +The ``py.exe`` launcher now correctly filters when only a single runtime is +installed. It also correctly handles prefix matches on tags so that ``-3.1`` +does not match ``3.11``, but would still match ``3.1-32``. diff --git a/PC/launcher2.c b/PC/launcher2.c index 2052a2ffeb57..beeb2ae46b83 100644 --- a/PC/launcher2.c +++ b/PC/launcher2.c @@ -295,6 +295,30 @@ _startsWithArgument(const wchar_t *x, int xLen, const wchar_t *y, int yLen) } +// Unlike regular startsWith, this function requires that the following +// character is either NULL (that is, the entire string matches) or is one of +// the characters in 'separators'. +bool +_startsWithSeparated(const wchar_t *x, int xLen, const wchar_t *y, int yLen, const wchar_t *separators) +{ + if (!x || !y) { + return false; + } + yLen = yLen < 0 ? (int)wcsnlen_s(y, MAXLEN) : yLen; + xLen = xLen < 0 ? 
(int)wcsnlen_s(x, MAXLEN) : xLen; + if (xLen < yLen) { + return false; + } + if (xLen == yLen) { + return 0 == _compare(x, xLen, y, yLen); + } + return separators && + 0 == _compare(x, yLen, y, yLen) && + wcschr(separators, x[yLen]) != NULL; +} + + + /******************************************************************************\ *** HELP TEXT *** \******************************************************************************/ @@ -409,6 +433,9 @@ typedef struct { bool listPaths; // if true, display help message before contiuning bool help; + // if set, limits search to registry keys with the specified Company + // This is intended for debugging and testing only + const wchar_t *limitToCompany; // dynamically allocated buffers to free later struct _SearchInfoBuffer *_buffer; } SearchInfo; @@ -489,6 +516,7 @@ dumpSearchInfo(SearchInfo *search) DEBUG_BOOL(list); DEBUG_BOOL(listPaths); DEBUG_BOOL(help); + DEBUG(limitToCompany); #undef DEBUG_BOOL #undef DEBUG_2 #undef DEBUG @@ -1606,6 +1634,10 @@ registrySearch(const SearchInfo *search, EnvironmentInfo **result, HKEY root, in } break; } + if (search->limitToCompany && 0 != _compare(search->limitToCompany, -1, buffer, cchBuffer)) { + debug(L"# Skipping %s due to PYLAUNCHER_LIMIT_TO_COMPANY\n", buffer); + continue; + } HKEY subkey; if (ERROR_SUCCESS == RegOpenKeyExW(root, buffer, 0, KEY_READ, &subkey)) { exitCode = _registrySearchTags(search, result, subkey, sortKey, buffer, fallbackArch); @@ -1884,6 +1916,11 @@ collectEnvironments(const SearchInfo *search, EnvironmentInfo **result) } } + if (search->limitToCompany) { + debug(L"# Skipping APPX search due to PYLAUNCHER_LIMIT_TO_COMPANY\n"); + return 0; + } + for (struct AppxSearchInfo *info = APPX_SEARCH; info->familyName; ++info) { exitCode = appxSearch(search, result, info->familyName, info->tag, info->sortKey); if (exitCode && exitCode != RC_NO_PYTHON) { @@ -2053,12 +2090,15 @@ _companyMatches(const SearchInfo *search, const EnvironmentInfo *env) bool -_tagMatches(const SearchInfo *search, const EnvironmentInfo *env) +_tagMatches(const SearchInfo *search, const EnvironmentInfo *env, int searchTagLength) { - if (!search->tag || !search->tagLength) { + if (searchTagLength < 0) { + searchTagLength = search->tagLength; + } + if (!search->tag || !searchTagLength) { return true; } - return _startsWith(env->tag, -1, search->tag, search->tagLength); + return _startsWithSeparated(env->tag, -1, search->tag, searchTagLength, L".-"); } @@ -2095,7 +2135,7 @@ _selectEnvironment(const SearchInfo *search, EnvironmentInfo *env, EnvironmentIn } if (!search->oldStyleTag) { - if (_companyMatches(search, env) && _tagMatches(search, env)) { + if (_companyMatches(search, env) && _tagMatches(search, env, -1)) { // Because of how our sort tree is set up, we will walk up the // "prev" side and implicitly select the "best" best. 
By // returning straight after a match, we skip the entire "next" @@ -2120,7 +2160,7 @@ _selectEnvironment(const SearchInfo *search, EnvironmentInfo *env, EnvironmentIn } } - if (_startsWith(env->tag, -1, search->tag, tagLength)) { + if (_tagMatches(search, env, tagLength)) { if (exclude32Bit && _is32Bit(env)) { debug(L"# Excluding %s/%s because it looks like 32bit\n", env->company, env->tag); } else if (only32Bit && !_is32Bit(env)) { @@ -2147,10 +2187,6 @@ selectEnvironment(const SearchInfo *search, EnvironmentInfo *root, EnvironmentIn *best = NULL; return RC_NO_PYTHON_AT_ALL; } - if (!root->next && !root->prev) { - *best = root; - return 0; - } EnvironmentInfo *result = NULL; int exitCode = _selectEnvironment(search, root, &result); @@ -2560,6 +2596,17 @@ process(int argc, wchar_t ** argv) debug(L"argv0: %s\nversion: %S\n", argv[0], PY_VERSION); } + DWORD len = GetEnvironmentVariableW(L"PYLAUNCHER_LIMIT_TO_COMPANY", NULL, 0); + if (len > 1) { + wchar_t *limitToCompany = allocSearchInfoBuffer(&search, len); + search.limitToCompany = limitToCompany; + if (0 == GetEnvironmentVariableW(L"PYLAUNCHER_LIMIT_TO_COMPANY", limitToCompany, len)) { + exitCode = RC_INTERNAL_ERROR; + winerror(0, L"Failed to read PYLAUNCHER_LIMIT_TO_COMPANY variable"); + goto abort; + } + } + search.originalCmdLine = GetCommandLineW(); exitCode = performSearch(&search, &envs); From webhook-mailer at python.org Wed Feb 1 16:12:55 2023 From: webhook-mailer at python.org (gvanrossum) Date: Wed, 01 Feb 2023 21:12:55 -0000 Subject: [Python-checkins] gh-98831: Modernize the LOAD_GLOBAL family (#101502) Message-ID: https://github.com/python/cpython/commit/ae9b38f4241a7d62ec4e9b775d4436d762b11fb3 commit: ae9b38f4241a7d62ec4e9b775d4436d762b11fb3 branch: main author: Guido van Rossum committer: gvanrossum date: 2023-02-01T13:12:49-08:00 summary: gh-98831: Modernize the LOAD_GLOBAL family (#101502) files: M Python/bytecodes.c M Python/generated_cases.c.h M Python/opcode_metadata.h diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 105bd95be0fd..fb41c8387ccc 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -1068,8 +1068,13 @@ dummy_func( } } - // error: LOAD_GLOBAL has irregular stack effect - inst(LOAD_GLOBAL) { + family(load_global, INLINE_CACHE_ENTRIES_LOAD_GLOBAL) = { + LOAD_GLOBAL, + LOAD_GLOBAL_MODULE, + LOAD_GLOBAL_BUILTIN, + }; + + inst(LOAD_GLOBAL, (unused/1, unused/1, unused/2, unused/1 -- null if (oparg & 1), v)) { #if ENABLE_SPECIALIZATION _PyLoadGlobalCache *cache = (_PyLoadGlobalCache *)next_instr; if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { @@ -1082,10 +1087,7 @@ dummy_func( STAT_INC(LOAD_GLOBAL, deferred); DECREMENT_ADAPTIVE_COUNTER(cache->counter); #endif /* ENABLE_SPECIALIZATION */ - int push_null = oparg & 1; - PEEK(0) = NULL; PyObject *name = GETITEM(names, oparg>>1); - PyObject *v; if (PyDict_CheckExact(GLOBALS()) && PyDict_CheckExact(BUILTINS())) { @@ -1099,7 +1101,7 @@ dummy_func( format_exc_check_arg(tstate, PyExc_NameError, NAME_ERROR_MSG, name); } - goto error; + ERROR_IF(true, error); } Py_INCREF(v); } @@ -1109,9 +1111,7 @@ dummy_func( /* namespace 1: globals */ v = PyObject_GetItem(GLOBALS(), name); if (v == NULL) { - if (!_PyErr_ExceptionMatches(tstate, PyExc_KeyError)) { - goto error; - } + ERROR_IF(!_PyErr_ExceptionMatches(tstate, PyExc_KeyError), error); _PyErr_Clear(tstate); /* namespace 2: builtins */ @@ -1122,58 +1122,42 @@ dummy_func( tstate, PyExc_NameError, NAME_ERROR_MSG, name); } - goto error; + ERROR_IF(true, error); } } } - /* Skip over inline cache */ - 
JUMPBY(INLINE_CACHE_ENTRIES_LOAD_GLOBAL); - STACK_GROW(push_null); - PUSH(v); + null = NULL; } - // error: LOAD_GLOBAL has irregular stack effect - inst(LOAD_GLOBAL_MODULE) { + inst(LOAD_GLOBAL_MODULE, (unused/1, index/1, version/2, unused/1 -- null if (oparg & 1), res)) { assert(cframe.use_tracing == 0); DEOPT_IF(!PyDict_CheckExact(GLOBALS()), LOAD_GLOBAL); PyDictObject *dict = (PyDictObject *)GLOBALS(); - _PyLoadGlobalCache *cache = (_PyLoadGlobalCache *)next_instr; - uint32_t version = read_u32(cache->module_keys_version); DEOPT_IF(dict->ma_keys->dk_version != version, LOAD_GLOBAL); assert(DK_IS_UNICODE(dict->ma_keys)); PyDictUnicodeEntry *entries = DK_UNICODE_ENTRIES(dict->ma_keys); - PyObject *res = entries[cache->index].me_value; + res = entries[index].me_value; DEOPT_IF(res == NULL, LOAD_GLOBAL); - int push_null = oparg & 1; - PEEK(0) = NULL; - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_GLOBAL); + Py_INCREF(res); STAT_INC(LOAD_GLOBAL, hit); - STACK_GROW(push_null+1); - SET_TOP(Py_NewRef(res)); + null = NULL; } - // error: LOAD_GLOBAL has irregular stack effect - inst(LOAD_GLOBAL_BUILTIN) { + inst(LOAD_GLOBAL_BUILTIN, (unused/1, index/1, mod_version/2, bltn_version/1 -- null if (oparg & 1), res)) { assert(cframe.use_tracing == 0); DEOPT_IF(!PyDict_CheckExact(GLOBALS()), LOAD_GLOBAL); DEOPT_IF(!PyDict_CheckExact(BUILTINS()), LOAD_GLOBAL); PyDictObject *mdict = (PyDictObject *)GLOBALS(); PyDictObject *bdict = (PyDictObject *)BUILTINS(); - _PyLoadGlobalCache *cache = (_PyLoadGlobalCache *)next_instr; - uint32_t mod_version = read_u32(cache->module_keys_version); - uint16_t bltn_version = cache->builtin_keys_version; DEOPT_IF(mdict->ma_keys->dk_version != mod_version, LOAD_GLOBAL); DEOPT_IF(bdict->ma_keys->dk_version != bltn_version, LOAD_GLOBAL); assert(DK_IS_UNICODE(bdict->ma_keys)); PyDictUnicodeEntry *entries = DK_UNICODE_ENTRIES(bdict->ma_keys); - PyObject *res = entries[cache->index].me_value; + res = entries[index].me_value; DEOPT_IF(res == NULL, LOAD_GLOBAL); - int push_null = oparg & 1; - PEEK(0) = NULL; - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_GLOBAL); + Py_INCREF(res); STAT_INC(LOAD_GLOBAL, hit); - STACK_GROW(push_null+1); - SET_TOP(Py_NewRef(res)); + null = NULL; } inst(DELETE_FAST, (--)) { @@ -3212,9 +3196,6 @@ family(call, INLINE_CACHE_ENTRIES_CALL) = { family(for_iter, INLINE_CACHE_ENTRIES_FOR_ITER) = { FOR_ITER, FOR_ITER_LIST, FOR_ITER_RANGE }; -family(load_global, INLINE_CACHE_ENTRIES_LOAD_GLOBAL) = { - LOAD_GLOBAL, LOAD_GLOBAL_BUILTIN, - LOAD_GLOBAL_MODULE }; family(store_fast) = { STORE_FAST, STORE_FAST__LOAD_FAST, STORE_FAST__STORE_FAST }; family(unpack_sequence, INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE) = { UNPACK_SEQUENCE, UNPACK_SEQUENCE_LIST, diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index a02d8d79c60d..77f9b00c4f0a 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -1311,6 +1311,9 @@ TARGET(LOAD_GLOBAL) { PREDICTED(LOAD_GLOBAL); + static_assert(INLINE_CACHE_ENTRIES_LOAD_GLOBAL == 5, "incorrect cache size"); + PyObject *null = NULL; + PyObject *v; #if ENABLE_SPECIALIZATION _PyLoadGlobalCache *cache = (_PyLoadGlobalCache *)next_instr; if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { @@ -1323,10 +1326,7 @@ STAT_INC(LOAD_GLOBAL, deferred); DECREMENT_ADAPTIVE_COUNTER(cache->counter); #endif /* ENABLE_SPECIALIZATION */ - int push_null = oparg & 1; - PEEK(0) = NULL; PyObject *name = GETITEM(names, oparg>>1); - PyObject *v; if (PyDict_CheckExact(GLOBALS()) && PyDict_CheckExact(BUILTINS())) { @@ -1340,7 +1340,7 @@ 
format_exc_check_arg(tstate, PyExc_NameError, NAME_ERROR_MSG, name); } - goto error; + if (true) goto error; } Py_INCREF(v); } @@ -1350,9 +1350,7 @@ /* namespace 1: globals */ v = PyObject_GetItem(GLOBALS(), name); if (v == NULL) { - if (!_PyErr_ExceptionMatches(tstate, PyExc_KeyError)) { - goto error; - } + if (!_PyErr_ExceptionMatches(tstate, PyExc_KeyError)) goto error; _PyErr_Clear(tstate); /* namespace 2: builtins */ @@ -1363,58 +1361,68 @@ tstate, PyExc_NameError, NAME_ERROR_MSG, name); } - goto error; + if (true) goto error; } } } - /* Skip over inline cache */ - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_GLOBAL); - STACK_GROW(push_null); - PUSH(v); + null = NULL; + STACK_GROW(1); + STACK_GROW(((oparg & 1) ? 1 : 0)); + POKE(1, v); + if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), null); } + JUMPBY(5); DISPATCH(); } TARGET(LOAD_GLOBAL_MODULE) { + PyObject *null = NULL; + PyObject *res; + uint16_t index = read_u16(&next_instr[1].cache); + uint32_t version = read_u32(&next_instr[2].cache); assert(cframe.use_tracing == 0); DEOPT_IF(!PyDict_CheckExact(GLOBALS()), LOAD_GLOBAL); PyDictObject *dict = (PyDictObject *)GLOBALS(); - _PyLoadGlobalCache *cache = (_PyLoadGlobalCache *)next_instr; - uint32_t version = read_u32(cache->module_keys_version); DEOPT_IF(dict->ma_keys->dk_version != version, LOAD_GLOBAL); assert(DK_IS_UNICODE(dict->ma_keys)); PyDictUnicodeEntry *entries = DK_UNICODE_ENTRIES(dict->ma_keys); - PyObject *res = entries[cache->index].me_value; + res = entries[index].me_value; DEOPT_IF(res == NULL, LOAD_GLOBAL); - int push_null = oparg & 1; - PEEK(0) = NULL; - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_GLOBAL); + Py_INCREF(res); STAT_INC(LOAD_GLOBAL, hit); - STACK_GROW(push_null+1); - SET_TOP(Py_NewRef(res)); + null = NULL; + STACK_GROW(1); + STACK_GROW(((oparg & 1) ? 1 : 0)); + POKE(1, res); + if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), null); } + JUMPBY(5); DISPATCH(); } TARGET(LOAD_GLOBAL_BUILTIN) { + PyObject *null = NULL; + PyObject *res; + uint16_t index = read_u16(&next_instr[1].cache); + uint32_t mod_version = read_u32(&next_instr[2].cache); + uint16_t bltn_version = read_u16(&next_instr[4].cache); assert(cframe.use_tracing == 0); DEOPT_IF(!PyDict_CheckExact(GLOBALS()), LOAD_GLOBAL); DEOPT_IF(!PyDict_CheckExact(BUILTINS()), LOAD_GLOBAL); PyDictObject *mdict = (PyDictObject *)GLOBALS(); PyDictObject *bdict = (PyDictObject *)BUILTINS(); - _PyLoadGlobalCache *cache = (_PyLoadGlobalCache *)next_instr; - uint32_t mod_version = read_u32(cache->module_keys_version); - uint16_t bltn_version = cache->builtin_keys_version; DEOPT_IF(mdict->ma_keys->dk_version != mod_version, LOAD_GLOBAL); DEOPT_IF(bdict->ma_keys->dk_version != bltn_version, LOAD_GLOBAL); assert(DK_IS_UNICODE(bdict->ma_keys)); PyDictUnicodeEntry *entries = DK_UNICODE_ENTRIES(bdict->ma_keys); - PyObject *res = entries[cache->index].me_value; + res = entries[index].me_value; DEOPT_IF(res == NULL, LOAD_GLOBAL); - int push_null = oparg & 1; - PEEK(0) = NULL; - JUMPBY(INLINE_CACHE_ENTRIES_LOAD_GLOBAL); + Py_INCREF(res); STAT_INC(LOAD_GLOBAL, hit); - STACK_GROW(push_null+1); - SET_TOP(Py_NewRef(res)); + null = NULL; + STACK_GROW(1); + STACK_GROW(((oparg & 1) ? 1 : 0)); + POKE(1, res); + if (oparg & 1) { POKE(1 + ((oparg & 1) ? 
1 : 0), null); } + JUMPBY(5); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index c9f9759e16b3..f0bdedeb667d 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -141,11 +141,11 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case LOAD_NAME: return 0; case LOAD_GLOBAL: - return -1; + return 0; case LOAD_GLOBAL_MODULE: - return -1; + return 0; case LOAD_GLOBAL_BUILTIN: - return -1; + return 0; case DELETE_FAST: return 0; case MAKE_CELL: @@ -487,11 +487,11 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case LOAD_NAME: return 1; case LOAD_GLOBAL: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case LOAD_GLOBAL_MODULE: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case LOAD_GLOBAL_BUILTIN: - return -1; + return ((oparg & 1) ? 1 : 0) + 1; case DELETE_FAST: return 0; case MAKE_CELL: @@ -694,7 +694,7 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { } #endif enum Direction { DIR_NONE, DIR_READ, DIR_WRITE }; -enum InstructionFormat { INSTR_FMT_IB, INSTR_FMT_IBC, INSTR_FMT_IBC0, INSTR_FMT_IBC000, INSTR_FMT_IBC00000000, INSTR_FMT_IBIB, INSTR_FMT_IX, INSTR_FMT_IXC, INSTR_FMT_IXC000 }; +enum InstructionFormat { INSTR_FMT_IB, INSTR_FMT_IBC, INSTR_FMT_IBC0, INSTR_FMT_IBC000, INSTR_FMT_IBC0000, INSTR_FMT_IBC00000000, INSTR_FMT_IBIB, INSTR_FMT_IX, INSTR_FMT_IXC, INSTR_FMT_IXC000 }; struct opcode_metadata { enum Direction dir_op1; enum Direction dir_op2; @@ -769,9 +769,9 @@ struct opcode_metadata { [STORE_GLOBAL] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [DELETE_GLOBAL] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [LOAD_NAME] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [LOAD_GLOBAL] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [LOAD_GLOBAL_MODULE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [LOAD_GLOBAL_BUILTIN] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, + [LOAD_GLOBAL] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC0000 }, + [LOAD_GLOBAL_MODULE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC0000 }, + [LOAD_GLOBAL_BUILTIN] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC0000 }, [DELETE_FAST] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [MAKE_CELL] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [DELETE_DEREF] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, From webhook-mailer at python.org Wed Feb 1 16:31:28 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 01 Feb 2023 21:31:28 -0000 Subject: [Python-checkins] gh-101467: Correct py.exe handling of prefix matches and cases when only one runtime is installed (GH-101468) Message-ID: https://github.com/python/cpython/commit/e98fa7121dd80496c60f07bb51101b648fe27cda commit: e98fa7121dd80496c60f07bb51101b648fe27cda branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-01T13:31:21-08:00 summary: gh-101467: Correct py.exe handling of prefix matches and cases when only one runtime is installed (GH-101468) (cherry picked from commit eda60916bc88f8af736790ffd52381e8bb83ae83) Co-authored-by: Steve Dower files: A Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst M Doc/using/windows.rst M Lib/test/test_launcher.py M PC/launcher2.c diff --git a/Doc/using/windows.rst b/Doc/using/windows.rst index 092ce47ad39a..50c220d77559 100644 --- a/Doc/using/windows.rst +++ b/Doc/using/windows.rst @@ 
-730,22 +730,47 @@ command:: py -2 -You should find the latest version of Python 3.x starts. - If you see the following error, you do not have the launcher installed:: 'py' is not recognized as an internal or external command, operable program or batch file. -Per-user installations of Python do not add the launcher to :envvar:`PATH` -unless the option was selected on installation. - The command:: py --list displays the currently installed version(s) of Python. +The ``-x.y`` argument is the short form of the ``-V:Company/Tag`` argument, +which allows selecting a specific Python runtime, including those that may have +come from somewhere other than python.org. Any runtime registered by following +:pep:`514` will be discoverable. The ``--list`` command lists all available +runtimes using the ``-V:`` format. + +When using the ``-V:`` argument, specifying the Company will limit selection to +runtimes from that provider, while specifying only the Tag will select from all +providers. Note that omitting the slash implies a tag:: + + # Select any '3.*' tagged runtime + py -V:3 + + # Select any 'PythonCore' released runtime + py -V:PythonCore/ + + # Select PythonCore's latest Python 3 runtime + py -V:PythonCore/3 + +The short form of the argument (``-3``) only ever selects from core Python +releases, and not other distributions. However, the longer form (``-V:3``) will +select from any. + +The Company is matched on the full string, case-insenitive. The Tag is matched +oneither the full string, or a prefix, provided the next character is a dot or a +hyphen. This allows ``-V:3.1`` to match ``3.1-32``, but not ``3.10``. Tags are +sorted using numerical ordering (``3.10`` is newer than ``3.1``), but are +compared using text (``-V:3.01`` does not match ``3.1``). + + Virtual environments ^^^^^^^^^^^^^^^^^^^^ diff --git a/Lib/test/test_launcher.py b/Lib/test/test_launcher.py index d7d51eb6ecaa..d0a1f624564c 100644 --- a/Lib/test/test_launcher.py +++ b/Lib/test/test_launcher.py @@ -57,7 +57,17 @@ None: sys.prefix, } }, - } + }, + "PythonTestSuite1": { + "DisplayName": "Python Test Suite Single", + "3.100": { + "DisplayName": "Single Interpreter", + "InstallPath": { + None: sys.prefix, + "ExecutablePath": sys.executable, + } + } + }, } @@ -207,6 +217,7 @@ def run_py(self, args, env=None, allow_fail=False, expect_returncode=0, argv=Non **{k.upper(): v for k, v in os.environ.items() if k.upper() not in ignore}, "PYLAUNCHER_DEBUG": "1", "PYLAUNCHER_DRYRUN": "1", + "PYLAUNCHER_LIMIT_TO_COMPANY": "", **{k.upper(): v for k, v in (env or {}).items()}, } if not argv: @@ -389,23 +400,33 @@ def test_filter_to_tag(self): self.assertEqual(company, data["env.company"]) self.assertEqual("3.100", data["env.tag"]) - data = self.run_py([f"-V:3.100-3"]) + data = self.run_py([f"-V:3.100-32"]) self.assertEqual("X.Y-32.exe", data["LaunchCommand"]) self.assertEqual(company, data["env.company"]) self.assertEqual("3.100-32", data["env.tag"]) - data = self.run_py([f"-V:3.100-a"]) + data = self.run_py([f"-V:3.100-arm64"]) self.assertEqual("X.Y-arm64.exe -X fake_arg_for_test", data["LaunchCommand"]) self.assertEqual(company, data["env.company"]) self.assertEqual("3.100-arm64", data["env.tag"]) def test_filter_to_company_and_tag(self): company = "PythonTestSuite" - data = self.run_py([f"-V:{company}/3.1"]) + data = self.run_py([f"-V:{company}/3.1"], expect_returncode=103) + + data = self.run_py([f"-V:{company}/3.100"]) self.assertEqual("X.Y.exe", data["LaunchCommand"]) self.assertEqual(company, data["env.company"]) 
self.assertEqual("3.100", data["env.tag"]) + def test_filter_with_single_install(self): + company = "PythonTestSuite1" + data = self.run_py( + [f"-V:Nonexistent"], + env={"PYLAUNCHER_LIMIT_TO_COMPANY": company}, + expect_returncode=103, + ) + def test_search_major_3(self): try: data = self.run_py(["-3"], allow_fail=True) diff --git a/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst b/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst new file mode 100644 index 000000000000..4d4da05afa2d --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst @@ -0,0 +1,3 @@ +The ``py.exe`` launcher now correctly filters when only a single runtime is +installed. It also correctly handles prefix matches on tags so that ``-3.1`` +does not match ``3.11``, but would still match ``3.1-32``. diff --git a/PC/launcher2.c b/PC/launcher2.c index 4c77ec0be439..50685a0ef240 100644 --- a/PC/launcher2.c +++ b/PC/launcher2.c @@ -295,6 +295,30 @@ _startsWithArgument(const wchar_t *x, int xLen, const wchar_t *y, int yLen) } +// Unlike regular startsWith, this function requires that the following +// character is either NULL (that is, the entire string matches) or is one of +// the characters in 'separators'. +bool +_startsWithSeparated(const wchar_t *x, int xLen, const wchar_t *y, int yLen, const wchar_t *separators) +{ + if (!x || !y) { + return false; + } + yLen = yLen < 0 ? (int)wcsnlen_s(y, MAXLEN) : yLen; + xLen = xLen < 0 ? (int)wcsnlen_s(x, MAXLEN) : xLen; + if (xLen < yLen) { + return false; + } + if (xLen == yLen) { + return 0 == _compare(x, xLen, y, yLen); + } + return separators && + 0 == _compare(x, yLen, y, yLen) && + wcschr(separators, x[yLen]) != NULL; +} + + + /******************************************************************************\ *** HELP TEXT *** \******************************************************************************/ @@ -409,6 +433,9 @@ typedef struct { bool listPaths; // if true, display help message before contiuning bool help; + // if set, limits search to registry keys with the specified Company + // This is intended for debugging and testing only + const wchar_t *limitToCompany; // dynamically allocated buffers to free later struct _SearchInfoBuffer *_buffer; } SearchInfo; @@ -485,6 +512,7 @@ dumpSearchInfo(SearchInfo *search) DEBUG_BOOL(list); DEBUG_BOOL(listPaths); DEBUG_BOOL(help); + DEBUG(limitToCompany); #undef DEBUG_BOOL #undef DEBUG_2 #undef DEBUG @@ -1602,6 +1630,10 @@ registrySearch(const SearchInfo *search, EnvironmentInfo **result, HKEY root, in } break; } + if (search->limitToCompany && 0 != _compare(search->limitToCompany, -1, buffer, cchBuffer)) { + debug(L"# Skipping %s due to PYLAUNCHER_LIMIT_TO_COMPANY\n", buffer); + continue; + } HKEY subkey; if (ERROR_SUCCESS == RegOpenKeyExW(root, buffer, 0, KEY_READ, &subkey)) { exitCode = _registrySearchTags(search, result, subkey, sortKey, buffer, fallbackArch); @@ -1880,6 +1912,11 @@ collectEnvironments(const SearchInfo *search, EnvironmentInfo **result) } } + if (search->limitToCompany) { + debug(L"# Skipping APPX search due to PYLAUNCHER_LIMIT_TO_COMPANY\n"); + return 0; + } + for (struct AppxSearchInfo *info = APPX_SEARCH; info->familyName; ++info) { exitCode = appxSearch(search, result, info->familyName, info->tag, info->sortKey); if (exitCode && exitCode != RC_NO_PYTHON) { @@ -2049,12 +2086,15 @@ _companyMatches(const SearchInfo *search, const EnvironmentInfo *env) bool -_tagMatches(const SearchInfo *search, const EnvironmentInfo *env) 
+_tagMatches(const SearchInfo *search, const EnvironmentInfo *env, int searchTagLength) { - if (!search->tag || !search->tagLength) { + if (searchTagLength < 0) { + searchTagLength = search->tagLength; + } + if (!search->tag || !searchTagLength) { return true; } - return _startsWith(env->tag, -1, search->tag, search->tagLength); + return _startsWithSeparated(env->tag, -1, search->tag, searchTagLength, L".-"); } @@ -2091,7 +2131,7 @@ _selectEnvironment(const SearchInfo *search, EnvironmentInfo *env, EnvironmentIn } if (!search->oldStyleTag) { - if (_companyMatches(search, env) && _tagMatches(search, env)) { + if (_companyMatches(search, env) && _tagMatches(search, env, -1)) { // Because of how our sort tree is set up, we will walk up the // "prev" side and implicitly select the "best" best. By // returning straight after a match, we skip the entire "next" @@ -2116,7 +2156,7 @@ _selectEnvironment(const SearchInfo *search, EnvironmentInfo *env, EnvironmentIn } } - if (_startsWith(env->tag, -1, search->tag, tagLength)) { + if (_tagMatches(search, env, tagLength)) { if (exclude32Bit && _is32Bit(env)) { debug(L"# Excluding %s/%s because it looks like 32bit\n", env->company, env->tag); } else if (only32Bit && !_is32Bit(env)) { @@ -2143,10 +2183,6 @@ selectEnvironment(const SearchInfo *search, EnvironmentInfo *root, EnvironmentIn *best = NULL; return RC_NO_PYTHON_AT_ALL; } - if (!root->next && !root->prev) { - *best = root; - return 0; - } EnvironmentInfo *result = NULL; int exitCode = _selectEnvironment(search, root, &result); @@ -2556,6 +2592,17 @@ process(int argc, wchar_t ** argv) debug(L"argv0: %s\nversion: %S\n", argv[0], PY_VERSION); } + DWORD len = GetEnvironmentVariableW(L"PYLAUNCHER_LIMIT_TO_COMPANY", NULL, 0); + if (len > 1) { + wchar_t *limitToCompany = allocSearchInfoBuffer(&search, len); + search.limitToCompany = limitToCompany; + if (0 == GetEnvironmentVariableW(L"PYLAUNCHER_LIMIT_TO_COMPANY", limitToCompany, len)) { + exitCode = RC_INTERNAL_ERROR; + winerror(0, L"Failed to read PYLAUNCHER_LIMIT_TO_COMPANY variable"); + goto abort; + } + } + search.originalCmdLine = GetCommandLineW(); exitCode = performSearch(&search, &envs); From webhook-mailer at python.org Wed Feb 1 18:52:37 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Wed, 01 Feb 2023 23:52:37 -0000 Subject: [Python-checkins] Docs: improve accuracy of sqlite3 `check_same_thread` parameter (#101351) Message-ID: https://github.com/python/cpython/commit/ee21110086e277a0ac24f5f768f35847b4db3380 commit: ee21110086e277a0ac24f5f768f35847b4db3380 branch: main author: Marcos Pereira <3464445+marcospgp at users.noreply.github.com> committer: erlend-aasland date: 2023-02-02T00:52:29+01:00 summary: Docs: improve accuracy of sqlite3 `check_same_thread` parameter (#101351) Co-authored-by: Erlend E. Aasland Co-authored-by: C.A.M. Gerlach files: M Doc/library/sqlite3.rst diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst index c0f0f133f4a2..bbdc891c930c 100644 --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -302,10 +302,13 @@ Module functions :type isolation_level: str | None :param bool check_same_thread: - If ``True`` (default), only the creating thread may use the connection. - If ``False``, the connection may be shared across multiple threads; - if so, write operations should be serialized by the user to avoid data - corruption. + If ``True`` (default), :exc:`ProgrammingError` will be raised + if the database connection is used by a thread + other than the one that created it. 
+ If ``False``, the connection may be accessed in multiple threads; + write operations may need to be serialized by the user + to avoid data corruption. + See :attr:`threadsafety` for more information. :param Connection factory: A custom subclass of :class:`Connection` to create the connection with, From webhook-mailer at python.org Wed Feb 1 19:00:27 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 02 Feb 2023 00:00:27 -0000 Subject: [Python-checkins] Docs: improve accuracy of sqlite3 `check_same_thread` parameter (GH-101351) Message-ID: https://github.com/python/cpython/commit/f8abe755e84185ca9ecfd9cb8612c13f28e70d67 commit: f8abe755e84185ca9ecfd9cb8612c13f28e70d67 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-01T16:00:21-08:00 summary: Docs: improve accuracy of sqlite3 `check_same_thread` parameter (GH-101351) (cherry picked from commit ee21110086e277a0ac24f5f768f35847b4db3380) Co-authored-by: Marcos Pereira <3464445+marcospgp at users.noreply.github.com> Co-authored-by: Erlend E. Aasland Co-authored-by: C.A.M. Gerlach files: M Doc/library/sqlite3.rst diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst index 065243c7eaad..3a6925473b78 100644 --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -299,10 +299,13 @@ Module functions :type isolation_level: str | None :param bool check_same_thread: - If ``True`` (default), only the creating thread may use the connection. - If ``False``, the connection may be shared across multiple threads; - if so, write operations should be serialized by the user to avoid data - corruption. + If ``True`` (default), :exc:`ProgrammingError` will be raised + if the database connection is used by a thread + other than the one that created it. + If ``False``, the connection may be accessed in multiple threads; + write operations may need to be serialized by the user + to avoid data corruption. + See :attr:`threadsafety` for more information. :param Connection factory: A custom subclass of :class:`Connection` to create the connection with, From webhook-mailer at python.org Wed Feb 1 19:00:34 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 02 Feb 2023 00:00:34 -0000 Subject: [Python-checkins] Docs: improve accuracy of sqlite3 `check_same_thread` parameter (GH-101351) Message-ID: https://github.com/python/cpython/commit/c2c970fc26e6de5b9e338413eac71d46aea5ca5e commit: c2c970fc26e6de5b9e338413eac71d46aea5ca5e branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-01T16:00:27-08:00 summary: Docs: improve accuracy of sqlite3 `check_same_thread` parameter (GH-101351) (cherry picked from commit ee21110086e277a0ac24f5f768f35847b4db3380) Co-authored-by: Marcos Pereira <3464445+marcospgp at users.noreply.github.com> Co-authored-by: Erlend E. Aasland Co-authored-by: C.A.M. Gerlach files: M Doc/library/sqlite3.rst diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst index ad5d063981f3..8a0d84d1c4b1 100644 --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -299,10 +299,13 @@ Module functions :type isolation_level: str | None :param bool check_same_thread: - If ``True`` (default), only the creating thread may use the connection. 
- If ``False``, the connection may be shared across multiple threads; - if so, write operations should be serialized by the user to avoid data - corruption. + If ``True`` (default), :exc:`ProgrammingError` will be raised + if the database connection is used by a thread + other than the one that created it. + If ``False``, the connection may be accessed in multiple threads; + write operations may need to be serialized by the user + to avoid data corruption. + See :attr:`threadsafety` for more information. :param Connection factory: A custom subclass of :class:`Connection` to create the connection with, From webhook-mailer at python.org Thu Feb 2 05:03:17 2023 From: webhook-mailer at python.org (iritkatriel) Date: Thu, 02 Feb 2023 10:03:17 -0000 Subject: [Python-checkins] gh-98831: rewrite RERAISE and CLEANUP_THROW in the instruction definition DSL (#101511) Message-ID: https://github.com/python/cpython/commit/0675b8f032c69d265468b31d5cadac6a7ce4bd9c commit: 0675b8f032c69d265468b31d5cadac6a7ce4bd9c branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-02T10:02:57Z summary: gh-98831: rewrite RERAISE and CLEANUP_THROW in the instruction definition DSL (#101511) files: M Python/bytecodes.c M Python/generated_cases.c.h M Python/opcode_metadata.h diff --git a/Python/bytecodes.c b/Python/bytecodes.c index fb41c8387ccc..169a2647866b 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -739,10 +739,10 @@ dummy_func( Py_XSETREF(exc_info->exc_value, exc_value); } - // stack effect: (__0 -- ) - inst(RERAISE) { + inst(RERAISE, (values[oparg], exc -- values[oparg])) { + assert(oparg >= 0 && oparg <= 2); if (oparg) { - PyObject *lasti = PEEK(oparg + 1); + PyObject *lasti = values[0]; if (PyLong_Check(lasti)) { frame->prev_instr = _PyCode_CODE(frame->f_code) + PyLong_AsLong(lasti); assert(!_PyErr_Occurred(tstate)); @@ -753,11 +753,11 @@ dummy_func( goto error; } } - PyObject *val = POP(); - assert(val && PyExceptionInstance_Check(val)); - PyObject *exc = Py_NewRef(PyExceptionInstance_Class(val)); - PyObject *tb = PyException_GetTraceback(val); - _PyErr_Restore(tstate, exc, val, tb); + assert(exc && PyExceptionInstance_Check(exc)); + Py_INCREF(exc); + PyObject *typ = Py_NewRef(PyExceptionInstance_Class(exc)); + PyObject *tb = PyException_GetTraceback(exc); + _PyErr_Restore(tstate, typ, exc, tb); goto exception_unwind; } @@ -784,18 +784,12 @@ dummy_func( } } - // stack effect: (__0, __1 -- ) - inst(CLEANUP_THROW) { + inst(CLEANUP_THROW, (sub_iter, last_sent_val, exc_value -- value)) { assert(throwflag); - PyObject *exc_value = TOP(); assert(exc_value && PyExceptionInstance_Check(exc_value)); if (PyErr_GivenExceptionMatches(exc_value, PyExc_StopIteration)) { - PyObject *value = ((PyStopIterationObject *)exc_value)->value; - Py_INCREF(value); - Py_DECREF(POP()); // The StopIteration. - Py_DECREF(POP()); // The last sent value. - Py_DECREF(POP()); // The delegated sub-iterator. 
- PUSH(value); + value = Py_NewRef(((PyStopIterationObject *)exc_value)->value); + DECREF_INPUTS(); } else { PyObject *exc_type = Py_NewRef(Py_TYPE(exc_value)); diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 77f9b00c4f0a..97263866fe91 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -945,8 +945,11 @@ } TARGET(RERAISE) { + PyObject *exc = PEEK(1); + PyObject **values = &PEEK(1 + oparg); + assert(oparg >= 0 && oparg <= 2); if (oparg) { - PyObject *lasti = PEEK(oparg + 1); + PyObject *lasti = values[0]; if (PyLong_Check(lasti)) { frame->prev_instr = _PyCode_CODE(frame->f_code) + PyLong_AsLong(lasti); assert(!_PyErr_Occurred(tstate)); @@ -957,11 +960,11 @@ goto error; } } - PyObject *val = POP(); - assert(val && PyExceptionInstance_Check(val)); - PyObject *exc = Py_NewRef(PyExceptionInstance_Class(val)); - PyObject *tb = PyException_GetTraceback(val); - _PyErr_Restore(tstate, exc, val, tb); + assert(exc && PyExceptionInstance_Check(exc)); + Py_INCREF(exc); + PyObject *typ = Py_NewRef(PyExceptionInstance_Class(exc)); + PyObject *tb = PyException_GetTraceback(exc); + _PyErr_Restore(tstate, typ, exc, tb); goto exception_unwind; } @@ -1001,16 +1004,17 @@ } TARGET(CLEANUP_THROW) { + PyObject *exc_value = PEEK(1); + PyObject *last_sent_val = PEEK(2); + PyObject *sub_iter = PEEK(3); + PyObject *value; assert(throwflag); - PyObject *exc_value = TOP(); assert(exc_value && PyExceptionInstance_Check(exc_value)); if (PyErr_GivenExceptionMatches(exc_value, PyExc_StopIteration)) { - PyObject *value = ((PyStopIterationObject *)exc_value)->value; - Py_INCREF(value); - Py_DECREF(POP()); // The StopIteration. - Py_DECREF(POP()); // The last sent value. - Py_DECREF(POP()); // The delegated sub-iterator. - PUSH(value); + value = Py_NewRef(((PyStopIterationObject *)exc_value)->value); + Py_DECREF(sub_iter); + Py_DECREF(last_sent_val); + Py_DECREF(exc_value); } else { PyObject *exc_type = Py_NewRef(Py_TYPE(exc_value)); @@ -1018,6 +1022,8 @@ _PyErr_Restore(tstate, exc_type, Py_NewRef(exc_value), exc_traceback); goto exception_unwind; } + STACK_SHRINK(2); + POKE(1, value); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index f0bdedeb667d..ca3dde363bfd 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -105,13 +105,13 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case POP_EXCEPT: return 1; case RERAISE: - return -1; + return oparg + 1; case PREP_RERAISE_STAR: return 2; case END_ASYNC_FOR: return 2; case CLEANUP_THROW: - return -1; + return 3; case LOAD_ASSERTION_ERROR: return 0; case LOAD_BUILD_CLASS: @@ -451,13 +451,13 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case POP_EXCEPT: return 0; case RERAISE: - return -1; + return oparg; case PREP_RERAISE_STAR: return 1; case END_ASYNC_FOR: return 0; case CLEANUP_THROW: - return -1; + return 1; case LOAD_ASSERTION_ERROR: return 1; case LOAD_BUILD_CLASS: From webhook-mailer at python.org Thu Feb 2 15:13:04 2023 From: webhook-mailer at python.org (ethanfurman) Date: Thu, 02 Feb 2023 20:13:04 -0000 Subject: [Python-checkins] docs: Fix enum reassign `str` documentation (GH-101507) Message-ID: https://github.com/python/cpython/commit/24cbc7a2a09bf22ff8122c1d50135e1c56622c95 commit: 24cbc7a2a09bf22ff8122c1d50135e1c56622c95 branch: main author: Peter Gessler committer: ethanfurman date: 2023-02-02T12:12:57-08:00 summary: docs: Fix enum reassign `str` documentation (GH-101507) files: M Doc/library/enum.rst diff --git a/Doc/library/enum.rst 
b/Doc/library/enum.rst index cfe4fbeb0698..13591a1bdc73 100644 --- a/Doc/library/enum.rst +++ b/Doc/library/enum.rst @@ -922,6 +922,6 @@ Notes or you can reassign the appropriate :meth:`str`, etc., in your enum:: - >>> from enum import IntEnum + >>> from enum import Enum, IntEnum >>> class MyIntEnum(IntEnum): - ... __str__ = IntEnum.__str__ + ... __str__ = Enum.__str__ From webhook-mailer at python.org Thu Feb 2 15:30:57 2023 From: webhook-mailer at python.org (gpshead) Date: Thu, 02 Feb 2023 20:30:57 -0000 Subject: [Python-checkins] gh-96305: Fix AIX build by avoiding subprocess during bootstrap (#96429) Message-ID: https://github.com/python/cpython/commit/ba4731d149185894c77d201bc5804da90ff45eee commit: ba4731d149185894c77d201bc5804da90ff45eee branch: main author: Ayappan Perumal committer: gpshead date: 2023-02-02T12:30:49-08:00 summary: gh-96305: Fix AIX build by avoiding subprocess during bootstrap (#96429) * Fix AIX build by avoiding `subprocess` during bootstrap. files: A Misc/NEWS.d/next/Build/2022-08-30-10-16-31.gh-issue-96305.274i8B.rst M Lib/_aix_support.py diff --git a/Lib/_aix_support.py b/Lib/_aix_support.py index 18533e769b75..dadc75c2bf42 100644 --- a/Lib/_aix_support.py +++ b/Lib/_aix_support.py @@ -1,10 +1,28 @@ """Shared AIX support functions.""" -import subprocess import sys import sysconfig +# Taken from _osx_support _read_output function +def _read_cmd_output(commandstring, capture_stderr=False): + """Output from successful command execution or None""" + # Similar to os.popen(commandstring, "r").read(), + # but without actually using os.popen because that + # function is not usable during python bootstrap. + import os + import contextlib + fp = open("/tmp/_aix_support.%s"%( + os.getpid(),), "w+b") + + with contextlib.closing(fp) as fp: + if capture_stderr: + cmd = "%s >'%s' 2>&1" % (commandstring, fp.name) + else: + cmd = "%s 2>/dev/null >'%s'" % (commandstring, fp.name) + return fp.read() if not os.system(cmd) else None + + def _aix_tag(vrtl, bd): # type: (List[int], int) -> str # Infer the ABI bitwidth from maxsize (assuming 64 bit as the default) @@ -30,7 +48,12 @@ def _aix_bos_rte(): If no builddate is found give a value that will satisfy pep425 related queries """ # All AIX systems to have lslpp installed in this location - out = subprocess.check_output(["/usr/bin/lslpp", "-Lqc", "bos.rte"]) + # subprocess may not be available during python bootstrap + try: + import subprocess + out = subprocess.check_output(["/usr/bin/lslpp", "-Lqc", "bos.rte"]) + except ImportError: + out = _read_cmd_output("/usr/bin/lslpp -Lqc bos.rte") out = out.decode("utf-8") out = out.strip().split(":") # type: ignore _bd = int(out[-1]) if out[-1] != '' else 9988 diff --git a/Misc/NEWS.d/next/Build/2022-08-30-10-16-31.gh-issue-96305.274i8B.rst b/Misc/NEWS.d/next/Build/2022-08-30-10-16-31.gh-issue-96305.274i8B.rst new file mode 100644 index 000000000000..64a48da658f2 --- /dev/null +++ b/Misc/NEWS.d/next/Build/2022-08-30-10-16-31.gh-issue-96305.274i8B.rst @@ -0,0 +1,2 @@ +``_aix_support`` now uses a simple code to get platform details rather than +the now non-existent ``_bootsubprocess`` during bootstrap. 
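The patch above keeps ``_aix_support`` importable during interpreter bootstrap by deferring the ``subprocess`` import and, if that fails, capturing ``lslpp`` output via ``os.system`` redirected into a temporary file. A minimal sketch of the parsing step follows, using a made-up ``lslpp -Lqc bos.rte`` record (the real field layout may differ; only the last, build-date field matters here):

    # Sketch only: reduce a colon-separated lslpp record to the build date,
    # mirroring _aix_bos_rte()'s fallback of 9988 when the field is empty.
    def bos_rte_builddate(record: str) -> int:
        fields = record.strip().split(":")
        return int(fields[-1]) if fields[-1] != "" else 9988

    print(bos_rte_builddate("bos.rte:bos.rte:7.2.5.100:::::C:R:::::::2146"))  # 2146
    print(bos_rte_builddate("bos.rte:bos.rte:7.2.5.100:::::C:R:::::::"))      # 9988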
From webhook-mailer at python.org Thu Feb 2 15:32:40 2023 From: webhook-mailer at python.org (gpshead) Date: Thu, 02 Feb 2023 20:32:40 -0000 Subject: [Python-checkins] gh-98705: Fix AIX build by undefining `__bool__` in C (#98768) Message-ID: https://github.com/python/cpython/commit/618b7a8260bb40290d6551f24885931077309590 commit: 618b7a8260bb40290d6551f24885931077309590 branch: main author: Ayappan Perumal committer: gpshead date: 2023-02-02T12:32:33-08:00 summary: gh-98705: Fix AIX build by undefining `__bool__` in C (#98768) files: A Misc/NEWS.d/next/Build/2022-10-27-09-57-12.gh-issue-98705.H11XmR.rst M Include/pyport.h diff --git a/Include/pyport.h b/Include/pyport.h index b1b2a7477969..22085049a304 100644 --- a/Include/pyport.h +++ b/Include/pyport.h @@ -726,4 +726,10 @@ extern char * _getpty(int *, int, mode_t, int); # endif #endif + +/* AIX has __bool__ redefined in it's system header file. */ +#if defined(_AIX) && defined(__bool__) +#undef __bool__ +#endif + #endif /* Py_PYPORT_H */ diff --git a/Misc/NEWS.d/next/Build/2022-10-27-09-57-12.gh-issue-98705.H11XmR.rst b/Misc/NEWS.d/next/Build/2022-10-27-09-57-12.gh-issue-98705.H11XmR.rst new file mode 100644 index 000000000000..4519853cdbad --- /dev/null +++ b/Misc/NEWS.d/next/Build/2022-10-27-09-57-12.gh-issue-98705.H11XmR.rst @@ -0,0 +1,2 @@ +``__bool__`` is defined in AIX system header files which breaks the build in +AIX, so undefine it. From webhook-mailer at python.org Thu Feb 2 15:40:38 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 02 Feb 2023 20:40:38 -0000 Subject: [Python-checkins] docs: Fix enum reassign `str` documentation (GH-101507) Message-ID: https://github.com/python/cpython/commit/08f5c773594671606d8f51656b0881d36f4b5f11 commit: 08f5c773594671606d8f51656b0881d36f4b5f11 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-02T12:40:30-08:00 summary: docs: Fix enum reassign `str` documentation (GH-101507) (cherry picked from commit 24cbc7a2a09bf22ff8122c1d50135e1c56622c95) Co-authored-by: Peter Gessler files: M Doc/library/enum.rst diff --git a/Doc/library/enum.rst b/Doc/library/enum.rst index b58f3dba0310..1973578e788c 100644 --- a/Doc/library/enum.rst +++ b/Doc/library/enum.rst @@ -914,6 +914,6 @@ Notes or you can reassign the appropriate :meth:`str`, etc., in your enum:: - >>> from enum import IntEnum + >>> from enum import Enum, IntEnum >>> class MyIntEnum(IntEnum): - ... __str__ = IntEnum.__str__ + ... __str__ = Enum.__str__ From webhook-mailer at python.org Thu Feb 2 18:50:42 2023 From: webhook-mailer at python.org (gpshead) Date: Thu, 02 Feb 2023 23:50:42 -0000 Subject: [Python-checkins] GH-84559: Deprecate fork being the multiprocessing default. (#100618) Message-ID: https://github.com/python/cpython/commit/0ca67e6313c11263ecaef7ce182308eeb5aa6814 commit: 0ca67e6313c11263ecaef7ce182308eeb5aa6814 branch: main author: Gregory P. Smith committer: gpshead date: 2023-02-02T15:50:35-08:00 summary: GH-84559: Deprecate fork being the multiprocessing default. (#100618) This starts the process. Users who don't specify their own start method and use the default on platforms where it is 'fork' will see a DeprecationWarning upon multiprocessing.Pool() construction or upon multiprocessing.Process.start() or concurrent.futures.ProcessPool use. See the related issue and documentation within this change for details. 
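Since the default start method remains 'fork' on most POSIX platforms until 3.14, code that wants to avoid the new DeprecationWarning can simply pick a start method explicitly. A small sketch (using 'spawn' purely as an example; 'fork' and 'forkserver' work the same way where available):

    # Sketch: select a start method explicitly so the implicit-'fork'
    # DeprecationWarning described above is never triggered.
    import multiprocessing
    from concurrent.futures import ProcessPoolExecutor

    def square(x):
        return x * x

    if __name__ == "__main__":
        ctx = multiprocessing.get_context("spawn")
        with ctx.Pool(processes=2) as pool:              # explicit context
            print(pool.map(square, range(4)))
        with ProcessPoolExecutor(mp_context=ctx) as ex:  # same for the executor
            print(list(ex.map(square, range(4))))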
files: A Lib/test/test_multiprocessing_defaults.py A Misc/NEWS.d/next/Library/2023-01-01-01-19-33.gh-issue-84559.zEjsEJ.rst M Doc/library/concurrent.futures.rst M Doc/library/multiprocessing.rst M Doc/whatsnew/3.12.rst M Lib/compileall.py M Lib/concurrent/futures/process.py M Lib/multiprocessing/context.py M Lib/test/_test_multiprocessing.py M Lib/test/_test_venv_multiprocessing.py M Lib/test/test_asyncio/test_events.py M Lib/test/test_concurrent_futures.py M Lib/test/test_fcntl.py M Lib/test/test_logging.py M Lib/test/test_pickle.py M Lib/test/test_re.py diff --git a/Doc/library/concurrent.futures.rst b/Doc/library/concurrent.futures.rst index 8106cc235e5a..10cffdaee0bb 100644 --- a/Doc/library/concurrent.futures.rst +++ b/Doc/library/concurrent.futures.rst @@ -250,9 +250,10 @@ to a :class:`ProcessPoolExecutor` will result in deadlock. then :exc:`ValueError` will be raised. If *max_workers* is ``None``, then the default chosen will be at most ``61``, even if more processors are available. - *mp_context* can be a multiprocessing context or None. It will be used to - launch the workers. If *mp_context* is ``None`` or not given, the default - multiprocessing context is used. + *mp_context* can be a :mod:`multiprocessing` context or ``None``. It will be + used to launch the workers. If *mp_context* is ``None`` or not given, the + default :mod:`multiprocessing` context is used. + See :ref:`multiprocessing-start-methods`. *initializer* is an optional callable that is called at the start of each worker process; *initargs* is a tuple of arguments passed to the @@ -284,6 +285,13 @@ to a :class:`ProcessPoolExecutor` will result in deadlock. The *max_tasks_per_child* argument was added to allow users to control the lifetime of workers in the pool. + .. versionchanged:: 3.12 + The implicit use of the :mod:`multiprocessing` *fork* start method as a + platform default (see :ref:`multiprocessing-start-methods`) now raises a + :exc:`DeprecationWarning`. The default will change in Python 3.14. + Code that requires *fork* should explicitly specify that when creating + their :class:`ProcessPoolExecutor` by passing a + ``mp_context=multiprocessing.get_context('fork')`` parameter. .. _processpoolexecutor-example: diff --git a/Doc/library/multiprocessing.rst b/Doc/library/multiprocessing.rst index b5ceeb796f8f..c60b229ae2d0 100644 --- a/Doc/library/multiprocessing.rst +++ b/Doc/library/multiprocessing.rst @@ -19,7 +19,7 @@ offers both local and remote concurrency, effectively side-stepping the :term:`Global Interpreter Lock ` by using subprocesses instead of threads. Due to this, the :mod:`multiprocessing` module allows the programmer to fully -leverage multiple processors on a given machine. It runs on both Unix and +leverage multiple processors on a given machine. It runs on both POSIX and Windows. The :mod:`multiprocessing` module also introduces APIs which do not have @@ -99,11 +99,11 @@ necessary, see :ref:`multiprocessing-programming`. +.. _multiprocessing-start-methods: + Contexts and start methods ~~~~~~~~~~~~~~~~~~~~~~~~~~ -.. _multiprocessing-start-methods: - Depending on the platform, :mod:`multiprocessing` supports three ways to start a process. These *start methods* are @@ -115,7 +115,7 @@ to start a process. These *start methods* are will not be inherited. Starting a process using this method is rather slow compared to using *fork* or *forkserver*. - Available on Unix and Windows. The default on Windows and macOS. + Available on POSIX and Windows platforms. The default on Windows and macOS. 
*fork* The parent process uses :func:`os.fork` to fork the Python @@ -124,32 +124,39 @@ to start a process. These *start methods* are inherited by the child process. Note that safely forking a multithreaded process is problematic. - Available on Unix only. The default on Unix. + Available on POSIX systems. Currently the default on POSIX except macOS. *forkserver* When the program starts and selects the *forkserver* start method, - a server process is started. From then on, whenever a new process + a server process is spawned. From then on, whenever a new process is needed, the parent process connects to the server and requests - that it fork a new process. The fork server process is single - threaded so it is safe for it to use :func:`os.fork`. No - unnecessary resources are inherited. + that it fork a new process. The fork server process is single threaded + unless system libraries or preloaded imports spawn threads as a + side-effect so it is generally safe for it to use :func:`os.fork`. + No unnecessary resources are inherited. - Available on Unix platforms which support passing file descriptors - over Unix pipes. + Available on POSIX platforms which support passing file descriptors + over Unix pipes such as Linux. + +.. versionchanged:: 3.12 + Implicit use of the *fork* start method as the default now raises a + :exc:`DeprecationWarning`. Code that requires it should explicitly + specify *fork* via :func:`get_context` or :func:`set_start_method`. + The default will change away from *fork* in 3.14. .. versionchanged:: 3.8 On macOS, the *spawn* start method is now the default. The *fork* start method should be considered unsafe as it can lead to crashes of the - subprocess. See :issue:`33725`. + subprocess as macOS system libraries may start threads. See :issue:`33725`. .. versionchanged:: 3.4 - *spawn* added on all Unix platforms, and *forkserver* added for - some Unix platforms. + *spawn* added on all POSIX platforms, and *forkserver* added for + some POSIX platforms. Child processes no longer inherit all of the parents inheritable handles on Windows. -On Unix using the *spawn* or *forkserver* start methods will also +On POSIX using the *spawn* or *forkserver* start methods will also start a *resource tracker* process which tracks the unlinked named system resources (such as named semaphores or :class:`~multiprocessing.shared_memory.SharedMemory` objects) created @@ -211,10 +218,10 @@ library user. .. warning:: - The ``'spawn'`` and ``'forkserver'`` start methods cannot currently + The ``'spawn'`` and ``'forkserver'`` start methods generally cannot be used with "frozen" executables (i.e., binaries produced by - packages like **PyInstaller** and **cx_Freeze**) on Unix. - The ``'fork'`` start method does work. + packages like **PyInstaller** and **cx_Freeze**) on POSIX systems. + The ``'fork'`` start method may work if code does not use threads. Exchanging objects between processes @@ -629,14 +636,14 @@ The :mod:`multiprocessing` package mostly replicates the API of the calling :meth:`join()` is simpler. On Windows, this is an OS handle usable with the ``WaitForSingleObject`` - and ``WaitForMultipleObjects`` family of API calls. On Unix, this is + and ``WaitForMultipleObjects`` family of API calls. On POSIX, this is a file descriptor usable with primitives from the :mod:`select` module. .. versionadded:: 3.3 .. method:: terminate() - Terminate the process. On Unix this is done using the ``SIGTERM`` signal; + Terminate the process. 
On POSIX this is done using the ``SIGTERM`` signal; on Windows :c:func:`TerminateProcess` is used. Note that exit handlers and finally clauses, etc., will not be executed. @@ -653,7 +660,7 @@ The :mod:`multiprocessing` package mostly replicates the API of the .. method:: kill() - Same as :meth:`terminate()` but using the ``SIGKILL`` signal on Unix. + Same as :meth:`terminate()` but using the ``SIGKILL`` signal on POSIX. .. versionadded:: 3.7 @@ -676,16 +683,17 @@ The :mod:`multiprocessing` package mostly replicates the API of the .. doctest:: >>> import multiprocessing, time, signal - >>> p = multiprocessing.Process(target=time.sleep, args=(1000,)) + >>> mp_context = multiprocessing.get_context('spawn') + >>> p = mp_context.Process(target=time.sleep, args=(1000,)) >>> print(p, p.is_alive()) - False + <...Process ... initial> False >>> p.start() >>> print(p, p.is_alive()) - True + <...Process ... started> True >>> p.terminate() >>> time.sleep(0.1) >>> print(p, p.is_alive()) - False + <...Process ... stopped exitcode=-SIGTERM> False >>> p.exitcode == -signal.SIGTERM True @@ -815,7 +823,7 @@ For an example of the usage of queues for interprocess communication see Return the approximate size of the queue. Because of multithreading/multiprocessing semantics, this number is not reliable. - Note that this may raise :exc:`NotImplementedError` on Unix platforms like + Note that this may raise :exc:`NotImplementedError` on platforms like macOS where ``sem_getvalue()`` is not implemented. .. method:: empty() @@ -1034,9 +1042,8 @@ Miscellaneous Returns a list of the supported start methods, the first of which is the default. The possible start methods are ``'fork'``, - ``'spawn'`` and ``'forkserver'``. On Windows only ``'spawn'`` is - available. On Unix ``'fork'`` and ``'spawn'`` are always - supported, with ``'fork'`` being the default. + ``'spawn'`` and ``'forkserver'``. Not all platforms support all + methods. See :ref:`multiprocessing-start-methods`. .. versionadded:: 3.4 @@ -1048,7 +1055,7 @@ Miscellaneous If *method* is ``None`` then the default context is returned. Otherwise *method* should be ``'fork'``, ``'spawn'``, ``'forkserver'``. :exc:`ValueError` is raised if the specified - start method is not available. + start method is not available. See :ref:`multiprocessing-start-methods`. .. versionadded:: 3.4 @@ -1062,8 +1069,7 @@ Miscellaneous is true then ``None`` is returned. The return value can be ``'fork'``, ``'spawn'``, ``'forkserver'`` - or ``None``. ``'fork'`` is the default on Unix, while ``'spawn'`` is - the default on Windows and macOS. + or ``None``. See :ref:`multiprocessing-start-methods`. .. versionchanged:: 3.8 @@ -1084,11 +1090,26 @@ Miscellaneous before they can create child processes. .. versionchanged:: 3.4 - Now supported on Unix when the ``'spawn'`` start method is used. + Now supported on POSIX when the ``'spawn'`` start method is used. .. versionchanged:: 3.11 Accepts a :term:`path-like object`. +.. function:: set_forkserver_preload(module_names) + + Set a list of module names for the forkserver main process to attempt to + import so that their already imported state is inherited by forked + processes. Any :exc:`ImportError` when doing so is silently ignored. + This can be used as a performance enhancement to avoid repeated work + in every process. + + For this to work, it must be called before the forkserver process has been + launched (before creating a :class:`Pool` or starting a :class:`Process`). + + Only meaningful when using the ``'forkserver'`` start method. 
+ + .. versionadded:: 3.4 + .. function:: set_start_method(method, force=False) Set the method which should be used to start child processes. @@ -1102,6 +1123,8 @@ Miscellaneous protected inside the ``if __name__ == '__main__'`` clause of the main module. + See :ref:`multiprocessing-start-methods`. + .. versionadded:: 3.4 .. note:: @@ -1906,7 +1929,8 @@ their parent process exits. The manager classes are defined in the .. doctest:: - >>> manager = multiprocessing.Manager() + >>> mp_context = multiprocessing.get_context('spawn') + >>> manager = mp_context.Manager() >>> Global = manager.Namespace() >>> Global.x = 10 >>> Global.y = 'hello' @@ -2018,8 +2042,8 @@ the proxy). In this way, a proxy can be used just like its referent can: .. doctest:: - >>> from multiprocessing import Manager - >>> manager = Manager() + >>> mp_context = multiprocessing.get_context('spawn') + >>> manager = mp_context.Manager() >>> l = manager.list([i*i for i in range(10)]) >>> print(l) [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] @@ -2520,7 +2544,7 @@ multiple connections at the same time. *timeout* is ``None`` then it will block for an unlimited period. A negative timeout is equivalent to a zero timeout. - For both Unix and Windows, an object can appear in *object_list* if + For both POSIX and Windows, an object can appear in *object_list* if it is * a readable :class:`~multiprocessing.connection.Connection` object; @@ -2531,7 +2555,7 @@ multiple connections at the same time. A connection or socket object is ready when there is data available to be read from it, or the other end has been closed. - **Unix**: ``wait(object_list, timeout)`` almost equivalent + **POSIX**: ``wait(object_list, timeout)`` almost equivalent ``select.select(object_list, [], [], timeout)``. The difference is that, if :func:`select.select` is interrupted by a signal, it can raise :exc:`OSError` with an error number of ``EINTR``, whereas @@ -2803,7 +2827,7 @@ Thread safety of proxies Joining zombie processes - On Unix when a process finishes but has not been joined it becomes a zombie. + On POSIX when a process finishes but has not been joined it becomes a zombie. There should never be very many because each time a new process starts (or :func:`~multiprocessing.active_children` is called) all completed processes which have not yet been joined will be joined. Also calling a finished @@ -2866,7 +2890,7 @@ Joining processes that use queues Explicitly pass resources to child processes - On Unix using the *fork* start method, a child process can make + On POSIX using the *fork* start method, a child process can make use of a shared resource created in a parent process using a global resource. However, it is better to pass the object as an argument to the constructor for the child process. diff --git a/Doc/whatsnew/3.12.rst b/Doc/whatsnew/3.12.rst index a071159b800a..e675fada339a 100644 --- a/Doc/whatsnew/3.12.rst +++ b/Doc/whatsnew/3.12.rst @@ -440,6 +440,11 @@ Deprecated warning at compile time. This field will be removed in Python 3.14. (Contributed by Ramvikrams and Kumar Aditya in :gh:`101193`. PEP by Ken Jin.) +* Use of the implicit default ``'fork'`` start method for + :mod:`multiprocessing` and :class:`concurrent.futures.ProcessPoolExecutor` + now emits a :exc:`DeprecationWarning` on Linux and other non-macOS POSIX + systems. Avoid this by explicitly specifying a start method. + See :ref:`multiprocessing-start-methods`. 
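As the note above advises, the warning is avoided by choosing a start method explicitly. A minimal sketch of the module-level opt-in (illustrative only, not part of the patch; work() is a made-up example function):

import multiprocessing

def work(x):
    return x + 1

if __name__ == "__main__":
    # Call once, early, inside the __main__ guard. On POSIX, "forkserver" or
    # "fork" may be chosen instead if the application requires them.
    multiprocessing.set_start_method("spawn")
    with multiprocessing.Pool(processes=2) as pool:
        print(pool.map(work, range(4)))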
Pending Removal in Python 3.13 ------------------------------ @@ -505,6 +510,9 @@ Pending Removal in Python 3.14 * Testing the truth value of an :class:`xml.etree.ElementTree.Element` is deprecated and will raise an exception in Python 3.14. +* The default :mod:`multiprocessing` start method will change to one of either + ``'forkserver'`` or ``'spawn'`` on all platforms for which ``'fork'`` remains + the default per :gh:`84559`. Pending Removal in Future Versions ---------------------------------- diff --git a/Lib/compileall.py b/Lib/compileall.py index a388931fb5a9..d394156cedc4 100644 --- a/Lib/compileall.py +++ b/Lib/compileall.py @@ -97,9 +97,15 @@ def compile_dir(dir, maxlevels=None, ddir=None, force=False, files = _walk_dir(dir, quiet=quiet, maxlevels=maxlevels) success = True if workers != 1 and ProcessPoolExecutor is not None: + import multiprocessing + if multiprocessing.get_start_method() == 'fork': + mp_context = multiprocessing.get_context('forkserver') + else: + mp_context = None # If workers == 0, let ProcessPoolExecutor choose workers = workers or None - with ProcessPoolExecutor(max_workers=workers) as executor: + with ProcessPoolExecutor(max_workers=workers, + mp_context=mp_context) as executor: results = executor.map(partial(compile_file, ddir=ddir, force=force, rx=rx, quiet=quiet, diff --git a/Lib/concurrent/futures/process.py b/Lib/concurrent/futures/process.py index 7e2f5fa30e82..257dd02fbc6c 100644 --- a/Lib/concurrent/futures/process.py +++ b/Lib/concurrent/futures/process.py @@ -57,6 +57,7 @@ import itertools import sys from traceback import format_exception +import warnings _threads_wakeups = weakref.WeakKeyDictionary() @@ -616,9 +617,9 @@ def __init__(self, max_workers=None, mp_context=None, max_workers: The maximum number of processes that can be used to execute the given calls. If None or not given then as many worker processes will be created as the machine has processors. - mp_context: A multiprocessing context to launch the workers. This - object should provide SimpleQueue, Queue and Process. Useful - to allow specific multiprocessing start methods. + mp_context: A multiprocessing context to launch the workers created + using the multiprocessing.get_context('start method') API. This + object should provide SimpleQueue, Queue and Process. initializer: A callable used to initialize worker processes. initargs: A tuple of arguments to pass to the initializer. max_tasks_per_child: The maximum number of tasks a worker process @@ -650,6 +651,22 @@ def __init__(self, max_workers=None, mp_context=None, mp_context = mp.get_context("spawn") else: mp_context = mp.get_context() + if (mp_context.get_start_method() == "fork" and + mp_context == mp.context._default_context._default_context): + warnings.warn( + "The default multiprocessing start method will change " + "away from 'fork' in Python >= 3.14, per GH-84559. " + "ProcessPoolExecutor uses multiprocessing. " + "If your application requires the 'fork' multiprocessing " + "start method, explicitly specify that by passing a " + "mp_context= parameter. " + "The safest start method is 'spawn'.", + category=mp.context.DefaultForkDeprecationWarning, + stacklevel=2, + ) + # Avoid the equivalent warning from multiprocessing itself via + # a non-default fork context. 
+ mp_context = mp.get_context("fork") self._mp_context = mp_context # https://github.com/python/cpython/issues/90622 diff --git a/Lib/multiprocessing/context.py b/Lib/multiprocessing/context.py index b1960ea296fe..010a920540e8 100644 --- a/Lib/multiprocessing/context.py +++ b/Lib/multiprocessing/context.py @@ -23,6 +23,9 @@ class TimeoutError(ProcessError): class AuthenticationError(ProcessError): pass +class DefaultForkDeprecationWarning(DeprecationWarning): + pass + # # Base type for contexts. Bound methods of an instance of this type are included in __all__ of __init__.py # @@ -258,6 +261,7 @@ def get_start_method(self, allow_none=False): return self._actual_context._name def get_all_start_methods(self): + """Returns a list of the supported start methods, default first.""" if sys.platform == 'win32': return ['spawn'] else: @@ -280,6 +284,23 @@ def _Popen(process_obj): from .popen_fork import Popen return Popen(process_obj) + _warn_package_prefixes = (os.path.dirname(__file__),) + + class _DeprecatedForkProcess(ForkProcess): + @classmethod + def _Popen(cls, process_obj): + import warnings + warnings.warn( + "The default multiprocessing start method will change " + "away from 'fork' in Python >= 3.14, per GH-84559. " + "Use multiprocessing.get_context(X) or .set_start_method(X) to " + "explicitly specify it when your application requires 'fork'. " + "The safest start method is 'spawn'.", + category=DefaultForkDeprecationWarning, + skip_file_prefixes=_warn_package_prefixes, + ) + return super()._Popen(process_obj) + class SpawnProcess(process.BaseProcess): _start_method = 'spawn' @staticmethod @@ -303,6 +324,9 @@ class ForkContext(BaseContext): _name = 'fork' Process = ForkProcess + class _DefaultForkContext(ForkContext): + Process = _DeprecatedForkProcess + class SpawnContext(BaseContext): _name = 'spawn' Process = SpawnProcess @@ -318,13 +342,16 @@ def _check_available(self): 'fork': ForkContext(), 'spawn': SpawnContext(), 'forkserver': ForkServerContext(), + # Remove None and _DefaultForkContext() when changing the default + # in 3.14 for https://github.com/python/cpython/issues/84559. + None: _DefaultForkContext(), } if sys.platform == 'darwin': # bpo-33725: running arbitrary code after fork() is no longer reliable # on macOS since macOS 10.14 (Mojave). Use spawn by default instead. _default_context = DefaultContext(_concrete_contexts['spawn']) else: - _default_context = DefaultContext(_concrete_contexts['fork']) + _default_context = DefaultContext(_concrete_contexts[None]) else: diff --git a/Lib/test/_test_multiprocessing.py b/Lib/test/_test_multiprocessing.py index 2fa75eb4d113..e4a60a4d6746 100644 --- a/Lib/test/_test_multiprocessing.py +++ b/Lib/test/_test_multiprocessing.py @@ -4098,9 +4098,10 @@ def test_shared_memory_SharedMemoryServer_ignores_sigint(self): def test_shared_memory_SharedMemoryManager_reuses_resource_tracker(self): # bpo-36867: test that a SharedMemoryManager uses the # same resource_tracker process as its parent. 
- cmd = '''if 1: + cmd = f'''if 1: from multiprocessing.managers import SharedMemoryManager - + from multiprocessing import set_start_method + set_start_method({multiprocessing.get_start_method()!r}) smm = SharedMemoryManager() smm.start() @@ -4967,11 +4968,13 @@ def run_in_grandchild(cls, conn): conn.send(tuple(sys.flags)) @classmethod - def run_in_child(cls): + def run_in_child(cls, start_method): import json - r, w = multiprocessing.Pipe(duplex=False) - p = multiprocessing.Process(target=cls.run_in_grandchild, args=(w,)) - p.start() + mp = multiprocessing.get_context(start_method) + r, w = mp.Pipe(duplex=False) + p = mp.Process(target=cls.run_in_grandchild, args=(w,)) + with warnings.catch_warnings(category=DeprecationWarning): + p.start() grandchild_flags = r.recv() p.join() r.close() @@ -4982,8 +4985,10 @@ def run_in_child(cls): def test_flags(self): import json # start child process using unusual flags - prog = ('from test._test_multiprocessing import TestFlags; ' + - 'TestFlags.run_in_child()') + prog = ( + 'from test._test_multiprocessing import TestFlags; ' + f'TestFlags.run_in_child({multiprocessing.get_start_method()!r})' + ) data = subprocess.check_output( [sys.executable, '-E', '-S', '-O', '-c', prog]) child_flags, grandchild_flags = json.loads(data.decode('ascii')) diff --git a/Lib/test/_test_venv_multiprocessing.py b/Lib/test/_test_venv_multiprocessing.py index af72e915ba52..044a0c6cd3f5 100644 --- a/Lib/test/_test_venv_multiprocessing.py +++ b/Lib/test/_test_venv_multiprocessing.py @@ -30,6 +30,7 @@ def test_func(): def main(): + multiprocessing.set_start_method('spawn') test_pool = multiprocessing.Process(target=test_func) test_pool.start() test_pool.join() diff --git a/Lib/test/test_asyncio/test_events.py b/Lib/test/test_asyncio/test_events.py index 214544b89bc5..b9069056c3a4 100644 --- a/Lib/test/test_asyncio/test_events.py +++ b/Lib/test/test_asyncio/test_events.py @@ -4,6 +4,7 @@ import concurrent.futures import functools import io +import multiprocessing import os import platform import re @@ -2762,7 +2763,13 @@ def test_get_event_loop_new_process(self): support.skip_if_broken_multiprocessing_synchronize() async def main(): - pool = concurrent.futures.ProcessPoolExecutor() + if multiprocessing.get_start_method() == 'fork': + # Avoid 'fork' DeprecationWarning. 
+ mp_context = multiprocessing.get_context('forkserver') + else: + mp_context = None + pool = concurrent.futures.ProcessPoolExecutor( + mp_context=mp_context) result = await self.loop.run_in_executor( pool, _test_get_event_loop_new_process__sub_proc) pool.shutdown() diff --git a/Lib/test/test_concurrent_futures.py b/Lib/test/test_concurrent_futures.py index b3520ae3994e..4493cd312528 100644 --- a/Lib/test/test_concurrent_futures.py +++ b/Lib/test/test_concurrent_futures.py @@ -18,6 +18,7 @@ import threading import time import unittest +import warnings import weakref from pickle import PicklingError @@ -571,6 +572,24 @@ def test_shutdown_no_wait(self): assert all([r == abs(v) for r, v in zip(res, range(-5, 5))]) + at unittest.skipIf(mp.get_all_start_methods()[0] != "fork", "non-fork default.") +class ProcessPoolExecutorDefaultForkWarning(unittest.TestCase): + def test_fork_default_warns(self): + with self.assertWarns(mp.context.DefaultForkDeprecationWarning): + with futures.ProcessPoolExecutor(2): + pass + + def test_explicit_fork_does_not_warn(self): + with warnings.catch_warnings(record=True) as ws: + warnings.simplefilter("ignore") + warnings.filterwarnings( + 'always', category=mp.context.DefaultForkDeprecationWarning) + ctx = mp.get_context("fork") # Non-default fork context. + with futures.ProcessPoolExecutor(2, mp_context=ctx): + pass + self.assertEqual(len(ws), 0, msg=[str(x) for x in ws]) + + create_executor_tests(ProcessPoolShutdownTest, executor_mixins=(ProcessPoolForkMixin, ProcessPoolForkserverMixin, diff --git a/Lib/test/test_fcntl.py b/Lib/test/test_fcntl.py index fc8c39365f12..113c7802821d 100644 --- a/Lib/test/test_fcntl.py +++ b/Lib/test/test_fcntl.py @@ -1,11 +1,11 @@ """Test program for the fcntl C module. """ +import multiprocessing import platform import os import struct import sys import unittest -from multiprocessing import Process from test.support import verbose, cpython_only from test.support.import_helper import import_module from test.support.os_helper import TESTFN, unlink @@ -160,7 +160,8 @@ def test_lockf_exclusive(self): self.f = open(TESTFN, 'wb+') cmd = fcntl.LOCK_EX | fcntl.LOCK_NB fcntl.lockf(self.f, cmd) - p = Process(target=try_lockf_on_other_process_fail, args=(TESTFN, cmd)) + mp = multiprocessing.get_context('spawn') + p = mp.Process(target=try_lockf_on_other_process_fail, args=(TESTFN, cmd)) p.start() p.join() fcntl.lockf(self.f, fcntl.LOCK_UN) @@ -171,7 +172,8 @@ def test_lockf_share(self): self.f = open(TESTFN, 'wb+') cmd = fcntl.LOCK_SH | fcntl.LOCK_NB fcntl.lockf(self.f, cmd) - p = Process(target=try_lockf_on_other_process, args=(TESTFN, cmd)) + mp = multiprocessing.get_context('spawn') + p = mp.Process(target=try_lockf_on_other_process, args=(TESTFN, cmd)) p.start() p.join() fcntl.lockf(self.f, fcntl.LOCK_UN) diff --git a/Lib/test/test_logging.py b/Lib/test/test_logging.py index 072056d37221..8a12d570f26f 100644 --- a/Lib/test/test_logging.py +++ b/Lib/test/test_logging.py @@ -4759,8 +4759,9 @@ def test_multiprocessing(self): # In other processes, processName is correct when multiprocessing in imported, # but it is (incorrectly) defaulted to 'MainProcess' otherwise (bpo-38762). 
import multiprocessing - parent_conn, child_conn = multiprocessing.Pipe() - p = multiprocessing.Process( + mp = multiprocessing.get_context('spawn') + parent_conn, child_conn = mp.Pipe() + p = mp.Process( target=self._extract_logrecord_process_name, args=(2, LOG_MULTI_PROCESSING, child_conn,) ) diff --git a/Lib/test/test_multiprocessing_defaults.py b/Lib/test/test_multiprocessing_defaults.py new file mode 100644 index 000000000000..1da4c0652383 --- /dev/null +++ b/Lib/test/test_multiprocessing_defaults.py @@ -0,0 +1,82 @@ +"""Test default behavior of multiprocessing.""" + +from inspect import currentframe, getframeinfo +import multiprocessing +from multiprocessing.context import DefaultForkDeprecationWarning +import sys +from test.support import threading_helper +import unittest +import warnings + + +def do_nothing(): + pass + + +# Process has the same API as Thread so this helper works. +join_process = threading_helper.join_thread + + +class DefaultWarningsTest(unittest.TestCase): + + @unittest.skipIf(sys.platform in ('win32', 'darwin'), + 'The default is not "fork" on Windows or macOS.') + def setUp(self): + self.assertEqual(multiprocessing.get_start_method(), 'fork') + self.assertIsInstance(multiprocessing.get_context(), + multiprocessing.context._DefaultForkContext) + + def test_default_fork_start_method_warning_process(self): + with warnings.catch_warnings(record=True) as ws: + warnings.simplefilter('ignore') + warnings.filterwarnings('always', category=DefaultForkDeprecationWarning) + process = multiprocessing.Process(target=do_nothing) + process.start() # warning should point here. + join_process(process) + self.assertIsInstance(ws[0].message, DefaultForkDeprecationWarning) + self.assertIn(__file__, ws[0].filename) + self.assertEqual(getframeinfo(currentframe()).lineno-4, ws[0].lineno) + self.assertIn("'fork'", str(ws[0].message)) + self.assertIn("get_context", str(ws[0].message)) + self.assertEqual(len(ws), 1, msg=[str(x) for x in ws]) + + def test_default_fork_start_method_warning_pool(self): + with warnings.catch_warnings(record=True) as ws: + warnings.simplefilter('ignore') + warnings.filterwarnings('always', category=DefaultForkDeprecationWarning) + pool = multiprocessing.Pool(1) # warning should point here. + pool.terminate() + pool.join() + self.assertIsInstance(ws[0].message, DefaultForkDeprecationWarning) + self.assertIn(__file__, ws[0].filename) + self.assertEqual(getframeinfo(currentframe()).lineno-5, ws[0].lineno) + self.assertIn("'fork'", str(ws[0].message)) + self.assertIn("get_context", str(ws[0].message)) + self.assertEqual(len(ws), 1, msg=[str(x) for x in ws]) + + def test_default_fork_start_method_warning_manager(self): + with warnings.catch_warnings(record=True) as ws: + warnings.simplefilter('ignore') + warnings.filterwarnings('always', category=DefaultForkDeprecationWarning) + manager = multiprocessing.Manager() # warning should point here. 
+ manager.shutdown() + self.assertIsInstance(ws[0].message, DefaultForkDeprecationWarning) + self.assertIn(__file__, ws[0].filename) + self.assertEqual(getframeinfo(currentframe()).lineno-4, ws[0].lineno) + self.assertIn("'fork'", str(ws[0].message)) + self.assertIn("get_context", str(ws[0].message)) + self.assertEqual(len(ws), 1, msg=[str(x) for x in ws]) + + def test_no_mp_warning_when_using_explicit_fork_context(self): + with warnings.catch_warnings(record=True) as ws: + warnings.simplefilter('ignore') + warnings.filterwarnings('always', category=DefaultForkDeprecationWarning) + fork_mp = multiprocessing.get_context('fork') + pool = fork_mp.Pool(1) + pool.terminate() + pool.join() + self.assertEqual(len(ws), 0, msg=[str(x) for x in ws]) + + +if __name__ == '__main__': + unittest.main() diff --git a/Lib/test/test_pickle.py b/Lib/test/test_pickle.py index 44fdca7a6b16..80e7a4d23a4b 100644 --- a/Lib/test/test_pickle.py +++ b/Lib/test/test_pickle.py @@ -533,6 +533,8 @@ def test_exceptions(self): def test_multiprocessing_exceptions(self): module = import_helper.import_module('multiprocessing.context') for name, exc in get_exceptions(module): + if issubclass(exc, Warning): + continue with self.subTest(name): self.assertEqual(reverse_mapping('multiprocessing.context', name), ('multiprocessing', name)) diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py index 11628a236ade..eacb1a7c82a5 100644 --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -2431,7 +2431,8 @@ def test_regression_gh94675(self): input_js = '''a(function() { /////////////////////////////////////////////////////////////////// });''' - p = multiprocessing.Process(target=pattern.sub, args=('', input_js)) + mp = multiprocessing.get_context('spawn') + p = mp.Process(target=pattern.sub, args=('', input_js)) p.start() p.join(SHORT_TIMEOUT) try: diff --git a/Misc/NEWS.d/next/Library/2023-01-01-01-19-33.gh-issue-84559.zEjsEJ.rst b/Misc/NEWS.d/next/Library/2023-01-01-01-19-33.gh-issue-84559.zEjsEJ.rst new file mode 100644 index 000000000000..3793e0f1fddb --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-01-01-01-19-33.gh-issue-84559.zEjsEJ.rst @@ -0,0 +1,11 @@ +The :mod:`multiprocessing` module and +:class:`concurrent.futures.ProcessPoolExecutor` will emit a +:exc:`DeprecationWarning` on Linux and other non-macOS POSIX systems when +the default multiprocessing start method of ``'fork'`` is used implicitly +rather than being explicitly specified through a +:func:`multiprocessing.get_context` context. + +This is in preparation for default start method to change in Python 3.14 to +a default that is safe for multithreaded applications. + +Windows and macOS are unaffected as their default start method is ``spawn``. From webhook-mailer at python.org Thu Feb 2 19:03:34 2023 From: webhook-mailer at python.org (ezio-melotti) Date: Fri, 03 Feb 2023 00:03:34 -0000 Subject: [Python-checkins] gh-100925: Move array methods under class in array doc (#101485) Message-ID: https://github.com/python/cpython/commit/1b6045668d233269f667c4658c7240256f37f111 commit: 1b6045668d233269f667c4658c7240256f37f111 branch: main author: C.A.M. 
Gerlach committer: ezio-melotti date: 2023-02-03T08:03:27+08:00 summary: gh-100925: Move array methods under class in array doc (#101485) * Move array methods under class in array doc * Fix a few internal references related to the touched lines files: M Doc/library/array.rst diff --git a/Doc/library/array.rst b/Doc/library/array.rst index 95f1eaf401b0..75c49e0f6d1e 100644 --- a/Doc/library/array.rst +++ b/Doc/library/array.rst @@ -60,7 +60,7 @@ Notes: The actual representation of values is determined by the machine architecture (strictly speaking, by the C implementation). The actual size can be accessed -through the :attr:`itemsize` attribute. +through the :attr:`array.itemsize` attribute. The module defines the following item: @@ -85,161 +85,160 @@ The module defines the following type: to add initial items to the array. Otherwise, the iterable initializer is passed to the :meth:`extend` method. - .. audit-event:: array.__new__ typecode,initializer array.array + Array objects support the ordinary sequence operations of indexing, slicing, + concatenation, and multiplication. When using slice assignment, the assigned + value must be an array object with the same type code; in all other cases, + :exc:`TypeError` is raised. Array objects also implement the buffer interface, + and may be used wherever :term:`bytes-like objects ` are supported. + .. audit-event:: array.__new__ typecode,initializer array.array -Array objects support the ordinary sequence operations of indexing, slicing, -concatenation, and multiplication. When using slice assignment, the assigned -value must be an array object with the same type code; in all other cases, -:exc:`TypeError` is raised. Array objects also implement the buffer interface, -and may be used wherever :term:`bytes-like objects ` are supported. -The following data items and methods are also supported: + .. attribute:: typecode -.. attribute:: array.typecode + The typecode character used to create the array. - The typecode character used to create the array. + .. attribute:: itemsize -.. attribute:: array.itemsize + The length in bytes of one array item in the internal representation. - The length in bytes of one array item in the internal representation. + .. method:: append(x) -.. method:: array.append(x) + Append a new item with value *x* to the end of the array. - Append a new item with value *x* to the end of the array. + .. method:: buffer_info() -.. method:: array.buffer_info() + Return a tuple ``(address, length)`` giving the current memory address and the + length in elements of the buffer used to hold array's contents. The size of the + memory buffer in bytes can be computed as ``array.buffer_info()[1] * + array.itemsize``. This is occasionally useful when working with low-level (and + inherently unsafe) I/O interfaces that require memory addresses, such as certain + :c:func:`!ioctl` operations. The returned numbers are valid as long as the array + exists and no length-changing operations are applied to it. - Return a tuple ``(address, length)`` giving the current memory address and the - length in elements of the buffer used to hold array's contents. The size of the - memory buffer in bytes can be computed as ``array.buffer_info()[1] * - array.itemsize``. This is occasionally useful when working with low-level (and - inherently unsafe) I/O interfaces that require memory addresses, such as certain - :c:func:`ioctl` operations. The returned numbers are valid as long as the array - exists and no length-changing operations are applied to it. + .. 
note:: - .. note:: + When using array objects from code written in C or C++ (the only way to + effectively make use of this information), it makes more sense to use the buffer + interface supported by array objects. This method is maintained for backward + compatibility and should be avoided in new code. The buffer interface is + documented in :ref:`bufferobjects`. - When using array objects from code written in C or C++ (the only way to - effectively make use of this information), it makes more sense to use the buffer - interface supported by array objects. This method is maintained for backward - compatibility and should be avoided in new code. The buffer interface is - documented in :ref:`bufferobjects`. + .. method:: byteswap() -.. method:: array.byteswap() + "Byteswap" all items of the array. This is only supported for values which are + 1, 2, 4, or 8 bytes in size; for other types of values, :exc:`RuntimeError` is + raised. It is useful when reading data from a file written on a machine with a + different byte order. - "Byteswap" all items of the array. This is only supported for values which are - 1, 2, 4, or 8 bytes in size; for other types of values, :exc:`RuntimeError` is - raised. It is useful when reading data from a file written on a machine with a - different byte order. + .. method:: count(x) -.. method:: array.count(x) + Return the number of occurrences of *x* in the array. - Return the number of occurrences of *x* in the array. + .. method:: extend(iterable) -.. method:: array.extend(iterable) + Append items from *iterable* to the end of the array. If *iterable* is another + array, it must have *exactly* the same type code; if not, :exc:`TypeError` will + be raised. If *iterable* is not an array, it must be iterable and its elements + must be the right type to be appended to the array. - Append items from *iterable* to the end of the array. If *iterable* is another - array, it must have *exactly* the same type code; if not, :exc:`TypeError` will - be raised. If *iterable* is not an array, it must be iterable and its elements - must be the right type to be appended to the array. + .. method:: frombytes(s) -.. method:: array.frombytes(s) + Appends items from the string, interpreting the string as an array of machine + values (as if it had been read from a file using the :meth:`fromfile` method). - Appends items from the string, interpreting the string as an array of machine - values (as if it had been read from a file using the :meth:`fromfile` method). + .. versionadded:: 3.2 + :meth:`!fromstring` is renamed to :meth:`frombytes` for clarity. - .. versionadded:: 3.2 - :meth:`fromstring` is renamed to :meth:`frombytes` for clarity. + .. method:: fromfile(f, n) -.. method:: array.fromfile(f, n) + Read *n* items (as machine values) from the :term:`file object` *f* and append + them to the end of the array. If less than *n* items are available, + :exc:`EOFError` is raised, but the items that were available are still + inserted into the array. - Read *n* items (as machine values) from the :term:`file object` *f* and append - them to the end of the array. If less than *n* items are available, - :exc:`EOFError` is raised, but the items that were available are still - inserted into the array. + .. method:: fromlist(list) -.. method:: array.fromlist(list) + Append items from the list. This is equivalent to ``for x in list: + a.append(x)`` except that if there is a type error, the array is unchanged. - Append items from the list. 
This is equivalent to ``for x in list: - a.append(x)`` except that if there is a type error, the array is unchanged. + .. method:: fromunicode(s) -.. method:: array.fromunicode(s) + Extends this array with data from the given unicode string. The array must + be a type ``'u'`` array; otherwise a :exc:`ValueError` is raised. Use + ``array.frombytes(unicodestring.encode(enc))`` to append Unicode data to an + array of some other type. - Extends this array with data from the given unicode string. The array must - be a type ``'u'`` array; otherwise a :exc:`ValueError` is raised. Use - ``array.frombytes(unicodestring.encode(enc))`` to append Unicode data to an - array of some other type. + .. method:: index(x[, start[, stop]]) -.. method:: array.index(x[, start[, stop]]) + Return the smallest *i* such that *i* is the index of the first occurrence of + *x* in the array. The optional arguments *start* and *stop* can be + specified to search for *x* within a subsection of the array. Raise + :exc:`ValueError` if *x* is not found. - Return the smallest *i* such that *i* is the index of the first occurrence of - *x* in the array. The optional arguments *start* and *stop* can be - specified to search for *x* within a subsection of the array. Raise - :exc:`ValueError` if *x* is not found. + .. versionchanged:: 3.10 + Added optional *start* and *stop* parameters. - .. versionchanged:: 3.10 - Added optional *start* and *stop* parameters. -.. method:: array.insert(i, x) + .. method:: insert(i, x) - Insert a new item with value *x* in the array before position *i*. Negative - values are treated as being relative to the end of the array. + Insert a new item with value *x* in the array before position *i*. Negative + values are treated as being relative to the end of the array. -.. method:: array.pop([i]) + .. method:: pop([i]) - Removes the item with the index *i* from the array and returns it. The optional - argument defaults to ``-1``, so that by default the last item is removed and - returned. + Removes the item with the index *i* from the array and returns it. The optional + argument defaults to ``-1``, so that by default the last item is removed and + returned. -.. method:: array.remove(x) + .. method:: remove(x) - Remove the first occurrence of *x* from the array. + Remove the first occurrence of *x* from the array. -.. method:: array.reverse() + .. method:: reverse() - Reverse the order of the items in the array. + Reverse the order of the items in the array. -.. method:: array.tobytes() + .. method:: tobytes() - Convert the array to an array of machine values and return the bytes - representation (the same sequence of bytes that would be written to a file by - the :meth:`tofile` method.) + Convert the array to an array of machine values and return the bytes + representation (the same sequence of bytes that would be written to a file by + the :meth:`tofile` method.) - .. versionadded:: 3.2 - :meth:`tostring` is renamed to :meth:`tobytes` for clarity. + .. versionadded:: 3.2 + :meth:`!tostring` is renamed to :meth:`tobytes` for clarity. -.. method:: array.tofile(f) + .. method:: tofile(f) - Write all items (as machine values) to the :term:`file object` *f*. + Write all items (as machine values) to the :term:`file object` *f*. -.. method:: array.tolist() + .. method:: tolist() - Convert the array to an ordinary list with the same items. + Convert the array to an ordinary list with the same items. -.. method:: array.tounicode() + .. method:: tounicode() - Convert the array to a unicode string. 
The array must be a type ``'u'`` array; - otherwise a :exc:`ValueError` is raised. Use ``array.tobytes().decode(enc)`` to - obtain a unicode string from an array of some other type. + Convert the array to a unicode string. The array must be a type ``'u'`` array; + otherwise a :exc:`ValueError` is raised. Use ``array.tobytes().decode(enc)`` to + obtain a unicode string from an array of some other type. When an array object is printed or converted to a string, it is represented as From webhook-mailer at python.org Thu Feb 2 19:11:38 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 03 Feb 2023 00:11:38 -0000 Subject: [Python-checkins] gh-100925: Move array methods under class in array doc (GH-101485) Message-ID: https://github.com/python/cpython/commit/c3dd95a669030ff81f5e841d181110cdfd78e542 commit: c3dd95a669030ff81f5e841d181110cdfd78e542 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-02T16:11:31-08:00 summary: gh-100925: Move array methods under class in array doc (GH-101485) * Move array methods under class in array doc * Fix a few internal references related to the touched lines (cherry picked from commit 1b6045668d233269f667c4658c7240256f37f111) Co-authored-by: C.A.M. Gerlach files: M Doc/library/array.rst diff --git a/Doc/library/array.rst b/Doc/library/array.rst index 95f1eaf401b0..75c49e0f6d1e 100644 --- a/Doc/library/array.rst +++ b/Doc/library/array.rst @@ -60,7 +60,7 @@ Notes: The actual representation of values is determined by the machine architecture (strictly speaking, by the C implementation). The actual size can be accessed -through the :attr:`itemsize` attribute. +through the :attr:`array.itemsize` attribute. The module defines the following item: @@ -85,161 +85,160 @@ The module defines the following type: to add initial items to the array. Otherwise, the iterable initializer is passed to the :meth:`extend` method. - .. audit-event:: array.__new__ typecode,initializer array.array + Array objects support the ordinary sequence operations of indexing, slicing, + concatenation, and multiplication. When using slice assignment, the assigned + value must be an array object with the same type code; in all other cases, + :exc:`TypeError` is raised. Array objects also implement the buffer interface, + and may be used wherever :term:`bytes-like objects ` are supported. + .. audit-event:: array.__new__ typecode,initializer array.array -Array objects support the ordinary sequence operations of indexing, slicing, -concatenation, and multiplication. When using slice assignment, the assigned -value must be an array object with the same type code; in all other cases, -:exc:`TypeError` is raised. Array objects also implement the buffer interface, -and may be used wherever :term:`bytes-like objects ` are supported. -The following data items and methods are also supported: + .. attribute:: typecode -.. attribute:: array.typecode + The typecode character used to create the array. - The typecode character used to create the array. + .. attribute:: itemsize -.. attribute:: array.itemsize + The length in bytes of one array item in the internal representation. - The length in bytes of one array item in the internal representation. + .. method:: append(x) -.. method:: array.append(x) + Append a new item with value *x* to the end of the array. - Append a new item with value *x* to the end of the array. + .. method:: buffer_info() -.. 
method:: array.buffer_info() + Return a tuple ``(address, length)`` giving the current memory address and the + length in elements of the buffer used to hold array's contents. The size of the + memory buffer in bytes can be computed as ``array.buffer_info()[1] * + array.itemsize``. This is occasionally useful when working with low-level (and + inherently unsafe) I/O interfaces that require memory addresses, such as certain + :c:func:`!ioctl` operations. The returned numbers are valid as long as the array + exists and no length-changing operations are applied to it. - Return a tuple ``(address, length)`` giving the current memory address and the - length in elements of the buffer used to hold array's contents. The size of the - memory buffer in bytes can be computed as ``array.buffer_info()[1] * - array.itemsize``. This is occasionally useful when working with low-level (and - inherently unsafe) I/O interfaces that require memory addresses, such as certain - :c:func:`ioctl` operations. The returned numbers are valid as long as the array - exists and no length-changing operations are applied to it. + .. note:: - .. note:: + When using array objects from code written in C or C++ (the only way to + effectively make use of this information), it makes more sense to use the buffer + interface supported by array objects. This method is maintained for backward + compatibility and should be avoided in new code. The buffer interface is + documented in :ref:`bufferobjects`. - When using array objects from code written in C or C++ (the only way to - effectively make use of this information), it makes more sense to use the buffer - interface supported by array objects. This method is maintained for backward - compatibility and should be avoided in new code. The buffer interface is - documented in :ref:`bufferobjects`. + .. method:: byteswap() -.. method:: array.byteswap() + "Byteswap" all items of the array. This is only supported for values which are + 1, 2, 4, or 8 bytes in size; for other types of values, :exc:`RuntimeError` is + raised. It is useful when reading data from a file written on a machine with a + different byte order. - "Byteswap" all items of the array. This is only supported for values which are - 1, 2, 4, or 8 bytes in size; for other types of values, :exc:`RuntimeError` is - raised. It is useful when reading data from a file written on a machine with a - different byte order. + .. method:: count(x) -.. method:: array.count(x) + Return the number of occurrences of *x* in the array. - Return the number of occurrences of *x* in the array. + .. method:: extend(iterable) -.. method:: array.extend(iterable) + Append items from *iterable* to the end of the array. If *iterable* is another + array, it must have *exactly* the same type code; if not, :exc:`TypeError` will + be raised. If *iterable* is not an array, it must be iterable and its elements + must be the right type to be appended to the array. - Append items from *iterable* to the end of the array. If *iterable* is another - array, it must have *exactly* the same type code; if not, :exc:`TypeError` will - be raised. If *iterable* is not an array, it must be iterable and its elements - must be the right type to be appended to the array. + .. method:: frombytes(s) -.. method:: array.frombytes(s) + Appends items from the string, interpreting the string as an array of machine + values (as if it had been read from a file using the :meth:`fromfile` method). 
- Appends items from the string, interpreting the string as an array of machine - values (as if it had been read from a file using the :meth:`fromfile` method). + .. versionadded:: 3.2 + :meth:`!fromstring` is renamed to :meth:`frombytes` for clarity. - .. versionadded:: 3.2 - :meth:`fromstring` is renamed to :meth:`frombytes` for clarity. + .. method:: fromfile(f, n) -.. method:: array.fromfile(f, n) + Read *n* items (as machine values) from the :term:`file object` *f* and append + them to the end of the array. If less than *n* items are available, + :exc:`EOFError` is raised, but the items that were available are still + inserted into the array. - Read *n* items (as machine values) from the :term:`file object` *f* and append - them to the end of the array. If less than *n* items are available, - :exc:`EOFError` is raised, but the items that were available are still - inserted into the array. + .. method:: fromlist(list) -.. method:: array.fromlist(list) + Append items from the list. This is equivalent to ``for x in list: + a.append(x)`` except that if there is a type error, the array is unchanged. - Append items from the list. This is equivalent to ``for x in list: - a.append(x)`` except that if there is a type error, the array is unchanged. + .. method:: fromunicode(s) -.. method:: array.fromunicode(s) + Extends this array with data from the given unicode string. The array must + be a type ``'u'`` array; otherwise a :exc:`ValueError` is raised. Use + ``array.frombytes(unicodestring.encode(enc))`` to append Unicode data to an + array of some other type. - Extends this array with data from the given unicode string. The array must - be a type ``'u'`` array; otherwise a :exc:`ValueError` is raised. Use - ``array.frombytes(unicodestring.encode(enc))`` to append Unicode data to an - array of some other type. + .. method:: index(x[, start[, stop]]) -.. method:: array.index(x[, start[, stop]]) + Return the smallest *i* such that *i* is the index of the first occurrence of + *x* in the array. The optional arguments *start* and *stop* can be + specified to search for *x* within a subsection of the array. Raise + :exc:`ValueError` if *x* is not found. - Return the smallest *i* such that *i* is the index of the first occurrence of - *x* in the array. The optional arguments *start* and *stop* can be - specified to search for *x* within a subsection of the array. Raise - :exc:`ValueError` if *x* is not found. + .. versionchanged:: 3.10 + Added optional *start* and *stop* parameters. - .. versionchanged:: 3.10 - Added optional *start* and *stop* parameters. -.. method:: array.insert(i, x) + .. method:: insert(i, x) - Insert a new item with value *x* in the array before position *i*. Negative - values are treated as being relative to the end of the array. + Insert a new item with value *x* in the array before position *i*. Negative + values are treated as being relative to the end of the array. -.. method:: array.pop([i]) + .. method:: pop([i]) - Removes the item with the index *i* from the array and returns it. The optional - argument defaults to ``-1``, so that by default the last item is removed and - returned. + Removes the item with the index *i* from the array and returns it. The optional + argument defaults to ``-1``, so that by default the last item is removed and + returned. -.. method:: array.remove(x) + .. method:: remove(x) - Remove the first occurrence of *x* from the array. + Remove the first occurrence of *x* from the array. -.. method:: array.reverse() + .. 
method:: reverse() - Reverse the order of the items in the array. + Reverse the order of the items in the array. -.. method:: array.tobytes() + .. method:: tobytes() - Convert the array to an array of machine values and return the bytes - representation (the same sequence of bytes that would be written to a file by - the :meth:`tofile` method.) + Convert the array to an array of machine values and return the bytes + representation (the same sequence of bytes that would be written to a file by + the :meth:`tofile` method.) - .. versionadded:: 3.2 - :meth:`tostring` is renamed to :meth:`tobytes` for clarity. + .. versionadded:: 3.2 + :meth:`!tostring` is renamed to :meth:`tobytes` for clarity. -.. method:: array.tofile(f) + .. method:: tofile(f) - Write all items (as machine values) to the :term:`file object` *f*. + Write all items (as machine values) to the :term:`file object` *f*. -.. method:: array.tolist() + .. method:: tolist() - Convert the array to an ordinary list with the same items. + Convert the array to an ordinary list with the same items. -.. method:: array.tounicode() + .. method:: tounicode() - Convert the array to a unicode string. The array must be a type ``'u'`` array; - otherwise a :exc:`ValueError` is raised. Use ``array.tobytes().decode(enc)`` to - obtain a unicode string from an array of some other type. + Convert the array to a unicode string. The array must be a type ``'u'`` array; + otherwise a :exc:`ValueError` is raised. Use ``array.tobytes().decode(enc)`` to + obtain a unicode string from an array of some other type. When an array object is printed or converted to a string, it is represented as From webhook-mailer at python.org Thu Feb 2 19:12:16 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 03 Feb 2023 00:12:16 -0000 Subject: [Python-checkins] gh-100925: Move array methods under class in array doc (GH-101485) Message-ID: https://github.com/python/cpython/commit/bfac5d9850790c4b4fc125f8087be78f5c5ceb14 commit: bfac5d9850790c4b4fc125f8087be78f5c5ceb14 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-02T16:12:09-08:00 summary: gh-100925: Move array methods under class in array doc (GH-101485) * Move array methods under class in array doc * Fix a few internal references related to the touched lines (cherry picked from commit 1b6045668d233269f667c4658c7240256f37f111) Co-authored-by: C.A.M. Gerlach files: M Doc/library/array.rst diff --git a/Doc/library/array.rst b/Doc/library/array.rst index 95f1eaf401b0..75c49e0f6d1e 100644 --- a/Doc/library/array.rst +++ b/Doc/library/array.rst @@ -60,7 +60,7 @@ Notes: The actual representation of values is determined by the machine architecture (strictly speaking, by the C implementation). The actual size can be accessed -through the :attr:`itemsize` attribute. +through the :attr:`array.itemsize` attribute. The module defines the following item: @@ -85,161 +85,160 @@ The module defines the following type: to add initial items to the array. Otherwise, the iterable initializer is passed to the :meth:`extend` method. - .. audit-event:: array.__new__ typecode,initializer array.array + Array objects support the ordinary sequence operations of indexing, slicing, + concatenation, and multiplication. When using slice assignment, the assigned + value must be an array object with the same type code; in all other cases, + :exc:`TypeError` is raised. 
Array objects also implement the buffer interface, + and may be used wherever :term:`bytes-like objects ` are supported. + .. audit-event:: array.__new__ typecode,initializer array.array -Array objects support the ordinary sequence operations of indexing, slicing, -concatenation, and multiplication. When using slice assignment, the assigned -value must be an array object with the same type code; in all other cases, -:exc:`TypeError` is raised. Array objects also implement the buffer interface, -and may be used wherever :term:`bytes-like objects ` are supported. -The following data items and methods are also supported: + .. attribute:: typecode -.. attribute:: array.typecode + The typecode character used to create the array. - The typecode character used to create the array. + .. attribute:: itemsize -.. attribute:: array.itemsize + The length in bytes of one array item in the internal representation. - The length in bytes of one array item in the internal representation. + .. method:: append(x) -.. method:: array.append(x) + Append a new item with value *x* to the end of the array. - Append a new item with value *x* to the end of the array. + .. method:: buffer_info() -.. method:: array.buffer_info() + Return a tuple ``(address, length)`` giving the current memory address and the + length in elements of the buffer used to hold array's contents. The size of the + memory buffer in bytes can be computed as ``array.buffer_info()[1] * + array.itemsize``. This is occasionally useful when working with low-level (and + inherently unsafe) I/O interfaces that require memory addresses, such as certain + :c:func:`!ioctl` operations. The returned numbers are valid as long as the array + exists and no length-changing operations are applied to it. - Return a tuple ``(address, length)`` giving the current memory address and the - length in elements of the buffer used to hold array's contents. The size of the - memory buffer in bytes can be computed as ``array.buffer_info()[1] * - array.itemsize``. This is occasionally useful when working with low-level (and - inherently unsafe) I/O interfaces that require memory addresses, such as certain - :c:func:`ioctl` operations. The returned numbers are valid as long as the array - exists and no length-changing operations are applied to it. + .. note:: - .. note:: + When using array objects from code written in C or C++ (the only way to + effectively make use of this information), it makes more sense to use the buffer + interface supported by array objects. This method is maintained for backward + compatibility and should be avoided in new code. The buffer interface is + documented in :ref:`bufferobjects`. - When using array objects from code written in C or C++ (the only way to - effectively make use of this information), it makes more sense to use the buffer - interface supported by array objects. This method is maintained for backward - compatibility and should be avoided in new code. The buffer interface is - documented in :ref:`bufferobjects`. + .. method:: byteswap() -.. method:: array.byteswap() + "Byteswap" all items of the array. This is only supported for values which are + 1, 2, 4, or 8 bytes in size; for other types of values, :exc:`RuntimeError` is + raised. It is useful when reading data from a file written on a machine with a + different byte order. - "Byteswap" all items of the array. This is only supported for values which are - 1, 2, 4, or 8 bytes in size; for other types of values, :exc:`RuntimeError` is - raised. 
It is useful when reading data from a file written on a machine with a - different byte order. + .. method:: count(x) -.. method:: array.count(x) + Return the number of occurrences of *x* in the array. - Return the number of occurrences of *x* in the array. + .. method:: extend(iterable) -.. method:: array.extend(iterable) + Append items from *iterable* to the end of the array. If *iterable* is another + array, it must have *exactly* the same type code; if not, :exc:`TypeError` will + be raised. If *iterable* is not an array, it must be iterable and its elements + must be the right type to be appended to the array. - Append items from *iterable* to the end of the array. If *iterable* is another - array, it must have *exactly* the same type code; if not, :exc:`TypeError` will - be raised. If *iterable* is not an array, it must be iterable and its elements - must be the right type to be appended to the array. + .. method:: frombytes(s) -.. method:: array.frombytes(s) + Appends items from the string, interpreting the string as an array of machine + values (as if it had been read from a file using the :meth:`fromfile` method). - Appends items from the string, interpreting the string as an array of machine - values (as if it had been read from a file using the :meth:`fromfile` method). + .. versionadded:: 3.2 + :meth:`!fromstring` is renamed to :meth:`frombytes` for clarity. - .. versionadded:: 3.2 - :meth:`fromstring` is renamed to :meth:`frombytes` for clarity. + .. method:: fromfile(f, n) -.. method:: array.fromfile(f, n) + Read *n* items (as machine values) from the :term:`file object` *f* and append + them to the end of the array. If less than *n* items are available, + :exc:`EOFError` is raised, but the items that were available are still + inserted into the array. - Read *n* items (as machine values) from the :term:`file object` *f* and append - them to the end of the array. If less than *n* items are available, - :exc:`EOFError` is raised, but the items that were available are still - inserted into the array. + .. method:: fromlist(list) -.. method:: array.fromlist(list) + Append items from the list. This is equivalent to ``for x in list: + a.append(x)`` except that if there is a type error, the array is unchanged. - Append items from the list. This is equivalent to ``for x in list: - a.append(x)`` except that if there is a type error, the array is unchanged. + .. method:: fromunicode(s) -.. method:: array.fromunicode(s) + Extends this array with data from the given unicode string. The array must + be a type ``'u'`` array; otherwise a :exc:`ValueError` is raised. Use + ``array.frombytes(unicodestring.encode(enc))`` to append Unicode data to an + array of some other type. - Extends this array with data from the given unicode string. The array must - be a type ``'u'`` array; otherwise a :exc:`ValueError` is raised. Use - ``array.frombytes(unicodestring.encode(enc))`` to append Unicode data to an - array of some other type. + .. method:: index(x[, start[, stop]]) -.. method:: array.index(x[, start[, stop]]) + Return the smallest *i* such that *i* is the index of the first occurrence of + *x* in the array. The optional arguments *start* and *stop* can be + specified to search for *x* within a subsection of the array. Raise + :exc:`ValueError` if *x* is not found. - Return the smallest *i* such that *i* is the index of the first occurrence of - *x* in the array. The optional arguments *start* and *stop* can be - specified to search for *x* within a subsection of the array. 
Raise - :exc:`ValueError` if *x* is not found. + .. versionchanged:: 3.10 + Added optional *start* and *stop* parameters. - .. versionchanged:: 3.10 - Added optional *start* and *stop* parameters. -.. method:: array.insert(i, x) + .. method:: insert(i, x) - Insert a new item with value *x* in the array before position *i*. Negative - values are treated as being relative to the end of the array. + Insert a new item with value *x* in the array before position *i*. Negative + values are treated as being relative to the end of the array. -.. method:: array.pop([i]) + .. method:: pop([i]) - Removes the item with the index *i* from the array and returns it. The optional - argument defaults to ``-1``, so that by default the last item is removed and - returned. + Removes the item with the index *i* from the array and returns it. The optional + argument defaults to ``-1``, so that by default the last item is removed and + returned. -.. method:: array.remove(x) + .. method:: remove(x) - Remove the first occurrence of *x* from the array. + Remove the first occurrence of *x* from the array. -.. method:: array.reverse() + .. method:: reverse() - Reverse the order of the items in the array. + Reverse the order of the items in the array. -.. method:: array.tobytes() + .. method:: tobytes() - Convert the array to an array of machine values and return the bytes - representation (the same sequence of bytes that would be written to a file by - the :meth:`tofile` method.) + Convert the array to an array of machine values and return the bytes + representation (the same sequence of bytes that would be written to a file by + the :meth:`tofile` method.) - .. versionadded:: 3.2 - :meth:`tostring` is renamed to :meth:`tobytes` for clarity. + .. versionadded:: 3.2 + :meth:`!tostring` is renamed to :meth:`tobytes` for clarity. -.. method:: array.tofile(f) + .. method:: tofile(f) - Write all items (as machine values) to the :term:`file object` *f*. + Write all items (as machine values) to the :term:`file object` *f*. -.. method:: array.tolist() + .. method:: tolist() - Convert the array to an ordinary list with the same items. + Convert the array to an ordinary list with the same items. -.. method:: array.tounicode() + .. method:: tounicode() - Convert the array to a unicode string. The array must be a type ``'u'`` array; - otherwise a :exc:`ValueError` is raised. Use ``array.tobytes().decode(enc)`` to - obtain a unicode string from an array of some other type. + Convert the array to a unicode string. The array must be a type ``'u'`` array; + otherwise a :exc:`ValueError` is raised. Use ``array.tobytes().decode(enc)`` to + obtain a unicode string from an array of some other type. When an array object is printed or converted to a string, it is represented as From webhook-mailer at python.org Thu Feb 2 20:14:29 2023 From: webhook-mailer at python.org (gpshead) Date: Fri, 03 Feb 2023 01:14:29 -0000 Subject: [Python-checkins] gh-84559: skip the test when no multiprocessing (wasm, etc) (#101530) Message-ID: https://github.com/python/cpython/commit/5dcae3f0c3e9072251217e814a9438670e5f1e40 commit: 5dcae3f0c3e9072251217e814a9438670e5f1e40 branch: main author: Gregory P. 
Smith committer: gpshead date: 2023-02-02T17:14:23-08:00 summary: gh-84559: skip the test when no multiprocessing (wasm, etc) (#101530) skip test when no _multiprocessing (wasm, etc) files: M Lib/test/test_multiprocessing_defaults.py diff --git a/Lib/test/test_multiprocessing_defaults.py b/Lib/test/test_multiprocessing_defaults.py index 1da4c0652383..7ea872fef20d 100644 --- a/Lib/test/test_multiprocessing_defaults.py +++ b/Lib/test/test_multiprocessing_defaults.py @@ -4,10 +4,13 @@ import multiprocessing from multiprocessing.context import DefaultForkDeprecationWarning import sys -from test.support import threading_helper +from test.support import import_helper, threading_helper import unittest import warnings +# Skip tests if _multiprocessing wasn't built. +import_helper.import_module('_multiprocessing') + def do_nothing(): pass From webhook-mailer at python.org Fri Feb 3 02:19:01 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Fri, 03 Feb 2023 07:19:01 -0000 Subject: [Python-checkins] gh-100920: Update documentation for `asyncio.StreamWriter.wait_closed` (#101514) Message-ID: https://github.com/python/cpython/commit/5c39daf50b7f388f9b24bb2d6ef415955440bebf commit: 5c39daf50b7f388f9b24bb2d6ef415955440bebf branch: main author: Viet Than committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-03T12:48:39+05:30 summary: gh-100920: Update documentation for `asyncio.StreamWriter.wait_closed` (#101514) files: M Doc/library/asyncio-stream.rst diff --git a/Doc/library/asyncio-stream.rst b/Doc/library/asyncio-stream.rst index 533bdec8e039..3b3c68ab6ef6 100644 --- a/Doc/library/asyncio-stream.rst +++ b/Doc/library/asyncio-stream.rst @@ -295,7 +295,8 @@ StreamWriter The method closes the stream and the underlying socket. - The method should be used along with the ``wait_closed()`` method:: + The method should be used, though not mandatory, + along with the ``wait_closed()`` method:: stream.close() await stream.wait_closed() @@ -372,7 +373,8 @@ StreamWriter Wait until the stream is closed. Should be called after :meth:`close` to wait until the underlying - connection is closed. + connection is closed, ensuring that all data has been flushed + before e.g. exiting the program. .. 
versionadded:: 3.7 @@ -402,6 +404,7 @@ TCP echo client using the :func:`asyncio.open_connection` function:: print('Close the connection') writer.close() + await writer.wait_closed() asyncio.run(tcp_echo_client('Hello World!')) @@ -434,6 +437,7 @@ TCP echo server using the :func:`asyncio.start_server` function:: print("Close the connection") writer.close() + await writer.wait_closed() async def main(): server = await asyncio.start_server( @@ -490,6 +494,7 @@ Simple example querying HTTP headers of the URL passed on the command line:: # Ignore the body, close the socket writer.close() + await writer.wait_closed() url = sys.argv[1] asyncio.run(print_http_headers(url)) @@ -535,6 +540,7 @@ Coroutine waiting until a socket receives data using the # Got data, we are done: close the socket print("Received:", data.decode()) writer.close() + await writer.wait_closed() # Close the second socket wsock.close() From webhook-mailer at python.org Fri Feb 3 02:23:17 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Fri, 03 Feb 2023 07:23:17 -0000 Subject: [Python-checkins] docs: replace PyPI description with link (#101506) Message-ID: https://github.com/python/cpython/commit/45d014e03ba7ba4c9c912120ec36b2aca02061f4 commit: 45d014e03ba7ba4c9c912120ec36b2aca02061f4 branch: main author: Fran?ois Magimel committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-03T12:53:11+05:30 summary: docs: replace PyPI description with link (#101506) files: M Doc/tutorial/venv.rst diff --git a/Doc/tutorial/venv.rst b/Doc/tutorial/venv.rst index 1fdb370b33d5..05f0e6bbcc1b 100644 --- a/Doc/tutorial/venv.rst +++ b/Doc/tutorial/venv.rst @@ -98,8 +98,8 @@ Managing Packages with pip ========================== You can install, upgrade, and remove packages using a program called -:program:`pip`. By default ``pip`` will install packages from the Python -Package Index, . You can browse the Python +:program:`pip`. By default ``pip`` will install packages from the `Python +Package Index `_. You can browse the Python Package Index by going to it in your web browser. ``pip`` has a number of subcommands: "install", "uninstall", From webhook-mailer at python.org Fri Feb 3 02:26:12 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 03 Feb 2023 07:26:12 -0000 Subject: [Python-checkins] gh-100920: Update documentation for `asyncio.StreamWriter.wait_closed` (GH-101514) Message-ID: https://github.com/python/cpython/commit/4c732bc425976194ae1309753f89aa43b64582fa commit: 4c732bc425976194ae1309753f89aa43b64582fa branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-02T23:26:05-08:00 summary: gh-100920: Update documentation for `asyncio.StreamWriter.wait_closed` (GH-101514) (cherry picked from commit 5c39daf50b7f388f9b24bb2d6ef415955440bebf) Co-authored-by: Viet Than files: M Doc/library/asyncio-stream.rst diff --git a/Doc/library/asyncio-stream.rst b/Doc/library/asyncio-stream.rst index bed10a40fb7e..f0fcac1a7937 100644 --- a/Doc/library/asyncio-stream.rst +++ b/Doc/library/asyncio-stream.rst @@ -281,7 +281,8 @@ StreamWriter The method closes the stream and the underlying socket. - The method should be used along with the ``wait_closed()`` method:: + The method should be used, though not mandatory, + along with the ``wait_closed()`` method:: stream.close() await stream.wait_closed() @@ -332,7 +333,8 @@ StreamWriter Wait until the stream is closed. 
Should be called after :meth:`close` to wait until the underlying - connection is closed. + connection is closed, ensuring that all data has been flushed + before e.g. exiting the program. .. versionadded:: 3.7 @@ -361,6 +363,7 @@ TCP echo client using the :func:`asyncio.open_connection` function:: print('Close the connection') writer.close() + await writer.wait_closed() asyncio.run(tcp_echo_client('Hello World!')) @@ -393,6 +396,7 @@ TCP echo server using the :func:`asyncio.start_server` function:: print("Close the connection") writer.close() + await writer.wait_closed() async def main(): server = await asyncio.start_server( @@ -449,6 +453,7 @@ Simple example querying HTTP headers of the URL passed on the command line:: # Ignore the body, close the socket writer.close() + await writer.wait_closed() url = sys.argv[1] asyncio.run(print_http_headers(url)) @@ -494,6 +499,7 @@ Coroutine waiting until a socket receives data using the # Got data, we are done: close the socket print("Received:", data.decode()) writer.close() + await writer.wait_closed() # Close the second socket wsock.close() From webhook-mailer at python.org Fri Feb 3 02:26:43 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 03 Feb 2023 07:26:43 -0000 Subject: [Python-checkins] gh-100920: Update documentation for `asyncio.StreamWriter.wait_closed` (GH-101514) Message-ID: https://github.com/python/cpython/commit/2366c1a4fee9bfb55002906086d5a0f6d517c5aa commit: 2366c1a4fee9bfb55002906086d5a0f6d517c5aa branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-02T23:26:37-08:00 summary: gh-100920: Update documentation for `asyncio.StreamWriter.wait_closed` (GH-101514) (cherry picked from commit 5c39daf50b7f388f9b24bb2d6ef415955440bebf) Co-authored-by: Viet Than files: M Doc/library/asyncio-stream.rst diff --git a/Doc/library/asyncio-stream.rst b/Doc/library/asyncio-stream.rst index c1ae8abb9abc..74891306a637 100644 --- a/Doc/library/asyncio-stream.rst +++ b/Doc/library/asyncio-stream.rst @@ -295,7 +295,8 @@ StreamWriter The method closes the stream and the underlying socket. - The method should be used along with the ``wait_closed()`` method:: + The method should be used, though not mandatory, + along with the ``wait_closed()`` method:: stream.close() await stream.wait_closed() @@ -364,7 +365,8 @@ StreamWriter Wait until the stream is closed. Should be called after :meth:`close` to wait until the underlying - connection is closed. + connection is closed, ensuring that all data has been flushed + before e.g. exiting the program. .. 
versionadded:: 3.7 @@ -394,6 +396,7 @@ TCP echo client using the :func:`asyncio.open_connection` function:: print('Close the connection') writer.close() + await writer.wait_closed() asyncio.run(tcp_echo_client('Hello World!')) @@ -426,6 +429,7 @@ TCP echo server using the :func:`asyncio.start_server` function:: print("Close the connection") writer.close() + await writer.wait_closed() async def main(): server = await asyncio.start_server( @@ -482,6 +486,7 @@ Simple example querying HTTP headers of the URL passed on the command line:: # Ignore the body, close the socket writer.close() + await writer.wait_closed() url = sys.argv[1] asyncio.run(print_http_headers(url)) @@ -527,6 +532,7 @@ Coroutine waiting until a socket receives data using the # Got data, we are done: close the socket print("Received:", data.decode()) writer.close() + await writer.wait_closed() # Close the second socket wsock.close() From webhook-mailer at python.org Fri Feb 3 02:30:03 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 03 Feb 2023 07:30:03 -0000 Subject: [Python-checkins] docs: replace PyPI description with link (GH-101506) Message-ID: https://github.com/python/cpython/commit/42b14044aabee01491acee3a524827358f560ca9 commit: 42b14044aabee01491acee3a524827358f560ca9 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-02T23:29:56-08:00 summary: docs: replace PyPI description with link (GH-101506) (cherry picked from commit 45d014e03ba7ba4c9c912120ec36b2aca02061f4) Co-authored-by: Fran?ois Magimel files: M Doc/tutorial/venv.rst diff --git a/Doc/tutorial/venv.rst b/Doc/tutorial/venv.rst index 221c11ca74b4..2cc30f89e95d 100644 --- a/Doc/tutorial/venv.rst +++ b/Doc/tutorial/venv.rst @@ -93,8 +93,8 @@ Managing Packages with pip ========================== You can install, upgrade, and remove packages using a program called -:program:`pip`. By default ``pip`` will install packages from the Python -Package Index, . You can browse the Python +:program:`pip`. By default ``pip`` will install packages from the `Python +Package Index `_. You can browse the Python Package Index by going to it in your web browser. ``pip`` has a number of subcommands: "install", "uninstall", From webhook-mailer at python.org Fri Feb 3 02:31:48 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 03 Feb 2023 07:31:48 -0000 Subject: [Python-checkins] docs: replace PyPI description with link (GH-101506) Message-ID: https://github.com/python/cpython/commit/4392bf648f125762a0f074e87b847496e5f15a62 commit: 4392bf648f125762a0f074e87b847496e5f15a62 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-02T23:31:43-08:00 summary: docs: replace PyPI description with link (GH-101506) (cherry picked from commit 45d014e03ba7ba4c9c912120ec36b2aca02061f4) Co-authored-by: Fran?ois Magimel files: M Doc/tutorial/venv.rst diff --git a/Doc/tutorial/venv.rst b/Doc/tutorial/venv.rst index 7cdce3f9ca88..8312c5878659 100644 --- a/Doc/tutorial/venv.rst +++ b/Doc/tutorial/venv.rst @@ -98,8 +98,8 @@ Managing Packages with pip ========================== You can install, upgrade, and remove packages using a program called -:program:`pip`. By default ``pip`` will install packages from the Python -Package Index, . You can browse the Python +:program:`pip`. 
By default ``pip`` will install packages from the `Python +Package Index `_. You can browse the Python Package Index by going to it in your web browser. ``pip`` has a number of subcommands: "install", "uninstall", From webhook-mailer at python.org Fri Feb 3 04:54:34 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Fri, 03 Feb 2023 09:54:34 -0000 Subject: [Python-checkins] gh-101277: Port more `itertools` static types to heap types (#101303) Message-ID: https://github.com/python/cpython/commit/a52cc9853fed39b5cc90b7ffb6a64249307a990b commit: a52cc9853fed39b5cc90b7ffb6a64249307a990b branch: main author: Erlend E. Aasland committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-03T15:24:27+05:30 summary: gh-101277: Port more `itertools` static types to heap types (#101303) Add dropwhile, takewhile, starmap, combinations*, and permutations types to module state. files: M Modules/clinic/itertoolsmodule.c.h M Modules/itertoolsmodule.c diff --git a/Modules/clinic/itertoolsmodule.c.h b/Modules/clinic/itertoolsmodule.c.h index c492c33daea5..be44246cc970 100644 --- a/Modules/clinic/itertoolsmodule.c.h +++ b/Modules/clinic/itertoolsmodule.c.h @@ -345,7 +345,7 @@ static PyObject * itertools_cycle(PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *return_value = NULL; - PyTypeObject *base_tp = &cycle_type; + PyTypeObject *base_tp = clinic_state()->cycle_type; PyObject *iterable; if ((type == base_tp || type->tp_init == base_tp->tp_init) && @@ -377,7 +377,7 @@ static PyObject * itertools_dropwhile(PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *return_value = NULL; - PyTypeObject *base_tp = &dropwhile_type; + PyTypeObject *base_tp = clinic_state()->dropwhile_type; PyObject *func; PyObject *seq; @@ -409,7 +409,7 @@ static PyObject * itertools_takewhile(PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *return_value = NULL; - PyTypeObject *base_tp = &takewhile_type; + PyTypeObject *base_tp = clinic_state()->takewhile_type; PyObject *func; PyObject *seq; @@ -441,7 +441,7 @@ static PyObject * itertools_starmap(PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *return_value = NULL; - PyTypeObject *base_tp = &starmap_type; + PyTypeObject *base_tp = clinic_state()->starmap_type; PyObject *func; PyObject *seq; @@ -913,4 +913,4 @@ itertools_count(PyTypeObject *type, PyObject *args, PyObject *kwargs) exit: return return_value; } -/*[clinic end generated code: output=c3069caac417e165 input=a9049054013a1b77]*/ +/*[clinic end generated code: output=b86fcd99bd32145e input=a9049054013a1b77]*/ diff --git a/Modules/itertoolsmodule.c b/Modules/itertoolsmodule.c index a2ee48228b58..c9baa47e2c0e 100644 --- a/Modules/itertoolsmodule.c +++ b/Modules/itertoolsmodule.c @@ -12,8 +12,15 @@ */ typedef struct { + PyTypeObject *combinations_type; + PyTypeObject *cwr_type; + PyTypeObject *cycle_type; + PyTypeObject *dropwhile_type; PyTypeObject *groupby_type; PyTypeObject *_grouper_type; + PyTypeObject *permutations_type; + PyTypeObject *starmap_type; + PyTypeObject *takewhile_type; } itertools_state; static inline itertools_state * @@ -50,32 +57,25 @@ class itertools._grouper "_grouperobject *" "clinic_state()->_grouper_type" class itertools.teedataobject "teedataobject *" "&teedataobject_type" class itertools._tee "teeobject *" "&tee_type" class itertools.batched "batchedobject *" "&batched_type" -class itertools.cycle "cycleobject *" "&cycle_type" -class itertools.dropwhile "dropwhileobject *" "&dropwhile_type" -class 
itertools.takewhile "takewhileobject *" "&takewhile_type" -class itertools.starmap "starmapobject *" "&starmap_type" +class itertools.cycle "cycleobject *" "clinic_state()->cycle_type" +class itertools.dropwhile "dropwhileobject *" "clinic_state()->dropwhile_type" +class itertools.takewhile "takewhileobject *" "clinic_state()->takewhile_type" +class itertools.starmap "starmapobject *" "clinic_state()->starmap_type" class itertools.chain "chainobject *" "&chain_type" -class itertools.combinations "combinationsobject *" "&combinations_type" -class itertools.combinations_with_replacement "cwr_object *" "&cwr_type" -class itertools.permutations "permutationsobject *" "&permutations_type" +class itertools.combinations "combinationsobject *" "clinic_state()->combinations_type" +class itertools.combinations_with_replacement "cwr_object *" "clinic_state()->cwr_type" +class itertools.permutations "permutationsobject *" "clinic_state()->permutations_type" class itertools.accumulate "accumulateobject *" "&accumulate_type" class itertools.compress "compressobject *" "&compress_type" class itertools.filterfalse "filterfalseobject *" "&filterfalse_type" class itertools.count "countobject *" "&count_type" class itertools.pairwise "pairwiseobject *" "&pairwise_type" [clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=424108522584b55b]*/ +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=1790ac655869a651]*/ static PyTypeObject teedataobject_type; static PyTypeObject tee_type; static PyTypeObject batched_type; -static PyTypeObject cycle_type; -static PyTypeObject dropwhile_type; -static PyTypeObject takewhile_type; -static PyTypeObject starmap_type; -static PyTypeObject combinations_type; -static PyTypeObject cwr_type; -static PyTypeObject permutations_type; static PyTypeObject accumulate_type; static PyTypeObject compress_type; static PyTypeObject filterfalse_type; @@ -1286,15 +1286,18 @@ itertools_cycle_impl(PyTypeObject *type, PyObject *iterable) static void cycle_dealloc(cycleobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->it); Py_XDECREF(lz->saved); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static int cycle_traverse(cycleobject *lz, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->it); Py_VISIT(lz->saved); return 0; @@ -1381,48 +1384,25 @@ static PyMethodDef cycle_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject cycle_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.cycle", /* tp_name */ - sizeof(cycleobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)cycle_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_cycle__doc__, /* tp_doc */ - (traverseproc)cycle_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)cycle_next, /* tp_iternext */ - cycle_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset 
*/ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_cycle, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot cycle_slots[] = { + {Py_tp_dealloc, cycle_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_cycle__doc__}, + {Py_tp_traverse, cycle_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, cycle_next}, + {Py_tp_methods, cycle_methods}, + {Py_tp_new, itertools_cycle}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec cycle_spec = { + .name = "itertools.cycle", + .basicsize = sizeof(cycleobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = cycle_slots, }; @@ -1474,15 +1454,18 @@ itertools_dropwhile_impl(PyTypeObject *type, PyObject *func, PyObject *seq) static void dropwhile_dealloc(dropwhileobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->func); Py_XDECREF(lz->it); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static int dropwhile_traverse(dropwhileobject *lz, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->it); Py_VISIT(lz->func); return 0; @@ -1545,48 +1528,25 @@ static PyMethodDef dropwhile_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject dropwhile_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.dropwhile", /* tp_name */ - sizeof(dropwhileobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)dropwhile_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_dropwhile__doc__, /* tp_doc */ - (traverseproc)dropwhile_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)dropwhile_next, /* tp_iternext */ - dropwhile_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_dropwhile, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot dropwhile_slots[] = { + {Py_tp_dealloc, dropwhile_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_dropwhile__doc__}, + {Py_tp_traverse, dropwhile_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, dropwhile_next}, + {Py_tp_methods, dropwhile_methods}, + {Py_tp_new, itertools_dropwhile}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec dropwhile_spec = { + .name = "itertools.dropwhile", + .basicsize = sizeof(dropwhileobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = dropwhile_slots, }; @@ -1636,15 +1596,18 @@ itertools_takewhile_impl(PyTypeObject *type, PyObject *func, PyObject *seq) static void takewhile_dealloc(takewhileobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->func); Py_XDECREF(lz->it); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static int takewhile_traverse(takewhileobject *lz, 
visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->it); Py_VISIT(lz->func); return 0; @@ -1704,48 +1667,25 @@ static PyMethodDef takewhile_reduce_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject takewhile_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.takewhile", /* tp_name */ - sizeof(takewhileobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)takewhile_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_takewhile__doc__, /* tp_doc */ - (traverseproc)takewhile_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)takewhile_next, /* tp_iternext */ - takewhile_reduce_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_takewhile, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot takewhile_slots[] = { + {Py_tp_dealloc, takewhile_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_takewhile__doc__}, + {Py_tp_traverse, takewhile_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, takewhile_next}, + {Py_tp_methods, takewhile_reduce_methods}, + {Py_tp_new, itertools_takewhile}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec takewhile_spec = { + .name = "itertools.takewhile", + .basicsize = sizeof(takewhileobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = takewhile_slots, }; @@ -2052,15 +1992,18 @@ itertools_starmap_impl(PyTypeObject *type, PyObject *func, PyObject *seq) static void starmap_dealloc(starmapobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->func); Py_XDECREF(lz->it); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static int starmap_traverse(starmapobject *lz, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->it); Py_VISIT(lz->func); return 0; @@ -2101,48 +2044,25 @@ static PyMethodDef starmap_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject starmap_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.starmap", /* tp_name */ - sizeof(starmapobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)starmap_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_starmap__doc__, /* tp_doc */ - (traverseproc)starmap_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - 
(iternextfunc)starmap_next, /* tp_iternext */ - starmap_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_starmap, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot starmap_slots[] = { + {Py_tp_dealloc, starmap_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_starmap__doc__}, + {Py_tp_traverse, starmap_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, starmap_next}, + {Py_tp_methods, starmap_methods}, + {Py_tp_new, itertools_starmap}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec starmap_spec = { + .name = "itertools.starmap", + .basicsize = sizeof(starmapobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = starmap_slots, }; @@ -2799,12 +2719,14 @@ itertools_combinations_impl(PyTypeObject *type, PyObject *iterable, static void combinations_dealloc(combinationsobject *co) { + PyTypeObject *tp = Py_TYPE(co); PyObject_GC_UnTrack(co); Py_XDECREF(co->pool); Py_XDECREF(co->result); if (co->indices != NULL) PyMem_Free(co->indices); - Py_TYPE(co)->tp_free(co); + tp->tp_free(co); + Py_DECREF(tp); } static PyObject * @@ -2818,6 +2740,7 @@ combinations_sizeof(combinationsobject *co, void *unused) static int combinations_traverse(combinationsobject *co, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(co)); Py_VISIT(co->pool); Py_VISIT(co->result); return 0; @@ -2988,48 +2911,25 @@ static PyMethodDef combinations_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject combinations_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.combinations", /* tp_name */ - sizeof(combinationsobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)combinations_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_combinations__doc__, /* tp_doc */ - (traverseproc)combinations_traverse,/* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)combinations_next, /* tp_iternext */ - combinations_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_combinations, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot combinations_slots[] = { + {Py_tp_dealloc, combinations_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_combinations__doc__}, + {Py_tp_traverse, combinations_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, combinations_next}, + {Py_tp_methods, combinations_methods}, + {Py_tp_new, itertools_combinations}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec combinations_spec = { + .name = "itertools.combinations", + .basicsize = sizeof(combinationsobject), + .flags = (Py_TPFLAGS_DEFAULT 
| Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = combinations_slots, }; @@ -3133,12 +3033,14 @@ itertools_combinations_with_replacement_impl(PyTypeObject *type, static void cwr_dealloc(cwrobject *co) { + PyTypeObject *tp = Py_TYPE(co); PyObject_GC_UnTrack(co); Py_XDECREF(co->pool); Py_XDECREF(co->result); if (co->indices != NULL) PyMem_Free(co->indices); - Py_TYPE(co)->tp_free(co); + tp->tp_free(co); + Py_DECREF(tp); } static PyObject * @@ -3152,6 +3054,7 @@ cwr_sizeof(cwrobject *co, void *unused) static int cwr_traverse(cwrobject *co, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(co)); Py_VISIT(co->pool); Py_VISIT(co->result); return 0; @@ -3312,48 +3215,25 @@ static PyMethodDef cwr_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject cwr_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.combinations_with_replacement", /* tp_name */ - sizeof(cwrobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)cwr_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_combinations_with_replacement__doc__, /* tp_doc */ - (traverseproc)cwr_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)cwr_next, /* tp_iternext */ - cwr_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_combinations_with_replacement, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot cwr_slots[] = { + {Py_tp_dealloc, cwr_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_combinations_with_replacement__doc__}, + {Py_tp_traverse, cwr_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, cwr_next}, + {Py_tp_methods, cwr_methods}, + {Py_tp_new, itertools_combinations_with_replacement}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec cwr_spec = { + .name = "itertools.combinations_with_replacement", + .basicsize = sizeof(cwrobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = cwr_slots, }; @@ -3476,12 +3356,14 @@ itertools_permutations_impl(PyTypeObject *type, PyObject *iterable, static void permutations_dealloc(permutationsobject *po) { + PyTypeObject *tp = Py_TYPE(po); PyObject_GC_UnTrack(po); Py_XDECREF(po->pool); Py_XDECREF(po->result); PyMem_Free(po->indices); PyMem_Free(po->cycles); - Py_TYPE(po)->tp_free(po); + tp->tp_free(po); + Py_DECREF(tp); } static PyObject * @@ -3496,6 +3378,7 @@ permutations_sizeof(permutationsobject *po, void *unused) static int permutations_traverse(permutationsobject *po, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(po)); Py_VISIT(po->pool); Py_VISIT(po->result); return 0; @@ -3701,48 +3584,25 @@ static PyMethodDef permuations_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject permutations_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.permutations", /* tp_name */ - 
sizeof(permutationsobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)permutations_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_permutations__doc__, /* tp_doc */ - (traverseproc)permutations_traverse,/* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)permutations_next, /* tp_iternext */ - permuations_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_permutations, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot permutations_slots[] = { + {Py_tp_dealloc, permutations_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_permutations__doc__}, + {Py_tp_traverse, permutations_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, permutations_next}, + {Py_tp_methods, permuations_methods}, + {Py_tp_new, itertools_permutations}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec permutations_spec = { + .name = "itertools.permutations", + .basicsize = sizeof(permutationsobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = permutations_slots, }; @@ -4975,8 +4835,15 @@ static int itertoolsmodule_traverse(PyObject *mod, visitproc visit, void *arg) { itertools_state *state = get_module_state(mod); + Py_VISIT(state->combinations_type); + Py_VISIT(state->cwr_type); + Py_VISIT(state->cycle_type); + Py_VISIT(state->dropwhile_type); Py_VISIT(state->groupby_type); Py_VISIT(state->_grouper_type); + Py_VISIT(state->permutations_type); + Py_VISIT(state->starmap_type); + Py_VISIT(state->takewhile_type); return 0; } @@ -4984,8 +4851,15 @@ static int itertoolsmodule_clear(PyObject *mod) { itertools_state *state = get_module_state(mod); + Py_CLEAR(state->combinations_type); + Py_CLEAR(state->cwr_type); + Py_CLEAR(state->cycle_type); + Py_CLEAR(state->dropwhile_type); Py_CLEAR(state->groupby_type); Py_CLEAR(state->_grouper_type); + Py_CLEAR(state->permutations_type); + Py_CLEAR(state->starmap_type); + Py_CLEAR(state->takewhile_type); return 0; } @@ -5010,26 +4884,26 @@ static int itertoolsmodule_exec(PyObject *mod) { itertools_state *state = get_module_state(mod); + ADD_TYPE(mod, state->combinations_type, &combinations_spec); + ADD_TYPE(mod, state->cwr_type, &cwr_spec); + ADD_TYPE(mod, state->cycle_type, &cycle_spec); + ADD_TYPE(mod, state->dropwhile_type, &dropwhile_spec); ADD_TYPE(mod, state->groupby_type, &groupby_spec); ADD_TYPE(mod, state->_grouper_type, &_grouper_spec); + ADD_TYPE(mod, state->permutations_type, &permutations_spec); + ADD_TYPE(mod, state->starmap_type, &starmap_spec); + ADD_TYPE(mod, state->takewhile_type, &takewhile_spec); PyTypeObject *typelist[] = { &accumulate_type, &batched_type, - &combinations_type, - &cwr_type, - &cycle_type, - &dropwhile_type, - &takewhile_type, &islice_type, - &starmap_type, &chain_type, &compress_type, 
&filterfalse_type, &count_type, &ziplongest_type, &pairwise_type, - &permutations_type, &product_type, &repeat_type, &tee_type, From webhook-mailer at python.org Fri Feb 3 06:30:27 2023 From: webhook-mailer at python.org (iritkatriel) Date: Fri, 03 Feb 2023 11:30:27 -0000 Subject: [Python-checkins] gh-98831: rewrite SEND, GET_YIELD_FROM_ITER, RETURN_GENERATOR in the instruction definition DSL (#101516) Message-ID: https://github.com/python/cpython/commit/04e06e20ee61f3c0d1d7a827b2feb4ed41bb198d commit: 04e06e20ee61f3c0d1d7a827b2feb4ed41bb198d branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-03T11:30:21Z summary: gh-98831: rewrite SEND, GET_YIELD_FROM_ITER, RETURN_GENERATOR in the instruction definition DSL (#101516) files: M Python/bytecodes.c M Python/generated_cases.c.h M Python/opcode_metadata.h diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 169a2647866b..74c53ad1579f 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -663,14 +663,10 @@ dummy_func( PREDICT(LOAD_CONST); } - // error: SEND stack effect depends on jump flag - inst(SEND) { + inst(SEND, (receiver, v -- receiver if (!jump), retval)) { assert(frame != &entry_frame); - assert(STACK_LEVEL() >= 2); - PyObject *v = POP(); - PyObject *receiver = TOP(); + bool jump = false; PySendResult gen_status; - PyObject *retval; if (tstate->c_tracefunc == NULL) { gen_status = PyIter_Send(receiver, v, &retval); } else { @@ -695,21 +691,20 @@ dummy_func( gen_status = PYGEN_NEXT; } } - Py_DECREF(v); if (gen_status == PYGEN_ERROR) { assert(retval == NULL); goto error; } + Py_DECREF(v); if (gen_status == PYGEN_RETURN) { assert(retval != NULL); Py_DECREF(receiver); - SET_TOP(retval); JUMPBY(oparg); + jump = true; } else { assert(gen_status == PYGEN_NEXT); assert(retval != NULL); - PUSH(retval); } } @@ -2043,31 +2038,30 @@ dummy_func( ERROR_IF(iter == NULL, error); } - // stack effect: ( -- ) - inst(GET_YIELD_FROM_ITER) { + inst(GET_YIELD_FROM_ITER, (iterable -- iter)) { /* before: [obj]; after [getiter(obj)] */ - PyObject *iterable = TOP(); - PyObject *iter; if (PyCoro_CheckExact(iterable)) { /* `iterable` is a coroutine */ if (!(frame->f_code->co_flags & (CO_COROUTINE | CO_ITERABLE_COROUTINE))) { /* and it is used in a 'yield from' expression of a regular generator. */ - Py_DECREF(iterable); - SET_TOP(NULL); _PyErr_SetString(tstate, PyExc_TypeError, "cannot 'yield from' a coroutine object " "in a non-coroutine generator"); goto error; } + iter = iterable; + } + else if (PyGen_CheckExact(iterable)) { + iter = iterable; } - else if (!PyGen_CheckExact(iterable)) { + else { /* `iterable` is not a generator. 
*/ iter = PyObject_GetIter(iterable); - Py_DECREF(iterable); - SET_TOP(iter); - if (iter == NULL) + if (iter == NULL) { goto error; + } + Py_DECREF(iterable); } PREDICT(LOAD_CONST); } @@ -3010,8 +3004,7 @@ dummy_func( PUSH((PyObject *)func); } - // stack effect: ( -- ) - inst(RETURN_GENERATOR) { + inst(RETURN_GENERATOR, (--)) { assert(PyFunction_Check(frame->f_funcobj)); PyFunctionObject *func = (PyFunctionObject *)frame->f_funcobj; PyGenObject *gen = (PyGenObject *)_Py_MakeCoro(func); diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 97263866fe91..6f90d9ca4a59 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -865,12 +865,12 @@ } TARGET(SEND) { + PyObject *v = PEEK(1); + PyObject *receiver = PEEK(2); + PyObject *retval; assert(frame != &entry_frame); - assert(STACK_LEVEL() >= 2); - PyObject *v = POP(); - PyObject *receiver = TOP(); + bool jump = false; PySendResult gen_status; - PyObject *retval; if (tstate->c_tracefunc == NULL) { gen_status = PyIter_Send(receiver, v, &retval); } else { @@ -895,22 +895,24 @@ gen_status = PYGEN_NEXT; } } - Py_DECREF(v); if (gen_status == PYGEN_ERROR) { assert(retval == NULL); goto error; } + Py_DECREF(v); if (gen_status == PYGEN_RETURN) { assert(retval != NULL); Py_DECREF(receiver); - SET_TOP(retval); JUMPBY(oparg); + jump = true; } else { assert(gen_status == PYGEN_NEXT); assert(retval != NULL); - PUSH(retval); } + STACK_SHRINK(1); + STACK_GROW(((!jump) ? 1 : 0)); + POKE(1, retval); DISPATCH(); } @@ -2584,30 +2586,33 @@ } TARGET(GET_YIELD_FROM_ITER) { - /* before: [obj]; after [getiter(obj)] */ - PyObject *iterable = TOP(); + PyObject *iterable = PEEK(1); PyObject *iter; + /* before: [obj]; after [getiter(obj)] */ if (PyCoro_CheckExact(iterable)) { /* `iterable` is a coroutine */ if (!(frame->f_code->co_flags & (CO_COROUTINE | CO_ITERABLE_COROUTINE))) { /* and it is used in a 'yield from' expression of a regular generator. */ - Py_DECREF(iterable); - SET_TOP(NULL); _PyErr_SetString(tstate, PyExc_TypeError, "cannot 'yield from' a coroutine object " "in a non-coroutine generator"); goto error; } + iter = iterable; } - else if (!PyGen_CheckExact(iterable)) { + else if (PyGen_CheckExact(iterable)) { + iter = iterable; + } + else { /* `iterable` is not a generator. */ iter = PyObject_GetIter(iterable); - Py_DECREF(iterable); - SET_TOP(iter); - if (iter == NULL) + if (iter == NULL) { goto error; + } + Py_DECREF(iterable); } + POKE(1, iter); PREDICT(LOAD_CONST); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index ca3dde363bfd..256f81a89fcd 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -99,7 +99,7 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case GET_AWAITABLE: return 1; case SEND: - return -1; + return 2; case YIELD_VALUE: return 1; case POP_EXCEPT: @@ -259,7 +259,7 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case GET_ITER: return 1; case GET_YIELD_FROM_ITER: - return -1; + return 1; case FOR_ITER: return -1; case FOR_ITER_LIST: @@ -327,7 +327,7 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case MAKE_FUNCTION: return -1; case RETURN_GENERATOR: - return -1; + return 0; case BUILD_SLICE: return -1; case FORMAT_VALUE: @@ -445,7 +445,7 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case GET_AWAITABLE: return 1; case SEND: - return -1; + return ((!jump) ? 
1 : 0) + 1; case YIELD_VALUE: return 1; case POP_EXCEPT: @@ -605,7 +605,7 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case GET_ITER: return 1; case GET_YIELD_FROM_ITER: - return -1; + return 1; case FOR_ITER: return -1; case FOR_ITER_LIST: @@ -673,7 +673,7 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case MAKE_FUNCTION: return -1; case RETURN_GENERATOR: - return -1; + return 0; case BUILD_SLICE: return -1; case FORMAT_VALUE: From webhook-mailer at python.org Fri Feb 3 09:40:56 2023 From: webhook-mailer at python.org (iritkatriel) Date: Fri, 03 Feb 2023 14:40:56 -0000 Subject: [Python-checkins] gh-98831: rewrite MAKE_FUNCTION and BUILD_SLICE in the instruction definition DSL (#101529) Message-ID: https://github.com/python/cpython/commit/433fb3ef08c71b97a0d08e522df56e0afaf3747a commit: 433fb3ef08c71b97a0d08e522df56e0afaf3747a branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-03T14:40:45Z summary: gh-98831: rewrite MAKE_FUNCTION and BUILD_SLICE in the instruction definition DSL (#101529) files: M Python/bytecodes.c M Python/generated_cases.c.h M Python/opcode_metadata.h diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 74c53ad1579f..8993567ac822 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -2972,36 +2972,39 @@ dummy_func( CHECK_EVAL_BREAKER(); } - // error: MAKE_FUNCTION has irregular stack effect - inst(MAKE_FUNCTION) { - PyObject *codeobj = POP(); - PyFunctionObject *func = (PyFunctionObject *) + inst(MAKE_FUNCTION, (defaults if (oparg & 0x01), + kwdefaults if (oparg & 0x02), + annotations if (oparg & 0x04), + closure if (oparg & 0x08), + codeobj -- func)) { + + PyFunctionObject *func_obj = (PyFunctionObject *) PyFunction_New(codeobj, GLOBALS()); Py_DECREF(codeobj); - if (func == NULL) { + if (func_obj == NULL) { goto error; } if (oparg & 0x08) { - assert(PyTuple_CheckExact(TOP())); - func->func_closure = POP(); + assert(PyTuple_CheckExact(closure)); + func_obj->func_closure = closure; } if (oparg & 0x04) { - assert(PyTuple_CheckExact(TOP())); - func->func_annotations = POP(); + assert(PyTuple_CheckExact(annotations)); + func_obj->func_annotations = annotations; } if (oparg & 0x02) { - assert(PyDict_CheckExact(TOP())); - func->func_kwdefaults = POP(); + assert(PyDict_CheckExact(kwdefaults)); + func_obj->func_kwdefaults = kwdefaults; } if (oparg & 0x01) { - assert(PyTuple_CheckExact(TOP())); - func->func_defaults = POP(); + assert(PyTuple_CheckExact(defaults)); + func_obj->func_defaults = defaults; } - func->func_version = ((PyCodeObject *)codeobj)->co_version; - PUSH((PyObject *)func); + func_obj->func_version = ((PyCodeObject *)codeobj)->co_version; + func = (PyObject *)func_obj; } inst(RETURN_GENERATOR, (--)) { @@ -3027,22 +3030,12 @@ dummy_func( goto resume_frame; } - // error: BUILD_SLICE has irregular stack effect - inst(BUILD_SLICE) { - PyObject *start, *stop, *step, *slice; - if (oparg == 3) - step = POP(); - else - step = NULL; - stop = POP(); - start = TOP(); + inst(BUILD_SLICE, (start, stop, step if (oparg == 3) -- slice)) { slice = PySlice_New(start, stop, step); Py_DECREF(start); Py_DECREF(stop); Py_XDECREF(step); - SET_TOP(slice); - if (slice == NULL) - goto error; + ERROR_IF(slice == NULL, error); } // error: FORMAT_VALUE has irregular stack effect diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 6f90d9ca4a59..e524bfcb99d4 100644 --- a/Python/generated_cases.c.h +++ 
b/Python/generated_cases.c.h @@ -3584,34 +3584,42 @@ } TARGET(MAKE_FUNCTION) { - PyObject *codeobj = POP(); - PyFunctionObject *func = (PyFunctionObject *) + PyObject *codeobj = PEEK(1); + PyObject *closure = (oparg & 0x08) ? PEEK(1 + ((oparg & 0x08) ? 1 : 0)) : NULL; + PyObject *annotations = (oparg & 0x04) ? PEEK(1 + ((oparg & 0x08) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0)) : NULL; + PyObject *kwdefaults = (oparg & 0x02) ? PEEK(1 + ((oparg & 0x08) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0) + ((oparg & 0x02) ? 1 : 0)) : NULL; + PyObject *defaults = (oparg & 0x01) ? PEEK(1 + ((oparg & 0x08) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0) + ((oparg & 0x02) ? 1 : 0) + ((oparg & 0x01) ? 1 : 0)) : NULL; + PyObject *func; + + PyFunctionObject *func_obj = (PyFunctionObject *) PyFunction_New(codeobj, GLOBALS()); Py_DECREF(codeobj); - if (func == NULL) { + if (func_obj == NULL) { goto error; } if (oparg & 0x08) { - assert(PyTuple_CheckExact(TOP())); - func->func_closure = POP(); + assert(PyTuple_CheckExact(closure)); + func_obj->func_closure = closure; } if (oparg & 0x04) { - assert(PyTuple_CheckExact(TOP())); - func->func_annotations = POP(); + assert(PyTuple_CheckExact(annotations)); + func_obj->func_annotations = annotations; } if (oparg & 0x02) { - assert(PyDict_CheckExact(TOP())); - func->func_kwdefaults = POP(); + assert(PyDict_CheckExact(kwdefaults)); + func_obj->func_kwdefaults = kwdefaults; } if (oparg & 0x01) { - assert(PyTuple_CheckExact(TOP())); - func->func_defaults = POP(); + assert(PyTuple_CheckExact(defaults)); + func_obj->func_defaults = defaults; } - func->func_version = ((PyCodeObject *)codeobj)->co_version; - PUSH((PyObject *)func); + func_obj->func_version = ((PyCodeObject *)codeobj)->co_version; + func = (PyObject *)func_obj; + STACK_SHRINK(((oparg & 0x01) ? 1 : 0) + ((oparg & 0x02) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0) + ((oparg & 0x08) ? 1 : 0)); + POKE(1, func); DISPATCH(); } @@ -3639,20 +3647,18 @@ } TARGET(BUILD_SLICE) { - PyObject *start, *stop, *step, *slice; - if (oparg == 3) - step = POP(); - else - step = NULL; - stop = POP(); - start = TOP(); + PyObject *step = (oparg == 3) ? PEEK(((oparg == 3) ? 1 : 0)) : NULL; + PyObject *stop = PEEK(1 + ((oparg == 3) ? 1 : 0)); + PyObject *start = PEEK(2 + ((oparg == 3) ? 1 : 0)); + PyObject *slice; slice = PySlice_New(start, stop, step); Py_DECREF(start); Py_DECREF(stop); Py_XDECREF(step); - SET_TOP(slice); - if (slice == NULL) - goto error; + if (slice == NULL) { STACK_SHRINK(((oparg == 3) ? 1 : 0)); goto pop_2_error; } + STACK_SHRINK(((oparg == 3) ? 1 : 0)); + STACK_SHRINK(1); + POKE(1, slice); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index 256f81a89fcd..857526c35aa5 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -325,11 +325,11 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case CALL_FUNCTION_EX: return -1; case MAKE_FUNCTION: - return -1; + return ((oparg & 0x01) ? 1 : 0) + ((oparg & 0x02) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0) + ((oparg & 0x08) ? 1 : 0) + 1; case RETURN_GENERATOR: return 0; case BUILD_SLICE: - return -1; + return ((oparg == 3) ? 
1 : 0) + 2; case FORMAT_VALUE: return -1; case COPY: @@ -671,11 +671,11 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case CALL_FUNCTION_EX: return -1; case MAKE_FUNCTION: - return -1; + return 1; case RETURN_GENERATOR: return 0; case BUILD_SLICE: - return -1; + return 1; case FORMAT_VALUE: return -1; case COPY: From webhook-mailer at python.org Fri Feb 3 13:08:41 2023 From: webhook-mailer at python.org (zooba) Date: Fri, 03 Feb 2023 18:08:41 -0000 Subject: [Python-checkins] gh-101522: Allow overriding Windows dependencies versions and paths using MSBuild properties (GH-101523) Message-ID: https://github.com/python/cpython/commit/f6c53b80a16f63825479c5ca0f8a5e2829c3f505 commit: f6c53b80a16f63825479c5ca0f8a5e2829c3f505 branch: main author: Steve Dower committer: zooba date: 2023-02-03T18:08:34Z summary: gh-101522: Allow overriding Windows dependencies versions and paths using MSBuild properties (GH-101523) files: A Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst M PCbuild/python.props M PCbuild/tcltk.props diff --git a/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst b/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst new file mode 100644 index 000000000000..2e7f9029e9ee --- /dev/null +++ b/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst @@ -0,0 +1,2 @@ +Allow overriding Windows dependencies versions and paths using MSBuild +properties. diff --git a/PCbuild/python.props b/PCbuild/python.props index 971c1490d9f1..57360e57baba 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -56,23 +56,32 @@ ..\\.. ..\\..\\.. + - - $(EXTERNALS_DIR) + + + $(EXTERNALS_DIR) $([System.IO.Path]::GetFullPath(`$(PySourcePath)externals`)) $(ExternalsDir)\ - $(ExternalsDir)sqlite-3.39.4.0\ - $(ExternalsDir)bzip2-1.0.8\ - $(ExternalsDir)xz-5.2.5\ - $(ExternalsDir)libffi-3.4.3\ - $(ExternalsDir)libffi-3.4.3\$(ArchName)\ - $(libffiOutDir)include - $(ExternalsDir)openssl-1.1.1s\ - $(ExternalsDir)openssl-bin-1.1.1s\$(ArchName)\ - $(opensslOutDir)include - $(ExternalsDir)\nasm-2.11.06\ - $(ExternalsDir)\zlib-1.2.13\ - + + + + + + $(ExternalsDir)sqlite-3.39.4.0\ + $(ExternalsDir)bzip2-1.0.8\ + $(ExternalsDir)xz-5.2.5\ + $(ExternalsDir)libffi-3.4.3\ + $(libffiDir)$(ArchName)\ + $(libffiOutDir)include + $(ExternalsDir)openssl-1.1.1s\ + $(ExternalsDir)openssl-bin-1.1.1s\$(ArchName)\ + $(opensslOutDir)include + $(ExternalsDir)\nasm-2.11.06\ + $(ExternalsDir)\zlib-1.2.13\ + + + _d diff --git a/PCbuild/tcltk.props b/PCbuild/tcltk.props index 15c03e20fe21..9d5189b3b8e9 100644 --- a/PCbuild/tcltk.props +++ b/PCbuild/tcltk.props @@ -2,22 +2,25 @@ - 8 - 6 - 13 - 0 - $(TclMajorVersion) - $(TclMinorVersion) - $(TclPatchLevel) - $(TclRevision) - 8 - 4 - 3 - 6 - $(ExternalsDir)tcl-core-$(TclMajorVersion).$(TclMinorVersion).$(TclPatchLevel).$(TclRevision)\ - $(ExternalsDir)tk-$(TkMajorVersion).$(TkMinorVersion).$(TkPatchLevel).$(TkRevision)\ - $(ExternalsDir)tix-$(TixMajorVersion).$(TixMinorVersion).$(TixPatchLevel).$(TixRevision)\ - $(ExternalsDir)tcltk-$(TclMajorVersion).$(TclMinorVersion).$(TclPatchLevel).$(TclRevision)\$(ArchName)\ + 8.6.13.0 + $(TclVersion) + 8.4.3.6 + $([System.Version]::Parse($(TclVersion)).Major) + $([System.Version]::Parse($(TclVersion)).Minor) + $([System.Version]::Parse($(TclVersion)).Build) + $([System.Version]::Parse($(TclVersion)).Revision) + $([System.Version]::Parse($(TkVersion)).Major) + $([System.Version]::Parse($(TkVersion)).Minor) + $([System.Version]::Parse($(TkVersion)).Build) + 
$([System.Version]::Parse($(TkVersion)).Revision) + $([System.Version]::Parse($(TixVersion)).Major) + $([System.Version]::Parse($(TixVersion)).Minor) + $([System.Version]::Parse($(TixVersion)).Build) + $([System.Version]::Parse($(TixVersion)).Revision) + $(ExternalsDir)tcl-core-$(TclVersion)\ + $(ExternalsDir)tk-$(TkVersion)\ + $(ExternalsDir)tix-$(TixVersion)\ + $(ExternalsDir)tcltk-$(TclVersion)\$(ArchName)\ $(tcltkDir)\bin\tclsh$(TclMajorVersion)$(TclMinorVersion)t.exe $(tcltkDir)\..\win32\bin\tclsh$(TclMajorVersion)$(TclMinorVersion)t.exe From webhook-mailer at python.org Fri Feb 3 13:54:51 2023 From: webhook-mailer at python.org (zooba) Date: Fri, 03 Feb 2023 18:54:51 -0000 Subject: [Python-checkins] gh-101522: Allow overriding Windows dependencies versions and paths using MSBuild properties (GH-101523) Message-ID: https://github.com/python/cpython/commit/898de13f914a7db93390bfd6277155b9a7a2c565 commit: 898de13f914a7db93390bfd6277155b9a7a2c565 branch: 3.11 author: Steve Dower committer: zooba date: 2023-02-03T18:54:39Z summary: gh-101522: Allow overriding Windows dependencies versions and paths using MSBuild properties (GH-101523) files: A Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst M PCbuild/python.props M PCbuild/tcltk.props diff --git a/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst b/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst new file mode 100644 index 000000000000..2e7f9029e9ee --- /dev/null +++ b/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst @@ -0,0 +1,2 @@ +Allow overriding Windows dependencies versions and paths using MSBuild +properties. diff --git a/PCbuild/python.props b/PCbuild/python.props index 971c1490d9f1..57360e57baba 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -56,23 +56,32 @@ ..\\.. ..\\..\\.. 
+ - - $(EXTERNALS_DIR) + + + $(EXTERNALS_DIR) $([System.IO.Path]::GetFullPath(`$(PySourcePath)externals`)) $(ExternalsDir)\ - $(ExternalsDir)sqlite-3.39.4.0\ - $(ExternalsDir)bzip2-1.0.8\ - $(ExternalsDir)xz-5.2.5\ - $(ExternalsDir)libffi-3.4.3\ - $(ExternalsDir)libffi-3.4.3\$(ArchName)\ - $(libffiOutDir)include - $(ExternalsDir)openssl-1.1.1s\ - $(ExternalsDir)openssl-bin-1.1.1s\$(ArchName)\ - $(opensslOutDir)include - $(ExternalsDir)\nasm-2.11.06\ - $(ExternalsDir)\zlib-1.2.13\ - + + + + + + $(ExternalsDir)sqlite-3.39.4.0\ + $(ExternalsDir)bzip2-1.0.8\ + $(ExternalsDir)xz-5.2.5\ + $(ExternalsDir)libffi-3.4.3\ + $(libffiDir)$(ArchName)\ + $(libffiOutDir)include + $(ExternalsDir)openssl-1.1.1s\ + $(ExternalsDir)openssl-bin-1.1.1s\$(ArchName)\ + $(opensslOutDir)include + $(ExternalsDir)\nasm-2.11.06\ + $(ExternalsDir)\zlib-1.2.13\ + + + _d diff --git a/PCbuild/tcltk.props b/PCbuild/tcltk.props index 7fd43e8279e8..cd54b2567b23 100644 --- a/PCbuild/tcltk.props +++ b/PCbuild/tcltk.props @@ -2,22 +2,25 @@ - 8 - 6 - 12 - 1 - $(TclMajorVersion) - $(TclMinorVersion) - $(TclPatchLevel) - $(TclRevision) - 8 - 4 - 3 - 6 - $(ExternalsDir)tcl-core-$(TclMajorVersion).$(TclMinorVersion).$(TclPatchLevel).$(TclRevision)\ - $(ExternalsDir)tk-$(TkMajorVersion).$(TkMinorVersion).$(TkPatchLevel).$(TkRevision)\ - $(ExternalsDir)tix-$(TixMajorVersion).$(TixMinorVersion).$(TixPatchLevel).$(TixRevision)\ - $(ExternalsDir)tcltk-$(TclMajorVersion).$(TclMinorVersion).$(TclPatchLevel).$(TclRevision)\$(ArchName)\ + 8.6.12.1 + $(TclVersion) + 8.4.3.6 + $([System.Version]::Parse($(TclVersion)).Major) + $([System.Version]::Parse($(TclVersion)).Minor) + $([System.Version]::Parse($(TclVersion)).Build) + $([System.Version]::Parse($(TclVersion)).Revision) + $([System.Version]::Parse($(TkVersion)).Major) + $([System.Version]::Parse($(TkVersion)).Minor) + $([System.Version]::Parse($(TkVersion)).Build) + $([System.Version]::Parse($(TkVersion)).Revision) + $([System.Version]::Parse($(TixVersion)).Major) + $([System.Version]::Parse($(TixVersion)).Minor) + $([System.Version]::Parse($(TixVersion)).Build) + $([System.Version]::Parse($(TixVersion)).Revision) + $(ExternalsDir)tcl-core-$(TclVersion)\ + $(ExternalsDir)tk-$(TkVersion)\ + $(ExternalsDir)tix-$(TixVersion)\ + $(ExternalsDir)tcltk-$(TclVersion)\$(ArchName)\ $(tcltkDir)\bin\tclsh$(TclMajorVersion)$(TclMinorVersion)t.exe $(tcltkDir)\..\win32\bin\tclsh$(TclMajorVersion)$(TclMinorVersion)t.exe From webhook-mailer at python.org Fri Feb 3 13:54:54 2023 From: webhook-mailer at python.org (zooba) Date: Fri, 03 Feb 2023 18:54:54 -0000 Subject: [Python-checkins] gh-101522: Allow overriding Windows dependencies versions and paths using MSBuild properties (GH-101523) Message-ID: https://github.com/python/cpython/commit/3139ea33ed84190e079d6ff4859baccdad778dae commit: 3139ea33ed84190e079d6ff4859baccdad778dae branch: 3.10 author: Steve Dower committer: zooba date: 2023-02-03T18:54:48Z summary: gh-101522: Allow overriding Windows dependencies versions and paths using MSBuild properties (GH-101523) files: A Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst M PCbuild/python.props M PCbuild/tcltk.props diff --git a/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst b/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst new file mode 100644 index 000000000000..2e7f9029e9ee --- /dev/null +++ b/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst @@ -0,0 +1,2 @@ +Allow overriding Windows dependencies versions and paths using 
MSBuild +properties. diff --git a/PCbuild/python.props b/PCbuild/python.props index f9dd872a93aa..1f9b0e773634 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -52,23 +52,32 @@ $(PySourcePath)PCbuild\$(ArchName)\ $(BuildPath)\ $(BuildPath)instrumented\ - - - $(EXTERNALS_DIR) + + + + + $(EXTERNALS_DIR) $([System.IO.Path]::GetFullPath(`$(PySourcePath)externals`)) $(ExternalsDir)\ - $(ExternalsDir)sqlite-3.39.4.0\ - $(ExternalsDir)bzip2-1.0.8\ - $(ExternalsDir)xz-5.2.5\ - $(ExternalsDir)libffi-3.3.0\ - $(ExternalsDir)libffi-3.3.0\$(ArchName)\ - $(libffiOutDir)include - $(ExternalsDir)openssl-1.1.1s\ - $(ExternalsDir)openssl-bin-1.1.1s\$(ArchName)\ - $(opensslOutDir)include - $(ExternalsDir)\nasm-2.11.06\ - $(ExternalsDir)\zlib-1.2.13\ - + + + + + + $(ExternalsDir)sqlite-3.39.4.0\ + $(ExternalsDir)bzip2-1.0.8\ + $(ExternalsDir)xz-5.2.5\ + $(ExternalsDir)libffi-3.3.0\ + $(libffiDir)$(ArchName)\ + $(libffiOutDir)include + $(ExternalsDir)openssl-1.1.1s\ + $(ExternalsDir)openssl-bin-1.1.1s\$(ArchName)\ + $(opensslOutDir)include + $(ExternalsDir)\nasm-2.11.06\ + $(ExternalsDir)\zlib-1.2.13\ + + + _d diff --git a/PCbuild/tcltk.props b/PCbuild/tcltk.props index 16dc35d45ebd..6ff152970c75 100644 --- a/PCbuild/tcltk.props +++ b/PCbuild/tcltk.props @@ -2,22 +2,26 @@ - 8 - 6 - 12 - 0 - $(TclMajorVersion) - $(TclMinorVersion) - $(TclPatchLevel) - $(TclRevision) - 8 - 4 - 3 - 6 - $(ExternalsDir)tcl-core-$(TclMajorVersion).$(TclMinorVersion).$(TclPatchLevel).$(TclRevision)\ - $(ExternalsDir)tk-$(TkMajorVersion).$(TkMinorVersion).$(TkPatchLevel).$(TkRevision)\ - $(ExternalsDir)tix-$(TixMajorVersion).$(TixMinorVersion).$(TixPatchLevel).$(TixRevision)\ - $(ExternalsDir)tcltk-$(TclMajorVersion).$(TclMinorVersion).$(TclPatchLevel).$(TclRevision)\$(ArchName)\ + 8.6.12.0 + $(TclVersion) + 8.4.3.6 + $([System.Version]::Parse($(TclVersion)).Major) + $([System.Version]::Parse($(TclVersion)).Minor) + $([System.Version]::Parse($(TclVersion)).Build) + $([System.Version]::Parse($(TclVersion)).Revision) + $([System.Version]::Parse($(TkVersion)).Major) + $([System.Version]::Parse($(TkVersion)).Minor) + $([System.Version]::Parse($(TkVersion)).Build) + $([System.Version]::Parse($(TkVersion)).Revision) + $([System.Version]::Parse($(TixVersion)).Major) + $([System.Version]::Parse($(TixVersion)).Minor) + $([System.Version]::Parse($(TixVersion)).Build) + $([System.Version]::Parse($(TixVersion)).Revision) + $(ExternalsDir)tcl-core-$(TclVersion)\ + $(ExternalsDir)tk-$(TkVersion)\ + $(ExternalsDir)tix-$(TixVersion)\ + $(ExternalsDir)tcltk-$(TclVersion)\$(ArchName)\ + tcl$(TclMajorVersion)$(TclMinorVersion)t$(TclDebugExt).dll tcl$(TclMajorVersion)$(TclMinorVersion)t$(TclDebugExt).lib From webhook-mailer at python.org Fri Feb 3 18:20:57 2023 From: webhook-mailer at python.org (gpshead) Date: Fri, 03 Feb 2023 23:20:57 -0000 Subject: [Python-checkins] gh-84559: Remove the new multiprocessing warning, too disruptive. (#101551) Message-ID: https://github.com/python/cpython/commit/d4c410f0f922683f38c9d435923939d037fbd8c2 commit: d4c410f0f922683f38c9d435923939d037fbd8c2 branch: main author: Gregory P. Smith committer: gpshead date: 2023-02-03T15:20:46-08:00 summary: gh-84559: Remove the new multiprocessing warning, too disruptive. (#101551) This reverts the core of #100618 while leaving relevant documentation improvements and minor refactorings in place. 
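For reference, the guidance that this change keeps in the documentation is to specify the multiprocessing start method explicitly rather than relying on the platform default. A minimal, illustrative sketch of that pattern follows; it is not taken from the commit itself and simply uses the documented get_context / set_start_method / mp_context APIs:

    import concurrent.futures
    import multiprocessing

    def square(x):
        return x * x

    if __name__ == "__main__":
        # Explicitly choose a start method instead of relying on the
        # platform default (currently 'fork' on most POSIX systems).
        ctx = multiprocessing.get_context("spawn")

        # ProcessPoolExecutor accepts the context via mp_context=...
        with concurrent.futures.ProcessPoolExecutor(mp_context=ctx) as pool:
            print(list(pool.map(square, range(5))))

        # Alternatively, set the process-wide default once:
        # multiprocessing.set_start_method("spawn")

On Windows and macOS the default start method is already 'spawn', so only the other POSIX platforms are affected by the eventual default change discussed above.
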
files: D Lib/test/test_multiprocessing_defaults.py D Misc/NEWS.d/next/Library/2023-01-01-01-19-33.gh-issue-84559.zEjsEJ.rst M Doc/library/concurrent.futures.rst M Doc/library/multiprocessing.rst M Doc/whatsnew/3.12.rst M Lib/concurrent/futures/process.py M Lib/multiprocessing/context.py M Lib/test/_test_multiprocessing.py M Lib/test/test_concurrent_futures.py M Lib/test/test_logging.py M Lib/test/test_re.py diff --git a/Doc/library/concurrent.futures.rst b/Doc/library/concurrent.futures.rst index 10cffdaee0bb..c543c849585b 100644 --- a/Doc/library/concurrent.futures.rst +++ b/Doc/library/concurrent.futures.rst @@ -281,18 +281,18 @@ to a :class:`ProcessPoolExecutor` will result in deadlock. Added the *initializer* and *initargs* arguments. + .. note:: + The default :mod:`multiprocessing` start method + (see :ref:`multiprocessing-start-methods`) will change away from + *fork* in Python 3.14. Code that requires *fork* be used for their + :class:`ProcessPoolExecutor` should explicitly specify that by + passing a ``mp_context=multiprocessing.get_context("fork")`` + parameter. + .. versionchanged:: 3.11 The *max_tasks_per_child* argument was added to allow users to control the lifetime of workers in the pool. - .. versionchanged:: 3.12 - The implicit use of the :mod:`multiprocessing` *fork* start method as a - platform default (see :ref:`multiprocessing-start-methods`) now raises a - :exc:`DeprecationWarning`. The default will change in Python 3.14. - Code that requires *fork* should explicitly specify that when creating - their :class:`ProcessPoolExecutor` by passing a - ``mp_context=multiprocessing.get_context('fork')`` parameter. - .. _processpoolexecutor-example: ProcessPoolExecutor Example diff --git a/Doc/library/multiprocessing.rst b/Doc/library/multiprocessing.rst index c60b229ae2d0..0ec47bb956a9 100644 --- a/Doc/library/multiprocessing.rst +++ b/Doc/library/multiprocessing.rst @@ -126,6 +126,11 @@ to start a process. These *start methods* are Available on POSIX systems. Currently the default on POSIX except macOS. + .. note:: + The default start method will change away from *fork* in Python 3.14. + Code that requires *fork* should explicitly specify that via + :func:`get_context` or :func:`set_start_method`. + *forkserver* When the program starts and selects the *forkserver* start method, a server process is spawned. From then on, whenever a new process @@ -138,11 +143,6 @@ to start a process. These *start methods* are Available on POSIX platforms which support passing file descriptors over Unix pipes such as Linux. -.. versionchanged:: 3.12 - Implicit use of the *fork* start method as the default now raises a - :exc:`DeprecationWarning`. Code that requires it should explicitly - specify *fork* via :func:`get_context` or :func:`set_start_method`. - The default will change away from *fork* in 3.14. .. versionchanged:: 3.8 @@ -1107,6 +1107,7 @@ Miscellaneous launched (before creating a :class:`Pool` or starting a :class:`Process`). Only meaningful when using the ``'forkserver'`` start method. + See :ref:`multiprocessing-start-methods`. .. versionadded:: 3.4 diff --git a/Doc/whatsnew/3.12.rst b/Doc/whatsnew/3.12.rst index e675fada339a..0c5a70b64574 100644 --- a/Doc/whatsnew/3.12.rst +++ b/Doc/whatsnew/3.12.rst @@ -440,12 +440,6 @@ Deprecated warning at compile time. This field will be removed in Python 3.14. (Contributed by Ramvikrams and Kumar Aditya in :gh:`101193`. PEP by Ken Jin.) 
-* Use of the implicit default ``'fork'`` start method for - :mod:`multiprocessing` and :class:`concurrent.futures.ProcessPoolExecutor` - now emits a :exc:`DeprecationWarning` on Linux and other non-macOS POSIX - systems. Avoid this by explicitly specifying a start method. - See :ref:`multiprocessing-start-methods`. - Pending Removal in Python 3.13 ------------------------------ @@ -510,9 +504,13 @@ Pending Removal in Python 3.14 * Testing the truth value of an :class:`xml.etree.ElementTree.Element` is deprecated and will raise an exception in Python 3.14. -* The default :mod:`multiprocessing` start method will change to one of either - ``'forkserver'`` or ``'spawn'`` on all platforms for which ``'fork'`` remains - the default per :gh:`84559`. +* The default :mod:`multiprocessing` start method will change to a safer one on + Linux, BSDs, and other non-macOS POSIX platforms where ``'fork'`` is currently + the default (:gh:`84559`). Adding a runtime warning about this was deemed too + disruptive as the majority of code is not expected to care. Use the + :func:`~multiprocessing.get_context` or + :func:`~multiprocessing.set_start_method` APIs to explicitly specify when + your code *requires* ``'fork'``. See :ref:`multiprocessing-start-methods`. Pending Removal in Future Versions ---------------------------------- diff --git a/Lib/concurrent/futures/process.py b/Lib/concurrent/futures/process.py index 257dd02fbc6c..bee162430a6f 100644 --- a/Lib/concurrent/futures/process.py +++ b/Lib/concurrent/futures/process.py @@ -57,7 +57,6 @@ import itertools import sys from traceback import format_exception -import warnings _threads_wakeups = weakref.WeakKeyDictionary() @@ -651,22 +650,6 @@ def __init__(self, max_workers=None, mp_context=None, mp_context = mp.get_context("spawn") else: mp_context = mp.get_context() - if (mp_context.get_start_method() == "fork" and - mp_context == mp.context._default_context._default_context): - warnings.warn( - "The default multiprocessing start method will change " - "away from 'fork' in Python >= 3.14, per GH-84559. " - "ProcessPoolExecutor uses multiprocessing. " - "If your application requires the 'fork' multiprocessing " - "start method, explicitly specify that by passing a " - "mp_context= parameter. " - "The safest start method is 'spawn'.", - category=mp.context.DefaultForkDeprecationWarning, - stacklevel=2, - ) - # Avoid the equivalent warning from multiprocessing itself via - # a non-default fork context. - mp_context = mp.get_context("fork") self._mp_context = mp_context # https://github.com/python/cpython/issues/90622 diff --git a/Lib/multiprocessing/context.py b/Lib/multiprocessing/context.py index 010a920540e8..de8a264829df 100644 --- a/Lib/multiprocessing/context.py +++ b/Lib/multiprocessing/context.py @@ -23,9 +23,6 @@ class TimeoutError(ProcessError): class AuthenticationError(ProcessError): pass -class DefaultForkDeprecationWarning(DeprecationWarning): - pass - # # Base type for contexts. Bound methods of an instance of this type are included in __all__ of __init__.py # @@ -284,23 +281,6 @@ def _Popen(process_obj): from .popen_fork import Popen return Popen(process_obj) - _warn_package_prefixes = (os.path.dirname(__file__),) - - class _DeprecatedForkProcess(ForkProcess): - @classmethod - def _Popen(cls, process_obj): - import warnings - warnings.warn( - "The default multiprocessing start method will change " - "away from 'fork' in Python >= 3.14, per GH-84559. 
" - "Use multiprocessing.get_context(X) or .set_start_method(X) to " - "explicitly specify it when your application requires 'fork'. " - "The safest start method is 'spawn'.", - category=DefaultForkDeprecationWarning, - skip_file_prefixes=_warn_package_prefixes, - ) - return super()._Popen(process_obj) - class SpawnProcess(process.BaseProcess): _start_method = 'spawn' @staticmethod @@ -324,9 +304,6 @@ class ForkContext(BaseContext): _name = 'fork' Process = ForkProcess - class _DefaultForkContext(ForkContext): - Process = _DeprecatedForkProcess - class SpawnContext(BaseContext): _name = 'spawn' Process = SpawnProcess @@ -342,16 +319,13 @@ def _check_available(self): 'fork': ForkContext(), 'spawn': SpawnContext(), 'forkserver': ForkServerContext(), - # Remove None and _DefaultForkContext() when changing the default - # in 3.14 for https://github.com/python/cpython/issues/84559. - None: _DefaultForkContext(), } if sys.platform == 'darwin': # bpo-33725: running arbitrary code after fork() is no longer reliable # on macOS since macOS 10.14 (Mojave). Use spawn by default instead. _default_context = DefaultContext(_concrete_contexts['spawn']) else: - _default_context = DefaultContext(_concrete_contexts[None]) + _default_context = DefaultContext(_concrete_contexts['fork']) else: diff --git a/Lib/test/_test_multiprocessing.py b/Lib/test/_test_multiprocessing.py index e4a60a4d6746..9a2db24b4bd5 100644 --- a/Lib/test/_test_multiprocessing.py +++ b/Lib/test/_test_multiprocessing.py @@ -4098,10 +4098,9 @@ def test_shared_memory_SharedMemoryServer_ignores_sigint(self): def test_shared_memory_SharedMemoryManager_reuses_resource_tracker(self): # bpo-36867: test that a SharedMemoryManager uses the # same resource_tracker process as its parent. - cmd = f'''if 1: + cmd = '''if 1: from multiprocessing.managers import SharedMemoryManager - from multiprocessing import set_start_method - set_start_method({multiprocessing.get_start_method()!r}) + smm = SharedMemoryManager() smm.start() diff --git a/Lib/test/test_concurrent_futures.py b/Lib/test/test_concurrent_futures.py index 4493cd312528..b3520ae3994e 100644 --- a/Lib/test/test_concurrent_futures.py +++ b/Lib/test/test_concurrent_futures.py @@ -18,7 +18,6 @@ import threading import time import unittest -import warnings import weakref from pickle import PicklingError @@ -572,24 +571,6 @@ def test_shutdown_no_wait(self): assert all([r == abs(v) for r, v in zip(res, range(-5, 5))]) - at unittest.skipIf(mp.get_all_start_methods()[0] != "fork", "non-fork default.") -class ProcessPoolExecutorDefaultForkWarning(unittest.TestCase): - def test_fork_default_warns(self): - with self.assertWarns(mp.context.DefaultForkDeprecationWarning): - with futures.ProcessPoolExecutor(2): - pass - - def test_explicit_fork_does_not_warn(self): - with warnings.catch_warnings(record=True) as ws: - warnings.simplefilter("ignore") - warnings.filterwarnings( - 'always', category=mp.context.DefaultForkDeprecationWarning) - ctx = mp.get_context("fork") # Non-default fork context. 
- with futures.ProcessPoolExecutor(2, mp_context=ctx): - pass - self.assertEqual(len(ws), 0, msg=[str(x) for x in ws]) - - create_executor_tests(ProcessPoolShutdownTest, executor_mixins=(ProcessPoolForkMixin, ProcessPoolForkserverMixin, diff --git a/Lib/test/test_logging.py b/Lib/test/test_logging.py index 8a12d570f26f..072056d37221 100644 --- a/Lib/test/test_logging.py +++ b/Lib/test/test_logging.py @@ -4759,9 +4759,8 @@ def test_multiprocessing(self): # In other processes, processName is correct when multiprocessing in imported, # but it is (incorrectly) defaulted to 'MainProcess' otherwise (bpo-38762). import multiprocessing - mp = multiprocessing.get_context('spawn') - parent_conn, child_conn = mp.Pipe() - p = mp.Process( + parent_conn, child_conn = multiprocessing.Pipe() + p = multiprocessing.Process( target=self._extract_logrecord_process_name, args=(2, LOG_MULTI_PROCESSING, child_conn,) ) diff --git a/Lib/test/test_multiprocessing_defaults.py b/Lib/test/test_multiprocessing_defaults.py deleted file mode 100644 index 7ea872fef20d..000000000000 --- a/Lib/test/test_multiprocessing_defaults.py +++ /dev/null @@ -1,85 +0,0 @@ -"""Test default behavior of multiprocessing.""" - -from inspect import currentframe, getframeinfo -import multiprocessing -from multiprocessing.context import DefaultForkDeprecationWarning -import sys -from test.support import import_helper, threading_helper -import unittest -import warnings - -# Skip tests if _multiprocessing wasn't built. -import_helper.import_module('_multiprocessing') - - -def do_nothing(): - pass - - -# Process has the same API as Thread so this helper works. -join_process = threading_helper.join_thread - - -class DefaultWarningsTest(unittest.TestCase): - - @unittest.skipIf(sys.platform in ('win32', 'darwin'), - 'The default is not "fork" on Windows or macOS.') - def setUp(self): - self.assertEqual(multiprocessing.get_start_method(), 'fork') - self.assertIsInstance(multiprocessing.get_context(), - multiprocessing.context._DefaultForkContext) - - def test_default_fork_start_method_warning_process(self): - with warnings.catch_warnings(record=True) as ws: - warnings.simplefilter('ignore') - warnings.filterwarnings('always', category=DefaultForkDeprecationWarning) - process = multiprocessing.Process(target=do_nothing) - process.start() # warning should point here. - join_process(process) - self.assertIsInstance(ws[0].message, DefaultForkDeprecationWarning) - self.assertIn(__file__, ws[0].filename) - self.assertEqual(getframeinfo(currentframe()).lineno-4, ws[0].lineno) - self.assertIn("'fork'", str(ws[0].message)) - self.assertIn("get_context", str(ws[0].message)) - self.assertEqual(len(ws), 1, msg=[str(x) for x in ws]) - - def test_default_fork_start_method_warning_pool(self): - with warnings.catch_warnings(record=True) as ws: - warnings.simplefilter('ignore') - warnings.filterwarnings('always', category=DefaultForkDeprecationWarning) - pool = multiprocessing.Pool(1) # warning should point here. 
- pool.terminate() - pool.join() - self.assertIsInstance(ws[0].message, DefaultForkDeprecationWarning) - self.assertIn(__file__, ws[0].filename) - self.assertEqual(getframeinfo(currentframe()).lineno-5, ws[0].lineno) - self.assertIn("'fork'", str(ws[0].message)) - self.assertIn("get_context", str(ws[0].message)) - self.assertEqual(len(ws), 1, msg=[str(x) for x in ws]) - - def test_default_fork_start_method_warning_manager(self): - with warnings.catch_warnings(record=True) as ws: - warnings.simplefilter('ignore') - warnings.filterwarnings('always', category=DefaultForkDeprecationWarning) - manager = multiprocessing.Manager() # warning should point here. - manager.shutdown() - self.assertIsInstance(ws[0].message, DefaultForkDeprecationWarning) - self.assertIn(__file__, ws[0].filename) - self.assertEqual(getframeinfo(currentframe()).lineno-4, ws[0].lineno) - self.assertIn("'fork'", str(ws[0].message)) - self.assertIn("get_context", str(ws[0].message)) - self.assertEqual(len(ws), 1, msg=[str(x) for x in ws]) - - def test_no_mp_warning_when_using_explicit_fork_context(self): - with warnings.catch_warnings(record=True) as ws: - warnings.simplefilter('ignore') - warnings.filterwarnings('always', category=DefaultForkDeprecationWarning) - fork_mp = multiprocessing.get_context('fork') - pool = fork_mp.Pool(1) - pool.terminate() - pool.join() - self.assertEqual(len(ws), 0, msg=[str(x) for x in ws]) - - -if __name__ == '__main__': - unittest.main() diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py index eacb1a7c82a5..11628a236ade 100644 --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -2431,8 +2431,7 @@ def test_regression_gh94675(self): input_js = '''a(function() { /////////////////////////////////////////////////////////////////// });''' - mp = multiprocessing.get_context('spawn') - p = mp.Process(target=pattern.sub, args=('', input_js)) + p = multiprocessing.Process(target=pattern.sub, args=('', input_js)) p.start() p.join(SHORT_TIMEOUT) try: diff --git a/Misc/NEWS.d/next/Library/2023-01-01-01-19-33.gh-issue-84559.zEjsEJ.rst b/Misc/NEWS.d/next/Library/2023-01-01-01-19-33.gh-issue-84559.zEjsEJ.rst deleted file mode 100644 index 3793e0f1fddb..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-01-01-19-33.gh-issue-84559.zEjsEJ.rst +++ /dev/null @@ -1,11 +0,0 @@ -The :mod:`multiprocessing` module and -:class:`concurrent.futures.ProcessPoolExecutor` will emit a -:exc:`DeprecationWarning` on Linux and other non-macOS POSIX systems when -the default multiprocessing start method of ``'fork'`` is used implicitly -rather than being explicitly specified through a -:func:`multiprocessing.get_context` context. - -This is in preparation for default start method to change in Python 3.14 to -a default that is safe for multithreaded applications. - -Windows and macOS are unaffected as their default start method is ``spawn``. From webhook-mailer at python.org Fri Feb 3 20:14:53 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Sat, 04 Feb 2023 01:14:53 -0000 Subject: [Python-checkins] gh-101524: Split Up the _xxsubinterpreters Module (gh-101526) Message-ID: https://github.com/python/cpython/commit/c67b00534abfeca83016a00818cf1fd949613d6b commit: c67b00534abfeca83016a00818cf1fd949613d6b branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-03T18:14:43-07:00 summary: gh-101524: Split Up the _xxsubinterpreters Module (gh-101526) This is step 1 in potentially dropping all the "channel"-related code. Channels have already been removed from PEP 554. 
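For reference, after this split the channel primitives live in the new _xxinterpchannels extension while interpreter management stays in _xxsubinterpreters. The sketch below mirrors the calls exercised by the new test module; it assumes a CPython build that ships both extension modules, and since both are private and experimental the names and behaviour may change:

    from textwrap import dedent
    import _xxsubinterpreters as interpreters
    import _xxinterpchannels as channels

    # Channels are now created through _xxinterpchannels rather than
    # through _xxsubinterpreters.
    cid = channels.create()

    # Run code in a subinterpreter that sends a value over the channel.
    interp = interpreters.create()
    interpreters.run_string(interp, dedent(f"""
        import _xxinterpchannels as channels
        channels.send({cid}, b'spam')
        """))

    # Receive in the calling interpreter, then clean up.
    print(channels.recv(cid))   # b'spam'
    channels.destroy(cid)
    interpreters.destroy(interp)

The higher-level test.support.interpreters wrapper updated in this commit builds its RecvChannel and SendChannel objects on top of these same low-level calls.
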
https://github.com/python/cpython/issues/101524 files: A Lib/test/test__xxinterpchannels.py A Modules/_xxinterpchannelsmodule.c M Lib/test/support/interpreters.py M Lib/test/test__xxsubinterpreters.py M Lib/test/test_interpreters.py M Modules/Setup M Modules/Setup.stdlib.in M Modules/_testcapimodule.c M Modules/_xxsubinterpretersmodule.c M PC/config.c M PCbuild/pythoncore.vcxproj M PCbuild/pythoncore.vcxproj.filters M Tools/build/generate_stdlib_module_names.py M configure M configure.ac diff --git a/Lib/test/support/interpreters.py b/Lib/test/support/interpreters.py index 2935708f9df1..eeff3abe0324 100644 --- a/Lib/test/support/interpreters.py +++ b/Lib/test/support/interpreters.py @@ -2,11 +2,12 @@ import time import _xxsubinterpreters as _interpreters +import _xxinterpchannels as _channels # aliases: -from _xxsubinterpreters import ( +from _xxsubinterpreters import is_shareable +from _xxinterpchannels import ( ChannelError, ChannelNotFoundError, ChannelEmptyError, - is_shareable, ) @@ -102,7 +103,7 @@ def create_channel(): The channel may be used to pass data safely between interpreters. """ - cid = _interpreters.channel_create() + cid = _channels.create() recv, send = RecvChannel(cid), SendChannel(cid) return recv, send @@ -110,14 +111,14 @@ def create_channel(): def list_all_channels(): """Return a list of (recv, send) for all open channels.""" return [(RecvChannel(cid), SendChannel(cid)) - for cid in _interpreters.channel_list_all()] + for cid in _channels.list_all()] class _ChannelEnd: """The base class for RecvChannel and SendChannel.""" def __init__(self, id): - if not isinstance(id, (int, _interpreters.ChannelID)): + if not isinstance(id, (int, _channels.ChannelID)): raise TypeError(f'id must be an int, got {id!r}') self._id = id @@ -152,10 +153,10 @@ def recv(self, *, _sentinel=object(), _delay=10 / 1000): # 10 milliseconds This blocks until an object has been sent, if none have been sent already. """ - obj = _interpreters.channel_recv(self._id, _sentinel) + obj = _channels.recv(self._id, _sentinel) while obj is _sentinel: time.sleep(_delay) - obj = _interpreters.channel_recv(self._id, _sentinel) + obj = _channels.recv(self._id, _sentinel) return obj def recv_nowait(self, default=_NOT_SET): @@ -166,9 +167,9 @@ def recv_nowait(self, default=_NOT_SET): is the same as recv(). """ if default is _NOT_SET: - return _interpreters.channel_recv(self._id) + return _channels.recv(self._id) else: - return _interpreters.channel_recv(self._id, default) + return _channels.recv(self._id, default) class SendChannel(_ChannelEnd): @@ -179,7 +180,7 @@ def send(self, obj): This blocks until the object is received. """ - _interpreters.channel_send(self._id, obj) + _channels.send(self._id, obj) # XXX We are missing a low-level channel_send_wait(). # See bpo-32604 and gh-19829. # Until that shows up we fake it: @@ -194,4 +195,4 @@ def send_nowait(self, obj): # XXX Note that at the moment channel_send() only ever returns # None. This should be fixed when channel_send_wait() is added. # See bpo-32604 and gh-19829. 
- return _interpreters.channel_send(self._id, obj) + return _channels.send(self._id, obj) diff --git a/Lib/test/test__xxinterpchannels.py b/Lib/test/test__xxinterpchannels.py new file mode 100644 index 000000000000..03bb5c80b8da --- /dev/null +++ b/Lib/test/test__xxinterpchannels.py @@ -0,0 +1,1541 @@ +from collections import namedtuple +import contextlib +import os +import sys +from textwrap import dedent +import threading +import time +import unittest + +from test.support import import_helper + +from test.test__xxsubinterpreters import ( + interpreters, + _run_output, + clean_up_interpreters, +) + + +channels = import_helper.import_module('_xxinterpchannels') + + +################################## +# helpers + +#@contextmanager +#def run_threaded(id, source, **shared): +# def run(): +# run_interp(id, source, **shared) +# t = threading.Thread(target=run) +# t.start() +# yield +# t.join() + + +def run_interp(id, source, **shared): + _run_interp(id, source, shared) + + +def _run_interp(id, source, shared, _mainns={}): + source = dedent(source) + main = interpreters.get_main() + if main == id: + if interpreters.get_current() != main: + raise RuntimeError + # XXX Run a func? + exec(source, _mainns) + else: + interpreters.run_string(id, source, shared) + + +class Interpreter(namedtuple('Interpreter', 'name id')): + + @classmethod + def from_raw(cls, raw): + if isinstance(raw, cls): + return raw + elif isinstance(raw, str): + return cls(raw) + else: + raise NotImplementedError + + def __new__(cls, name=None, id=None): + main = interpreters.get_main() + if id == main: + if not name: + name = 'main' + elif name != 'main': + raise ValueError( + 'name mismatch (expected "main", got "{}")'.format(name)) + id = main + elif id is not None: + if not name: + name = 'interp' + elif name == 'main': + raise ValueError('name mismatch (unexpected "main")') + if not isinstance(id, interpreters.InterpreterID): + id = interpreters.InterpreterID(id) + elif not name or name == 'main': + name = 'main' + id = main + else: + id = interpreters.create() + self = super().__new__(cls, name, id) + return self + + +# XXX expect_channel_closed() is unnecessary once we improve exc propagation. 
+ + at contextlib.contextmanager +def expect_channel_closed(): + try: + yield + except channels.ChannelClosedError: + pass + else: + assert False, 'channel not closed' + + +class ChannelAction(namedtuple('ChannelAction', 'action end interp')): + + def __new__(cls, action, end=None, interp=None): + if not end: + end = 'both' + if not interp: + interp = 'main' + self = super().__new__(cls, action, end, interp) + return self + + def __init__(self, *args, **kwargs): + if self.action == 'use': + if self.end not in ('same', 'opposite', 'send', 'recv'): + raise ValueError(self.end) + elif self.action in ('close', 'force-close'): + if self.end not in ('both', 'same', 'opposite', 'send', 'recv'): + raise ValueError(self.end) + else: + raise ValueError(self.action) + if self.interp not in ('main', 'same', 'other', 'extra'): + raise ValueError(self.interp) + + def resolve_end(self, end): + if self.end == 'same': + return end + elif self.end == 'opposite': + return 'recv' if end == 'send' else 'send' + else: + return self.end + + def resolve_interp(self, interp, other, extra): + if self.interp == 'same': + return interp + elif self.interp == 'other': + if other is None: + raise RuntimeError + return other + elif self.interp == 'extra': + if extra is None: + raise RuntimeError + return extra + elif self.interp == 'main': + if interp.name == 'main': + return interp + elif other and other.name == 'main': + return other + else: + raise RuntimeError + # Per __init__(), there aren't any others. + + +class ChannelState(namedtuple('ChannelState', 'pending closed')): + + def __new__(cls, pending=0, *, closed=False): + self = super().__new__(cls, pending, closed) + return self + + def incr(self): + return type(self)(self.pending + 1, closed=self.closed) + + def decr(self): + return type(self)(self.pending - 1, closed=self.closed) + + def close(self, *, force=True): + if self.closed: + if not force or self.pending == 0: + return self + return type(self)(0 if force else self.pending, closed=True) + + +def run_action(cid, action, end, state, *, hideclosed=True): + if state.closed: + if action == 'use' and end == 'recv' and state.pending: + expectfail = False + else: + expectfail = True + else: + expectfail = False + + try: + result = _run_action(cid, action, end, state) + except channels.ChannelClosedError: + if not hideclosed and not expectfail: + raise + result = state.close() + else: + if expectfail: + raise ... 
# XXX + return result + + +def _run_action(cid, action, end, state): + if action == 'use': + if end == 'send': + channels.send(cid, b'spam') + return state.incr() + elif end == 'recv': + if not state.pending: + try: + channels.recv(cid) + except channels.ChannelEmptyError: + return state + else: + raise Exception('expected ChannelEmptyError') + else: + channels.recv(cid) + return state.decr() + else: + raise ValueError(end) + elif action == 'close': + kwargs = {} + if end in ('recv', 'send'): + kwargs[end] = True + channels.close(cid, **kwargs) + return state.close() + elif action == 'force-close': + kwargs = { + 'force': True, + } + if end in ('recv', 'send'): + kwargs[end] = True + channels.close(cid, **kwargs) + return state.close(force=True) + else: + raise ValueError(action) + + +def clean_up_channels(): + for cid in channels.list_all(): + try: + channels.destroy(cid) + except channels.ChannelNotFoundError: + pass # already destroyed + + +class TestBase(unittest.TestCase): + + def tearDown(self): + clean_up_channels() + clean_up_interpreters() + + +################################## +# channel tests + +class ChannelIDTests(TestBase): + + def test_default_kwargs(self): + cid = channels._channel_id(10, force=True) + + self.assertEqual(int(cid), 10) + self.assertEqual(cid.end, 'both') + + def test_with_kwargs(self): + cid = channels._channel_id(10, send=True, force=True) + self.assertEqual(cid.end, 'send') + + cid = channels._channel_id(10, send=True, recv=False, force=True) + self.assertEqual(cid.end, 'send') + + cid = channels._channel_id(10, recv=True, force=True) + self.assertEqual(cid.end, 'recv') + + cid = channels._channel_id(10, recv=True, send=False, force=True) + self.assertEqual(cid.end, 'recv') + + cid = channels._channel_id(10, send=True, recv=True, force=True) + self.assertEqual(cid.end, 'both') + + def test_coerce_id(self): + class Int(str): + def __index__(self): + return 10 + + cid = channels._channel_id(Int(), force=True) + self.assertEqual(int(cid), 10) + + def test_bad_id(self): + self.assertRaises(TypeError, channels._channel_id, object()) + self.assertRaises(TypeError, channels._channel_id, 10.0) + self.assertRaises(TypeError, channels._channel_id, '10') + self.assertRaises(TypeError, channels._channel_id, b'10') + self.assertRaises(ValueError, channels._channel_id, -1) + self.assertRaises(OverflowError, channels._channel_id, 2**64) + + def test_bad_kwargs(self): + with self.assertRaises(ValueError): + channels._channel_id(10, send=False, recv=False) + + def test_does_not_exist(self): + cid = channels.create() + with self.assertRaises(channels.ChannelNotFoundError): + channels._channel_id(int(cid) + 1) # unforced + + def test_str(self): + cid = channels._channel_id(10, force=True) + self.assertEqual(str(cid), '10') + + def test_repr(self): + cid = channels._channel_id(10, force=True) + self.assertEqual(repr(cid), 'ChannelID(10)') + + cid = channels._channel_id(10, send=True, force=True) + self.assertEqual(repr(cid), 'ChannelID(10, send=True)') + + cid = channels._channel_id(10, recv=True, force=True) + self.assertEqual(repr(cid), 'ChannelID(10, recv=True)') + + cid = channels._channel_id(10, send=True, recv=True, force=True) + self.assertEqual(repr(cid), 'ChannelID(10)') + + def test_equality(self): + cid1 = channels.create() + cid2 = channels._channel_id(int(cid1)) + cid3 = channels.create() + + self.assertTrue(cid1 == cid1) + self.assertTrue(cid1 == cid2) + self.assertTrue(cid1 == int(cid1)) + self.assertTrue(int(cid1) == cid1) + self.assertTrue(cid1 == 
float(int(cid1))) + self.assertTrue(float(int(cid1)) == cid1) + self.assertFalse(cid1 == float(int(cid1)) + 0.1) + self.assertFalse(cid1 == str(int(cid1))) + self.assertFalse(cid1 == 2**1000) + self.assertFalse(cid1 == float('inf')) + self.assertFalse(cid1 == 'spam') + self.assertFalse(cid1 == cid3) + + self.assertFalse(cid1 != cid1) + self.assertFalse(cid1 != cid2) + self.assertTrue(cid1 != cid3) + + def test_shareable(self): + chan = channels.create() + + obj = channels.create() + channels.send(chan, obj) + got = channels.recv(chan) + + self.assertEqual(got, obj) + self.assertIs(type(got), type(obj)) + # XXX Check the following in the channel tests? + #self.assertIsNot(got, obj) + + +class ChannelTests(TestBase): + + def test_create_cid(self): + cid = channels.create() + self.assertIsInstance(cid, channels.ChannelID) + + def test_sequential_ids(self): + before = channels.list_all() + id1 = channels.create() + id2 = channels.create() + id3 = channels.create() + after = channels.list_all() + + self.assertEqual(id2, int(id1) + 1) + self.assertEqual(id3, int(id2) + 1) + self.assertEqual(set(after) - set(before), {id1, id2, id3}) + + def test_ids_global(self): + id1 = interpreters.create() + out = _run_output(id1, dedent(""" + import _xxinterpchannels as _channels + cid = _channels.create() + print(cid) + """)) + cid1 = int(out.strip()) + + id2 = interpreters.create() + out = _run_output(id2, dedent(""" + import _xxinterpchannels as _channels + cid = _channels.create() + print(cid) + """)) + cid2 = int(out.strip()) + + self.assertEqual(cid2, int(cid1) + 1) + + def test_channel_list_interpreters_none(self): + """Test listing interpreters for a channel with no associations.""" + # Test for channel with no associated interpreters. + cid = channels.create() + send_interps = channels.list_interpreters(cid, send=True) + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(send_interps, []) + self.assertEqual(recv_interps, []) + + def test_channel_list_interpreters_basic(self): + """Test basic listing channel interpreters.""" + interp0 = interpreters.get_main() + cid = channels.create() + channels.send(cid, "send") + # Test for a channel that has one end associated to an interpreter. + send_interps = channels.list_interpreters(cid, send=True) + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(send_interps, [interp0]) + self.assertEqual(recv_interps, []) + + interp1 = interpreters.create() + _run_output(interp1, dedent(f""" + import _xxinterpchannels as _channels + obj = _channels.recv({cid}) + """)) + # Test for channel that has both ends associated to an interpreter. 
+ send_interps = channels.list_interpreters(cid, send=True) + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(send_interps, [interp0]) + self.assertEqual(recv_interps, [interp1]) + + def test_channel_list_interpreters_multiple(self): + """Test listing interpreters for a channel with many associations.""" + interp0 = interpreters.get_main() + interp1 = interpreters.create() + interp2 = interpreters.create() + interp3 = interpreters.create() + cid = channels.create() + + channels.send(cid, "send") + _run_output(interp1, dedent(f""" + import _xxinterpchannels as _channels + _channels.send({cid}, "send") + """)) + _run_output(interp2, dedent(f""" + import _xxinterpchannels as _channels + obj = _channels.recv({cid}) + """)) + _run_output(interp3, dedent(f""" + import _xxinterpchannels as _channels + obj = _channels.recv({cid}) + """)) + send_interps = channels.list_interpreters(cid, send=True) + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(set(send_interps), {interp0, interp1}) + self.assertEqual(set(recv_interps), {interp2, interp3}) + + def test_channel_list_interpreters_destroyed(self): + """Test listing channel interpreters with a destroyed interpreter.""" + interp0 = interpreters.get_main() + interp1 = interpreters.create() + cid = channels.create() + channels.send(cid, "send") + _run_output(interp1, dedent(f""" + import _xxinterpchannels as _channels + obj = _channels.recv({cid}) + """)) + # Should be one interpreter associated with each end. + send_interps = channels.list_interpreters(cid, send=True) + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(send_interps, [interp0]) + self.assertEqual(recv_interps, [interp1]) + + interpreters.destroy(interp1) + # Destroyed interpreter should not be listed. + send_interps = channels.list_interpreters(cid, send=True) + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(send_interps, [interp0]) + self.assertEqual(recv_interps, []) + + def test_channel_list_interpreters_released(self): + """Test listing channel interpreters with a released channel.""" + # Set up one channel with main interpreter on the send end and two + # subinterpreters on the receive end. + interp0 = interpreters.get_main() + interp1 = interpreters.create() + interp2 = interpreters.create() + cid = channels.create() + channels.send(cid, "data") + _run_output(interp1, dedent(f""" + import _xxinterpchannels as _channels + obj = _channels.recv({cid}) + """)) + channels.send(cid, "data") + _run_output(interp2, dedent(f""" + import _xxinterpchannels as _channels + obj = _channels.recv({cid}) + """)) + # Check the setup. + send_interps = channels.list_interpreters(cid, send=True) + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(len(send_interps), 1) + self.assertEqual(len(recv_interps), 2) + + # Release the main interpreter from the send end. + channels.release(cid, send=True) + # Send end should have no associated interpreters. + send_interps = channels.list_interpreters(cid, send=True) + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(len(send_interps), 0) + self.assertEqual(len(recv_interps), 2) + + # Release one of the subinterpreters from the receive end. + _run_output(interp2, dedent(f""" + import _xxinterpchannels as _channels + _channels.release({cid}) + """)) + # Receive end should have the released interpreter removed. 
+ send_interps = channels.list_interpreters(cid, send=True) + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(len(send_interps), 0) + self.assertEqual(recv_interps, [interp1]) + + def test_channel_list_interpreters_closed(self): + """Test listing channel interpreters with a closed channel.""" + interp0 = interpreters.get_main() + interp1 = interpreters.create() + cid = channels.create() + # Put something in the channel so that it's not empty. + channels.send(cid, "send") + + # Check initial state. + send_interps = channels.list_interpreters(cid, send=True) + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(len(send_interps), 1) + self.assertEqual(len(recv_interps), 0) + + # Force close the channel. + channels.close(cid, force=True) + # Both ends should raise an error. + with self.assertRaises(channels.ChannelClosedError): + channels.list_interpreters(cid, send=True) + with self.assertRaises(channels.ChannelClosedError): + channels.list_interpreters(cid, send=False) + + def test_channel_list_interpreters_closed_send_end(self): + """Test listing channel interpreters with a channel's send end closed.""" + interp0 = interpreters.get_main() + interp1 = interpreters.create() + cid = channels.create() + # Put something in the channel so that it's not empty. + channels.send(cid, "send") + + # Check initial state. + send_interps = channels.list_interpreters(cid, send=True) + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(len(send_interps), 1) + self.assertEqual(len(recv_interps), 0) + + # Close the send end of the channel. + channels.close(cid, send=True) + # Send end should raise an error. + with self.assertRaises(channels.ChannelClosedError): + channels.list_interpreters(cid, send=True) + # Receive end should not be closed (since channel is not empty). + recv_interps = channels.list_interpreters(cid, send=False) + self.assertEqual(len(recv_interps), 0) + + # Close the receive end of the channel from a subinterpreter. + _run_output(interp1, dedent(f""" + import _xxinterpchannels as _channels + _channels.close({cid}, force=True) + """)) + # Both ends should raise an error. 
+ with self.assertRaises(channels.ChannelClosedError): + channels.list_interpreters(cid, send=True) + with self.assertRaises(channels.ChannelClosedError): + channels.list_interpreters(cid, send=False) + + #################### + + def test_send_recv_main(self): + cid = channels.create() + orig = b'spam' + channels.send(cid, orig) + obj = channels.recv(cid) + + self.assertEqual(obj, orig) + self.assertIsNot(obj, orig) + + def test_send_recv_same_interpreter(self): + id1 = interpreters.create() + out = _run_output(id1, dedent(""" + import _xxinterpchannels as _channels + cid = _channels.create() + orig = b'spam' + _channels.send(cid, orig) + obj = _channels.recv(cid) + assert obj is not orig + assert obj == orig + """)) + + def test_send_recv_different_interpreters(self): + cid = channels.create() + id1 = interpreters.create() + out = _run_output(id1, dedent(f""" + import _xxinterpchannels as _channels + _channels.send({cid}, b'spam') + """)) + obj = channels.recv(cid) + + self.assertEqual(obj, b'spam') + + def test_send_recv_different_threads(self): + cid = channels.create() + + def f(): + while True: + try: + obj = channels.recv(cid) + break + except channels.ChannelEmptyError: + time.sleep(0.1) + channels.send(cid, obj) + t = threading.Thread(target=f) + t.start() + + channels.send(cid, b'spam') + t.join() + obj = channels.recv(cid) + + self.assertEqual(obj, b'spam') + + def test_send_recv_different_interpreters_and_threads(self): + cid = channels.create() + id1 = interpreters.create() + out = None + + def f(): + nonlocal out + out = _run_output(id1, dedent(f""" + import time + import _xxinterpchannels as _channels + while True: + try: + obj = _channels.recv({cid}) + break + except _channels.ChannelEmptyError: + time.sleep(0.1) + assert(obj == b'spam') + _channels.send({cid}, b'eggs') + """)) + t = threading.Thread(target=f) + t.start() + + channels.send(cid, b'spam') + t.join() + obj = channels.recv(cid) + + self.assertEqual(obj, b'eggs') + + def test_send_not_found(self): + with self.assertRaises(channels.ChannelNotFoundError): + channels.send(10, b'spam') + + def test_recv_not_found(self): + with self.assertRaises(channels.ChannelNotFoundError): + channels.recv(10) + + def test_recv_empty(self): + cid = channels.create() + with self.assertRaises(channels.ChannelEmptyError): + channels.recv(cid) + + def test_recv_default(self): + default = object() + cid = channels.create() + obj1 = channels.recv(cid, default) + channels.send(cid, None) + channels.send(cid, 1) + channels.send(cid, b'spam') + channels.send(cid, b'eggs') + obj2 = channels.recv(cid, default) + obj3 = channels.recv(cid, default) + obj4 = channels.recv(cid) + obj5 = channels.recv(cid, default) + obj6 = channels.recv(cid, default) + + self.assertIs(obj1, default) + self.assertIs(obj2, None) + self.assertEqual(obj3, 1) + self.assertEqual(obj4, b'spam') + self.assertEqual(obj5, b'eggs') + self.assertIs(obj6, default) + + def test_recv_sending_interp_destroyed(self): + cid = channels.create() + interp = interpreters.create() + interpreters.run_string(interp, dedent(f""" + import _xxinterpchannels as _channels + _channels.send({cid}, b'spam') + """)) + interpreters.destroy(interp) + + with self.assertRaisesRegex(RuntimeError, + 'unrecognized interpreter ID'): + channels.recv(cid) + + def test_allowed_types(self): + cid = channels.create() + objects = [ + None, + 'spam', + b'spam', + 42, + ] + for obj in objects: + with self.subTest(obj): + channels.send(cid, obj) + got = channels.recv(cid) + + self.assertEqual(got, obj) + 
self.assertIs(type(got), type(obj)) + # XXX Check the following? + #self.assertIsNot(got, obj) + # XXX What about between interpreters? + + def test_run_string_arg_unresolved(self): + cid = channels.create() + interp = interpreters.create() + + out = _run_output(interp, dedent(""" + import _xxinterpchannels as _channels + print(cid.end) + _channels.send(cid, b'spam') + """), + dict(cid=cid.send)) + obj = channels.recv(cid) + + self.assertEqual(obj, b'spam') + self.assertEqual(out.strip(), 'send') + + # XXX For now there is no high-level channel into which the + # sent channel ID can be converted... + # Note: this test caused crashes on some buildbots (bpo-33615). + @unittest.skip('disabled until high-level channels exist') + def test_run_string_arg_resolved(self): + cid = channels.create() + cid = channels._channel_id(cid, _resolve=True) + interp = interpreters.create() + + out = _run_output(interp, dedent(""" + import _xxinterpchannels as _channels + print(chan.id.end) + _channels.send(chan.id, b'spam') + """), + dict(chan=cid.send)) + obj = channels.recv(cid) + + self.assertEqual(obj, b'spam') + self.assertEqual(out.strip(), 'send') + + # close + + def test_close_single_user(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.recv(cid) + channels.close(cid) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'eggs') + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_close_multiple_users(self): + cid = channels.create() + id1 = interpreters.create() + id2 = interpreters.create() + interpreters.run_string(id1, dedent(f""" + import _xxinterpchannels as _channels + _channels.send({cid}, b'spam') + """)) + interpreters.run_string(id2, dedent(f""" + import _xxinterpchannels as _channels + _channels.recv({cid}) + """)) + channels.close(cid) + with self.assertRaises(interpreters.RunFailedError) as cm: + interpreters.run_string(id1, dedent(f""" + _channels.send({cid}, b'spam') + """)) + self.assertIn('ChannelClosedError', str(cm.exception)) + with self.assertRaises(interpreters.RunFailedError) as cm: + interpreters.run_string(id2, dedent(f""" + _channels.send({cid}, b'spam') + """)) + self.assertIn('ChannelClosedError', str(cm.exception)) + + def test_close_multiple_times(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.recv(cid) + channels.close(cid) + + with self.assertRaises(channels.ChannelClosedError): + channels.close(cid) + + def test_close_empty(self): + tests = [ + (False, False), + (True, False), + (False, True), + (True, True), + ] + for send, recv in tests: + with self.subTest((send, recv)): + cid = channels.create() + channels.send(cid, b'spam') + channels.recv(cid) + channels.close(cid, send=send, recv=recv) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'eggs') + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_close_defaults_with_unused_items(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.send(cid, b'ham') + + with self.assertRaises(channels.ChannelNotEmptyError): + channels.close(cid) + channels.recv(cid) + channels.send(cid, b'eggs') + + def test_close_recv_with_unused_items_unforced(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.send(cid, b'ham') + + with self.assertRaises(channels.ChannelNotEmptyError): + channels.close(cid, recv=True) + channels.recv(cid) + channels.send(cid, b'eggs') + channels.recv(cid) + channels.recv(cid) + 
channels.close(cid, recv=True) + + def test_close_send_with_unused_items_unforced(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.send(cid, b'ham') + channels.close(cid, send=True) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'eggs') + channels.recv(cid) + channels.recv(cid) + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_close_both_with_unused_items_unforced(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.send(cid, b'ham') + + with self.assertRaises(channels.ChannelNotEmptyError): + channels.close(cid, recv=True, send=True) + channels.recv(cid) + channels.send(cid, b'eggs') + channels.recv(cid) + channels.recv(cid) + channels.close(cid, recv=True) + + def test_close_recv_with_unused_items_forced(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.send(cid, b'ham') + channels.close(cid, recv=True, force=True) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'eggs') + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_close_send_with_unused_items_forced(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.send(cid, b'ham') + channels.close(cid, send=True, force=True) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'eggs') + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_close_both_with_unused_items_forced(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.send(cid, b'ham') + channels.close(cid, send=True, recv=True, force=True) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'eggs') + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_close_never_used(self): + cid = channels.create() + channels.close(cid) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'spam') + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_close_by_unassociated_interp(self): + cid = channels.create() + channels.send(cid, b'spam') + interp = interpreters.create() + interpreters.run_string(interp, dedent(f""" + import _xxinterpchannels as _channels + _channels.close({cid}, force=True) + """)) + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + with self.assertRaises(channels.ChannelClosedError): + channels.close(cid) + + def test_close_used_multiple_times_by_single_user(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.send(cid, b'spam') + channels.send(cid, b'spam') + channels.recv(cid) + channels.close(cid, force=True) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'eggs') + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_channel_list_interpreters_invalid_channel(self): + cid = channels.create() + # Test for invalid channel ID. + with self.assertRaises(channels.ChannelNotFoundError): + channels.list_interpreters(1000, send=True) + + channels.close(cid) + # Test for a channel that has been closed. + with self.assertRaises(channels.ChannelClosedError): + channels.list_interpreters(cid, send=True) + + def test_channel_list_interpreters_invalid_args(self): + # Tests for invalid arguments passed to the API. 
+ cid = channels.create() + with self.assertRaises(TypeError): + channels.list_interpreters(cid) + + +class ChannelReleaseTests(TestBase): + + # XXX Add more test coverage a la the tests for close(). + + """ + - main / interp / other + - run in: current thread / new thread / other thread / different threads + - end / opposite + - force / no force + - used / not used (associated / not associated) + - empty / emptied / never emptied / partly emptied + - closed / not closed + - released / not released + - creator (interp) / other + - associated interpreter not running + - associated interpreter destroyed + """ + + """ + use + pre-release + release + after + check + """ + + """ + release in: main, interp1 + creator: same, other (incl. interp2) + + use: None,send,recv,send/recv in None,same,other(incl. interp2),same+other(incl. interp2),all + pre-release: None,send,recv,both in None,same,other(incl. interp2),same+other(incl. interp2),all + pre-release forced: None,send,recv,both in None,same,other(incl. interp2),same+other(incl. interp2),all + + release: same + release forced: same + + use after: None,send,recv,send/recv in None,same,other(incl. interp2),same+other(incl. interp2),all + release after: None,send,recv,send/recv in None,same,other(incl. interp2),same+other(incl. interp2),all + check released: send/recv for same/other(incl. interp2) + check closed: send/recv for same/other(incl. interp2) + """ + + def test_single_user(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.recv(cid) + channels.release(cid, send=True, recv=True) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'eggs') + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_multiple_users(self): + cid = channels.create() + id1 = interpreters.create() + id2 = interpreters.create() + interpreters.run_string(id1, dedent(f""" + import _xxinterpchannels as _channels + _channels.send({cid}, b'spam') + """)) + out = _run_output(id2, dedent(f""" + import _xxinterpchannels as _channels + obj = _channels.recv({cid}) + _channels.release({cid}) + print(repr(obj)) + """)) + interpreters.run_string(id1, dedent(f""" + _channels.release({cid}) + """)) + + self.assertEqual(out.strip(), "b'spam'") + + def test_no_kwargs(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.recv(cid) + channels.release(cid) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'eggs') + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_multiple_times(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.recv(cid) + channels.release(cid, send=True, recv=True) + + with self.assertRaises(channels.ChannelClosedError): + channels.release(cid, send=True, recv=True) + + def test_with_unused_items(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.send(cid, b'ham') + channels.release(cid, send=True, recv=True) + + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_never_used(self): + cid = channels.create() + channels.release(cid) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'spam') + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_by_unassociated_interp(self): + cid = channels.create() + channels.send(cid, b'spam') + interp = interpreters.create() + interpreters.run_string(interp, dedent(f""" + import _xxinterpchannels as _channels + 
_channels.release({cid}) + """)) + obj = channels.recv(cid) + channels.release(cid) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'eggs') + self.assertEqual(obj, b'spam') + + def test_close_if_unassociated(self): + # XXX Something's not right with this test... + cid = channels.create() + interp = interpreters.create() + interpreters.run_string(interp, dedent(f""" + import _xxinterpchannels as _channels + obj = _channels.send({cid}, b'spam') + _channels.release({cid}) + """)) + + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + def test_partially(self): + # XXX Is partial close too weird/confusing? + cid = channels.create() + channels.send(cid, None) + channels.recv(cid) + channels.send(cid, b'spam') + channels.release(cid, send=True) + obj = channels.recv(cid) + + self.assertEqual(obj, b'spam') + + def test_used_multiple_times_by_single_user(self): + cid = channels.create() + channels.send(cid, b'spam') + channels.send(cid, b'spam') + channels.send(cid, b'spam') + channels.recv(cid) + channels.release(cid, send=True, recv=True) + + with self.assertRaises(channels.ChannelClosedError): + channels.send(cid, b'eggs') + with self.assertRaises(channels.ChannelClosedError): + channels.recv(cid) + + +class ChannelCloseFixture(namedtuple('ChannelCloseFixture', + 'end interp other extra creator')): + + # Set this to True to avoid creating interpreters, e.g. when + # scanning through test permutations without running them. + QUICK = False + + def __new__(cls, end, interp, other, extra, creator): + assert end in ('send', 'recv') + if cls.QUICK: + known = {} + else: + interp = Interpreter.from_raw(interp) + other = Interpreter.from_raw(other) + extra = Interpreter.from_raw(extra) + known = { + interp.name: interp, + other.name: other, + extra.name: extra, + } + if not creator: + creator = 'same' + self = super().__new__(cls, end, interp, other, extra, creator) + self._prepped = set() + self._state = ChannelState() + self._known = known + return self + + @property + def state(self): + return self._state + + @property + def cid(self): + try: + return self._cid + except AttributeError: + creator = self._get_interpreter(self.creator) + self._cid = self._new_channel(creator) + return self._cid + + def get_interpreter(self, interp): + interp = self._get_interpreter(interp) + self._prep_interpreter(interp) + return interp + + def expect_closed_error(self, end=None): + if end is None: + end = self.end + if end == 'recv' and self.state.closed == 'send': + return False + return bool(self.state.closed) + + def prep_interpreter(self, interp): + self._prep_interpreter(interp) + + def record_action(self, action, result): + self._state = result + + def clean_up(self): + clean_up_interpreters() + clean_up_channels() + + # internal methods + + def _new_channel(self, creator): + if creator.name == 'main': + return channels.create() + else: + ch = channels.create() + run_interp(creator.id, f""" + import _xxsubinterpreters + cid = _xxsubchannels.create() + # We purposefully send back an int to avoid tying the + # channel to the other interpreter. 
+ _xxsubchannels.send({ch}, int(cid)) + del _xxsubinterpreters + """) + self._cid = channels.recv(ch) + return self._cid + + def _get_interpreter(self, interp): + if interp in ('same', 'interp'): + return self.interp + elif interp == 'other': + return self.other + elif interp == 'extra': + return self.extra + else: + name = interp + try: + interp = self._known[name] + except KeyError: + interp = self._known[name] = Interpreter(name) + return interp + + def _prep_interpreter(self, interp): + if interp.id in self._prepped: + return + self._prepped.add(interp.id) + if interp.name == 'main': + return + run_interp(interp.id, f""" + import _xxinterpchannels as channels + import test.test__xxinterpchannels as helpers + ChannelState = helpers.ChannelState + try: + cid + except NameError: + cid = channels._channel_id({self.cid}) + """) + + + at unittest.skip('these tests take several hours to run') +class ExhaustiveChannelTests(TestBase): + + """ + - main / interp / other + - run in: current thread / new thread / other thread / different threads + - end / opposite + - force / no force + - used / not used (associated / not associated) + - empty / emptied / never emptied / partly emptied + - closed / not closed + - released / not released + - creator (interp) / other + - associated interpreter not running + - associated interpreter destroyed + + - close after unbound + """ + + """ + use + pre-close + close + after + check + """ + + """ + close in: main, interp1 + creator: same, other, extra + + use: None,send,recv,send/recv in None,same,other,same+other,all + pre-close: None,send,recv in None,same,other,same+other,all + pre-close forced: None,send,recv in None,same,other,same+other,all + + close: same + close forced: same + + use after: None,send,recv,send/recv in None,same,other,extra,same+other,all + close after: None,send,recv,send/recv in None,same,other,extra,same+other,all + check closed: send/recv for same/other(incl. 
interp2) + """ + + def iter_action_sets(self): + # - used / not used (associated / not associated) + # - empty / emptied / never emptied / partly emptied + # - closed / not closed + # - released / not released + + # never used + yield [] + + # only pre-closed (and possible used after) + for closeactions in self._iter_close_action_sets('same', 'other'): + yield closeactions + for postactions in self._iter_post_close_action_sets(): + yield closeactions + postactions + for closeactions in self._iter_close_action_sets('other', 'extra'): + yield closeactions + for postactions in self._iter_post_close_action_sets(): + yield closeactions + postactions + + # used + for useactions in self._iter_use_action_sets('same', 'other'): + yield useactions + for closeactions in self._iter_close_action_sets('same', 'other'): + actions = useactions + closeactions + yield actions + for postactions in self._iter_post_close_action_sets(): + yield actions + postactions + for closeactions in self._iter_close_action_sets('other', 'extra'): + actions = useactions + closeactions + yield actions + for postactions in self._iter_post_close_action_sets(): + yield actions + postactions + for useactions in self._iter_use_action_sets('other', 'extra'): + yield useactions + for closeactions in self._iter_close_action_sets('same', 'other'): + actions = useactions + closeactions + yield actions + for postactions in self._iter_post_close_action_sets(): + yield actions + postactions + for closeactions in self._iter_close_action_sets('other', 'extra'): + actions = useactions + closeactions + yield actions + for postactions in self._iter_post_close_action_sets(): + yield actions + postactions + + def _iter_use_action_sets(self, interp1, interp2): + interps = (interp1, interp2) + + # only recv end used + yield [ + ChannelAction('use', 'recv', interp1), + ] + yield [ + ChannelAction('use', 'recv', interp2), + ] + yield [ + ChannelAction('use', 'recv', interp1), + ChannelAction('use', 'recv', interp2), + ] + + # never emptied + yield [ + ChannelAction('use', 'send', interp1), + ] + yield [ + ChannelAction('use', 'send', interp2), + ] + yield [ + ChannelAction('use', 'send', interp1), + ChannelAction('use', 'send', interp2), + ] + + # partially emptied + for interp1 in interps: + for interp2 in interps: + for interp3 in interps: + yield [ + ChannelAction('use', 'send', interp1), + ChannelAction('use', 'send', interp2), + ChannelAction('use', 'recv', interp3), + ] + + # fully emptied + for interp1 in interps: + for interp2 in interps: + for interp3 in interps: + for interp4 in interps: + yield [ + ChannelAction('use', 'send', interp1), + ChannelAction('use', 'send', interp2), + ChannelAction('use', 'recv', interp3), + ChannelAction('use', 'recv', interp4), + ] + + def _iter_close_action_sets(self, interp1, interp2): + ends = ('recv', 'send') + interps = (interp1, interp2) + for force in (True, False): + op = 'force-close' if force else 'close' + for interp in interps: + for end in ends: + yield [ + ChannelAction(op, end, interp), + ] + for recvop in ('close', 'force-close'): + for sendop in ('close', 'force-close'): + for recv in interps: + for send in interps: + yield [ + ChannelAction(recvop, 'recv', recv), + ChannelAction(sendop, 'send', send), + ] + + def _iter_post_close_action_sets(self): + for interp in ('same', 'extra', 'other'): + yield [ + ChannelAction('use', 'recv', interp), + ] + yield [ + ChannelAction('use', 'send', interp), + ] + + def run_actions(self, fix, actions): + for action in actions: + self.run_action(fix, 
action) + + def run_action(self, fix, action, *, hideclosed=True): + end = action.resolve_end(fix.end) + interp = action.resolve_interp(fix.interp, fix.other, fix.extra) + fix.prep_interpreter(interp) + if interp.name == 'main': + result = run_action( + fix.cid, + action.action, + end, + fix.state, + hideclosed=hideclosed, + ) + fix.record_action(action, result) + else: + _cid = channels.create() + run_interp(interp.id, f""" + result = helpers.run_action( + {fix.cid}, + {repr(action.action)}, + {repr(end)}, + {repr(fix.state)}, + hideclosed={hideclosed}, + ) + channels.send({_cid}, result.pending.to_bytes(1, 'little')) + channels.send({_cid}, b'X' if result.closed else b'') + """) + result = ChannelState( + pending=int.from_bytes(channels.recv(_cid), 'little'), + closed=bool(channels.recv(_cid)), + ) + fix.record_action(action, result) + + def iter_fixtures(self): + # XXX threads? + interpreters = [ + ('main', 'interp', 'extra'), + ('interp', 'main', 'extra'), + ('interp1', 'interp2', 'extra'), + ('interp1', 'interp2', 'main'), + ] + for interp, other, extra in interpreters: + for creator in ('same', 'other', 'creator'): + for end in ('send', 'recv'): + yield ChannelCloseFixture(end, interp, other, extra, creator) + + def _close(self, fix, *, force): + op = 'force-close' if force else 'close' + close = ChannelAction(op, fix.end, 'same') + if not fix.expect_closed_error(): + self.run_action(fix, close, hideclosed=False) + else: + with self.assertRaises(channels.ChannelClosedError): + self.run_action(fix, close, hideclosed=False) + + def _assert_closed_in_interp(self, fix, interp=None): + if interp is None or interp.name == 'main': + with self.assertRaises(channels.ChannelClosedError): + channels.recv(fix.cid) + with self.assertRaises(channels.ChannelClosedError): + channels.send(fix.cid, b'spam') + with self.assertRaises(channels.ChannelClosedError): + channels.close(fix.cid) + with self.assertRaises(channels.ChannelClosedError): + channels.close(fix.cid, force=True) + else: + run_interp(interp.id, f""" + with helpers.expect_channel_closed(): + channels.recv(cid) + """) + run_interp(interp.id, f""" + with helpers.expect_channel_closed(): + channels.send(cid, b'spam') + """) + run_interp(interp.id, f""" + with helpers.expect_channel_closed(): + channels.close(cid) + """) + run_interp(interp.id, f""" + with helpers.expect_channel_closed(): + channels.close(cid, force=True) + """) + + def _assert_closed(self, fix): + self.assertTrue(fix.state.closed) + + for _ in range(fix.state.pending): + channels.recv(fix.cid) + self._assert_closed_in_interp(fix) + + for interp in ('same', 'other'): + interp = fix.get_interpreter(interp) + if interp.name == 'main': + continue + self._assert_closed_in_interp(fix, interp) + + interp = fix.get_interpreter('fresh') + self._assert_closed_in_interp(fix, interp) + + def _iter_close_tests(self, verbose=False): + i = 0 + for actions in self.iter_action_sets(): + print() + for fix in self.iter_fixtures(): + i += 1 + if i > 1000: + return + if verbose: + if (i - 1) % 6 == 0: + print() + print(i, fix, '({} actions)'.format(len(actions))) + else: + if (i - 1) % 6 == 0: + print(' ', end='') + print('.', end=''); sys.stdout.flush() + yield i, fix, actions + if verbose: + print('---') + print() + + # This is useful for scanning through the possible tests. 
+ def _skim_close_tests(self): + ChannelCloseFixture.QUICK = True + for i, fix, actions in self._iter_close_tests(): + pass + + def test_close(self): + for i, fix, actions in self._iter_close_tests(): + with self.subTest('{} {} {}'.format(i, fix, actions)): + fix.prep_interpreter(fix.interp) + self.run_actions(fix, actions) + + self._close(fix, force=False) + + self._assert_closed(fix) + # XXX Things slow down if we have too many interpreters. + fix.clean_up() + + def test_force_close(self): + for i, fix, actions in self._iter_close_tests(): + with self.subTest('{} {} {}'.format(i, fix, actions)): + fix.prep_interpreter(fix.interp) + self.run_actions(fix, actions) + + self._close(fix, force=True) + + self._assert_closed(fix) + # XXX Things slow down if we have too many interpreters. + fix.clean_up() + + +if __name__ == '__main__': + unittest.main() diff --git a/Lib/test/test__xxsubinterpreters.py b/Lib/test/test__xxsubinterpreters.py index 18900bb9f716..687fcf3b7705 100644 --- a/Lib/test/test__xxsubinterpreters.py +++ b/Lib/test/test__xxsubinterpreters.py @@ -9,6 +9,7 @@ import time import unittest +import _testcapi from test import support from test.support import import_helper from test.support import script_helper @@ -73,207 +74,6 @@ def run(): t.join() -#@contextmanager -#def run_threaded(id, source, **shared): -# def run(): -# run_interp(id, source, **shared) -# t = threading.Thread(target=run) -# t.start() -# yield -# t.join() - - -def run_interp(id, source, **shared): - _run_interp(id, source, shared) - - -def _run_interp(id, source, shared, _mainns={}): - source = dedent(source) - main = interpreters.get_main() - if main == id: - if interpreters.get_current() != main: - raise RuntimeError - # XXX Run a func? - exec(source, _mainns) - else: - interpreters.run_string(id, source, shared) - - -class Interpreter(namedtuple('Interpreter', 'name id')): - - @classmethod - def from_raw(cls, raw): - if isinstance(raw, cls): - return raw - elif isinstance(raw, str): - return cls(raw) - else: - raise NotImplementedError - - def __new__(cls, name=None, id=None): - main = interpreters.get_main() - if id == main: - if not name: - name = 'main' - elif name != 'main': - raise ValueError( - 'name mismatch (expected "main", got "{}")'.format(name)) - id = main - elif id is not None: - if not name: - name = 'interp' - elif name == 'main': - raise ValueError('name mismatch (unexpected "main")') - if not isinstance(id, interpreters.InterpreterID): - id = interpreters.InterpreterID(id) - elif not name or name == 'main': - name = 'main' - id = main - else: - id = interpreters.create() - self = super().__new__(cls, name, id) - return self - - -# XXX expect_channel_closed() is unnecessary once we improve exc propagation. 
- - at contextlib.contextmanager -def expect_channel_closed(): - try: - yield - except interpreters.ChannelClosedError: - pass - else: - assert False, 'channel not closed' - - -class ChannelAction(namedtuple('ChannelAction', 'action end interp')): - - def __new__(cls, action, end=None, interp=None): - if not end: - end = 'both' - if not interp: - interp = 'main' - self = super().__new__(cls, action, end, interp) - return self - - def __init__(self, *args, **kwargs): - if self.action == 'use': - if self.end not in ('same', 'opposite', 'send', 'recv'): - raise ValueError(self.end) - elif self.action in ('close', 'force-close'): - if self.end not in ('both', 'same', 'opposite', 'send', 'recv'): - raise ValueError(self.end) - else: - raise ValueError(self.action) - if self.interp not in ('main', 'same', 'other', 'extra'): - raise ValueError(self.interp) - - def resolve_end(self, end): - if self.end == 'same': - return end - elif self.end == 'opposite': - return 'recv' if end == 'send' else 'send' - else: - return self.end - - def resolve_interp(self, interp, other, extra): - if self.interp == 'same': - return interp - elif self.interp == 'other': - if other is None: - raise RuntimeError - return other - elif self.interp == 'extra': - if extra is None: - raise RuntimeError - return extra - elif self.interp == 'main': - if interp.name == 'main': - return interp - elif other and other.name == 'main': - return other - else: - raise RuntimeError - # Per __init__(), there aren't any others. - - -class ChannelState(namedtuple('ChannelState', 'pending closed')): - - def __new__(cls, pending=0, *, closed=False): - self = super().__new__(cls, pending, closed) - return self - - def incr(self): - return type(self)(self.pending + 1, closed=self.closed) - - def decr(self): - return type(self)(self.pending - 1, closed=self.closed) - - def close(self, *, force=True): - if self.closed: - if not force or self.pending == 0: - return self - return type(self)(0 if force else self.pending, closed=True) - - -def run_action(cid, action, end, state, *, hideclosed=True): - if state.closed: - if action == 'use' and end == 'recv' and state.pending: - expectfail = False - else: - expectfail = True - else: - expectfail = False - - try: - result = _run_action(cid, action, end, state) - except interpreters.ChannelClosedError: - if not hideclosed and not expectfail: - raise - result = state.close() - else: - if expectfail: - raise ... 
# XXX - return result - - -def _run_action(cid, action, end, state): - if action == 'use': - if end == 'send': - interpreters.channel_send(cid, b'spam') - return state.incr() - elif end == 'recv': - if not state.pending: - try: - interpreters.channel_recv(cid) - except interpreters.ChannelEmptyError: - return state - else: - raise Exception('expected ChannelEmptyError') - else: - interpreters.channel_recv(cid) - return state.decr() - else: - raise ValueError(end) - elif action == 'close': - kwargs = {} - if end in ('recv', 'send'): - kwargs[end] = True - interpreters.channel_close(cid, **kwargs) - return state.close() - elif action == 'force-close': - kwargs = { - 'force': True, - } - if end in ('recv', 'send'): - kwargs[end] = True - interpreters.channel_close(cid, **kwargs) - return state.close(force=True) - else: - raise ValueError(action) - - def clean_up_interpreters(): for id in interpreters.list_all(): if id == 0: # main @@ -284,18 +84,9 @@ def clean_up_interpreters(): pass # already destroyed -def clean_up_channels(): - for cid in interpreters.channel_list_all(): - try: - interpreters.channel_destroy(cid) - except interpreters.ChannelNotFoundError: - pass # already destroyed - - class TestBase(unittest.TestCase): def tearDown(self): - clean_up_channels() clean_up_interpreters() @@ -354,30 +145,20 @@ class SubBytes(bytes): class ShareableTypeTests(unittest.TestCase): - def setUp(self): - super().setUp() - self.cid = interpreters.channel_create() - - def tearDown(self): - interpreters.channel_destroy(self.cid) - super().tearDown() - def _assert_values(self, values): for obj in values: with self.subTest(obj): - interpreters.channel_send(self.cid, obj) - got = interpreters.channel_recv(self.cid) + xid = _testcapi.get_crossinterp_data(obj) + got = _testcapi.restore_crossinterp_data(xid) self.assertEqual(got, obj) self.assertIs(type(got), type(obj)) - # XXX Check the following in the channel tests? - #self.assertIsNot(got, obj) def test_singletons(self): for obj in [None]: with self.subTest(obj): - interpreters.channel_send(self.cid, obj) - got = interpreters.channel_recv(self.cid) + xid = _testcapi.get_crossinterp_data(obj) + got = _testcapi.restore_crossinterp_data(xid) # XXX What about between interpreters? 
self.assertIs(got, obj) @@ -408,7 +189,7 @@ def test_non_shareable_int(self): for i in ints: with self.subTest(i): with self.assertRaises(OverflowError): - interpreters.channel_send(self.cid, i) + _testcapi.get_crossinterp_data(i) class ModuleTests(TestBase): @@ -555,7 +336,7 @@ def test_bad_id(self): self.assertRaises(OverflowError, interpreters.InterpreterID, 2**64) def test_does_not_exist(self): - id = interpreters.channel_create() + id = interpreters.create() with self.assertRaises(RuntimeError): interpreters.InterpreterID(int(id) + 1) # unforced @@ -864,6 +645,22 @@ def f(): self.assertEqual(out, 'it worked!') + def test_shareable_types(self): + interp = interpreters.create() + objects = [ + None, + 'spam', + b'spam', + 42, + ] + for obj in objects: + with self.subTest(obj): + interpreters.run_string( + interp, + f'assert(obj == {obj!r})', + shared=dict(obj=obj), + ) + def test_os_exec(self): expected = 'spam spam spam spam spam' subinterp = interpreters.create() @@ -1130,1285 +927,5 @@ def f(): self.assertEqual(retcode, 0) -################################## -# channel tests - -class ChannelIDTests(TestBase): - - def test_default_kwargs(self): - cid = interpreters._channel_id(10, force=True) - - self.assertEqual(int(cid), 10) - self.assertEqual(cid.end, 'both') - - def test_with_kwargs(self): - cid = interpreters._channel_id(10, send=True, force=True) - self.assertEqual(cid.end, 'send') - - cid = interpreters._channel_id(10, send=True, recv=False, force=True) - self.assertEqual(cid.end, 'send') - - cid = interpreters._channel_id(10, recv=True, force=True) - self.assertEqual(cid.end, 'recv') - - cid = interpreters._channel_id(10, recv=True, send=False, force=True) - self.assertEqual(cid.end, 'recv') - - cid = interpreters._channel_id(10, send=True, recv=True, force=True) - self.assertEqual(cid.end, 'both') - - def test_coerce_id(self): - class Int(str): - def __index__(self): - return 10 - - cid = interpreters._channel_id(Int(), force=True) - self.assertEqual(int(cid), 10) - - def test_bad_id(self): - self.assertRaises(TypeError, interpreters._channel_id, object()) - self.assertRaises(TypeError, interpreters._channel_id, 10.0) - self.assertRaises(TypeError, interpreters._channel_id, '10') - self.assertRaises(TypeError, interpreters._channel_id, b'10') - self.assertRaises(ValueError, interpreters._channel_id, -1) - self.assertRaises(OverflowError, interpreters._channel_id, 2**64) - - def test_bad_kwargs(self): - with self.assertRaises(ValueError): - interpreters._channel_id(10, send=False, recv=False) - - def test_does_not_exist(self): - cid = interpreters.channel_create() - with self.assertRaises(interpreters.ChannelNotFoundError): - interpreters._channel_id(int(cid) + 1) # unforced - - def test_str(self): - cid = interpreters._channel_id(10, force=True) - self.assertEqual(str(cid), '10') - - def test_repr(self): - cid = interpreters._channel_id(10, force=True) - self.assertEqual(repr(cid), 'ChannelID(10)') - - cid = interpreters._channel_id(10, send=True, force=True) - self.assertEqual(repr(cid), 'ChannelID(10, send=True)') - - cid = interpreters._channel_id(10, recv=True, force=True) - self.assertEqual(repr(cid), 'ChannelID(10, recv=True)') - - cid = interpreters._channel_id(10, send=True, recv=True, force=True) - self.assertEqual(repr(cid), 'ChannelID(10)') - - def test_equality(self): - cid1 = interpreters.channel_create() - cid2 = interpreters._channel_id(int(cid1)) - cid3 = interpreters.channel_create() - - self.assertTrue(cid1 == cid1) - self.assertTrue(cid1 == cid2) - 
self.assertTrue(cid1 == int(cid1)) - self.assertTrue(int(cid1) == cid1) - self.assertTrue(cid1 == float(int(cid1))) - self.assertTrue(float(int(cid1)) == cid1) - self.assertFalse(cid1 == float(int(cid1)) + 0.1) - self.assertFalse(cid1 == str(int(cid1))) - self.assertFalse(cid1 == 2**1000) - self.assertFalse(cid1 == float('inf')) - self.assertFalse(cid1 == 'spam') - self.assertFalse(cid1 == cid3) - - self.assertFalse(cid1 != cid1) - self.assertFalse(cid1 != cid2) - self.assertTrue(cid1 != cid3) - - def test_shareable(self): - chan = interpreters.channel_create() - - obj = interpreters.channel_create() - interpreters.channel_send(chan, obj) - got = interpreters.channel_recv(chan) - - self.assertEqual(got, obj) - self.assertIs(type(got), type(obj)) - # XXX Check the following in the channel tests? - #self.assertIsNot(got, obj) - - -class ChannelTests(TestBase): - - def test_create_cid(self): - cid = interpreters.channel_create() - self.assertIsInstance(cid, interpreters.ChannelID) - - def test_sequential_ids(self): - before = interpreters.channel_list_all() - id1 = interpreters.channel_create() - id2 = interpreters.channel_create() - id3 = interpreters.channel_create() - after = interpreters.channel_list_all() - - self.assertEqual(id2, int(id1) + 1) - self.assertEqual(id3, int(id2) + 1) - self.assertEqual(set(after) - set(before), {id1, id2, id3}) - - def test_ids_global(self): - id1 = interpreters.create() - out = _run_output(id1, dedent(""" - import _xxsubinterpreters as _interpreters - cid = _interpreters.channel_create() - print(cid) - """)) - cid1 = int(out.strip()) - - id2 = interpreters.create() - out = _run_output(id2, dedent(""" - import _xxsubinterpreters as _interpreters - cid = _interpreters.channel_create() - print(cid) - """)) - cid2 = int(out.strip()) - - self.assertEqual(cid2, int(cid1) + 1) - - def test_channel_list_interpreters_none(self): - """Test listing interpreters for a channel with no associations.""" - # Test for channel with no associated interpreters. - cid = interpreters.channel_create() - send_interps = interpreters.channel_list_interpreters(cid, send=True) - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(send_interps, []) - self.assertEqual(recv_interps, []) - - def test_channel_list_interpreters_basic(self): - """Test basic listing channel interpreters.""" - interp0 = interpreters.get_main() - cid = interpreters.channel_create() - interpreters.channel_send(cid, "send") - # Test for a channel that has one end associated to an interpreter. - send_interps = interpreters.channel_list_interpreters(cid, send=True) - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(send_interps, [interp0]) - self.assertEqual(recv_interps, []) - - interp1 = interpreters.create() - _run_output(interp1, dedent(f""" - import _xxsubinterpreters as _interpreters - obj = _interpreters.channel_recv({cid}) - """)) - # Test for channel that has both ends associated to an interpreter. 
- send_interps = interpreters.channel_list_interpreters(cid, send=True) - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(send_interps, [interp0]) - self.assertEqual(recv_interps, [interp1]) - - def test_channel_list_interpreters_multiple(self): - """Test listing interpreters for a channel with many associations.""" - interp0 = interpreters.get_main() - interp1 = interpreters.create() - interp2 = interpreters.create() - interp3 = interpreters.create() - cid = interpreters.channel_create() - - interpreters.channel_send(cid, "send") - _run_output(interp1, dedent(f""" - import _xxsubinterpreters as _interpreters - _interpreters.channel_send({cid}, "send") - """)) - _run_output(interp2, dedent(f""" - import _xxsubinterpreters as _interpreters - obj = _interpreters.channel_recv({cid}) - """)) - _run_output(interp3, dedent(f""" - import _xxsubinterpreters as _interpreters - obj = _interpreters.channel_recv({cid}) - """)) - send_interps = interpreters.channel_list_interpreters(cid, send=True) - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(set(send_interps), {interp0, interp1}) - self.assertEqual(set(recv_interps), {interp2, interp3}) - - def test_channel_list_interpreters_destroyed(self): - """Test listing channel interpreters with a destroyed interpreter.""" - interp0 = interpreters.get_main() - interp1 = interpreters.create() - cid = interpreters.channel_create() - interpreters.channel_send(cid, "send") - _run_output(interp1, dedent(f""" - import _xxsubinterpreters as _interpreters - obj = _interpreters.channel_recv({cid}) - """)) - # Should be one interpreter associated with each end. - send_interps = interpreters.channel_list_interpreters(cid, send=True) - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(send_interps, [interp0]) - self.assertEqual(recv_interps, [interp1]) - - interpreters.destroy(interp1) - # Destroyed interpreter should not be listed. - send_interps = interpreters.channel_list_interpreters(cid, send=True) - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(send_interps, [interp0]) - self.assertEqual(recv_interps, []) - - def test_channel_list_interpreters_released(self): - """Test listing channel interpreters with a released channel.""" - # Set up one channel with main interpreter on the send end and two - # subinterpreters on the receive end. - interp0 = interpreters.get_main() - interp1 = interpreters.create() - interp2 = interpreters.create() - cid = interpreters.channel_create() - interpreters.channel_send(cid, "data") - _run_output(interp1, dedent(f""" - import _xxsubinterpreters as _interpreters - obj = _interpreters.channel_recv({cid}) - """)) - interpreters.channel_send(cid, "data") - _run_output(interp2, dedent(f""" - import _xxsubinterpreters as _interpreters - obj = _interpreters.channel_recv({cid}) - """)) - # Check the setup. - send_interps = interpreters.channel_list_interpreters(cid, send=True) - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(len(send_interps), 1) - self.assertEqual(len(recv_interps), 2) - - # Release the main interpreter from the send end. - interpreters.channel_release(cid, send=True) - # Send end should have no associated interpreters. 
- send_interps = interpreters.channel_list_interpreters(cid, send=True) - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(len(send_interps), 0) - self.assertEqual(len(recv_interps), 2) - - # Release one of the subinterpreters from the receive end. - _run_output(interp2, dedent(f""" - import _xxsubinterpreters as _interpreters - _interpreters.channel_release({cid}) - """)) - # Receive end should have the released interpreter removed. - send_interps = interpreters.channel_list_interpreters(cid, send=True) - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(len(send_interps), 0) - self.assertEqual(recv_interps, [interp1]) - - def test_channel_list_interpreters_closed(self): - """Test listing channel interpreters with a closed channel.""" - interp0 = interpreters.get_main() - interp1 = interpreters.create() - cid = interpreters.channel_create() - # Put something in the channel so that it's not empty. - interpreters.channel_send(cid, "send") - - # Check initial state. - send_interps = interpreters.channel_list_interpreters(cid, send=True) - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(len(send_interps), 1) - self.assertEqual(len(recv_interps), 0) - - # Force close the channel. - interpreters.channel_close(cid, force=True) - # Both ends should raise an error. - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_list_interpreters(cid, send=True) - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_list_interpreters(cid, send=False) - - def test_channel_list_interpreters_closed_send_end(self): - """Test listing channel interpreters with a channel's send end closed.""" - interp0 = interpreters.get_main() - interp1 = interpreters.create() - cid = interpreters.channel_create() - # Put something in the channel so that it's not empty. - interpreters.channel_send(cid, "send") - - # Check initial state. - send_interps = interpreters.channel_list_interpreters(cid, send=True) - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(len(send_interps), 1) - self.assertEqual(len(recv_interps), 0) - - # Close the send end of the channel. - interpreters.channel_close(cid, send=True) - # Send end should raise an error. - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_list_interpreters(cid, send=True) - # Receive end should not be closed (since channel is not empty). - recv_interps = interpreters.channel_list_interpreters(cid, send=False) - self.assertEqual(len(recv_interps), 0) - - # Close the receive end of the channel from a subinterpreter. - _run_output(interp1, dedent(f""" - import _xxsubinterpreters as _interpreters - _interpreters.channel_close({cid}, force=True) - """)) - # Both ends should raise an error. 
- with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_list_interpreters(cid, send=True) - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_list_interpreters(cid, send=False) - - #################### - - def test_send_recv_main(self): - cid = interpreters.channel_create() - orig = b'spam' - interpreters.channel_send(cid, orig) - obj = interpreters.channel_recv(cid) - - self.assertEqual(obj, orig) - self.assertIsNot(obj, orig) - - def test_send_recv_same_interpreter(self): - id1 = interpreters.create() - out = _run_output(id1, dedent(""" - import _xxsubinterpreters as _interpreters - cid = _interpreters.channel_create() - orig = b'spam' - _interpreters.channel_send(cid, orig) - obj = _interpreters.channel_recv(cid) - assert obj is not orig - assert obj == orig - """)) - - def test_send_recv_different_interpreters(self): - cid = interpreters.channel_create() - id1 = interpreters.create() - out = _run_output(id1, dedent(f""" - import _xxsubinterpreters as _interpreters - _interpreters.channel_send({cid}, b'spam') - """)) - obj = interpreters.channel_recv(cid) - - self.assertEqual(obj, b'spam') - - def test_send_recv_different_threads(self): - cid = interpreters.channel_create() - - def f(): - while True: - try: - obj = interpreters.channel_recv(cid) - break - except interpreters.ChannelEmptyError: - time.sleep(0.1) - interpreters.channel_send(cid, obj) - t = threading.Thread(target=f) - t.start() - - interpreters.channel_send(cid, b'spam') - t.join() - obj = interpreters.channel_recv(cid) - - self.assertEqual(obj, b'spam') - - def test_send_recv_different_interpreters_and_threads(self): - cid = interpreters.channel_create() - id1 = interpreters.create() - out = None - - def f(): - nonlocal out - out = _run_output(id1, dedent(f""" - import time - import _xxsubinterpreters as _interpreters - while True: - try: - obj = _interpreters.channel_recv({cid}) - break - except _interpreters.ChannelEmptyError: - time.sleep(0.1) - assert(obj == b'spam') - _interpreters.channel_send({cid}, b'eggs') - """)) - t = threading.Thread(target=f) - t.start() - - interpreters.channel_send(cid, b'spam') - t.join() - obj = interpreters.channel_recv(cid) - - self.assertEqual(obj, b'eggs') - - def test_send_not_found(self): - with self.assertRaises(interpreters.ChannelNotFoundError): - interpreters.channel_send(10, b'spam') - - def test_recv_not_found(self): - with self.assertRaises(interpreters.ChannelNotFoundError): - interpreters.channel_recv(10) - - def test_recv_empty(self): - cid = interpreters.channel_create() - with self.assertRaises(interpreters.ChannelEmptyError): - interpreters.channel_recv(cid) - - def test_recv_default(self): - default = object() - cid = interpreters.channel_create() - obj1 = interpreters.channel_recv(cid, default) - interpreters.channel_send(cid, None) - interpreters.channel_send(cid, 1) - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'eggs') - obj2 = interpreters.channel_recv(cid, default) - obj3 = interpreters.channel_recv(cid, default) - obj4 = interpreters.channel_recv(cid) - obj5 = interpreters.channel_recv(cid, default) - obj6 = interpreters.channel_recv(cid, default) - - self.assertIs(obj1, default) - self.assertIs(obj2, None) - self.assertEqual(obj3, 1) - self.assertEqual(obj4, b'spam') - self.assertEqual(obj5, b'eggs') - self.assertIs(obj6, default) - - def test_recv_sending_interp_destroyed(self): - cid = interpreters.channel_create() - interp = interpreters.create() - 
interpreters.run_string(interp, dedent(f""" - import _xxsubinterpreters as _interpreters - _interpreters.channel_send({cid}, b'spam') - """)) - interpreters.destroy(interp) - - with self.assertRaisesRegex(RuntimeError, - 'unrecognized interpreter ID'): - interpreters.channel_recv(cid) - - def test_run_string_arg_unresolved(self): - cid = interpreters.channel_create() - interp = interpreters.create() - - out = _run_output(interp, dedent(""" - import _xxsubinterpreters as _interpreters - print(cid.end) - _interpreters.channel_send(cid, b'spam') - """), - dict(cid=cid.send)) - obj = interpreters.channel_recv(cid) - - self.assertEqual(obj, b'spam') - self.assertEqual(out.strip(), 'send') - - # XXX For now there is no high-level channel into which the - # sent channel ID can be converted... - # Note: this test caused crashes on some buildbots (bpo-33615). - @unittest.skip('disabled until high-level channels exist') - def test_run_string_arg_resolved(self): - cid = interpreters.channel_create() - cid = interpreters._channel_id(cid, _resolve=True) - interp = interpreters.create() - - out = _run_output(interp, dedent(""" - import _xxsubinterpreters as _interpreters - print(chan.id.end) - _interpreters.channel_send(chan.id, b'spam') - """), - dict(chan=cid.send)) - obj = interpreters.channel_recv(cid) - - self.assertEqual(obj, b'spam') - self.assertEqual(out.strip(), 'send') - - # close - - def test_close_single_user(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_recv(cid) - interpreters.channel_close(cid) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'eggs') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_close_multiple_users(self): - cid = interpreters.channel_create() - id1 = interpreters.create() - id2 = interpreters.create() - interpreters.run_string(id1, dedent(f""" - import _xxsubinterpreters as _interpreters - _interpreters.channel_send({cid}, b'spam') - """)) - interpreters.run_string(id2, dedent(f""" - import _xxsubinterpreters as _interpreters - _interpreters.channel_recv({cid}) - """)) - interpreters.channel_close(cid) - with self.assertRaises(interpreters.RunFailedError) as cm: - interpreters.run_string(id1, dedent(f""" - _interpreters.channel_send({cid}, b'spam') - """)) - self.assertIn('ChannelClosedError', str(cm.exception)) - with self.assertRaises(interpreters.RunFailedError) as cm: - interpreters.run_string(id2, dedent(f""" - _interpreters.channel_send({cid}, b'spam') - """)) - self.assertIn('ChannelClosedError', str(cm.exception)) - - def test_close_multiple_times(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_recv(cid) - interpreters.channel_close(cid) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_close(cid) - - def test_close_empty(self): - tests = [ - (False, False), - (True, False), - (False, True), - (True, True), - ] - for send, recv in tests: - with self.subTest((send, recv)): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_recv(cid) - interpreters.channel_close(cid, send=send, recv=recv) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'eggs') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_close_defaults_with_unused_items(self): - cid = 
interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'ham') - - with self.assertRaises(interpreters.ChannelNotEmptyError): - interpreters.channel_close(cid) - interpreters.channel_recv(cid) - interpreters.channel_send(cid, b'eggs') - - def test_close_recv_with_unused_items_unforced(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'ham') - - with self.assertRaises(interpreters.ChannelNotEmptyError): - interpreters.channel_close(cid, recv=True) - interpreters.channel_recv(cid) - interpreters.channel_send(cid, b'eggs') - interpreters.channel_recv(cid) - interpreters.channel_recv(cid) - interpreters.channel_close(cid, recv=True) - - def test_close_send_with_unused_items_unforced(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'ham') - interpreters.channel_close(cid, send=True) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'eggs') - interpreters.channel_recv(cid) - interpreters.channel_recv(cid) - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_close_both_with_unused_items_unforced(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'ham') - - with self.assertRaises(interpreters.ChannelNotEmptyError): - interpreters.channel_close(cid, recv=True, send=True) - interpreters.channel_recv(cid) - interpreters.channel_send(cid, b'eggs') - interpreters.channel_recv(cid) - interpreters.channel_recv(cid) - interpreters.channel_close(cid, recv=True) - - def test_close_recv_with_unused_items_forced(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'ham') - interpreters.channel_close(cid, recv=True, force=True) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'eggs') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_close_send_with_unused_items_forced(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'ham') - interpreters.channel_close(cid, send=True, force=True) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'eggs') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_close_both_with_unused_items_forced(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'ham') - interpreters.channel_close(cid, send=True, recv=True, force=True) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'eggs') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_close_never_used(self): - cid = interpreters.channel_create() - interpreters.channel_close(cid) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'spam') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_close_by_unassociated_interp(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interp = interpreters.create() - interpreters.run_string(interp, dedent(f""" - import 
_xxsubinterpreters as _interpreters - _interpreters.channel_close({cid}, force=True) - """)) - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_close(cid) - - def test_close_used_multiple_times_by_single_user(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'spam') - interpreters.channel_recv(cid) - interpreters.channel_close(cid, force=True) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'eggs') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_channel_list_interpreters_invalid_channel(self): - cid = interpreters.channel_create() - # Test for invalid channel ID. - with self.assertRaises(interpreters.ChannelNotFoundError): - interpreters.channel_list_interpreters(1000, send=True) - - interpreters.channel_close(cid) - # Test for a channel that has been closed. - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_list_interpreters(cid, send=True) - - def test_channel_list_interpreters_invalid_args(self): - # Tests for invalid arguments passed to the API. - cid = interpreters.channel_create() - with self.assertRaises(TypeError): - interpreters.channel_list_interpreters(cid) - - -class ChannelReleaseTests(TestBase): - - # XXX Add more test coverage a la the tests for close(). - - """ - - main / interp / other - - run in: current thread / new thread / other thread / different threads - - end / opposite - - force / no force - - used / not used (associated / not associated) - - empty / emptied / never emptied / partly emptied - - closed / not closed - - released / not released - - creator (interp) / other - - associated interpreter not running - - associated interpreter destroyed - """ - - """ - use - pre-release - release - after - check - """ - - """ - release in: main, interp1 - creator: same, other (incl. interp2) - - use: None,send,recv,send/recv in None,same,other(incl. interp2),same+other(incl. interp2),all - pre-release: None,send,recv,both in None,same,other(incl. interp2),same+other(incl. interp2),all - pre-release forced: None,send,recv,both in None,same,other(incl. interp2),same+other(incl. interp2),all - - release: same - release forced: same - - use after: None,send,recv,send/recv in None,same,other(incl. interp2),same+other(incl. interp2),all - release after: None,send,recv,send/recv in None,same,other(incl. interp2),same+other(incl. interp2),all - check released: send/recv for same/other(incl. interp2) - check closed: send/recv for same/other(incl. 
interp2) - """ - - def test_single_user(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_recv(cid) - interpreters.channel_release(cid, send=True, recv=True) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'eggs') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_multiple_users(self): - cid = interpreters.channel_create() - id1 = interpreters.create() - id2 = interpreters.create() - interpreters.run_string(id1, dedent(f""" - import _xxsubinterpreters as _interpreters - _interpreters.channel_send({cid}, b'spam') - """)) - out = _run_output(id2, dedent(f""" - import _xxsubinterpreters as _interpreters - obj = _interpreters.channel_recv({cid}) - _interpreters.channel_release({cid}) - print(repr(obj)) - """)) - interpreters.run_string(id1, dedent(f""" - _interpreters.channel_release({cid}) - """)) - - self.assertEqual(out.strip(), "b'spam'") - - def test_no_kwargs(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_recv(cid) - interpreters.channel_release(cid) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'eggs') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_multiple_times(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_recv(cid) - interpreters.channel_release(cid, send=True, recv=True) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_release(cid, send=True, recv=True) - - def test_with_unused_items(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'ham') - interpreters.channel_release(cid, send=True, recv=True) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_never_used(self): - cid = interpreters.channel_create() - interpreters.channel_release(cid) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'spam') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_by_unassociated_interp(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interp = interpreters.create() - interpreters.run_string(interp, dedent(f""" - import _xxsubinterpreters as _interpreters - _interpreters.channel_release({cid}) - """)) - obj = interpreters.channel_recv(cid) - interpreters.channel_release(cid) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'eggs') - self.assertEqual(obj, b'spam') - - def test_close_if_unassociated(self): - # XXX Something's not right with this test... - cid = interpreters.channel_create() - interp = interpreters.create() - interpreters.run_string(interp, dedent(f""" - import _xxsubinterpreters as _interpreters - obj = _interpreters.channel_send({cid}, b'spam') - _interpreters.channel_release({cid}) - """)) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - def test_partially(self): - # XXX Is partial close too weird/confusing? 
- cid = interpreters.channel_create() - interpreters.channel_send(cid, None) - interpreters.channel_recv(cid) - interpreters.channel_send(cid, b'spam') - interpreters.channel_release(cid, send=True) - obj = interpreters.channel_recv(cid) - - self.assertEqual(obj, b'spam') - - def test_used_multiple_times_by_single_user(self): - cid = interpreters.channel_create() - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'spam') - interpreters.channel_send(cid, b'spam') - interpreters.channel_recv(cid) - interpreters.channel_release(cid, send=True, recv=True) - - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(cid, b'eggs') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(cid) - - -class ChannelCloseFixture(namedtuple('ChannelCloseFixture', - 'end interp other extra creator')): - - # Set this to True to avoid creating interpreters, e.g. when - # scanning through test permutations without running them. - QUICK = False - - def __new__(cls, end, interp, other, extra, creator): - assert end in ('send', 'recv') - if cls.QUICK: - known = {} - else: - interp = Interpreter.from_raw(interp) - other = Interpreter.from_raw(other) - extra = Interpreter.from_raw(extra) - known = { - interp.name: interp, - other.name: other, - extra.name: extra, - } - if not creator: - creator = 'same' - self = super().__new__(cls, end, interp, other, extra, creator) - self._prepped = set() - self._state = ChannelState() - self._known = known - return self - - @property - def state(self): - return self._state - - @property - def cid(self): - try: - return self._cid - except AttributeError: - creator = self._get_interpreter(self.creator) - self._cid = self._new_channel(creator) - return self._cid - - def get_interpreter(self, interp): - interp = self._get_interpreter(interp) - self._prep_interpreter(interp) - return interp - - def expect_closed_error(self, end=None): - if end is None: - end = self.end - if end == 'recv' and self.state.closed == 'send': - return False - return bool(self.state.closed) - - def prep_interpreter(self, interp): - self._prep_interpreter(interp) - - def record_action(self, action, result): - self._state = result - - def clean_up(self): - clean_up_interpreters() - clean_up_channels() - - # internal methods - - def _new_channel(self, creator): - if creator.name == 'main': - return interpreters.channel_create() - else: - ch = interpreters.channel_create() - run_interp(creator.id, f""" - import _xxsubinterpreters - cid = _xxsubinterpreters.channel_create() - # We purposefully send back an int to avoid tying the - # channel to the other interpreter. 
- _xxsubinterpreters.channel_send({ch}, int(cid)) - del _xxsubinterpreters - """) - self._cid = interpreters.channel_recv(ch) - return self._cid - - def _get_interpreter(self, interp): - if interp in ('same', 'interp'): - return self.interp - elif interp == 'other': - return self.other - elif interp == 'extra': - return self.extra - else: - name = interp - try: - interp = self._known[name] - except KeyError: - interp = self._known[name] = Interpreter(name) - return interp - - def _prep_interpreter(self, interp): - if interp.id in self._prepped: - return - self._prepped.add(interp.id) - if interp.name == 'main': - return - run_interp(interp.id, f""" - import _xxsubinterpreters as interpreters - import test.test__xxsubinterpreters as helpers - ChannelState = helpers.ChannelState - try: - cid - except NameError: - cid = interpreters._channel_id({self.cid}) - """) - - - at unittest.skip('these tests take several hours to run') -class ExhaustiveChannelTests(TestBase): - - """ - - main / interp / other - - run in: current thread / new thread / other thread / different threads - - end / opposite - - force / no force - - used / not used (associated / not associated) - - empty / emptied / never emptied / partly emptied - - closed / not closed - - released / not released - - creator (interp) / other - - associated interpreter not running - - associated interpreter destroyed - - - close after unbound - """ - - """ - use - pre-close - close - after - check - """ - - """ - close in: main, interp1 - creator: same, other, extra - - use: None,send,recv,send/recv in None,same,other,same+other,all - pre-close: None,send,recv in None,same,other,same+other,all - pre-close forced: None,send,recv in None,same,other,same+other,all - - close: same - close forced: same - - use after: None,send,recv,send/recv in None,same,other,extra,same+other,all - close after: None,send,recv,send/recv in None,same,other,extra,same+other,all - check closed: send/recv for same/other(incl. 
interp2) - """ - - def iter_action_sets(self): - # - used / not used (associated / not associated) - # - empty / emptied / never emptied / partly emptied - # - closed / not closed - # - released / not released - - # never used - yield [] - - # only pre-closed (and possible used after) - for closeactions in self._iter_close_action_sets('same', 'other'): - yield closeactions - for postactions in self._iter_post_close_action_sets(): - yield closeactions + postactions - for closeactions in self._iter_close_action_sets('other', 'extra'): - yield closeactions - for postactions in self._iter_post_close_action_sets(): - yield closeactions + postactions - - # used - for useactions in self._iter_use_action_sets('same', 'other'): - yield useactions - for closeactions in self._iter_close_action_sets('same', 'other'): - actions = useactions + closeactions - yield actions - for postactions in self._iter_post_close_action_sets(): - yield actions + postactions - for closeactions in self._iter_close_action_sets('other', 'extra'): - actions = useactions + closeactions - yield actions - for postactions in self._iter_post_close_action_sets(): - yield actions + postactions - for useactions in self._iter_use_action_sets('other', 'extra'): - yield useactions - for closeactions in self._iter_close_action_sets('same', 'other'): - actions = useactions + closeactions - yield actions - for postactions in self._iter_post_close_action_sets(): - yield actions + postactions - for closeactions in self._iter_close_action_sets('other', 'extra'): - actions = useactions + closeactions - yield actions - for postactions in self._iter_post_close_action_sets(): - yield actions + postactions - - def _iter_use_action_sets(self, interp1, interp2): - interps = (interp1, interp2) - - # only recv end used - yield [ - ChannelAction('use', 'recv', interp1), - ] - yield [ - ChannelAction('use', 'recv', interp2), - ] - yield [ - ChannelAction('use', 'recv', interp1), - ChannelAction('use', 'recv', interp2), - ] - - # never emptied - yield [ - ChannelAction('use', 'send', interp1), - ] - yield [ - ChannelAction('use', 'send', interp2), - ] - yield [ - ChannelAction('use', 'send', interp1), - ChannelAction('use', 'send', interp2), - ] - - # partially emptied - for interp1 in interps: - for interp2 in interps: - for interp3 in interps: - yield [ - ChannelAction('use', 'send', interp1), - ChannelAction('use', 'send', interp2), - ChannelAction('use', 'recv', interp3), - ] - - # fully emptied - for interp1 in interps: - for interp2 in interps: - for interp3 in interps: - for interp4 in interps: - yield [ - ChannelAction('use', 'send', interp1), - ChannelAction('use', 'send', interp2), - ChannelAction('use', 'recv', interp3), - ChannelAction('use', 'recv', interp4), - ] - - def _iter_close_action_sets(self, interp1, interp2): - ends = ('recv', 'send') - interps = (interp1, interp2) - for force in (True, False): - op = 'force-close' if force else 'close' - for interp in interps: - for end in ends: - yield [ - ChannelAction(op, end, interp), - ] - for recvop in ('close', 'force-close'): - for sendop in ('close', 'force-close'): - for recv in interps: - for send in interps: - yield [ - ChannelAction(recvop, 'recv', recv), - ChannelAction(sendop, 'send', send), - ] - - def _iter_post_close_action_sets(self): - for interp in ('same', 'extra', 'other'): - yield [ - ChannelAction('use', 'recv', interp), - ] - yield [ - ChannelAction('use', 'send', interp), - ] - - def run_actions(self, fix, actions): - for action in actions: - self.run_action(fix, 
action) - - def run_action(self, fix, action, *, hideclosed=True): - end = action.resolve_end(fix.end) - interp = action.resolve_interp(fix.interp, fix.other, fix.extra) - fix.prep_interpreter(interp) - if interp.name == 'main': - result = run_action( - fix.cid, - action.action, - end, - fix.state, - hideclosed=hideclosed, - ) - fix.record_action(action, result) - else: - _cid = interpreters.channel_create() - run_interp(interp.id, f""" - result = helpers.run_action( - {fix.cid}, - {repr(action.action)}, - {repr(end)}, - {repr(fix.state)}, - hideclosed={hideclosed}, - ) - interpreters.channel_send({_cid}, result.pending.to_bytes(1, 'little')) - interpreters.channel_send({_cid}, b'X' if result.closed else b'') - """) - result = ChannelState( - pending=int.from_bytes(interpreters.channel_recv(_cid), 'little'), - closed=bool(interpreters.channel_recv(_cid)), - ) - fix.record_action(action, result) - - def iter_fixtures(self): - # XXX threads? - interpreters = [ - ('main', 'interp', 'extra'), - ('interp', 'main', 'extra'), - ('interp1', 'interp2', 'extra'), - ('interp1', 'interp2', 'main'), - ] - for interp, other, extra in interpreters: - for creator in ('same', 'other', 'creator'): - for end in ('send', 'recv'): - yield ChannelCloseFixture(end, interp, other, extra, creator) - - def _close(self, fix, *, force): - op = 'force-close' if force else 'close' - close = ChannelAction(op, fix.end, 'same') - if not fix.expect_closed_error(): - self.run_action(fix, close, hideclosed=False) - else: - with self.assertRaises(interpreters.ChannelClosedError): - self.run_action(fix, close, hideclosed=False) - - def _assert_closed_in_interp(self, fix, interp=None): - if interp is None or interp.name == 'main': - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_recv(fix.cid) - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_send(fix.cid, b'spam') - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_close(fix.cid) - with self.assertRaises(interpreters.ChannelClosedError): - interpreters.channel_close(fix.cid, force=True) - else: - run_interp(interp.id, f""" - with helpers.expect_channel_closed(): - interpreters.channel_recv(cid) - """) - run_interp(interp.id, f""" - with helpers.expect_channel_closed(): - interpreters.channel_send(cid, b'spam') - """) - run_interp(interp.id, f""" - with helpers.expect_channel_closed(): - interpreters.channel_close(cid) - """) - run_interp(interp.id, f""" - with helpers.expect_channel_closed(): - interpreters.channel_close(cid, force=True) - """) - - def _assert_closed(self, fix): - self.assertTrue(fix.state.closed) - - for _ in range(fix.state.pending): - interpreters.channel_recv(fix.cid) - self._assert_closed_in_interp(fix) - - for interp in ('same', 'other'): - interp = fix.get_interpreter(interp) - if interp.name == 'main': - continue - self._assert_closed_in_interp(fix, interp) - - interp = fix.get_interpreter('fresh') - self._assert_closed_in_interp(fix, interp) - - def _iter_close_tests(self, verbose=False): - i = 0 - for actions in self.iter_action_sets(): - print() - for fix in self.iter_fixtures(): - i += 1 - if i > 1000: - return - if verbose: - if (i - 1) % 6 == 0: - print() - print(i, fix, '({} actions)'.format(len(actions))) - else: - if (i - 1) % 6 == 0: - print(' ', end='') - print('.', end=''); sys.stdout.flush() - yield i, fix, actions - if verbose: - print('---') - print() - - # This is useful for scanning through the possible tests. 
- def _skim_close_tests(self): - ChannelCloseFixture.QUICK = True - for i, fix, actions in self._iter_close_tests(): - pass - - def test_close(self): - for i, fix, actions in self._iter_close_tests(): - with self.subTest('{} {} {}'.format(i, fix, actions)): - fix.prep_interpreter(fix.interp) - self.run_actions(fix, actions) - - self._close(fix, force=False) - - self._assert_closed(fix) - # XXX Things slow down if we have too many interpreters. - fix.clean_up() - - def test_force_close(self): - for i, fix, actions in self._iter_close_tests(): - with self.subTest('{} {} {}'.format(i, fix, actions)): - fix.prep_interpreter(fix.interp) - self.run_actions(fix, actions) - - self._close(fix, force=True) - - self._assert_closed(fix) - # XXX Things slow down if we have too many interpreters. - fix.clean_up() - - if __name__ == '__main__': unittest.main() diff --git a/Lib/test/test_interpreters.py b/Lib/test/test_interpreters.py index b969ddf33d81..d1bebe471583 100644 --- a/Lib/test/test_interpreters.py +++ b/Lib/test/test_interpreters.py @@ -8,6 +8,7 @@ from test import support from test.support import import_helper _interpreters = import_helper.import_module('_xxsubinterpreters') +_channels = import_helper.import_module('_xxinterpchannels') from test.support import interpreters @@ -533,7 +534,7 @@ class TestRecvChannelAttrs(TestBase): def test_id_type(self): rch, _ = interpreters.create_channel() - self.assertIsInstance(rch.id, _interpreters.ChannelID) + self.assertIsInstance(rch.id, _channels.ChannelID) def test_custom_id(self): rch = interpreters.RecvChannel(1) @@ -558,7 +559,7 @@ class TestSendChannelAttrs(TestBase): def test_id_type(self): _, sch = interpreters.create_channel() - self.assertIsInstance(sch.id, _interpreters.ChannelID) + self.assertIsInstance(sch.id, _channels.ChannelID) def test_custom_id(self): sch = interpreters.SendChannel(1) diff --git a/Modules/Setup b/Modules/Setup index e3a82975b5ff..428be0a1bf8f 100644 --- a/Modules/Setup +++ b/Modules/Setup @@ -280,6 +280,7 @@ PYTHONPATH=$(COREPYTHONPATH) # Testing #_xxsubinterpreters _xxsubinterpretersmodule.c +#_xxinterpchannels _xxinterpchannelsmodule.c #_xxtestfuzz _xxtestfuzz/_xxtestfuzz.c _xxtestfuzz/fuzzer.c #_testbuffer _testbuffer.c #_testinternalcapi _testinternalcapi.c diff --git a/Modules/Setup.stdlib.in b/Modules/Setup.stdlib.in index d64752e8ca96..3fd591c70d49 100644 --- a/Modules/Setup.stdlib.in +++ b/Modules/Setup.stdlib.in @@ -43,6 +43,7 @@ @MODULE__STRUCT_TRUE at _struct _struct.c @MODULE__TYPING_TRUE at _typing _typingmodule.c @MODULE__XXSUBINTERPRETERS_TRUE at _xxsubinterpreters _xxsubinterpretersmodule.c + at MODULE__XXINTERPCHANNELS_TRUE@_xxinterpchannels _xxinterpchannelsmodule.c @MODULE__ZONEINFO_TRUE at _zoneinfo _zoneinfo.c # needs libm diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index 46c025bf5340..f0d6e404f54a 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -1639,6 +1639,58 @@ run_in_subinterp_with_config(PyObject *self, PyObject *args, PyObject *kwargs) return PyLong_FromLong(r); } +static void +_xid_capsule_destructor(PyObject *capsule) +{ + _PyCrossInterpreterData *data = \ + (_PyCrossInterpreterData *)PyCapsule_GetPointer(capsule, NULL); + if (data != NULL) { + assert(_PyCrossInterpreterData_Release(data) == 0); + PyMem_Free(data); + } +} + +static PyObject * +get_crossinterp_data(PyObject *self, PyObject *args) +{ + PyObject *obj = NULL; + if (!PyArg_ParseTuple(args, "O:get_crossinterp_data", &obj)) { + return NULL; + } + + _PyCrossInterpreterData *data = 
PyMem_NEW(_PyCrossInterpreterData, 1); + if (data == NULL) { + PyErr_NoMemory(); + return NULL; + } + if (_PyObject_GetCrossInterpreterData(obj, data) != 0) { + PyMem_Free(data); + return NULL; + } + PyObject *capsule = PyCapsule_New(data, NULL, _xid_capsule_destructor); + if (capsule == NULL) { + assert(_PyCrossInterpreterData_Release(data) == 0); + PyMem_Free(data); + } + return capsule; +} + +static PyObject * +restore_crossinterp_data(PyObject *self, PyObject *args) +{ + PyObject *capsule = NULL; + if (!PyArg_ParseTuple(args, "O:restore_crossinterp_data", &capsule)) { + return NULL; + } + + _PyCrossInterpreterData *data = \ + (_PyCrossInterpreterData *)PyCapsule_GetPointer(capsule, NULL); + if (data == NULL) { + return NULL; + } + return _PyCrossInterpreterData_NewObject(data); +} + static void slot_tp_del(PyObject *self) { @@ -3306,6 +3358,8 @@ static PyMethodDef TestMethods[] = { {"run_in_subinterp_with_config", _PyCFunction_CAST(run_in_subinterp_with_config), METH_VARARGS | METH_KEYWORDS}, + {"get_crossinterp_data", get_crossinterp_data, METH_VARARGS}, + {"restore_crossinterp_data", restore_crossinterp_data, METH_VARARGS}, {"with_tp_del", with_tp_del, METH_VARARGS}, {"create_cfunction", create_cfunction, METH_NOARGS}, {"call_in_temporary_c_thread", call_in_temporary_c_thread, METH_VARARGS, diff --git a/Modules/_xxinterpchannelsmodule.c b/Modules/_xxinterpchannelsmodule.c new file mode 100644 index 000000000000..8601a189e875 --- /dev/null +++ b/Modules/_xxinterpchannelsmodule.c @@ -0,0 +1,2325 @@ + +/* interpreters module */ +/* low-level access to interpreter primitives */ +#ifndef Py_BUILD_CORE_BUILTIN +# define Py_BUILD_CORE_MODULE 1 +#endif + +#include "Python.h" +#include "pycore_pystate.h" // _PyThreadState_GET() +#include "pycore_interpreteridobject.h" + + +#define MODULE_NAME "_xxinterpchannels" + + +static PyInterpreterState * +_get_current_interp(void) +{ + // PyInterpreterState_Get() aborts if lookup fails, so don't need + // to check the result for NULL. + return PyInterpreterState_Get(); +} + +static PyObject * +_get_current_module(void) +{ + PyObject *name = PyUnicode_FromString(MODULE_NAME); + if (name == NULL) { + return NULL; + } + PyObject *mod = PyImport_GetModule(name); + Py_DECREF(name); + if (mod == NULL) { + return NULL; + } + assert(mod != Py_None); + return mod; +} + +static PyObject * +get_module_from_owned_type(PyTypeObject *cls) +{ + assert(cls != NULL); + return _get_current_module(); + // XXX Use the more efficient API now that we use heap types: + //return PyType_GetModule(cls); +} + +static struct PyModuleDef moduledef; + +static PyObject * +get_module_from_type(PyTypeObject *cls) +{ + assert(cls != NULL); + return _get_current_module(); + // XXX Use the more efficient API now that we use heap types: + //return PyType_GetModuleByDef(cls, &moduledef); +} + +static PyObject * +add_new_exception(PyObject *mod, const char *name, PyObject *base) +{ + assert(!PyObject_HasAttrString(mod, name)); + PyObject *exctype = PyErr_NewException(name, base, NULL); + if (exctype == NULL) { + return NULL; + } + int res = PyModule_AddType(mod, (PyTypeObject *)exctype); + if (res < 0) { + Py_DECREF(exctype); + return NULL; + } + return exctype; +} + +#define ADD_NEW_EXCEPTION(MOD, NAME, BASE) \ + add_new_exception(MOD, MODULE_NAME "." 
Py_STRINGIFY(NAME), BASE) + +static PyTypeObject * +add_new_type(PyObject *mod, PyType_Spec *spec, crossinterpdatafunc shared) +{ + PyTypeObject *cls = (PyTypeObject *)PyType_FromMetaclass( + NULL, mod, spec, NULL); + if (cls == NULL) { + return NULL; + } + if (PyModule_AddType(mod, cls) < 0) { + Py_DECREF(cls); + return NULL; + } + if (shared != NULL) { + if (_PyCrossInterpreterData_RegisterClass(cls, shared)) { + Py_DECREF(cls); + return NULL; + } + } + return cls; +} + +static int +_release_xid_data(_PyCrossInterpreterData *data, int ignoreexc) +{ + PyObject *exctype, *excval, *exctb; + if (ignoreexc) { + PyErr_Fetch(&exctype, &excval, &exctb); + } + int res = _PyCrossInterpreterData_Release(data); + if (res < 0) { + // XXX Fix this! + /* The owning interpreter is already destroyed. + * Ideally, this shouldn't ever happen. When an interpreter is + * about to be destroyed, we should clear out all of its objects + * from every channel associated with that interpreter. + * For now we hack around that to resolve refleaks, by decref'ing + * the released object here, even if its the wrong interpreter. + * The owning interpreter has already been destroyed + * so we should be okay, especially since the currently + * shareable types are all very basic, with no GC. + * That said, it becomes much messier once interpreters + * no longer share a GIL, so this needs to be fixed before then. */ + _PyCrossInterpreterData_Clear(NULL, data); + if (ignoreexc) { + // XXX Emit a warning? + PyErr_Clear(); + } + } + if (ignoreexc) { + PyErr_Restore(exctype, excval, exctb); + } + return res; +} + + +/* module state *************************************************************/ + +typedef struct { + /* heap types */ + PyTypeObject *ChannelIDType; + + /* exceptions */ + PyObject *ChannelError; + PyObject *ChannelNotFoundError; + PyObject *ChannelClosedError; + PyObject *ChannelEmptyError; + PyObject *ChannelNotEmptyError; +} module_state; + +static inline module_state * +get_module_state(PyObject *mod) +{ + assert(mod != NULL); + module_state *state = PyModule_GetState(mod); + assert(state != NULL); + return state; +} + +static int +traverse_module_state(module_state *state, visitproc visit, void *arg) +{ + /* heap types */ + Py_VISIT(state->ChannelIDType); + + /* exceptions */ + Py_VISIT(state->ChannelError); + Py_VISIT(state->ChannelNotFoundError); + Py_VISIT(state->ChannelClosedError); + Py_VISIT(state->ChannelEmptyError); + Py_VISIT(state->ChannelNotEmptyError); + + return 0; +} + +static int +clear_module_state(module_state *state) +{ + /* heap types */ + (void)_PyCrossInterpreterData_UnregisterClass(state->ChannelIDType); + Py_CLEAR(state->ChannelIDType); + + /* exceptions */ + Py_CLEAR(state->ChannelError); + Py_CLEAR(state->ChannelNotFoundError); + Py_CLEAR(state->ChannelClosedError); + Py_CLEAR(state->ChannelEmptyError); + Py_CLEAR(state->ChannelNotEmptyError); + + return 0; +} + + +/* channel-specific code ****************************************************/ + +#define CHANNEL_SEND 1 +#define CHANNEL_BOTH 0 +#define CHANNEL_RECV -1 + +/* channel errors */ + +#define ERR_CHANNEL_NOT_FOUND -2 +#define ERR_CHANNEL_CLOSED -3 +#define ERR_CHANNEL_INTERP_CLOSED -4 +#define ERR_CHANNEL_EMPTY -5 +#define ERR_CHANNEL_NOT_EMPTY -6 +#define ERR_CHANNEL_MUTEX_INIT -7 +#define ERR_CHANNELS_MUTEX_INIT -8 +#define ERR_NO_NEXT_CHANNEL_ID -9 + +static int +exceptions_init(PyObject *mod) +{ + module_state *state = get_module_state(mod); + if (state == NULL) { + return -1; + } + +#define ADD(NAME, BASE) \ + do { \ + 
assert(state->NAME == NULL); \ + state->NAME = ADD_NEW_EXCEPTION(mod, NAME, BASE); \ + if (state->NAME == NULL) { \ + return -1; \ + } \ + } while (0) + + // A channel-related operation failed. + ADD(ChannelError, PyExc_RuntimeError); + // An operation tried to use a channel that doesn't exist. + ADD(ChannelNotFoundError, state->ChannelError); + // An operation tried to use a closed channel. + ADD(ChannelClosedError, state->ChannelError); + // An operation tried to pop from an empty channel. + ADD(ChannelEmptyError, state->ChannelError); + // An operation tried to close a non-empty channel. + ADD(ChannelNotEmptyError, state->ChannelError); +#undef ADD + + return 0; +} + +static int +handle_channel_error(int err, PyObject *mod, int64_t cid) +{ + if (err == 0) { + assert(!PyErr_Occurred()); + return 0; + } + assert(err < 0); + module_state *state = get_module_state(mod); + assert(state != NULL); + if (err == ERR_CHANNEL_NOT_FOUND) { + PyErr_Format(state->ChannelNotFoundError, + "channel %" PRId64 " not found", cid); + } + else if (err == ERR_CHANNEL_CLOSED) { + PyErr_Format(state->ChannelClosedError, + "channel %" PRId64 " is closed", cid); + } + else if (err == ERR_CHANNEL_INTERP_CLOSED) { + PyErr_Format(state->ChannelClosedError, + "channel %" PRId64 " is already closed", cid); + } + else if (err == ERR_CHANNEL_EMPTY) { + PyErr_Format(state->ChannelEmptyError, + "channel %" PRId64 " is empty", cid); + } + else if (err == ERR_CHANNEL_NOT_EMPTY) { + PyErr_Format(state->ChannelNotEmptyError, + "channel %" PRId64 " may not be closed " + "if not empty (try force=True)", + cid); + } + else if (err == ERR_CHANNEL_MUTEX_INIT) { + PyErr_SetString(state->ChannelError, + "can't initialize mutex for new channel"); + } + else if (err == ERR_CHANNELS_MUTEX_INIT) { + PyErr_SetString(state->ChannelError, + "can't initialize mutex for channel management"); + } + else if (err == ERR_NO_NEXT_CHANNEL_ID) { + PyErr_SetString(state->ChannelError, + "failed to get a channel ID"); + } + else { + assert(PyErr_Occurred()); + } + return 1; +} + +/* the channel queue */ + +struct _channelitem; + +typedef struct _channelitem { + _PyCrossInterpreterData *data; + struct _channelitem *next; +} _channelitem; + +static _channelitem * +_channelitem_new(void) +{ + _channelitem *item = PyMem_NEW(_channelitem, 1); + if (item == NULL) { + PyErr_NoMemory(); + return NULL; + } + item->data = NULL; + item->next = NULL; + return item; +} + +static void +_channelitem_clear(_channelitem *item) +{ + if (item->data != NULL) { + (void)_release_xid_data(item->data, 1); + PyMem_Free(item->data); + item->data = NULL; + } + item->next = NULL; +} + +static void +_channelitem_free(_channelitem *item) +{ + _channelitem_clear(item); + PyMem_Free(item); +} + +static void +_channelitem_free_all(_channelitem *item) +{ + while (item != NULL) { + _channelitem *last = item; + item = item->next; + _channelitem_free(last); + } +} + +static _PyCrossInterpreterData * +_channelitem_popped(_channelitem *item) +{ + _PyCrossInterpreterData *data = item->data; + item->data = NULL; + _channelitem_free(item); + return data; +} + +typedef struct _channelqueue { + int64_t count; + _channelitem *first; + _channelitem *last; +} _channelqueue; + +static _channelqueue * +_channelqueue_new(void) +{ + _channelqueue *queue = PyMem_NEW(_channelqueue, 1); + if (queue == NULL) { + PyErr_NoMemory(); + return NULL; + } + queue->count = 0; + queue->first = NULL; + queue->last = NULL; + return queue; +} + +static void +_channelqueue_clear(_channelqueue *queue) +{ + 
_channelitem_free_all(queue->first); + queue->count = 0; + queue->first = NULL; + queue->last = NULL; +} + +static void +_channelqueue_free(_channelqueue *queue) +{ + _channelqueue_clear(queue); + PyMem_Free(queue); +} + +static int +_channelqueue_put(_channelqueue *queue, _PyCrossInterpreterData *data) +{ + _channelitem *item = _channelitem_new(); + if (item == NULL) { + return -1; + } + item->data = data; + + queue->count += 1; + if (queue->first == NULL) { + queue->first = item; + } + else { + queue->last->next = item; + } + queue->last = item; + return 0; +} + +static _PyCrossInterpreterData * +_channelqueue_get(_channelqueue *queue) +{ + _channelitem *item = queue->first; + if (item == NULL) { + return NULL; + } + queue->first = item->next; + if (queue->last == item) { + queue->last = NULL; + } + queue->count -= 1; + + return _channelitem_popped(item); +} + +/* channel-interpreter associations */ + +struct _channelend; + +typedef struct _channelend { + struct _channelend *next; + int64_t interp; + int open; +} _channelend; + +static _channelend * +_channelend_new(int64_t interp) +{ + _channelend *end = PyMem_NEW(_channelend, 1); + if (end == NULL) { + PyErr_NoMemory(); + return NULL; + } + end->next = NULL; + end->interp = interp; + end->open = 1; + return end; +} + +static void +_channelend_free(_channelend *end) +{ + PyMem_Free(end); +} + +static void +_channelend_free_all(_channelend *end) +{ + while (end != NULL) { + _channelend *last = end; + end = end->next; + _channelend_free(last); + } +} + +static _channelend * +_channelend_find(_channelend *first, int64_t interp, _channelend **pprev) +{ + _channelend *prev = NULL; + _channelend *end = first; + while (end != NULL) { + if (end->interp == interp) { + break; + } + prev = end; + end = end->next; + } + if (pprev != NULL) { + *pprev = prev; + } + return end; +} + +typedef struct _channelassociations { + // Note that the list entries are never removed for interpreter + // for which the channel is closed. This should not be a problem in + // practice. Also, a channel isn't automatically closed when an + // interpreter is destroyed. + int64_t numsendopen; + int64_t numrecvopen; + _channelend *send; + _channelend *recv; +} _channelends; + +static _channelends * +_channelends_new(void) +{ + _channelends *ends = PyMem_NEW(_channelends, 1); + if (ends== NULL) { + return NULL; + } + ends->numsendopen = 0; + ends->numrecvopen = 0; + ends->send = NULL; + ends->recv = NULL; + return ends; +} + +static void +_channelends_clear(_channelends *ends) +{ + _channelend_free_all(ends->send); + ends->send = NULL; + ends->numsendopen = 0; + + _channelend_free_all(ends->recv); + ends->recv = NULL; + ends->numrecvopen = 0; +} + +static void +_channelends_free(_channelends *ends) +{ + _channelends_clear(ends); + PyMem_Free(ends); +} + +static _channelend * +_channelends_add(_channelends *ends, _channelend *prev, int64_t interp, + int send) +{ + _channelend *end = _channelend_new(interp); + if (end == NULL) { + return NULL; + } + + if (prev == NULL) { + if (send) { + ends->send = end; + } + else { + ends->recv = end; + } + } + else { + prev->next = end; + } + if (send) { + ends->numsendopen += 1; + } + else { + ends->numrecvopen += 1; + } + return end; +} + +static int +_channelends_associate(_channelends *ends, int64_t interp, int send) +{ + _channelend *prev; + _channelend *end = _channelend_find(send ? 
ends->send : ends->recv, + interp, &prev); + if (end != NULL) { + if (!end->open) { + return ERR_CHANNEL_CLOSED; + } + // already associated + return 0; + } + if (_channelends_add(ends, prev, interp, send) == NULL) { + return -1; + } + return 0; +} + +static int +_channelends_is_open(_channelends *ends) +{ + if (ends->numsendopen != 0 || ends->numrecvopen != 0) { + return 1; + } + if (ends->send == NULL && ends->recv == NULL) { + return 1; + } + return 0; +} + +static void +_channelends_close_end(_channelends *ends, _channelend *end, int send) +{ + end->open = 0; + if (send) { + ends->numsendopen -= 1; + } + else { + ends->numrecvopen -= 1; + } +} + +static int +_channelends_close_interpreter(_channelends *ends, int64_t interp, int which) +{ + _channelend *prev; + _channelend *end; + if (which >= 0) { // send/both + end = _channelend_find(ends->send, interp, &prev); + if (end == NULL) { + // never associated so add it + end = _channelends_add(ends, prev, interp, 1); + if (end == NULL) { + return -1; + } + } + _channelends_close_end(ends, end, 1); + } + if (which <= 0) { // recv/both + end = _channelend_find(ends->recv, interp, &prev); + if (end == NULL) { + // never associated so add it + end = _channelends_add(ends, prev, interp, 0); + if (end == NULL) { + return -1; + } + } + _channelends_close_end(ends, end, 0); + } + return 0; +} + +static void +_channelends_close_all(_channelends *ends, int which, int force) +{ + // XXX Handle the ends. + // XXX Handle force is True. + + // Ensure all the "send"-associated interpreters are closed. + _channelend *end; + for (end = ends->send; end != NULL; end = end->next) { + _channelends_close_end(ends, end, 1); + } + + // Ensure all the "recv"-associated interpreters are closed. + for (end = ends->recv; end != NULL; end = end->next) { + _channelends_close_end(ends, end, 0); + } +} + +/* channels */ + +struct _channel; +struct _channel_closing; +static void _channel_clear_closing(struct _channel *); +static void _channel_finish_closing(struct _channel *); + +typedef struct _channel { + PyThread_type_lock mutex; + _channelqueue *queue; + _channelends *ends; + int open; + struct _channel_closing *closing; +} _PyChannelState; + +static _PyChannelState * +_channel_new(PyThread_type_lock mutex) +{ + _PyChannelState *chan = PyMem_NEW(_PyChannelState, 1); + if (chan == NULL) { + return NULL; + } + chan->mutex = mutex; + chan->queue = _channelqueue_new(); + if (chan->queue == NULL) { + PyMem_Free(chan); + return NULL; + } + chan->ends = _channelends_new(); + if (chan->ends == NULL) { + _channelqueue_free(chan->queue); + PyMem_Free(chan); + return NULL; + } + chan->open = 1; + chan->closing = NULL; + return chan; +} + +static void +_channel_free(_PyChannelState *chan) +{ + _channel_clear_closing(chan); + PyThread_acquire_lock(chan->mutex, WAIT_LOCK); + _channelqueue_free(chan->queue); + _channelends_free(chan->ends); + PyThread_release_lock(chan->mutex); + + PyThread_free_lock(chan->mutex); + PyMem_Free(chan); +} + +static int +_channel_add(_PyChannelState *chan, int64_t interp, + _PyCrossInterpreterData *data) +{ + int res = -1; + PyThread_acquire_lock(chan->mutex, WAIT_LOCK); + + if (!chan->open) { + res = ERR_CHANNEL_CLOSED; + goto done; + } + if (_channelends_associate(chan->ends, interp, 1) != 0) { + res = ERR_CHANNEL_INTERP_CLOSED; + goto done; + } + + if (_channelqueue_put(chan->queue, data) != 0) { + goto done; + } + + res = 0; +done: + PyThread_release_lock(chan->mutex); + return res; +} + +static int +_channel_next(_PyChannelState *chan, int64_t 
interp, + _PyCrossInterpreterData **res) +{ + int err = 0; + PyThread_acquire_lock(chan->mutex, WAIT_LOCK); + + if (!chan->open) { + err = ERR_CHANNEL_CLOSED; + goto done; + } + if (_channelends_associate(chan->ends, interp, 0) != 0) { + err = ERR_CHANNEL_INTERP_CLOSED; + goto done; + } + + _PyCrossInterpreterData *data = _channelqueue_get(chan->queue); + if (data == NULL && !PyErr_Occurred() && chan->closing != NULL) { + chan->open = 0; + } + *res = data; + +done: + PyThread_release_lock(chan->mutex); + if (chan->queue->count == 0) { + _channel_finish_closing(chan); + } + return err; +} + +static int +_channel_close_interpreter(_PyChannelState *chan, int64_t interp, int end) +{ + PyThread_acquire_lock(chan->mutex, WAIT_LOCK); + + int res = -1; + if (!chan->open) { + res = ERR_CHANNEL_CLOSED; + goto done; + } + + if (_channelends_close_interpreter(chan->ends, interp, end) != 0) { + goto done; + } + chan->open = _channelends_is_open(chan->ends); + + res = 0; +done: + PyThread_release_lock(chan->mutex); + return res; +} + +static int +_channel_close_all(_PyChannelState *chan, int end, int force) +{ + int res = -1; + PyThread_acquire_lock(chan->mutex, WAIT_LOCK); + + if (!chan->open) { + res = ERR_CHANNEL_CLOSED; + goto done; + } + + if (!force && chan->queue->count > 0) { + res = ERR_CHANNEL_NOT_EMPTY; + goto done; + } + + chan->open = 0; + + // We *could* also just leave these in place, since we've marked + // the channel as closed already. + _channelends_close_all(chan->ends, end, force); + + res = 0; +done: + PyThread_release_lock(chan->mutex); + return res; +} + +/* the set of channels */ + +struct _channelref; + +typedef struct _channelref { + int64_t id; + _PyChannelState *chan; + struct _channelref *next; + Py_ssize_t objcount; +} _channelref; + +static _channelref * +_channelref_new(int64_t id, _PyChannelState *chan) +{ + _channelref *ref = PyMem_NEW(_channelref, 1); + if (ref == NULL) { + return NULL; + } + ref->id = id; + ref->chan = chan; + ref->next = NULL; + ref->objcount = 0; + return ref; +} + +//static void +//_channelref_clear(_channelref *ref) +//{ +// ref->id = -1; +// ref->chan = NULL; +// ref->next = NULL; +// ref->objcount = 0; +//} + +static void +_channelref_free(_channelref *ref) +{ + if (ref->chan != NULL) { + _channel_clear_closing(ref->chan); + } + //_channelref_clear(ref); + PyMem_Free(ref); +} + +static _channelref * +_channelref_find(_channelref *first, int64_t id, _channelref **pprev) +{ + _channelref *prev = NULL; + _channelref *ref = first; + while (ref != NULL) { + if (ref->id == id) { + break; + } + prev = ref; + ref = ref->next; + } + if (pprev != NULL) { + *pprev = prev; + } + return ref; +} + +typedef struct _channels { + PyThread_type_lock mutex; + _channelref *head; + int64_t numopen; + int64_t next_id; +} _channels; + +static void +_channels_init(_channels *channels, PyThread_type_lock mutex) +{ + channels->mutex = mutex; + channels->head = NULL; + channels->numopen = 0; + channels->next_id = 0; +} + +static void +_channels_fini(_channels *channels) +{ + assert(channels->numopen == 0); + assert(channels->head == NULL); + if (channels->mutex != NULL) { + PyThread_free_lock(channels->mutex); + channels->mutex = NULL; + } +} + +static int64_t +_channels_next_id(_channels *channels) // needs lock +{ + int64_t id = channels->next_id; + if (id < 0) { + /* overflow */ + return -1; + } + channels->next_id += 1; + return id; +} + +static int +_channels_lookup(_channels *channels, int64_t id, PyThread_type_lock *pmutex, + _PyChannelState **res) +{ + int err = 
-1; + _PyChannelState *chan = NULL; + PyThread_acquire_lock(channels->mutex, WAIT_LOCK); + if (pmutex != NULL) { + *pmutex = NULL; + } + + _channelref *ref = _channelref_find(channels->head, id, NULL); + if (ref == NULL) { + err = ERR_CHANNEL_NOT_FOUND; + goto done; + } + if (ref->chan == NULL || !ref->chan->open) { + err = ERR_CHANNEL_CLOSED; + goto done; + } + + if (pmutex != NULL) { + // The mutex will be closed by the caller. + *pmutex = channels->mutex; + } + + chan = ref->chan; + err = 0; + +done: + if (pmutex == NULL || *pmutex == NULL) { + PyThread_release_lock(channels->mutex); + } + *res = chan; + return err; +} + +static int64_t +_channels_add(_channels *channels, _PyChannelState *chan) +{ + int64_t cid = -1; + PyThread_acquire_lock(channels->mutex, WAIT_LOCK); + + // Create a new ref. + int64_t id = _channels_next_id(channels); + if (id < 0) { + cid = ERR_NO_NEXT_CHANNEL_ID; + goto done; + } + _channelref *ref = _channelref_new(id, chan); + if (ref == NULL) { + goto done; + } + + // Add it to the list. + // We assume that the channel is a new one (not already in the list). + ref->next = channels->head; + channels->head = ref; + channels->numopen += 1; + + cid = id; +done: + PyThread_release_lock(channels->mutex); + return cid; +} + +/* forward */ +static int _channel_set_closing(struct _channelref *, PyThread_type_lock); + +static int +_channels_close(_channels *channels, int64_t cid, _PyChannelState **pchan, + int end, int force) +{ + int res = -1; + PyThread_acquire_lock(channels->mutex, WAIT_LOCK); + if (pchan != NULL) { + *pchan = NULL; + } + + _channelref *ref = _channelref_find(channels->head, cid, NULL); + if (ref == NULL) { + res = ERR_CHANNEL_NOT_FOUND; + goto done; + } + + if (ref->chan == NULL) { + res = ERR_CHANNEL_CLOSED; + goto done; + } + else if (!force && end == CHANNEL_SEND && ref->chan->closing != NULL) { + res = ERR_CHANNEL_CLOSED; + goto done; + } + else { + int err = _channel_close_all(ref->chan, end, force); + if (err != 0) { + if (end == CHANNEL_SEND && err == ERR_CHANNEL_NOT_EMPTY) { + if (ref->chan->closing != NULL) { + res = ERR_CHANNEL_CLOSED; + goto done; + } + // Mark the channel as closing and return. The channel + // will be cleaned up in _channel_next(). 
+ PyErr_Clear(); + int err = _channel_set_closing(ref, channels->mutex); + if (err != 0) { + res = err; + goto done; + } + if (pchan != NULL) { + *pchan = ref->chan; + } + res = 0; + } + else { + res = err; + } + goto done; + } + if (pchan != NULL) { + *pchan = ref->chan; + } + else { + _channel_free(ref->chan); + } + ref->chan = NULL; + } + + res = 0; +done: + PyThread_release_lock(channels->mutex); + return res; +} + +static void +_channels_remove_ref(_channels *channels, _channelref *ref, _channelref *prev, + _PyChannelState **pchan) +{ + if (ref == channels->head) { + channels->head = ref->next; + } + else { + prev->next = ref->next; + } + channels->numopen -= 1; + + if (pchan != NULL) { + *pchan = ref->chan; + } + _channelref_free(ref); +} + +static int +_channels_remove(_channels *channels, int64_t id, _PyChannelState **pchan) +{ + int res = -1; + PyThread_acquire_lock(channels->mutex, WAIT_LOCK); + + if (pchan != NULL) { + *pchan = NULL; + } + + _channelref *prev = NULL; + _channelref *ref = _channelref_find(channels->head, id, &prev); + if (ref == NULL) { + res = ERR_CHANNEL_NOT_FOUND; + goto done; + } + + _channels_remove_ref(channels, ref, prev, pchan); + + res = 0; +done: + PyThread_release_lock(channels->mutex); + return res; +} + +static int +_channels_add_id_object(_channels *channels, int64_t id) +{ + int res = -1; + PyThread_acquire_lock(channels->mutex, WAIT_LOCK); + + _channelref *ref = _channelref_find(channels->head, id, NULL); + if (ref == NULL) { + res = ERR_CHANNEL_NOT_FOUND; + goto done; + } + ref->objcount += 1; + + res = 0; +done: + PyThread_release_lock(channels->mutex); + return res; +} + +static void +_channels_drop_id_object(_channels *channels, int64_t id) +{ + PyThread_acquire_lock(channels->mutex, WAIT_LOCK); + + _channelref *prev = NULL; + _channelref *ref = _channelref_find(channels->head, id, &prev); + if (ref == NULL) { + // Already destroyed. + goto done; + } + ref->objcount -= 1; + + // Destroy if no longer used. 
+ if (ref->objcount == 0) { + _PyChannelState *chan = NULL; + _channels_remove_ref(channels, ref, prev, &chan); + if (chan != NULL) { + _channel_free(chan); + } + } + +done: + PyThread_release_lock(channels->mutex); +} + +static int64_t * +_channels_list_all(_channels *channels, int64_t *count) +{ + int64_t *cids = NULL; + PyThread_acquire_lock(channels->mutex, WAIT_LOCK); + int64_t *ids = PyMem_NEW(int64_t, (Py_ssize_t)(channels->numopen)); + if (ids == NULL) { + goto done; + } + _channelref *ref = channels->head; + for (int64_t i=0; ref != NULL; ref = ref->next, i++) { + ids[i] = ref->id; + } + *count = channels->numopen; + + cids = ids; +done: + PyThread_release_lock(channels->mutex); + return cids; +} + +/* support for closing non-empty channels */ + +struct _channel_closing { + struct _channelref *ref; +}; + +static int +_channel_set_closing(struct _channelref *ref, PyThread_type_lock mutex) { + struct _channel *chan = ref->chan; + if (chan == NULL) { + // already closed + return 0; + } + int res = -1; + PyThread_acquire_lock(chan->mutex, WAIT_LOCK); + if (chan->closing != NULL) { + res = ERR_CHANNEL_CLOSED; + goto done; + } + chan->closing = PyMem_NEW(struct _channel_closing, 1); + if (chan->closing == NULL) { + goto done; + } + chan->closing->ref = ref; + + res = 0; +done: + PyThread_release_lock(chan->mutex); + return res; +} + +static void +_channel_clear_closing(struct _channel *chan) { + PyThread_acquire_lock(chan->mutex, WAIT_LOCK); + if (chan->closing != NULL) { + PyMem_Free(chan->closing); + chan->closing = NULL; + } + PyThread_release_lock(chan->mutex); +} + +static void +_channel_finish_closing(struct _channel *chan) { + struct _channel_closing *closing = chan->closing; + if (closing == NULL) { + return; + } + _channelref *ref = closing->ref; + _channel_clear_closing(chan); + // Do the things that would have been done in _channels_close(). + ref->chan = NULL; + _channel_free(chan); +} + +/* "high"-level channel-related functions */ + +static int64_t +_channel_create(_channels *channels) +{ + PyThread_type_lock mutex = PyThread_allocate_lock(); + if (mutex == NULL) { + return ERR_CHANNEL_MUTEX_INIT; + } + _PyChannelState *chan = _channel_new(mutex); + if (chan == NULL) { + PyThread_free_lock(mutex); + return -1; + } + int64_t id = _channels_add(channels, chan); + if (id < 0) { + _channel_free(chan); + } + return id; +} + +static int +_channel_destroy(_channels *channels, int64_t id) +{ + _PyChannelState *chan = NULL; + int err = _channels_remove(channels, id, &chan); + if (err != 0) { + return err; + } + if (chan != NULL) { + _channel_free(chan); + } + return 0; +} + +static int +_channel_send(_channels *channels, int64_t id, PyObject *obj) +{ + PyInterpreterState *interp = _get_current_interp(); + if (interp == NULL) { + return -1; + } + + // Look up the channel. + PyThread_type_lock mutex = NULL; + _PyChannelState *chan = NULL; + int err = _channels_lookup(channels, id, &mutex, &chan); + if (err != 0) { + return err; + } + assert(chan != NULL); + // Past this point we are responsible for releasing the mutex. + + if (chan->closing != NULL) { + PyThread_release_lock(mutex); + return ERR_CHANNEL_CLOSED; + } + + // Convert the object to cross-interpreter data. + _PyCrossInterpreterData *data = PyMem_NEW(_PyCrossInterpreterData, 1); + if (data == NULL) { + PyThread_release_lock(mutex); + return -1; + } + if (_PyObject_GetCrossInterpreterData(obj, data) != 0) { + PyThread_release_lock(mutex); + PyMem_Free(data); + return -1; + } + + // Add the data to the channel. 
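For orientation: _channel_send() wraps the object in _PyCrossInterpreterData and appends it to the channel's queue, while _channel_recv() pops the oldest item and rebuilds an equivalent object in the receiving interpreter. A minimal single-interpreter round trip through the Python-level API might look like the sketch below; it assumes the module builds and imports as _xxinterpchannels.

    import _xxinterpchannels as channels

    cid = channels.create()          # new channel; the returned ChannelID behaves like an int
    channels.send(cid, b'spam')      # queue the object's data
    obj = channels.recv(cid)         # b'spam', rebuilt from the queued data
    # recv() raises ChannelEmptyError once the queue is empty,
    # unless a default is supplied:
    obj = channels.recv(cid, None)
    channels.close(cid, force=True)  # close both ends for all interpreters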
+ int res = _channel_add(chan, PyInterpreterState_GetID(interp), data); + PyThread_release_lock(mutex); + if (res != 0) { + // We may chain an exception here: + (void)_release_xid_data(data, 0); + PyMem_Free(data); + return res; + } + + return 0; +} + +static int +_channel_recv(_channels *channels, int64_t id, PyObject **res) +{ + int err; + *res = NULL; + + PyInterpreterState *interp = _get_current_interp(); + if (interp == NULL) { + // XXX Is this always an error? + if (PyErr_Occurred()) { + return -1; + } + return 0; + } + + // Look up the channel. + PyThread_type_lock mutex = NULL; + _PyChannelState *chan = NULL; + err = _channels_lookup(channels, id, &mutex, &chan); + if (err != 0) { + return err; + } + assert(chan != NULL); + // Past this point we are responsible for releasing the mutex. + + // Pop off the next item from the channel. + _PyCrossInterpreterData *data = NULL; + err = _channel_next(chan, PyInterpreterState_GetID(interp), &data); + PyThread_release_lock(mutex); + if (err != 0) { + return err; + } + else if (data == NULL) { + assert(!PyErr_Occurred()); + return 0; + } + + // Convert the data back to an object. + PyObject *obj = _PyCrossInterpreterData_NewObject(data); + if (obj == NULL) { + assert(PyErr_Occurred()); + (void)_release_xid_data(data, 1); + PyMem_Free(data); + return -1; + } + int release_res = _release_xid_data(data, 0); + PyMem_Free(data); + if (release_res < 0) { + // The source interpreter has been destroyed already. + assert(PyErr_Occurred()); + Py_DECREF(obj); + return -1; + } + + *res = obj; + return 0; +} + +static int +_channel_drop(_channels *channels, int64_t id, int send, int recv) +{ + PyInterpreterState *interp = _get_current_interp(); + if (interp == NULL) { + return -1; + } + + // Look up the channel. + PyThread_type_lock mutex = NULL; + _PyChannelState *chan = NULL; + int err = _channels_lookup(channels, id, &mutex, &chan); + if (err != 0) { + return err; + } + // Past this point we are responsible for releasing the mutex. + + // Close one or both of the two ends. + int res = _channel_close_interpreter(chan, PyInterpreterState_GetID(interp), send-recv); + PyThread_release_lock(mutex); + return res; +} + +static int +_channel_close(_channels *channels, int64_t id, int end, int force) +{ + return _channels_close(channels, id, NULL, end, force); +} + +static int +_channel_is_associated(_channels *channels, int64_t cid, int64_t interp, + int send) +{ + _PyChannelState *chan = NULL; + int err = _channels_lookup(channels, cid, NULL, &chan); + if (err != 0) { + return err; + } + else if (send && chan->closing != NULL) { + return ERR_CHANNEL_CLOSED; + } + + _channelend *end = _channelend_find(send ? 
chan->ends->send : chan->ends->recv, + interp, NULL); + + return (end != NULL && end->open); +} + +/* ChannelID class */ + +typedef struct channelid { + PyObject_HEAD + int64_t id; + int end; + int resolve; + _channels *channels; +} channelid; + +struct channel_id_converter_data { + PyObject *module; + int64_t cid; +}; + +static int +channel_id_converter(PyObject *arg, void *ptr) +{ + int64_t cid; + struct channel_id_converter_data *data = ptr; + module_state *state = get_module_state(data->module); + assert(state != NULL); + if (PyObject_TypeCheck(arg, state->ChannelIDType)) { + cid = ((channelid *)arg)->id; + } + else if (PyIndex_Check(arg)) { + cid = PyLong_AsLongLong(arg); + if (cid == -1 && PyErr_Occurred()) { + return 0; + } + if (cid < 0) { + PyErr_Format(PyExc_ValueError, + "channel ID must be a non-negative int, got %R", arg); + return 0; + } + } + else { + PyErr_Format(PyExc_TypeError, + "channel ID must be an int, got %.100s", + Py_TYPE(arg)->tp_name); + return 0; + } + data->cid = cid; + return 1; +} + +static int +newchannelid(PyTypeObject *cls, int64_t cid, int end, _channels *channels, + int force, int resolve, channelid **res) +{ + *res = NULL; + + channelid *self = PyObject_New(channelid, cls); + if (self == NULL) { + return -1; + } + self->id = cid; + self->end = end; + self->resolve = resolve; + self->channels = channels; + + int err = _channels_add_id_object(channels, cid); + if (err != 0) { + if (force && err == ERR_CHANNEL_NOT_FOUND) { + assert(!PyErr_Occurred()); + } + else { + Py_DECREF((PyObject *)self); + return err; + } + } + + *res = self; + return 0; +} + +static _channels * _global_channels(void); + +static PyObject * +_channelid_new(PyObject *mod, PyTypeObject *cls, + PyObject *args, PyObject *kwds) +{ + static char *kwlist[] = {"id", "send", "recv", "force", "_resolve", NULL}; + int64_t cid; + struct channel_id_converter_data cid_data = { + .module = mod, + }; + int send = -1; + int recv = -1; + int force = 0; + int resolve = 0; + if (!PyArg_ParseTupleAndKeywords(args, kwds, + "O&|$pppp:ChannelID.__new__", kwlist, + channel_id_converter, &cid_data, + &send, &recv, &force, &resolve)) { + return NULL; + } + cid = cid_data.cid; + + // Handle "send" and "recv". + if (send == 0 && recv == 0) { + PyErr_SetString(PyExc_ValueError, + "'send' and 'recv' cannot both be False"); + return NULL; + } + + int end = 0; + if (send == 1) { + if (recv == 0 || recv == -1) { + end = CHANNEL_SEND; + } + } + else if (recv == 1) { + end = CHANNEL_RECV; + } + + PyObject *id = NULL; + int err = newchannelid(cls, cid, end, _global_channels(), + force, resolve, + (channelid **)&id); + if (handle_channel_error(err, mod, cid)) { + assert(id == NULL); + return NULL; + } + assert(id != NULL); + return id; +} + +static void +channelid_dealloc(PyObject *self) +{ + int64_t cid = ((channelid *)self)->id; + _channels *channels = ((channelid *)self)->channels; + + PyTypeObject *tp = Py_TYPE(self); + tp->tp_free(self); + /* "Instances of heap-allocated types hold a reference to their type." + * See: https://docs.python.org/3.11/howto/isolating-extensions.html#garbage-collection-protocol + * See: https://docs.python.org/3.11/c-api/typeobj.html#c.PyTypeObject.tp_traverse + */ + // XXX Why don't we implement Py_TPFLAGS_HAVE_GC, e.g. Py_tp_traverse, + // like we do for _abc._abc_data? 
+ Py_DECREF(tp); + + _channels_drop_id_object(channels, cid); +} + +static PyObject * +channelid_repr(PyObject *self) +{ + PyTypeObject *type = Py_TYPE(self); + const char *name = _PyType_Name(type); + + channelid *cid = (channelid *)self; + const char *fmt; + if (cid->end == CHANNEL_SEND) { + fmt = "%s(%" PRId64 ", send=True)"; + } + else if (cid->end == CHANNEL_RECV) { + fmt = "%s(%" PRId64 ", recv=True)"; + } + else { + fmt = "%s(%" PRId64 ")"; + } + return PyUnicode_FromFormat(fmt, name, cid->id); +} + +static PyObject * +channelid_str(PyObject *self) +{ + channelid *cid = (channelid *)self; + return PyUnicode_FromFormat("%" PRId64 "", cid->id); +} + +static PyObject * +channelid_int(PyObject *self) +{ + channelid *cid = (channelid *)self; + return PyLong_FromLongLong(cid->id); +} + +static Py_hash_t +channelid_hash(PyObject *self) +{ + channelid *cid = (channelid *)self; + PyObject *id = PyLong_FromLongLong(cid->id); + if (id == NULL) { + return -1; + } + Py_hash_t hash = PyObject_Hash(id); + Py_DECREF(id); + return hash; +} + +static PyObject * +channelid_richcompare(PyObject *self, PyObject *other, int op) +{ + PyObject *res = NULL; + if (op != Py_EQ && op != Py_NE) { + Py_RETURN_NOTIMPLEMENTED; + } + + PyObject *mod = get_module_from_type(Py_TYPE(self)); + if (mod == NULL) { + return NULL; + } + module_state *state = get_module_state(mod); + if (state == NULL) { + goto done; + } + + if (!PyObject_TypeCheck(self, state->ChannelIDType)) { + res = Py_NewRef(Py_NotImplemented); + goto done; + } + + channelid *cid = (channelid *)self; + int equal; + if (PyObject_TypeCheck(other, state->ChannelIDType)) { + channelid *othercid = (channelid *)other; + equal = (cid->end == othercid->end) && (cid->id == othercid->id); + } + else if (PyLong_Check(other)) { + /* Fast path */ + int overflow; + long long othercid = PyLong_AsLongLongAndOverflow(other, &overflow); + if (othercid == -1 && PyErr_Occurred()) { + goto done; + } + equal = !overflow && (othercid >= 0) && (cid->id == othercid); + } + else if (PyNumber_Check(other)) { + PyObject *pyid = PyLong_FromLongLong(cid->id); + if (pyid == NULL) { + goto done; + } + res = PyObject_RichCompare(pyid, other, op); + Py_DECREF(pyid); + goto done; + } + else { + res = Py_NewRef(Py_NotImplemented); + goto done; + } + + if ((op == Py_EQ && equal) || (op == Py_NE && !equal)) { + res = Py_NewRef(Py_True); + } + else { + res = Py_NewRef(Py_False); + } + +done: + Py_DECREF(mod); + return res; +} + +static PyObject * +_channel_from_cid(PyObject *cid, int end) +{ + PyObject *highlevel = PyImport_ImportModule("interpreters"); + if (highlevel == NULL) { + PyErr_Clear(); + highlevel = PyImport_ImportModule("test.support.interpreters"); + if (highlevel == NULL) { + return NULL; + } + } + const char *clsname = (end == CHANNEL_RECV) ? "RecvChannel" : + "SendChannel"; + PyObject *cls = PyObject_GetAttrString(highlevel, clsname); + Py_DECREF(highlevel); + if (cls == NULL) { + return NULL; + } + PyObject *chan = PyObject_CallFunctionObjArgs(cls, cid, NULL); + Py_DECREF(cls); + if (chan == NULL) { + return NULL; + } + return chan; +} + +struct _channelid_xid { + int64_t id; + int end; + int resolve; +}; + +static PyObject * +_channelid_from_xid(_PyCrossInterpreterData *data) +{ + struct _channelid_xid *xid = (struct _channelid_xid *)data->data; + + // It might not be imported yet, so we can't use _get_current_module(). 
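ChannelID is deliberately int-like: it implements __int__ and __index__, hashes the same as the underlying integer, and compares equal to plain ints, so a raw channel ID and a ChannelID object can be used interchangeably. A small hypothetical session illustrating that behaviour (again assuming the module imports as _xxinterpchannels):

    import _xxinterpchannels as channels

    cid = channels.create()
    n = int(cid)                      # the underlying 64-bit channel ID
    assert cid == n
    assert hash(cid) == hash(n)
    same = channels._channel_id(n)    # rebuild a ChannelID from the raw int
    assert same == cid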
+ PyObject *mod = PyImport_ImportModule(MODULE_NAME); + if (mod == NULL) { + return NULL; + } + assert(mod != Py_None); + module_state *state = get_module_state(mod); + if (state == NULL) { + return NULL; + } + + // Note that we do not preserve the "resolve" flag. + PyObject *cid = NULL; + int err = newchannelid(state->ChannelIDType, xid->id, xid->end, + _global_channels(), 0, 0, + (channelid **)&cid); + if (err != 0) { + assert(cid == NULL); + (void)handle_channel_error(err, mod, xid->id); + goto done; + } + assert(cid != NULL); + if (xid->end == 0) { + goto done; + } + if (!xid->resolve) { + goto done; + } + + /* Try returning a high-level channel end but fall back to the ID. */ + PyObject *chan = _channel_from_cid(cid, xid->end); + if (chan == NULL) { + PyErr_Clear(); + goto done; + } + Py_DECREF(cid); + cid = chan; + +done: + Py_DECREF(mod); + return cid; +} + +static int +_channelid_shared(PyThreadState *tstate, PyObject *obj, + _PyCrossInterpreterData *data) +{ + if (_PyCrossInterpreterData_InitWithSize( + data, tstate->interp, sizeof(struct _channelid_xid), obj, + _channelid_from_xid + ) < 0) + { + return -1; + } + struct _channelid_xid *xid = (struct _channelid_xid *)data->data; + xid->id = ((channelid *)obj)->id; + xid->end = ((channelid *)obj)->end; + xid->resolve = ((channelid *)obj)->resolve; + return 0; +} + +static PyObject * +channelid_end(PyObject *self, void *end) +{ + int force = 1; + channelid *cid = (channelid *)self; + if (end != NULL) { + PyObject *id = NULL; + int err = newchannelid(Py_TYPE(self), cid->id, *(int *)end, + cid->channels, force, cid->resolve, + (channelid **)&id); + if (err != 0) { + assert(id == NULL); + PyObject *mod = get_module_from_type(Py_TYPE(self)); + if (mod == NULL) { + return NULL; + } + (void)handle_channel_error(err, mod, cid->id); + Py_DECREF(mod); + return NULL; + } + assert(id != NULL); + return id; + } + + if (cid->end == CHANNEL_SEND) { + return PyUnicode_InternFromString("send"); + } + if (cid->end == CHANNEL_RECV) { + return PyUnicode_InternFromString("recv"); + } + return PyUnicode_InternFromString("both"); +} + +static int _channelid_end_send = CHANNEL_SEND; +static int _channelid_end_recv = CHANNEL_RECV; + +static PyGetSetDef channelid_getsets[] = { + {"end", (getter)channelid_end, NULL, + PyDoc_STR("'send', 'recv', or 'both'")}, + {"send", (getter)channelid_end, NULL, + PyDoc_STR("the 'send' end of the channel"), &_channelid_end_send}, + {"recv", (getter)channelid_end, NULL, + PyDoc_STR("the 'recv' end of the channel"), &_channelid_end_recv}, + {NULL} +}; + +PyDoc_STRVAR(channelid_doc, +"A channel ID identifies a channel and may be used as an int."); + +static PyType_Slot ChannelIDType_slots[] = { + {Py_tp_dealloc, (destructor)channelid_dealloc}, + {Py_tp_doc, (void *)channelid_doc}, + {Py_tp_repr, (reprfunc)channelid_repr}, + {Py_tp_str, (reprfunc)channelid_str}, + {Py_tp_hash, channelid_hash}, + {Py_tp_richcompare, channelid_richcompare}, + {Py_tp_getset, channelid_getsets}, + // number slots + {Py_nb_int, (unaryfunc)channelid_int}, + {Py_nb_index, (unaryfunc)channelid_int}, + {0, NULL}, +}; + +static PyType_Spec ChannelIDType_spec = { + .name = "_xxsubinterpreters.ChannelID", + .basicsize = sizeof(channelid), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_DISALLOW_INSTANTIATION | Py_TPFLAGS_IMMUTABLETYPE), + .slots = ChannelIDType_slots, +}; + + +/* module level code ********************************************************/ + +/* globals is the process-global state for the module. 
It holds all + the data that we need to share between interpreters, so it cannot + hold PyObject values. */ +static struct globals { + int module_count; + _channels channels; +} _globals = {0}; + +static int +_globals_init(void) +{ + // XXX This isn't thread-safe. + _globals.module_count++; + if (_globals.module_count > 1) { + // Already initialized. + return 0; + } + + assert(_globals.channels.mutex == NULL); + PyThread_type_lock mutex = PyThread_allocate_lock(); + if (mutex == NULL) { + return ERR_CHANNELS_MUTEX_INIT; + } + _channels_init(&_globals.channels, mutex); + return 0; +} + +static void +_globals_fini(void) +{ + // XXX This isn't thread-safe. + _globals.module_count--; + if (_globals.module_count > 0) { + return; + } + + _channels_fini(&_globals.channels); +} + +static _channels * +_global_channels(void) { + return &_globals.channels; +} + + +static PyObject * +channel_create(PyObject *self, PyObject *Py_UNUSED(ignored)) +{ + int64_t cid = _channel_create(&_globals.channels); + if (cid < 0) { + (void)handle_channel_error(-1, self, cid); + return NULL; + } + module_state *state = get_module_state(self); + if (state == NULL) { + return NULL; + } + PyObject *id = NULL; + int err = newchannelid(state->ChannelIDType, cid, 0, + &_globals.channels, 0, 0, + (channelid **)&id); + if (handle_channel_error(err, self, cid)) { + assert(id == NULL); + err = _channel_destroy(&_globals.channels, cid); + if (handle_channel_error(err, self, cid)) { + // XXX issue a warning? + } + return NULL; + } + assert(id != NULL); + assert(((channelid *)id)->channels != NULL); + return id; +} + +PyDoc_STRVAR(channel_create_doc, +"channel_create() -> cid\n\ +\n\ +Create a new cross-interpreter channel and return a unique generated ID."); + +static PyObject * +channel_destroy(PyObject *self, PyObject *args, PyObject *kwds) +{ + static char *kwlist[] = {"cid", NULL}; + int64_t cid; + struct channel_id_converter_data cid_data = { + .module = self, + }; + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O&:channel_destroy", kwlist, + channel_id_converter, &cid_data)) { + return NULL; + } + cid = cid_data.cid; + + int err = _channel_destroy(&_globals.channels, cid); + if (handle_channel_error(err, self, cid)) { + return NULL; + } + Py_RETURN_NONE; +} + +PyDoc_STRVAR(channel_destroy_doc, +"channel_destroy(cid)\n\ +\n\ +Close and finalize the channel. 
Afterward attempts to use the channel\n\ +will behave as though it never existed."); + +static PyObject * +channel_list_all(PyObject *self, PyObject *Py_UNUSED(ignored)) +{ + int64_t count = 0; + int64_t *cids = _channels_list_all(&_globals.channels, &count); + if (cids == NULL) { + if (count == 0) { + return PyList_New(0); + } + return NULL; + } + PyObject *ids = PyList_New((Py_ssize_t)count); + if (ids == NULL) { + goto finally; + } + module_state *state = get_module_state(self); + if (state == NULL) { + Py_DECREF(ids); + ids = NULL; + goto finally; + } + int64_t *cur = cids; + for (int64_t i=0; i < count; cur++, i++) { + PyObject *id = NULL; + int err = newchannelid(state->ChannelIDType, *cur, 0, + &_globals.channels, 0, 0, + (channelid **)&id); + if (handle_channel_error(err, self, *cur)) { + assert(id == NULL); + Py_SETREF(ids, NULL); + break; + } + assert(id != NULL); + PyList_SET_ITEM(ids, (Py_ssize_t)i, id); + } + +finally: + PyMem_Free(cids); + return ids; +} + +PyDoc_STRVAR(channel_list_all_doc, +"channel_list_all() -> [cid]\n\ +\n\ +Return the list of all IDs for active channels."); + +static PyObject * +channel_list_interpreters(PyObject *self, PyObject *args, PyObject *kwds) +{ + static char *kwlist[] = {"cid", "send", NULL}; + int64_t cid; /* Channel ID */ + struct channel_id_converter_data cid_data = { + .module = self, + }; + int send = 0; /* Send or receive end? */ + int64_t id; + PyObject *ids, *id_obj; + PyInterpreterState *interp; + + if (!PyArg_ParseTupleAndKeywords( + args, kwds, "O&$p:channel_list_interpreters", + kwlist, channel_id_converter, &cid_data, &send)) { + return NULL; + } + cid = cid_data.cid; + + ids = PyList_New(0); + if (ids == NULL) { + goto except; + } + + interp = PyInterpreterState_Head(); + while (interp != NULL) { + id = PyInterpreterState_GetID(interp); + assert(id >= 0); + int res = _channel_is_associated(&_globals.channels, cid, id, send); + if (res < 0) { + (void)handle_channel_error(res, self, cid); + goto except; + } + if (res) { + id_obj = _PyInterpreterState_GetIDObject(interp); + if (id_obj == NULL) { + goto except; + } + res = PyList_Insert(ids, 0, id_obj); + Py_DECREF(id_obj); + if (res < 0) { + goto except; + } + } + interp = PyInterpreterState_Next(interp); + } + + goto finally; + +except: + Py_CLEAR(ids); + +finally: + return ids; +} + +PyDoc_STRVAR(channel_list_interpreters_doc, +"channel_list_interpreters(cid, *, send) -> [id]\n\ +\n\ +Return the list of all interpreter IDs associated with an end of the channel.\n\ +\n\ +The 'send' argument should be a boolean indicating whether to use the send or\n\ +receive end."); + + +static PyObject * +channel_send(PyObject *self, PyObject *args, PyObject *kwds) +{ + static char *kwlist[] = {"cid", "obj", NULL}; + int64_t cid; + struct channel_id_converter_data cid_data = { + .module = self, + }; + PyObject *obj; + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O&O:channel_send", kwlist, + channel_id_converter, &cid_data, &obj)) { + return NULL; + } + cid = cid_data.cid; + + int err = _channel_send(&_globals.channels, cid, obj); + if (handle_channel_error(err, self, cid)) { + return NULL; + } + Py_RETURN_NONE; +} + +PyDoc_STRVAR(channel_send_doc, +"channel_send(cid, obj)\n\ +\n\ +Add the object's data to the channel's queue."); + +static PyObject * +channel_recv(PyObject *self, PyObject *args, PyObject *kwds) +{ + static char *kwlist[] = {"cid", "default", NULL}; + int64_t cid; + struct channel_id_converter_data cid_data = { + .module = self, + }; + PyObject *dflt = NULL; + if 
(!PyArg_ParseTupleAndKeywords(args, kwds, "O&|O:channel_recv", kwlist, + channel_id_converter, &cid_data, &dflt)) { + return NULL; + } + cid = cid_data.cid; + + PyObject *obj = NULL; + int err = _channel_recv(&_globals.channels, cid, &obj); + if (handle_channel_error(err, self, cid)) { + return NULL; + } + Py_XINCREF(dflt); + if (obj == NULL) { + // Use the default. + if (dflt == NULL) { + (void)handle_channel_error(ERR_CHANNEL_EMPTY, self, cid); + return NULL; + } + obj = Py_NewRef(dflt); + } + Py_XDECREF(dflt); + return obj; +} + +PyDoc_STRVAR(channel_recv_doc, +"channel_recv(cid, [default]) -> obj\n\ +\n\ +Return a new object from the data at the front of the channel's queue.\n\ +\n\ +If there is nothing to receive then raise ChannelEmptyError, unless\n\ +a default value is provided. In that case return it."); + +static PyObject * +channel_close(PyObject *self, PyObject *args, PyObject *kwds) +{ + static char *kwlist[] = {"cid", "send", "recv", "force", NULL}; + int64_t cid; + struct channel_id_converter_data cid_data = { + .module = self, + }; + int send = 0; + int recv = 0; + int force = 0; + if (!PyArg_ParseTupleAndKeywords(args, kwds, + "O&|$ppp:channel_close", kwlist, + channel_id_converter, &cid_data, + &send, &recv, &force)) { + return NULL; + } + cid = cid_data.cid; + + int err = _channel_close(&_globals.channels, cid, send-recv, force); + if (handle_channel_error(err, self, cid)) { + return NULL; + } + Py_RETURN_NONE; +} + +PyDoc_STRVAR(channel_close_doc, +"channel_close(cid, *, send=None, recv=None, force=False)\n\ +\n\ +Close the channel for all interpreters.\n\ +\n\ +If the channel is empty then the keyword args are ignored and both\n\ +ends are immediately closed. Otherwise, if 'force' is True then\n\ +all queued items are released and both ends are immediately\n\ +closed.\n\ +\n\ +If the channel is not empty *and* 'force' is False then following\n\ +happens:\n\ +\n\ + * recv is True (regardless of send):\n\ + - raise ChannelNotEmptyError\n\ + * recv is None and send is None:\n\ + - raise ChannelNotEmptyError\n\ + * send is True and recv is not True:\n\ + - fully close the 'send' end\n\ + - close the 'recv' end to interpreters not already receiving\n\ + - fully close it once empty\n\ +\n\ +Closing an already closed channel results in a ChannelClosedError.\n\ +\n\ +Once the channel's ID has no more ref counts in any interpreter\n\ +the channel will be destroyed."); + +static PyObject * +channel_release(PyObject *self, PyObject *args, PyObject *kwds) +{ + // Note that only the current interpreter is affected. + static char *kwlist[] = {"cid", "send", "recv", "force", NULL}; + int64_t cid; + struct channel_id_converter_data cid_data = { + .module = self, + }; + int send = 0; + int recv = 0; + int force = 0; + if (!PyArg_ParseTupleAndKeywords(args, kwds, + "O&|$ppp:channel_release", kwlist, + channel_id_converter, &cid_data, + &send, &recv, &force)) { + return NULL; + } + cid = cid_data.cid; + if (send == 0 && recv == 0) { + send = 1; + recv = 1; + } + + // XXX Handle force is True. + // XXX Fix implicit release. + + int err = _channel_drop(&_globals.channels, cid, send, recv); + if (handle_channel_error(err, self, cid)) { + return NULL; + } + Py_RETURN_NONE; +} + +PyDoc_STRVAR(channel_release_doc, +"channel_release(cid, *, send=None, recv=None, force=True)\n\ +\n\ +Close the channel for the current interpreter. 'send' and 'recv'\n\ +(bool) may be used to indicate the ends to close. By default both\n\ +ends are closed. 
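The main subtlety in these docstrings is the difference between close() and release(): close() affects every interpreter, while release() only drops the calling interpreter's association with the channel. A rough sketch of that difference, assuming both low-level modules are importable (including inside the subinterpreter):

    import _xxsubinterpreters as interpreters
    import _xxinterpchannels as channels

    cid = channels.create()
    interp = interpreters.create()
    interpreters.run_string(interp, f"""if True:
        import _xxinterpchannels as channels
        channels.send({int(cid)}, b'ping')
        """)
    print(channels.recv(cid))        # b'ping'
    channels.release(cid)            # drop only this interpreter's association
    # the channel still exists for the other interpreter until:
    channels.close(cid, force=True)  # close it for every interpreter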
Closing an already closed end is a noop."); + +static PyObject * +channel__channel_id(PyObject *self, PyObject *args, PyObject *kwds) +{ + module_state *state = get_module_state(self); + if (state == NULL) { + return NULL; + } + PyTypeObject *cls = state->ChannelIDType; + PyObject *mod = get_module_from_owned_type(cls); + if (mod == NULL) { + return NULL; + } + PyObject *cid = _channelid_new(mod, cls, args, kwds); + Py_DECREF(mod); + return cid; +} + +static PyMethodDef module_functions[] = { + {"create", channel_create, + METH_NOARGS, channel_create_doc}, + {"destroy", _PyCFunction_CAST(channel_destroy), + METH_VARARGS | METH_KEYWORDS, channel_destroy_doc}, + {"list_all", channel_list_all, + METH_NOARGS, channel_list_all_doc}, + {"list_interpreters", _PyCFunction_CAST(channel_list_interpreters), + METH_VARARGS | METH_KEYWORDS, channel_list_interpreters_doc}, + {"send", _PyCFunction_CAST(channel_send), + METH_VARARGS | METH_KEYWORDS, channel_send_doc}, + {"recv", _PyCFunction_CAST(channel_recv), + METH_VARARGS | METH_KEYWORDS, channel_recv_doc}, + {"close", _PyCFunction_CAST(channel_close), + METH_VARARGS | METH_KEYWORDS, channel_close_doc}, + {"release", _PyCFunction_CAST(channel_release), + METH_VARARGS | METH_KEYWORDS, channel_release_doc}, + {"_channel_id", _PyCFunction_CAST(channel__channel_id), + METH_VARARGS | METH_KEYWORDS, NULL}, + + {NULL, NULL} /* sentinel */ +}; + + +/* initialization function */ + +PyDoc_STRVAR(module_doc, +"This module provides primitive operations to manage Python interpreters.\n\ +The 'interpreters' module provides a more convenient interface."); + +static int +module_exec(PyObject *mod) +{ + if (_globals_init() != 0) { + return -1; + } + + /* Add exception types */ + if (exceptions_init(mod) != 0) { + goto error; + } + + /* Add other types */ + module_state *state = get_module_state(mod); + if (state == NULL) { + goto error; + } + + // ChannelID + state->ChannelIDType = add_new_type( + mod, &ChannelIDType_spec, _channelid_shared); + if (state->ChannelIDType == NULL) { + goto error; + } + + return 0; + +error: + (void)_PyCrossInterpreterData_UnregisterClass(state->ChannelIDType); + _globals_fini(); + return -1; +} + +static struct PyModuleDef_Slot module_slots[] = { + {Py_mod_exec, module_exec}, + {0, NULL}, +}; + +static int +module_traverse(PyObject *mod, visitproc visit, void *arg) +{ + module_state *state = get_module_state(mod); + assert(state != NULL); + traverse_module_state(state, visit, arg); + return 0; +} + +static int +module_clear(PyObject *mod) +{ + module_state *state = get_module_state(mod); + assert(state != NULL); + clear_module_state(state); + return 0; +} + +static void +module_free(void *mod) +{ + module_state *state = get_module_state(mod); + assert(state != NULL); + clear_module_state(state); + _globals_fini(); +} + +static struct PyModuleDef moduledef = { + .m_base = PyModuleDef_HEAD_INIT, + .m_name = MODULE_NAME, + .m_doc = module_doc, + .m_size = sizeof(module_state), + .m_methods = module_functions, + .m_slots = module_slots, + .m_traverse = module_traverse, + .m_clear = module_clear, + .m_free = (freefunc)module_free, +}; + +PyMODINIT_FUNC +PyInit__xxinterpchannels(void) +{ + return PyModuleDef_Init(&moduledef); +} diff --git a/Modules/_xxsubinterpretersmodule.c b/Modules/_xxsubinterpretersmodule.c index 0892fa3a9595..461c505c092c 100644 --- a/Modules/_xxsubinterpretersmodule.c +++ b/Modules/_xxsubinterpretersmodule.c @@ -39,43 +39,6 @@ _get_current_interp(void) return PyInterpreterState_Get(); } -static PyObject * 
-_get_current_module(void) -{ - // We ensured it was imported in _run_script(). - PyObject *name = PyUnicode_FromString(MODULE_NAME); - if (name == NULL) { - return NULL; - } - PyObject *mod = PyImport_GetModule(name); - Py_DECREF(name); - if (mod == NULL) { - return NULL; - } - assert(mod != Py_None); - return mod; -} - -static PyObject * -get_module_from_owned_type(PyTypeObject *cls) -{ - assert(cls != NULL); - return _get_current_module(); - // XXX Use the more efficient API now that we use heap types: - //return PyType_GetModule(cls); -} - -static struct PyModuleDef moduledef; - -static PyObject * -get_module_from_type(PyTypeObject *cls) -{ - assert(cls != NULL); - return _get_current_module(); - // XXX Use the more efficient API now that we use heap types: - //return PyType_GetModuleByDef(cls, &moduledef); -} - static PyObject * add_new_exception(PyObject *mod, const char *name, PyObject *base) { @@ -95,27 +58,6 @@ add_new_exception(PyObject *mod, const char *name, PyObject *base) #define ADD_NEW_EXCEPTION(MOD, NAME, BASE) \ add_new_exception(MOD, MODULE_NAME "." Py_STRINGIFY(NAME), BASE) -static PyTypeObject * -add_new_type(PyObject *mod, PyType_Spec *spec, crossinterpdatafunc shared) -{ - PyTypeObject *cls = (PyTypeObject *)PyType_FromMetaclass( - NULL, mod, spec, NULL); - if (cls == NULL) { - return NULL; - } - if (PyModule_AddType(mod, cls) < 0) { - Py_DECREF(cls); - return NULL; - } - if (shared != NULL) { - if (_PyCrossInterpreterData_RegisterClass(cls, shared)) { - Py_DECREF(cls); - return NULL; - } - } - return cls; -} - static int _release_xid_data(_PyCrossInterpreterData *data, int ignoreexc) { @@ -127,9 +69,7 @@ _release_xid_data(_PyCrossInterpreterData *data, int ignoreexc) if (res < 0) { // XXX Fix this! /* The owning interpreter is already destroyed. - * Ideally, this shouldn't ever happen. When an interpreter is - * about to be destroyed, we should clear out all of its objects - * from every channel associated with that interpreter. + * Ideally, this shouldn't ever happen. (It's highly unlikely.) * For now we hack around that to resolve refleaks, by decref'ing * the released object here, even if its the wrong interpreter. 
* The owning interpreter has already been destroyed @@ -153,17 +93,8 @@ _release_xid_data(_PyCrossInterpreterData *data, int ignoreexc) /* module state *************************************************************/ typedef struct { - PyTypeObject *ChannelIDType; - - /* interpreter exceptions */ + /* exceptions */ PyObject *RunFailedError; - - /* channel exceptions */ - PyObject *ChannelError; - PyObject *ChannelNotFoundError; - PyObject *ChannelClosedError; - PyObject *ChannelEmptyError; - PyObject *ChannelNotEmptyError; } module_state; static inline module_state * @@ -178,37 +109,18 @@ get_module_state(PyObject *mod) static int traverse_module_state(module_state *state, visitproc visit, void *arg) { - /* heap types */ - Py_VISIT(state->ChannelIDType); - - /* interpreter exceptions */ + /* exceptions */ Py_VISIT(state->RunFailedError); - /* channel exceptions */ - Py_VISIT(state->ChannelError); - Py_VISIT(state->ChannelNotFoundError); - Py_VISIT(state->ChannelClosedError); - Py_VISIT(state->ChannelEmptyError); - Py_VISIT(state->ChannelNotEmptyError); return 0; } static int clear_module_state(module_state *state) { - /* heap types */ - (void)_PyCrossInterpreterData_UnregisterClass(state->ChannelIDType); - Py_CLEAR(state->ChannelIDType); - - /* interpreter exceptions */ + /* exceptions */ Py_CLEAR(state->RunFailedError); - /* channel exceptions */ - Py_CLEAR(state->ChannelError); - Py_CLEAR(state->ChannelNotFoundError); - Py_CLEAR(state->ChannelClosedError); - Py_CLEAR(state->ChannelEmptyError); - Py_CLEAR(state->ChannelNotEmptyError); return 0; } @@ -298,10 +210,8 @@ _sharedns_free(_sharedns *shared) } static _sharedns * -_get_shared_ns(PyObject *shareable, PyTypeObject *channelidtype, - int *needs_import) +_get_shared_ns(PyObject *shareable) { - *needs_import = 0; if (shareable == NULL || shareable == Py_None) { return NULL; } @@ -323,9 +233,6 @@ _get_shared_ns(PyObject *shareable, PyTypeObject *channelidtype, if (_sharednsitem_init(&shared->items[i], key, value) != 0) { break; } - if (Py_TYPE(value) == channelidtype) { - *needs_import = 1; - } } if (PyErr_Occurred()) { _sharedns_free(shared); @@ -394,1701 +301,79 @@ _sharedexception_bind(PyObject *exctype, PyObject *exc, PyObject *tb) _sharedexception *err = _sharedexception_new(); if (err == NULL) { - goto finally; - } - - PyObject *name = PyUnicode_FromFormat("%S", exctype); - if (name == NULL) { - failure = "unable to format exception type name"; - goto finally; - } - err->name = _copy_raw_string(name); - Py_DECREF(name); - if (err->name == NULL) { - if (PyErr_ExceptionMatches(PyExc_MemoryError)) { - failure = "out of memory copying exception type name"; - } else { - failure = "unable to encode and copy exception type name"; - } - goto finally; - } - - if (exc != NULL) { - PyObject *msg = PyUnicode_FromFormat("%S", exc); - if (msg == NULL) { - failure = "unable to format exception message"; - goto finally; - } - err->msg = _copy_raw_string(msg); - Py_DECREF(msg); - if (err->msg == NULL) { - if (PyErr_ExceptionMatches(PyExc_MemoryError)) { - failure = "out of memory copying exception message"; - } else { - failure = "unable to encode and copy exception message"; - } - goto finally; - } - } - -finally: - if (failure != NULL) { - PyErr_Clear(); - if (err->name != NULL) { - PyMem_Free(err->name); - err->name = NULL; - } - err->msg = failure; - } - return err; -} - -static void -_sharedexception_apply(_sharedexception *exc, PyObject *wrapperclass) -{ - if (exc->name != NULL) { - if (exc->msg != NULL) { - PyErr_Format(wrapperclass, "%s: %s", 
exc->name, exc->msg); - } - else { - PyErr_SetString(wrapperclass, exc->name); - } - } - else if (exc->msg != NULL) { - PyErr_SetString(wrapperclass, exc->msg); - } - else { - PyErr_SetNone(wrapperclass); - } -} - - -/* channel-specific code ****************************************************/ - -#define CHANNEL_SEND 1 -#define CHANNEL_BOTH 0 -#define CHANNEL_RECV -1 - -/* channel errors */ - -#define ERR_CHANNEL_NOT_FOUND -2 -#define ERR_CHANNEL_CLOSED -3 -#define ERR_CHANNEL_INTERP_CLOSED -4 -#define ERR_CHANNEL_EMPTY -5 -#define ERR_CHANNEL_NOT_EMPTY -6 -#define ERR_CHANNEL_MUTEX_INIT -7 -#define ERR_CHANNELS_MUTEX_INIT -8 -#define ERR_NO_NEXT_CHANNEL_ID -9 - -static int -channel_exceptions_init(PyObject *mod) -{ - module_state *state = get_module_state(mod); - if (state == NULL) { - return -1; - } - -#define ADD(NAME, BASE) \ - do { \ - assert(state->NAME == NULL); \ - state->NAME = ADD_NEW_EXCEPTION(mod, NAME, BASE); \ - if (state->NAME == NULL) { \ - return -1; \ - } \ - } while (0) - - // A channel-related operation failed. - ADD(ChannelError, PyExc_RuntimeError); - // An operation tried to use a channel that doesn't exist. - ADD(ChannelNotFoundError, state->ChannelError); - // An operation tried to use a closed channel. - ADD(ChannelClosedError, state->ChannelError); - // An operation tried to pop from an empty channel. - ADD(ChannelEmptyError, state->ChannelError); - // An operation tried to close a non-empty channel. - ADD(ChannelNotEmptyError, state->ChannelError); -#undef ADD - - return 0; -} - -static int -handle_channel_error(int err, PyObject *mod, int64_t cid) -{ - if (err == 0) { - assert(!PyErr_Occurred()); - return 0; - } - assert(err < 0); - module_state *state = get_module_state(mod); - assert(state != NULL); - if (err == ERR_CHANNEL_NOT_FOUND) { - PyErr_Format(state->ChannelNotFoundError, - "channel %" PRId64 " not found", cid); - } - else if (err == ERR_CHANNEL_CLOSED) { - PyErr_Format(state->ChannelClosedError, - "channel %" PRId64 " is closed", cid); - } - else if (err == ERR_CHANNEL_INTERP_CLOSED) { - PyErr_Format(state->ChannelClosedError, - "channel %" PRId64 " is already closed", cid); - } - else if (err == ERR_CHANNEL_EMPTY) { - PyErr_Format(state->ChannelEmptyError, - "channel %" PRId64 " is empty", cid); - } - else if (err == ERR_CHANNEL_NOT_EMPTY) { - PyErr_Format(state->ChannelNotEmptyError, - "channel %" PRId64 " may not be closed " - "if not empty (try force=True)", - cid); - } - else if (err == ERR_CHANNEL_MUTEX_INIT) { - PyErr_SetString(state->ChannelError, - "can't initialize mutex for new channel"); - } - else if (err == ERR_CHANNELS_MUTEX_INIT) { - PyErr_SetString(state->ChannelError, - "can't initialize mutex for channel management"); - } - else if (err == ERR_NO_NEXT_CHANNEL_ID) { - PyErr_SetString(state->ChannelError, - "failed to get a channel ID"); - } - else { - assert(PyErr_Occurred()); - } - return 1; -} - -/* the channel queue */ - -struct _channelitem; - -typedef struct _channelitem { - _PyCrossInterpreterData *data; - struct _channelitem *next; -} _channelitem; - -static _channelitem * -_channelitem_new(void) -{ - _channelitem *item = PyMem_NEW(_channelitem, 1); - if (item == NULL) { - PyErr_NoMemory(); - return NULL; - } - item->data = NULL; - item->next = NULL; - return item; -} - -static void -_channelitem_clear(_channelitem *item) -{ - if (item->data != NULL) { - (void)_release_xid_data(item->data, 1); - PyMem_Free(item->data); - item->data = NULL; - } - item->next = NULL; -} - -static void -_channelitem_free(_channelitem *item) -{ - 
_channelitem_clear(item); - PyMem_Free(item); -} - -static void -_channelitem_free_all(_channelitem *item) -{ - while (item != NULL) { - _channelitem *last = item; - item = item->next; - _channelitem_free(last); - } -} - -static _PyCrossInterpreterData * -_channelitem_popped(_channelitem *item) -{ - _PyCrossInterpreterData *data = item->data; - item->data = NULL; - _channelitem_free(item); - return data; -} - -typedef struct _channelqueue { - int64_t count; - _channelitem *first; - _channelitem *last; -} _channelqueue; - -static _channelqueue * -_channelqueue_new(void) -{ - _channelqueue *queue = PyMem_NEW(_channelqueue, 1); - if (queue == NULL) { - PyErr_NoMemory(); - return NULL; - } - queue->count = 0; - queue->first = NULL; - queue->last = NULL; - return queue; -} - -static void -_channelqueue_clear(_channelqueue *queue) -{ - _channelitem_free_all(queue->first); - queue->count = 0; - queue->first = NULL; - queue->last = NULL; -} - -static void -_channelqueue_free(_channelqueue *queue) -{ - _channelqueue_clear(queue); - PyMem_Free(queue); -} - -static int -_channelqueue_put(_channelqueue *queue, _PyCrossInterpreterData *data) -{ - _channelitem *item = _channelitem_new(); - if (item == NULL) { - return -1; - } - item->data = data; - - queue->count += 1; - if (queue->first == NULL) { - queue->first = item; - } - else { - queue->last->next = item; - } - queue->last = item; - return 0; -} - -static _PyCrossInterpreterData * -_channelqueue_get(_channelqueue *queue) -{ - _channelitem *item = queue->first; - if (item == NULL) { - return NULL; - } - queue->first = item->next; - if (queue->last == item) { - queue->last = NULL; - } - queue->count -= 1; - - return _channelitem_popped(item); -} - -/* channel-interpreter associations */ - -struct _channelend; - -typedef struct _channelend { - struct _channelend *next; - int64_t interp; - int open; -} _channelend; - -static _channelend * -_channelend_new(int64_t interp) -{ - _channelend *end = PyMem_NEW(_channelend, 1); - if (end == NULL) { - PyErr_NoMemory(); - return NULL; - } - end->next = NULL; - end->interp = interp; - end->open = 1; - return end; -} - -static void -_channelend_free(_channelend *end) -{ - PyMem_Free(end); -} - -static void -_channelend_free_all(_channelend *end) -{ - while (end != NULL) { - _channelend *last = end; - end = end->next; - _channelend_free(last); - } -} - -static _channelend * -_channelend_find(_channelend *first, int64_t interp, _channelend **pprev) -{ - _channelend *prev = NULL; - _channelend *end = first; - while (end != NULL) { - if (end->interp == interp) { - break; - } - prev = end; - end = end->next; - } - if (pprev != NULL) { - *pprev = prev; - } - return end; -} - -typedef struct _channelassociations { - // Note that the list entries are never removed for interpreter - // for which the channel is closed. This should not be a problem in - // practice. Also, a channel isn't automatically closed when an - // interpreter is destroyed. 
- int64_t numsendopen; - int64_t numrecvopen; - _channelend *send; - _channelend *recv; -} _channelends; - -static _channelends * -_channelends_new(void) -{ - _channelends *ends = PyMem_NEW(_channelends, 1); - if (ends== NULL) { - return NULL; - } - ends->numsendopen = 0; - ends->numrecvopen = 0; - ends->send = NULL; - ends->recv = NULL; - return ends; -} - -static void -_channelends_clear(_channelends *ends) -{ - _channelend_free_all(ends->send); - ends->send = NULL; - ends->numsendopen = 0; - - _channelend_free_all(ends->recv); - ends->recv = NULL; - ends->numrecvopen = 0; -} - -static void -_channelends_free(_channelends *ends) -{ - _channelends_clear(ends); - PyMem_Free(ends); -} - -static _channelend * -_channelends_add(_channelends *ends, _channelend *prev, int64_t interp, - int send) -{ - _channelend *end = _channelend_new(interp); - if (end == NULL) { - return NULL; - } - - if (prev == NULL) { - if (send) { - ends->send = end; - } - else { - ends->recv = end; - } - } - else { - prev->next = end; - } - if (send) { - ends->numsendopen += 1; - } - else { - ends->numrecvopen += 1; - } - return end; -} - -static int -_channelends_associate(_channelends *ends, int64_t interp, int send) -{ - _channelend *prev; - _channelend *end = _channelend_find(send ? ends->send : ends->recv, - interp, &prev); - if (end != NULL) { - if (!end->open) { - return ERR_CHANNEL_CLOSED; - } - // already associated - return 0; - } - if (_channelends_add(ends, prev, interp, send) == NULL) { - return -1; - } - return 0; -} - -static int -_channelends_is_open(_channelends *ends) -{ - if (ends->numsendopen != 0 || ends->numrecvopen != 0) { - return 1; - } - if (ends->send == NULL && ends->recv == NULL) { - return 1; - } - return 0; -} - -static void -_channelends_close_end(_channelends *ends, _channelend *end, int send) -{ - end->open = 0; - if (send) { - ends->numsendopen -= 1; - } - else { - ends->numrecvopen -= 1; - } -} - -static int -_channelends_close_interpreter(_channelends *ends, int64_t interp, int which) -{ - _channelend *prev; - _channelend *end; - if (which >= 0) { // send/both - end = _channelend_find(ends->send, interp, &prev); - if (end == NULL) { - // never associated so add it - end = _channelends_add(ends, prev, interp, 1); - if (end == NULL) { - return -1; - } - } - _channelends_close_end(ends, end, 1); - } - if (which <= 0) { // recv/both - end = _channelend_find(ends->recv, interp, &prev); - if (end == NULL) { - // never associated so add it - end = _channelends_add(ends, prev, interp, 0); - if (end == NULL) { - return -1; - } - } - _channelends_close_end(ends, end, 0); - } - return 0; -} - -static void -_channelends_close_all(_channelends *ends, int which, int force) -{ - // XXX Handle the ends. - // XXX Handle force is True. - - // Ensure all the "send"-associated interpreters are closed. - _channelend *end; - for (end = ends->send; end != NULL; end = end->next) { - _channelends_close_end(ends, end, 1); - } - - // Ensure all the "recv"-associated interpreters are closed. 
- for (end = ends->recv; end != NULL; end = end->next) { - _channelends_close_end(ends, end, 0); - } -} - -/* channels */ - -struct _channel; -struct _channel_closing; -static void _channel_clear_closing(struct _channel *); -static void _channel_finish_closing(struct _channel *); - -typedef struct _channel { - PyThread_type_lock mutex; - _channelqueue *queue; - _channelends *ends; - int open; - struct _channel_closing *closing; -} _PyChannelState; - -static _PyChannelState * -_channel_new(PyThread_type_lock mutex) -{ - _PyChannelState *chan = PyMem_NEW(_PyChannelState, 1); - if (chan == NULL) { - return NULL; - } - chan->mutex = mutex; - chan->queue = _channelqueue_new(); - if (chan->queue == NULL) { - PyMem_Free(chan); - return NULL; - } - chan->ends = _channelends_new(); - if (chan->ends == NULL) { - _channelqueue_free(chan->queue); - PyMem_Free(chan); - return NULL; - } - chan->open = 1; - chan->closing = NULL; - return chan; -} - -static void -_channel_free(_PyChannelState *chan) -{ - _channel_clear_closing(chan); - PyThread_acquire_lock(chan->mutex, WAIT_LOCK); - _channelqueue_free(chan->queue); - _channelends_free(chan->ends); - PyThread_release_lock(chan->mutex); - - PyThread_free_lock(chan->mutex); - PyMem_Free(chan); -} - -static int -_channel_add(_PyChannelState *chan, int64_t interp, - _PyCrossInterpreterData *data) -{ - int res = -1; - PyThread_acquire_lock(chan->mutex, WAIT_LOCK); - - if (!chan->open) { - res = ERR_CHANNEL_CLOSED; - goto done; - } - if (_channelends_associate(chan->ends, interp, 1) != 0) { - res = ERR_CHANNEL_INTERP_CLOSED; - goto done; - } - - if (_channelqueue_put(chan->queue, data) != 0) { - goto done; - } - - res = 0; -done: - PyThread_release_lock(chan->mutex); - return res; -} - -static int -_channel_next(_PyChannelState *chan, int64_t interp, - _PyCrossInterpreterData **res) -{ - int err = 0; - PyThread_acquire_lock(chan->mutex, WAIT_LOCK); - - if (!chan->open) { - err = ERR_CHANNEL_CLOSED; - goto done; - } - if (_channelends_associate(chan->ends, interp, 0) != 0) { - err = ERR_CHANNEL_INTERP_CLOSED; - goto done; - } - - _PyCrossInterpreterData *data = _channelqueue_get(chan->queue); - if (data == NULL && !PyErr_Occurred() && chan->closing != NULL) { - chan->open = 0; - } - *res = data; - -done: - PyThread_release_lock(chan->mutex); - if (chan->queue->count == 0) { - _channel_finish_closing(chan); - } - return err; -} - -static int -_channel_close_interpreter(_PyChannelState *chan, int64_t interp, int end) -{ - PyThread_acquire_lock(chan->mutex, WAIT_LOCK); - - int res = -1; - if (!chan->open) { - res = ERR_CHANNEL_CLOSED; - goto done; - } - - if (_channelends_close_interpreter(chan->ends, interp, end) != 0) { - goto done; - } - chan->open = _channelends_is_open(chan->ends); - - res = 0; -done: - PyThread_release_lock(chan->mutex); - return res; -} - -static int -_channel_close_all(_PyChannelState *chan, int end, int force) -{ - int res = -1; - PyThread_acquire_lock(chan->mutex, WAIT_LOCK); - - if (!chan->open) { - res = ERR_CHANNEL_CLOSED; - goto done; - } - - if (!force && chan->queue->count > 0) { - res = ERR_CHANNEL_NOT_EMPTY; - goto done; - } - - chan->open = 0; - - // We *could* also just leave these in place, since we've marked - // the channel as closed already. 
- _channelends_close_all(chan->ends, end, force); - - res = 0; -done: - PyThread_release_lock(chan->mutex); - return res; -} - -/* the set of channels */ - -struct _channelref; - -typedef struct _channelref { - int64_t id; - _PyChannelState *chan; - struct _channelref *next; - Py_ssize_t objcount; -} _channelref; - -static _channelref * -_channelref_new(int64_t id, _PyChannelState *chan) -{ - _channelref *ref = PyMem_NEW(_channelref, 1); - if (ref == NULL) { - return NULL; - } - ref->id = id; - ref->chan = chan; - ref->next = NULL; - ref->objcount = 0; - return ref; -} - -//static void -//_channelref_clear(_channelref *ref) -//{ -// ref->id = -1; -// ref->chan = NULL; -// ref->next = NULL; -// ref->objcount = 0; -//} - -static void -_channelref_free(_channelref *ref) -{ - if (ref->chan != NULL) { - _channel_clear_closing(ref->chan); - } - //_channelref_clear(ref); - PyMem_Free(ref); -} - -static _channelref * -_channelref_find(_channelref *first, int64_t id, _channelref **pprev) -{ - _channelref *prev = NULL; - _channelref *ref = first; - while (ref != NULL) { - if (ref->id == id) { - break; - } - prev = ref; - ref = ref->next; - } - if (pprev != NULL) { - *pprev = prev; - } - return ref; -} - -typedef struct _channels { - PyThread_type_lock mutex; - _channelref *head; - int64_t numopen; - int64_t next_id; -} _channels; - -static void -_channels_init(_channels *channels, PyThread_type_lock mutex) -{ - channels->mutex = mutex; - channels->head = NULL; - channels->numopen = 0; - channels->next_id = 0; -} - -static void -_channels_fini(_channels *channels) -{ - assert(channels->numopen == 0); - assert(channels->head == NULL); - if (channels->mutex != NULL) { - PyThread_free_lock(channels->mutex); - channels->mutex = NULL; - } -} - -static int64_t -_channels_next_id(_channels *channels) // needs lock -{ - int64_t id = channels->next_id; - if (id < 0) { - /* overflow */ - return -1; - } - channels->next_id += 1; - return id; -} - -static int -_channels_lookup(_channels *channels, int64_t id, PyThread_type_lock *pmutex, - _PyChannelState **res) -{ - int err = -1; - _PyChannelState *chan = NULL; - PyThread_acquire_lock(channels->mutex, WAIT_LOCK); - if (pmutex != NULL) { - *pmutex = NULL; - } - - _channelref *ref = _channelref_find(channels->head, id, NULL); - if (ref == NULL) { - err = ERR_CHANNEL_NOT_FOUND; - goto done; - } - if (ref->chan == NULL || !ref->chan->open) { - err = ERR_CHANNEL_CLOSED; - goto done; - } - - if (pmutex != NULL) { - // The mutex will be closed by the caller. - *pmutex = channels->mutex; - } - - chan = ref->chan; - err = 0; - -done: - if (pmutex == NULL || *pmutex == NULL) { - PyThread_release_lock(channels->mutex); - } - *res = chan; - return err; -} - -static int64_t -_channels_add(_channels *channels, _PyChannelState *chan) -{ - int64_t cid = -1; - PyThread_acquire_lock(channels->mutex, WAIT_LOCK); - - // Create a new ref. - int64_t id = _channels_next_id(channels); - if (id < 0) { - cid = ERR_NO_NEXT_CHANNEL_ID; - goto done; - } - _channelref *ref = _channelref_new(id, chan); - if (ref == NULL) { - goto done; - } - - // Add it to the list. - // We assume that the channel is a new one (not already in the list). 
- ref->next = channels->head; - channels->head = ref; - channels->numopen += 1; - - cid = id; -done: - PyThread_release_lock(channels->mutex); - return cid; -} - -/* forward */ -static int _channel_set_closing(struct _channelref *, PyThread_type_lock); - -static int -_channels_close(_channels *channels, int64_t cid, _PyChannelState **pchan, - int end, int force) -{ - int res = -1; - PyThread_acquire_lock(channels->mutex, WAIT_LOCK); - if (pchan != NULL) { - *pchan = NULL; - } - - _channelref *ref = _channelref_find(channels->head, cid, NULL); - if (ref == NULL) { - res = ERR_CHANNEL_NOT_FOUND; - goto done; - } - - if (ref->chan == NULL) { - res = ERR_CHANNEL_CLOSED; - goto done; - } - else if (!force && end == CHANNEL_SEND && ref->chan->closing != NULL) { - res = ERR_CHANNEL_CLOSED; - goto done; - } - else { - int err = _channel_close_all(ref->chan, end, force); - if (err != 0) { - if (end == CHANNEL_SEND && err == ERR_CHANNEL_NOT_EMPTY) { - if (ref->chan->closing != NULL) { - res = ERR_CHANNEL_CLOSED; - goto done; - } - // Mark the channel as closing and return. The channel - // will be cleaned up in _channel_next(). - PyErr_Clear(); - int err = _channel_set_closing(ref, channels->mutex); - if (err != 0) { - res = err; - goto done; - } - if (pchan != NULL) { - *pchan = ref->chan; - } - res = 0; - } - else { - res = err; - } - goto done; - } - if (pchan != NULL) { - *pchan = ref->chan; - } - else { - _channel_free(ref->chan); - } - ref->chan = NULL; - } - - res = 0; -done: - PyThread_release_lock(channels->mutex); - return res; -} - -static void -_channels_remove_ref(_channels *channels, _channelref *ref, _channelref *prev, - _PyChannelState **pchan) -{ - if (ref == channels->head) { - channels->head = ref->next; - } - else { - prev->next = ref->next; - } - channels->numopen -= 1; - - if (pchan != NULL) { - *pchan = ref->chan; - } - _channelref_free(ref); -} - -static int -_channels_remove(_channels *channels, int64_t id, _PyChannelState **pchan) -{ - int res = -1; - PyThread_acquire_lock(channels->mutex, WAIT_LOCK); - - if (pchan != NULL) { - *pchan = NULL; - } - - _channelref *prev = NULL; - _channelref *ref = _channelref_find(channels->head, id, &prev); - if (ref == NULL) { - res = ERR_CHANNEL_NOT_FOUND; - goto done; - } - - _channels_remove_ref(channels, ref, prev, pchan); - - res = 0; -done: - PyThread_release_lock(channels->mutex); - return res; -} - -static int -_channels_add_id_object(_channels *channels, int64_t id) -{ - int res = -1; - PyThread_acquire_lock(channels->mutex, WAIT_LOCK); - - _channelref *ref = _channelref_find(channels->head, id, NULL); - if (ref == NULL) { - res = ERR_CHANNEL_NOT_FOUND; - goto done; - } - ref->objcount += 1; - - res = 0; -done: - PyThread_release_lock(channels->mutex); - return res; -} - -static void -_channels_drop_id_object(_channels *channels, int64_t id) -{ - PyThread_acquire_lock(channels->mutex, WAIT_LOCK); - - _channelref *prev = NULL; - _channelref *ref = _channelref_find(channels->head, id, &prev); - if (ref == NULL) { - // Already destroyed. - goto done; - } - ref->objcount -= 1; - - // Destroy if no longer used. 
- if (ref->objcount == 0) { - _PyChannelState *chan = NULL; - _channels_remove_ref(channels, ref, prev, &chan); - if (chan != NULL) { - _channel_free(chan); - } - } - -done: - PyThread_release_lock(channels->mutex); -} - -static int64_t * -_channels_list_all(_channels *channels, int64_t *count) -{ - int64_t *cids = NULL; - PyThread_acquire_lock(channels->mutex, WAIT_LOCK); - int64_t *ids = PyMem_NEW(int64_t, (Py_ssize_t)(channels->numopen)); - if (ids == NULL) { - goto done; - } - _channelref *ref = channels->head; - for (int64_t i=0; ref != NULL; ref = ref->next, i++) { - ids[i] = ref->id; - } - *count = channels->numopen; - - cids = ids; -done: - PyThread_release_lock(channels->mutex); - return cids; -} - -/* support for closing non-empty channels */ - -struct _channel_closing { - struct _channelref *ref; -}; - -static int -_channel_set_closing(struct _channelref *ref, PyThread_type_lock mutex) { - struct _channel *chan = ref->chan; - if (chan == NULL) { - // already closed - return 0; - } - int res = -1; - PyThread_acquire_lock(chan->mutex, WAIT_LOCK); - if (chan->closing != NULL) { - res = ERR_CHANNEL_CLOSED; - goto done; - } - chan->closing = PyMem_NEW(struct _channel_closing, 1); - if (chan->closing == NULL) { - goto done; - } - chan->closing->ref = ref; - - res = 0; -done: - PyThread_release_lock(chan->mutex); - return res; -} - -static void -_channel_clear_closing(struct _channel *chan) { - PyThread_acquire_lock(chan->mutex, WAIT_LOCK); - if (chan->closing != NULL) { - PyMem_Free(chan->closing); - chan->closing = NULL; - } - PyThread_release_lock(chan->mutex); -} - -static void -_channel_finish_closing(struct _channel *chan) { - struct _channel_closing *closing = chan->closing; - if (closing == NULL) { - return; - } - _channelref *ref = closing->ref; - _channel_clear_closing(chan); - // Do the things that would have been done in _channels_close(). - ref->chan = NULL; - _channel_free(chan); -} - -/* "high"-level channel-related functions */ - -static int64_t -_channel_create(_channels *channels) -{ - PyThread_type_lock mutex = PyThread_allocate_lock(); - if (mutex == NULL) { - return ERR_CHANNEL_MUTEX_INIT; - } - _PyChannelState *chan = _channel_new(mutex); - if (chan == NULL) { - PyThread_free_lock(mutex); - return -1; - } - int64_t id = _channels_add(channels, chan); - if (id < 0) { - _channel_free(chan); - } - return id; -} - -static int -_channel_destroy(_channels *channels, int64_t id) -{ - _PyChannelState *chan = NULL; - int err = _channels_remove(channels, id, &chan); - if (err != 0) { - return err; - } - if (chan != NULL) { - _channel_free(chan); - } - return 0; -} - -static int -_channel_send(_channels *channels, int64_t id, PyObject *obj) -{ - PyInterpreterState *interp = _get_current_interp(); - if (interp == NULL) { - return -1; - } - - // Look up the channel. - PyThread_type_lock mutex = NULL; - _PyChannelState *chan = NULL; - int err = _channels_lookup(channels, id, &mutex, &chan); - if (err != 0) { - return err; - } - assert(chan != NULL); - // Past this point we are responsible for releasing the mutex. - - if (chan->closing != NULL) { - PyThread_release_lock(mutex); - return ERR_CHANNEL_CLOSED; - } - - // Convert the object to cross-interpreter data. - _PyCrossInterpreterData *data = PyMem_NEW(_PyCrossInterpreterData, 1); - if (data == NULL) { - PyThread_release_lock(mutex); - return -1; - } - if (_PyObject_GetCrossInterpreterData(obj, data) != 0) { - PyThread_release_lock(mutex); - PyMem_Free(data); - return -1; - } - - // Add the data to the channel. 
- int res = _channel_add(chan, PyInterpreterState_GetID(interp), data); - PyThread_release_lock(mutex); - if (res != 0) { - // We may chain an exception here: - (void)_release_xid_data(data, 0); - PyMem_Free(data); - return res; - } - - return 0; -} - -static int -_channel_recv(_channels *channels, int64_t id, PyObject **res) -{ - int err; - *res = NULL; - - PyInterpreterState *interp = _get_current_interp(); - if (interp == NULL) { - // XXX Is this always an error? - if (PyErr_Occurred()) { - return -1; - } - return 0; - } - - // Look up the channel. - PyThread_type_lock mutex = NULL; - _PyChannelState *chan = NULL; - err = _channels_lookup(channels, id, &mutex, &chan); - if (err != 0) { - return err; - } - assert(chan != NULL); - // Past this point we are responsible for releasing the mutex. - - // Pop off the next item from the channel. - _PyCrossInterpreterData *data = NULL; - err = _channel_next(chan, PyInterpreterState_GetID(interp), &data); - PyThread_release_lock(mutex); - if (err != 0) { - return err; - } - else if (data == NULL) { - assert(!PyErr_Occurred()); - return 0; - } - - // Convert the data back to an object. - PyObject *obj = _PyCrossInterpreterData_NewObject(data); - if (obj == NULL) { - assert(PyErr_Occurred()); - (void)_release_xid_data(data, 1); - PyMem_Free(data); - return -1; - } - int release_res = _release_xid_data(data, 0); - PyMem_Free(data); - if (release_res < 0) { - // The source interpreter has been destroyed already. - assert(PyErr_Occurred()); - Py_DECREF(obj); - return -1; - } - - *res = obj; - return 0; -} - -static int -_channel_drop(_channels *channels, int64_t id, int send, int recv) -{ - PyInterpreterState *interp = _get_current_interp(); - if (interp == NULL) { - return -1; - } - - // Look up the channel. - PyThread_type_lock mutex = NULL; - _PyChannelState *chan = NULL; - int err = _channels_lookup(channels, id, &mutex, &chan); - if (err != 0) { - return err; - } - // Past this point we are responsible for releasing the mutex. - - // Close one or both of the two ends. - int res = _channel_close_interpreter(chan, PyInterpreterState_GetID(interp), send-recv); - PyThread_release_lock(mutex); - return res; -} - -static int -_channel_close(_channels *channels, int64_t id, int end, int force) -{ - return _channels_close(channels, id, NULL, end, force); -} - -static int -_channel_is_associated(_channels *channels, int64_t cid, int64_t interp, - int send) -{ - _PyChannelState *chan = NULL; - int err = _channels_lookup(channels, cid, NULL, &chan); - if (err != 0) { - return err; - } - else if (send && chan->closing != NULL) { - return ERR_CHANNEL_CLOSED; - } - - _channelend *end = _channelend_find(send ? 
chan->ends->send : chan->ends->recv, - interp, NULL); - - return (end != NULL && end->open); -} - -/* ChannelID class */ - -typedef struct channelid { - PyObject_HEAD - int64_t id; - int end; - int resolve; - _channels *channels; -} channelid; - -struct channel_id_converter_data { - PyObject *module; - int64_t cid; -}; - -static int -channel_id_converter(PyObject *arg, void *ptr) -{ - int64_t cid; - struct channel_id_converter_data *data = ptr; - module_state *state = get_module_state(data->module); - assert(state != NULL); - if (PyObject_TypeCheck(arg, state->ChannelIDType)) { - cid = ((channelid *)arg)->id; - } - else if (PyIndex_Check(arg)) { - cid = PyLong_AsLongLong(arg); - if (cid == -1 && PyErr_Occurred()) { - return 0; - } - if (cid < 0) { - PyErr_Format(PyExc_ValueError, - "channel ID must be a non-negative int, got %R", arg); - return 0; - } - } - else { - PyErr_Format(PyExc_TypeError, - "channel ID must be an int, got %.100s", - Py_TYPE(arg)->tp_name); - return 0; - } - data->cid = cid; - return 1; -} - -static int -newchannelid(PyTypeObject *cls, int64_t cid, int end, _channels *channels, - int force, int resolve, channelid **res) -{ - *res = NULL; - - channelid *self = PyObject_New(channelid, cls); - if (self == NULL) { - return -1; - } - self->id = cid; - self->end = end; - self->resolve = resolve; - self->channels = channels; - - int err = _channels_add_id_object(channels, cid); - if (err != 0) { - if (force && err == ERR_CHANNEL_NOT_FOUND) { - assert(!PyErr_Occurred()); - } - else { - Py_DECREF((PyObject *)self); - return err; - } - } - - *res = self; - return 0; -} - -static _channels * _global_channels(void); - -static PyObject * -_channelid_new(PyObject *mod, PyTypeObject *cls, - PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {"id", "send", "recv", "force", "_resolve", NULL}; - int64_t cid; - struct channel_id_converter_data cid_data = { - .module = mod, - }; - int send = -1; - int recv = -1; - int force = 0; - int resolve = 0; - if (!PyArg_ParseTupleAndKeywords(args, kwds, - "O&|$pppp:ChannelID.__new__", kwlist, - channel_id_converter, &cid_data, - &send, &recv, &force, &resolve)) { - return NULL; - } - cid = cid_data.cid; - - // Handle "send" and "recv". - if (send == 0 && recv == 0) { - PyErr_SetString(PyExc_ValueError, - "'send' and 'recv' cannot both be False"); - return NULL; - } - - int end = 0; - if (send == 1) { - if (recv == 0 || recv == -1) { - end = CHANNEL_SEND; - } - } - else if (recv == 1) { - end = CHANNEL_RECV; - } - - PyObject *id = NULL; - int err = newchannelid(cls, cid, end, _global_channels(), - force, resolve, - (channelid **)&id); - if (handle_channel_error(err, mod, cid)) { - assert(id == NULL); - return NULL; - } - assert(id != NULL); - return id; -} - -static void -channelid_dealloc(PyObject *self) -{ - int64_t cid = ((channelid *)self)->id; - _channels *channels = ((channelid *)self)->channels; - - PyTypeObject *tp = Py_TYPE(self); - tp->tp_free(self); - /* "Instances of heap-allocated types hold a reference to their type." - * See: https://docs.python.org/3.11/howto/isolating-extensions.html#garbage-collection-protocol - * See: https://docs.python.org/3.11/c-api/typeobj.html#c.PyTypeObject.tp_traverse - */ - // XXX Why don't we implement Py_TPFLAGS_HAVE_GC, e.g. Py_tp_traverse, - // like we do for _abc._abc_data? 
- Py_DECREF(tp); - - _channels_drop_id_object(channels, cid); -} - -static PyObject * -channelid_repr(PyObject *self) -{ - PyTypeObject *type = Py_TYPE(self); - const char *name = _PyType_Name(type); - - channelid *cid = (channelid *)self; - const char *fmt; - if (cid->end == CHANNEL_SEND) { - fmt = "%s(%" PRId64 ", send=True)"; - } - else if (cid->end == CHANNEL_RECV) { - fmt = "%s(%" PRId64 ", recv=True)"; - } - else { - fmt = "%s(%" PRId64 ")"; - } - return PyUnicode_FromFormat(fmt, name, cid->id); -} - -static PyObject * -channelid_str(PyObject *self) -{ - channelid *cid = (channelid *)self; - return PyUnicode_FromFormat("%" PRId64 "", cid->id); -} - -static PyObject * -channelid_int(PyObject *self) -{ - channelid *cid = (channelid *)self; - return PyLong_FromLongLong(cid->id); -} - -static Py_hash_t -channelid_hash(PyObject *self) -{ - channelid *cid = (channelid *)self; - PyObject *id = PyLong_FromLongLong(cid->id); - if (id == NULL) { - return -1; - } - Py_hash_t hash = PyObject_Hash(id); - Py_DECREF(id); - return hash; -} - -static PyObject * -channelid_richcompare(PyObject *self, PyObject *other, int op) -{ - PyObject *res = NULL; - if (op != Py_EQ && op != Py_NE) { - Py_RETURN_NOTIMPLEMENTED; - } - - PyObject *mod = get_module_from_type(Py_TYPE(self)); - if (mod == NULL) { - return NULL; - } - module_state *state = get_module_state(mod); - if (state == NULL) { - goto done; - } - - if (!PyObject_TypeCheck(self, state->ChannelIDType)) { - res = Py_NewRef(Py_NotImplemented); - goto done; - } - - channelid *cid = (channelid *)self; - int equal; - if (PyObject_TypeCheck(other, state->ChannelIDType)) { - channelid *othercid = (channelid *)other; - equal = (cid->end == othercid->end) && (cid->id == othercid->id); - } - else if (PyLong_Check(other)) { - /* Fast path */ - int overflow; - long long othercid = PyLong_AsLongLongAndOverflow(other, &overflow); - if (othercid == -1 && PyErr_Occurred()) { - goto done; - } - equal = !overflow && (othercid >= 0) && (cid->id == othercid); - } - else if (PyNumber_Check(other)) { - PyObject *pyid = PyLong_FromLongLong(cid->id); - if (pyid == NULL) { - goto done; - } - res = PyObject_RichCompare(pyid, other, op); - Py_DECREF(pyid); - goto done; - } - else { - res = Py_NewRef(Py_NotImplemented); - goto done; - } - - if ((op == Py_EQ && equal) || (op == Py_NE && !equal)) { - res = Py_NewRef(Py_True); - } - else { - res = Py_NewRef(Py_False); - } - -done: - Py_DECREF(mod); - return res; -} - -static PyObject * -_channel_from_cid(PyObject *cid, int end) -{ - PyObject *highlevel = PyImport_ImportModule("interpreters"); - if (highlevel == NULL) { - PyErr_Clear(); - highlevel = PyImport_ImportModule("test.support.interpreters"); - if (highlevel == NULL) { - return NULL; - } - } - const char *clsname = (end == CHANNEL_RECV) ? 
"RecvChannel" : - "SendChannel"; - PyObject *cls = PyObject_GetAttrString(highlevel, clsname); - Py_DECREF(highlevel); - if (cls == NULL) { - return NULL; - } - PyObject *chan = PyObject_CallFunctionObjArgs(cls, cid, NULL); - Py_DECREF(cls); - if (chan == NULL) { - return NULL; - } - return chan; -} - -struct _channelid_xid { - int64_t id; - int end; - int resolve; -}; - -static PyObject * -_channelid_from_xid(_PyCrossInterpreterData *data) -{ - struct _channelid_xid *xid = (struct _channelid_xid *)data->data; - - PyObject *mod = _get_current_module(); - if (mod == NULL) { - return NULL; - } - module_state *state = get_module_state(mod); - if (state == NULL) { - return NULL; + goto finally; } - // Note that we do not preserve the "resolve" flag. - PyObject *cid = NULL; - int err = newchannelid(state->ChannelIDType, xid->id, xid->end, - _global_channels(), 0, 0, - (channelid **)&cid); - if (err != 0) { - assert(cid == NULL); - (void)handle_channel_error(err, mod, xid->id); - goto done; - } - assert(cid != NULL); - if (xid->end == 0) { - goto done; + PyObject *name = PyUnicode_FromFormat("%S", exctype); + if (name == NULL) { + failure = "unable to format exception type name"; + goto finally; } - if (!xid->resolve) { - goto done; + err->name = _copy_raw_string(name); + Py_DECREF(name); + if (err->name == NULL) { + if (PyErr_ExceptionMatches(PyExc_MemoryError)) { + failure = "out of memory copying exception type name"; + } else { + failure = "unable to encode and copy exception type name"; + } + goto finally; } - /* Try returning a high-level channel end but fall back to the ID. */ - PyObject *chan = _channel_from_cid(cid, xid->end); - if (chan == NULL) { - PyErr_Clear(); - goto done; + if (exc != NULL) { + PyObject *msg = PyUnicode_FromFormat("%S", exc); + if (msg == NULL) { + failure = "unable to format exception message"; + goto finally; + } + err->msg = _copy_raw_string(msg); + Py_DECREF(msg); + if (err->msg == NULL) { + if (PyErr_ExceptionMatches(PyExc_MemoryError)) { + failure = "out of memory copying exception message"; + } else { + failure = "unable to encode and copy exception message"; + } + goto finally; + } } - Py_DECREF(cid); - cid = chan; -done: - Py_DECREF(mod); - return cid; -} - -static int -_channelid_shared(PyThreadState *tstate, PyObject *obj, - _PyCrossInterpreterData *data) -{ - if (_PyCrossInterpreterData_InitWithSize( - data, tstate->interp, sizeof(struct _channelid_xid), obj, - _channelid_from_xid - ) < 0) - { - return -1; +finally: + if (failure != NULL) { + PyErr_Clear(); + if (err->name != NULL) { + PyMem_Free(err->name); + err->name = NULL; + } + err->msg = failure; } - struct _channelid_xid *xid = (struct _channelid_xid *)data->data; - xid->id = ((channelid *)obj)->id; - xid->end = ((channelid *)obj)->end; - xid->resolve = ((channelid *)obj)->resolve; - return 0; + return err; } -static PyObject * -channelid_end(PyObject *self, void *end) +static void +_sharedexception_apply(_sharedexception *exc, PyObject *wrapperclass) { - int force = 1; - channelid *cid = (channelid *)self; - if (end != NULL) { - PyObject *id = NULL; - int err = newchannelid(Py_TYPE(self), cid->id, *(int *)end, - cid->channels, force, cid->resolve, - (channelid **)&id); - if (err != 0) { - assert(id == NULL); - PyObject *mod = get_module_from_type(Py_TYPE(self)); - if (mod == NULL) { - return NULL; - } - (void)handle_channel_error(err, mod, cid->id); - Py_DECREF(mod); - return NULL; + if (exc->name != NULL) { + if (exc->msg != NULL) { + PyErr_Format(wrapperclass, "%s: %s", exc->name, exc->msg); 
+ } + else { + PyErr_SetString(wrapperclass, exc->name); } - assert(id != NULL); - return id; } - - if (cid->end == CHANNEL_SEND) { - return PyUnicode_InternFromString("send"); + else if (exc->msg != NULL) { + PyErr_SetString(wrapperclass, exc->msg); } - if (cid->end == CHANNEL_RECV) { - return PyUnicode_InternFromString("recv"); + else { + PyErr_SetNone(wrapperclass); } - return PyUnicode_InternFromString("both"); } -static int _channelid_end_send = CHANNEL_SEND; -static int _channelid_end_recv = CHANNEL_RECV; - -static PyGetSetDef channelid_getsets[] = { - {"end", (getter)channelid_end, NULL, - PyDoc_STR("'send', 'recv', or 'both'")}, - {"send", (getter)channelid_end, NULL, - PyDoc_STR("the 'send' end of the channel"), &_channelid_end_send}, - {"recv", (getter)channelid_end, NULL, - PyDoc_STR("the 'recv' end of the channel"), &_channelid_end_recv}, - {NULL} -}; - -PyDoc_STRVAR(channelid_doc, -"A channel ID identifies a channel and may be used as an int."); - -static PyType_Slot ChannelIDType_slots[] = { - {Py_tp_dealloc, (destructor)channelid_dealloc}, - {Py_tp_doc, (void *)channelid_doc}, - {Py_tp_repr, (reprfunc)channelid_repr}, - {Py_tp_str, (reprfunc)channelid_str}, - {Py_tp_hash, channelid_hash}, - {Py_tp_richcompare, channelid_richcompare}, - {Py_tp_getset, channelid_getsets}, - // number slots - {Py_nb_int, (unaryfunc)channelid_int}, - {Py_nb_index, (unaryfunc)channelid_int}, - {0, NULL}, -}; - -static PyType_Spec ChannelIDType_spec = { - .name = "_xxsubinterpreters.ChannelID", - .basicsize = sizeof(channelid), - .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | - Py_TPFLAGS_DISALLOW_INSTANTIATION | Py_TPFLAGS_IMMUTABLETYPE), - .slots = ChannelIDType_slots, -}; - /* interpreter-specific code ************************************************/ static int -interp_exceptions_init(PyObject *mod) +exceptions_init(PyObject *mod) { module_state *state = get_module_state(mod); if (state == NULL) { @@ -2145,24 +430,12 @@ _ensure_not_running(PyInterpreterState *interp) static int _run_script(PyInterpreterState *interp, const char *codestr, - _sharedns *shared, int needs_import, - _sharedexception **exc) + _sharedns *shared, _sharedexception **exc) { PyObject *exctype = NULL; PyObject *excval = NULL; PyObject *tb = NULL; - if (needs_import) { - // It might not have been imported yet in the current interpreter. - // However, it will (almost) always have been imported already - // in the main interpreter. - PyObject *mod = PyImport_ImportModule(MODULE_NAME); - if (mod == NULL) { - goto error; - } - Py_DECREF(mod); - } - PyObject *main_mod = _PyInterpreterState_GetMainModule(interp); if (main_mod == NULL) { goto error; @@ -2223,9 +496,7 @@ _run_script_in_interpreter(PyObject *mod, PyInterpreterState *interp, } module_state *state = get_module_state(mod); - int needs_import = 0; - _sharedns *shared = _get_shared_ns(shareables, state->ChannelIDType, - &needs_import); + _sharedns *shared = _get_shared_ns(shareables); if (shared == NULL && PyErr_Occurred()) { return -1; } @@ -2241,7 +512,7 @@ _run_script_in_interpreter(PyObject *mod, PyInterpreterState *interp, // Run the script. _sharedexception *exc = NULL; - int result = _run_script(interp, codestr, shared, needs_import, &exc); + int result = _run_script(interp, codestr, shared, &exc); // Switch back. 
if (save_tstate != NULL) { @@ -2269,50 +540,6 @@ _run_script_in_interpreter(PyObject *mod, PyInterpreterState *interp, /* module level code ********************************************************/ -/* globals is the process-global state for the module. It holds all - the data that we need to share between interpreters, so it cannot - hold PyObject values. */ -static struct globals { - int module_count; - _channels channels; -} _globals = {0}; - -static int -_globals_init(void) -{ - // XXX This isn't thread-safe. - _globals.module_count++; - if (_globals.module_count > 1) { - // Already initialized. - return 0; - } - - assert(_globals.channels.mutex == NULL); - PyThread_type_lock mutex = PyThread_allocate_lock(); - if (mutex == NULL) { - return ERR_CHANNELS_MUTEX_INIT; - } - _channels_init(&_globals.channels, mutex); - return 0; -} - -static void -_globals_fini(void) -{ - // XXX This isn't thread-safe. - _globals.module_count--; - if (_globals.module_count > 0) { - return; - } - - _channels_fini(&_globals.channels); -} - -static _channels * -_global_channels(void) { - return &_globals.channels; -} - static PyObject * interp_create(PyObject *self, PyObject *args, PyObject *kwds) { @@ -2578,358 +805,6 @@ PyDoc_STRVAR(is_running_doc, \n\ Return whether or not the identified interpreter is running."); -static PyObject * -channel_create(PyObject *self, PyObject *Py_UNUSED(ignored)) -{ - int64_t cid = _channel_create(&_globals.channels); - if (cid < 0) { - (void)handle_channel_error(-1, self, cid); - return NULL; - } - module_state *state = get_module_state(self); - if (state == NULL) { - return NULL; - } - PyObject *id = NULL; - int err = newchannelid(state->ChannelIDType, cid, 0, - &_globals.channels, 0, 0, - (channelid **)&id); - if (handle_channel_error(err, self, cid)) { - assert(id == NULL); - err = _channel_destroy(&_globals.channels, cid); - if (handle_channel_error(err, self, cid)) { - // XXX issue a warning? - } - return NULL; - } - assert(id != NULL); - assert(((channelid *)id)->channels != NULL); - return id; -} - -PyDoc_STRVAR(channel_create_doc, -"channel_create() -> cid\n\ -\n\ -Create a new cross-interpreter channel and return a unique generated ID."); - -static PyObject * -channel_destroy(PyObject *self, PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {"cid", NULL}; - int64_t cid; - struct channel_id_converter_data cid_data = { - .module = self, - }; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O&:channel_destroy", kwlist, - channel_id_converter, &cid_data)) { - return NULL; - } - cid = cid_data.cid; - - int err = _channel_destroy(&_globals.channels, cid); - if (handle_channel_error(err, self, cid)) { - return NULL; - } - Py_RETURN_NONE; -} - -PyDoc_STRVAR(channel_destroy_doc, -"channel_destroy(cid)\n\ -\n\ -Close and finalize the channel. 
Afterward attempts to use the channel\n\ -will behave as though it never existed."); - -static PyObject * -channel_list_all(PyObject *self, PyObject *Py_UNUSED(ignored)) -{ - int64_t count = 0; - int64_t *cids = _channels_list_all(&_globals.channels, &count); - if (cids == NULL) { - if (count == 0) { - return PyList_New(0); - } - return NULL; - } - PyObject *ids = PyList_New((Py_ssize_t)count); - if (ids == NULL) { - goto finally; - } - module_state *state = get_module_state(self); - if (state == NULL) { - Py_DECREF(ids); - ids = NULL; - goto finally; - } - int64_t *cur = cids; - for (int64_t i=0; i < count; cur++, i++) { - PyObject *id = NULL; - int err = newchannelid(state->ChannelIDType, *cur, 0, - &_globals.channels, 0, 0, - (channelid **)&id); - if (handle_channel_error(err, self, *cur)) { - assert(id == NULL); - Py_SETREF(ids, NULL); - break; - } - assert(id != NULL); - PyList_SET_ITEM(ids, (Py_ssize_t)i, id); - } - -finally: - PyMem_Free(cids); - return ids; -} - -PyDoc_STRVAR(channel_list_all_doc, -"channel_list_all() -> [cid]\n\ -\n\ -Return the list of all IDs for active channels."); - -static PyObject * -channel_list_interpreters(PyObject *self, PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {"cid", "send", NULL}; - int64_t cid; /* Channel ID */ - struct channel_id_converter_data cid_data = { - .module = self, - }; - int send = 0; /* Send or receive end? */ - int64_t id; - PyObject *ids, *id_obj; - PyInterpreterState *interp; - - if (!PyArg_ParseTupleAndKeywords( - args, kwds, "O&$p:channel_list_interpreters", - kwlist, channel_id_converter, &cid_data, &send)) { - return NULL; - } - cid = cid_data.cid; - - ids = PyList_New(0); - if (ids == NULL) { - goto except; - } - - interp = PyInterpreterState_Head(); - while (interp != NULL) { - id = PyInterpreterState_GetID(interp); - assert(id >= 0); - int res = _channel_is_associated(&_globals.channels, cid, id, send); - if (res < 0) { - (void)handle_channel_error(res, self, cid); - goto except; - } - if (res) { - id_obj = _PyInterpreterState_GetIDObject(interp); - if (id_obj == NULL) { - goto except; - } - res = PyList_Insert(ids, 0, id_obj); - Py_DECREF(id_obj); - if (res < 0) { - goto except; - } - } - interp = PyInterpreterState_Next(interp); - } - - goto finally; - -except: - Py_CLEAR(ids); - -finally: - return ids; -} - -PyDoc_STRVAR(channel_list_interpreters_doc, -"channel_list_interpreters(cid, *, send) -> [id]\n\ -\n\ -Return the list of all interpreter IDs associated with an end of the channel.\n\ -\n\ -The 'send' argument should be a boolean indicating whether to use the send or\n\ -receive end."); - - -static PyObject * -channel_send(PyObject *self, PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {"cid", "obj", NULL}; - int64_t cid; - struct channel_id_converter_data cid_data = { - .module = self, - }; - PyObject *obj; - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O&O:channel_send", kwlist, - channel_id_converter, &cid_data, &obj)) { - return NULL; - } - cid = cid_data.cid; - - int err = _channel_send(&_globals.channels, cid, obj); - if (handle_channel_error(err, self, cid)) { - return NULL; - } - Py_RETURN_NONE; -} - -PyDoc_STRVAR(channel_send_doc, -"channel_send(cid, obj)\n\ -\n\ -Add the object's data to the channel's queue."); - -static PyObject * -channel_recv(PyObject *self, PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {"cid", "default", NULL}; - int64_t cid; - struct channel_id_converter_data cid_data = { - .module = self, - }; - PyObject *dflt = NULL; - if 
(!PyArg_ParseTupleAndKeywords(args, kwds, "O&|O:channel_recv", kwlist, - channel_id_converter, &cid_data, &dflt)) { - return NULL; - } - cid = cid_data.cid; - - PyObject *obj = NULL; - int err = _channel_recv(&_globals.channels, cid, &obj); - if (handle_channel_error(err, self, cid)) { - return NULL; - } - Py_XINCREF(dflt); - if (obj == NULL) { - // Use the default. - if (dflt == NULL) { - (void)handle_channel_error(ERR_CHANNEL_EMPTY, self, cid); - return NULL; - } - obj = Py_NewRef(dflt); - } - Py_XDECREF(dflt); - return obj; -} - -PyDoc_STRVAR(channel_recv_doc, -"channel_recv(cid, [default]) -> obj\n\ -\n\ -Return a new object from the data at the front of the channel's queue.\n\ -\n\ -If there is nothing to receive then raise ChannelEmptyError, unless\n\ -a default value is provided. In that case return it."); - -static PyObject * -channel_close(PyObject *self, PyObject *args, PyObject *kwds) -{ - static char *kwlist[] = {"cid", "send", "recv", "force", NULL}; - int64_t cid; - struct channel_id_converter_data cid_data = { - .module = self, - }; - int send = 0; - int recv = 0; - int force = 0; - if (!PyArg_ParseTupleAndKeywords(args, kwds, - "O&|$ppp:channel_close", kwlist, - channel_id_converter, &cid_data, - &send, &recv, &force)) { - return NULL; - } - cid = cid_data.cid; - - int err = _channel_close(&_globals.channels, cid, send-recv, force); - if (handle_channel_error(err, self, cid)) { - return NULL; - } - Py_RETURN_NONE; -} - -PyDoc_STRVAR(channel_close_doc, -"channel_close(cid, *, send=None, recv=None, force=False)\n\ -\n\ -Close the channel for all interpreters.\n\ -\n\ -If the channel is empty then the keyword args are ignored and both\n\ -ends are immediately closed. Otherwise, if 'force' is True then\n\ -all queued items are released and both ends are immediately\n\ -closed.\n\ -\n\ -If the channel is not empty *and* 'force' is False then following\n\ -happens:\n\ -\n\ - * recv is True (regardless of send):\n\ - - raise ChannelNotEmptyError\n\ - * recv is None and send is None:\n\ - - raise ChannelNotEmptyError\n\ - * send is True and recv is not True:\n\ - - fully close the 'send' end\n\ - - close the 'recv' end to interpreters not already receiving\n\ - - fully close it once empty\n\ -\n\ -Closing an already closed channel results in a ChannelClosedError.\n\ -\n\ -Once the channel's ID has no more ref counts in any interpreter\n\ -the channel will be destroyed."); - -static PyObject * -channel_release(PyObject *self, PyObject *args, PyObject *kwds) -{ - // Note that only the current interpreter is affected. - static char *kwlist[] = {"cid", "send", "recv", "force", NULL}; - int64_t cid; - struct channel_id_converter_data cid_data = { - .module = self, - }; - int send = 0; - int recv = 0; - int force = 0; - if (!PyArg_ParseTupleAndKeywords(args, kwds, - "O&|$ppp:channel_release", kwlist, - channel_id_converter, &cid_data, - &send, &recv, &force)) { - return NULL; - } - cid = cid_data.cid; - if (send == 0 && recv == 0) { - send = 1; - recv = 1; - } - - // XXX Handle force is True. - // XXX Fix implicit release. - - int err = _channel_drop(&_globals.channels, cid, send, recv); - if (handle_channel_error(err, self, cid)) { - return NULL; - } - Py_RETURN_NONE; -} - -PyDoc_STRVAR(channel_release_doc, -"channel_release(cid, *, send=None, recv=None, force=True)\n\ -\n\ -Close the channel for the current interpreter. 'send' and 'recv'\n\ -(bool) may be used to indicate the ends to close. By default both\n\ -ends are closed. 
Closing an already closed end is a noop."); - -static PyObject * -channel__channel_id(PyObject *self, PyObject *args, PyObject *kwds) -{ - module_state *state = get_module_state(self); - if (state == NULL) { - return NULL; - } - PyTypeObject *cls = state->ChannelIDType; - PyObject *mod = get_module_from_owned_type(cls); - if (mod == NULL) { - return NULL; - } - PyObject *cid = _channelid_new(mod, cls, args, kwds); - Py_DECREF(mod); - return cid; -} - static PyMethodDef module_functions[] = { {"create", _PyCFunction_CAST(interp_create), METH_VARARGS | METH_KEYWORDS, create_doc}, @@ -2941,6 +816,7 @@ static PyMethodDef module_functions[] = { METH_NOARGS, get_current_doc}, {"get_main", interp_get_main, METH_NOARGS, get_main_doc}, + {"is_running", _PyCFunction_CAST(interp_is_running), METH_VARARGS | METH_KEYWORDS, is_running_doc}, {"run_string", _PyCFunction_CAST(interp_run_string), @@ -2949,25 +825,6 @@ static PyMethodDef module_functions[] = { {"is_shareable", _PyCFunction_CAST(object_is_shareable), METH_VARARGS | METH_KEYWORDS, is_shareable_doc}, - {"channel_create", channel_create, - METH_NOARGS, channel_create_doc}, - {"channel_destroy", _PyCFunction_CAST(channel_destroy), - METH_VARARGS | METH_KEYWORDS, channel_destroy_doc}, - {"channel_list_all", channel_list_all, - METH_NOARGS, channel_list_all_doc}, - {"channel_list_interpreters", _PyCFunction_CAST(channel_list_interpreters), - METH_VARARGS | METH_KEYWORDS, channel_list_interpreters_doc}, - {"channel_send", _PyCFunction_CAST(channel_send), - METH_VARARGS | METH_KEYWORDS, channel_send_doc}, - {"channel_recv", _PyCFunction_CAST(channel_recv), - METH_VARARGS | METH_KEYWORDS, channel_recv_doc}, - {"channel_close", _PyCFunction_CAST(channel_close), - METH_VARARGS | METH_KEYWORDS, channel_close_doc}, - {"channel_release", _PyCFunction_CAST(channel_release), - METH_VARARGS | METH_KEYWORDS, channel_release_doc}, - {"_channel_id", _PyCFunction_CAST(channel__channel_id), - METH_VARARGS | METH_KEYWORDS, NULL}, - {NULL, NULL} /* sentinel */ }; @@ -2981,29 +838,8 @@ The 'interpreters' module provides a more convenient interface."); static int module_exec(PyObject *mod) { - if (_globals_init() != 0) { - return -1; - } - - module_state *state = get_module_state(mod); - if (state == NULL) { - goto error; - } - /* Add exception types */ - if (interp_exceptions_init(mod) != 0) { - goto error; - } - if (channel_exceptions_init(mod) != 0) { - goto error; - } - - /* Add other types */ - - // ChannelID - state->ChannelIDType = add_new_type( - mod, &ChannelIDType_spec, _channelid_shared); - if (state->ChannelIDType == NULL) { + if (exceptions_init(mod) != 0) { goto error; } @@ -3015,8 +851,6 @@ module_exec(PyObject *mod) return 0; error: - (void)_PyCrossInterpreterData_UnregisterClass(state->ChannelIDType); - _globals_fini(); return -1; } @@ -3049,7 +883,6 @@ module_free(void *mod) module_state *state = get_module_state(mod); assert(state != NULL); clear_module_state(state); - _globals_fini(); } static struct PyModuleDef moduledef = { diff --git a/PC/config.c b/PC/config.c index 9d900c78e40d..cdb5db23c4ae 100644 --- a/PC/config.c +++ b/PC/config.c @@ -37,6 +37,7 @@ extern PyObject* PyInit__weakref(void); /* XXX: These two should really be extracted to standalone extensions. 
*/ extern PyObject* PyInit_xxsubtype(void); extern PyObject* PyInit__xxsubinterpreters(void); +extern PyObject* PyInit__xxinterpchannels(void); extern PyObject* PyInit__random(void); extern PyObject* PyInit_itertools(void); extern PyObject* PyInit__collections(void); @@ -134,6 +135,7 @@ struct _inittab _PyImport_Inittab[] = { {"xxsubtype", PyInit_xxsubtype}, {"_xxsubinterpreters", PyInit__xxsubinterpreters}, + {"_xxinterpchannels", PyInit__xxinterpchannels}, #ifdef _Py_HAVE_ZLIB {"zlib", PyInit_zlib}, #endif diff --git a/PCbuild/pythoncore.vcxproj b/PCbuild/pythoncore.vcxproj index d3bd5b378e0d..397d22abe235 100644 --- a/PCbuild/pythoncore.vcxproj +++ b/PCbuild/pythoncore.vcxproj @@ -418,6 +418,7 @@ + diff --git a/PCbuild/pythoncore.vcxproj.filters b/PCbuild/pythoncore.vcxproj.filters index c1b531fd818a..bcbedcc3235b 100644 --- a/PCbuild/pythoncore.vcxproj.filters +++ b/PCbuild/pythoncore.vcxproj.filters @@ -1322,6 +1322,9 @@ Modules + + Modules + Parser diff --git a/Tools/build/generate_stdlib_module_names.py b/Tools/build/generate_stdlib_module_names.py index c8a23f41fdd4..d15e5e2d5450 100644 --- a/Tools/build/generate_stdlib_module_names.py +++ b/Tools/build/generate_stdlib_module_names.py @@ -36,6 +36,7 @@ '_testmultiphase', '_testsinglephase', '_xxsubinterpreters', + '_xxinterpchannels', '_xxtestfuzz', 'idlelib.idle_test', 'test', diff --git a/configure b/configure index 8b707cda6212..aef8103085bc 100755 --- a/configure +++ b/configure @@ -752,6 +752,8 @@ MODULE__MULTIPROCESSING_FALSE MODULE__MULTIPROCESSING_TRUE MODULE__ZONEINFO_FALSE MODULE__ZONEINFO_TRUE +MODULE__XXINTERPCHANNELS_FALSE +MODULE__XXINTERPCHANNELS_TRUE MODULE__XXSUBINTERPRETERS_FALSE MODULE__XXSUBINTERPRETERS_TRUE MODULE__TYPING_FALSE @@ -25615,6 +25617,7 @@ case $ac_sys_system in #( py_cv_module__scproxy=n/a py_cv_module__tkinter=n/a py_cv_module__xxsubinterpreters=n/a + py_cv_module__xxinterpchannels=n/a py_cv_module_grp=n/a py_cv_module_nis=n/a py_cv_module_ossaudiodev=n/a @@ -26057,6 +26060,26 @@ fi +fi + + + if test "$py_cv_module__xxinterpchannels" != "n/a"; then : + py_cv_module__xxinterpchannels=yes +fi + if test "$py_cv_module__xxinterpchannels" = yes; then + MODULE__XXINTERPCHANNELS_TRUE= + MODULE__XXINTERPCHANNELS_FALSE='#' +else + MODULE__XXINTERPCHANNELS_TRUE='#' + MODULE__XXINTERPCHANNELS_FALSE= +fi + + as_fn_append MODULE_BLOCK "MODULE__XXINTERPCHANNELS_STATE=$py_cv_module__xxinterpchannels$as_nl" + if test "x$py_cv_module__xxinterpchannels" = xyes; then : + + + + fi @@ -28236,6 +28259,10 @@ if test -z "${MODULE__XXSUBINTERPRETERS_TRUE}" && test -z "${MODULE__XXSUBINTERP as_fn_error $? "conditional \"MODULE__XXSUBINTERPRETERS\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi +if test -z "${MODULE__XXINTERPCHANNELS_TRUE}" && test -z "${MODULE__XXINTERPCHANNELS_FALSE}"; then + as_fn_error $? "conditional \"MODULE__XXINTERPCHANNELS\" was never defined. +Usually this means the macro was only invoked conditionally." "$LINENO" 5 +fi if test -z "${MODULE__ZONEINFO_TRUE}" && test -z "${MODULE__ZONEINFO_FALSE}"; then as_fn_error $? "conditional \"MODULE__ZONEINFO\" was never defined. Usually this means the macro was only invoked conditionally." 
"$LINENO" 5 diff --git a/configure.ac b/configure.ac index 5eee4586680d..010bca855f00 100644 --- a/configure.ac +++ b/configure.ac @@ -7017,6 +7017,7 @@ AS_CASE([$ac_sys_system], [_scproxy], [_tkinter], [_xxsubinterpreters], + [_xxinterpchannels], [grp], [nis], [ossaudiodev], @@ -7135,6 +7136,7 @@ PY_STDLIB_MOD_SIMPLE([select]) PY_STDLIB_MOD_SIMPLE([_struct]) PY_STDLIB_MOD_SIMPLE([_typing]) PY_STDLIB_MOD_SIMPLE([_xxsubinterpreters]) +PY_STDLIB_MOD_SIMPLE([_xxinterpchannels]) PY_STDLIB_MOD_SIMPLE([_zoneinfo]) dnl multiprocessing modules From webhook-mailer at python.org Fri Feb 3 22:33:58 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Sat, 04 Feb 2023 03:33:58 -0000 Subject: [Python-checkins] Add missing `versionadded` directive for `PyCode_Addr2Location` (#101347) Message-ID: https://github.com/python/cpython/commit/f11a3d1ebe0c78f8c159c63a37022b9b96f720dd commit: f11a3d1ebe0c78f8c159c63a37022b9b96f720dd branch: main author: Max Bachmann committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-04T09:03:28+05:30 summary: Add missing `versionadded` directive for `PyCode_Addr2Location` (#101347) files: M Doc/c-api/code.rst diff --git a/Doc/c-api/code.rst b/Doc/c-api/code.rst index a6eb86f1a0b5..ae75d68901d7 100644 --- a/Doc/c-api/code.rst +++ b/Doc/c-api/code.rst @@ -77,6 +77,8 @@ bound into a function. Returns ``1`` if the function succeeds and 0 otherwise. + .. versionadded:: 3.11 + .. c:function:: PyObject* PyCode_GetCode(PyCodeObject *co) Equivalent to the Python code ``getattr(co, 'co_code')``. From webhook-mailer at python.org Fri Feb 3 22:40:57 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 04 Feb 2023 03:40:57 -0000 Subject: [Python-checkins] Add missing `versionadded` directive for `PyCode_Addr2Location` (GH-101347) Message-ID: https://github.com/python/cpython/commit/4c763463fca5eed1a99c73e7a43008420292d4f7 commit: 4c763463fca5eed1a99c73e7a43008420292d4f7 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-03T19:40:51-08:00 summary: Add missing `versionadded` directive for `PyCode_Addr2Location` (GH-101347) (cherry picked from commit f11a3d1ebe0c78f8c159c63a37022b9b96f720dd) Co-authored-by: Max Bachmann files: M Doc/c-api/code.rst diff --git a/Doc/c-api/code.rst b/Doc/c-api/code.rst index 9054e7ee3181..ee39f2aea85b 100644 --- a/Doc/c-api/code.rst +++ b/Doc/c-api/code.rst @@ -77,6 +77,8 @@ bound into a function. Returns ``1`` if the function succeeds and 0 otherwise. + .. versionadded:: 3.11 + .. c:function:: PyObject* PyCode_GetCode(PyCodeObject *co) Equivalent to the Python code ``getattr(co, 'co_code')``. From webhook-mailer at python.org Fri Feb 3 22:49:35 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Sat, 04 Feb 2023 03:49:35 -0000 Subject: [Python-checkins] GH-56426: Add cross-reference to the documentation for faulthandler, traceback, and pdb. (#101157) Message-ID: https://github.com/python/cpython/commit/cef9de62b8bf5e2d11d5a074012dfa81dc4ea935 commit: cef9de62b8bf5e2d11d5a074012dfa81dc4ea935 branch: main author: Furkan Onder committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-04T09:19:29+05:30 summary: GH-56426: Add cross-reference to the documentation for faulthandler, traceback, and pdb. (#101157) Co-authored-by: C.A.M. 
Gerlach files: M Doc/library/faulthandler.rst M Doc/library/pdb.rst M Doc/library/traceback.rst diff --git a/Doc/library/faulthandler.rst b/Doc/library/faulthandler.rst index 07a748994144..f64dfeb5e081 100644 --- a/Doc/library/faulthandler.rst +++ b/Doc/library/faulthandler.rst @@ -43,6 +43,13 @@ Python is deadlocked. The :ref:`Python Development Mode ` calls :func:`faulthandler.enable` at Python startup. +.. seealso:: + + Module :mod:`pdb` + Interactive source code debugger for Python programs. + + Module :mod:`traceback` + Standard interface to extract, format and print stack traces of Python programs. Dumping the traceback --------------------- @@ -52,6 +59,8 @@ Dumping the traceback Dump the tracebacks of all threads into *file*. If *all_threads* is ``False``, dump only the current thread. + .. seealso:: :func:`traceback.print_tb`, which can be used to print a traceback object. + .. versionchanged:: 3.5 Added support for passing file descriptor to this function. @@ -178,4 +187,3 @@ handler: File "/home/python/cpython/Lib/ctypes/__init__.py", line 486 in string_at File "", line 1 in Segmentation fault - diff --git a/Doc/library/pdb.rst b/Doc/library/pdb.rst index 383c3adcf289..4ae12a5d03a7 100644 --- a/Doc/library/pdb.rst +++ b/Doc/library/pdb.rst @@ -27,6 +27,15 @@ The debugger is extensible -- it is actually defined as the class :class:`Pdb`. This is currently undocumented but easily understood by reading the source. The extension interface uses the modules :mod:`bdb` and :mod:`cmd`. +.. seealso:: + + Module :mod:`faulthandler` + Used to dump Python tracebacks explicitly, on a fault, after a timeout, + or on a user signal. + + Module :mod:`traceback` + Standard interface to extract, format and print stack traces of Python programs. + The debugger's prompt is ``(Pdb)``. Typical usage to run a program under control of the debugger is:: diff --git a/Doc/library/traceback.rst b/Doc/library/traceback.rst index f8c1eabadacf..69818baf184d 100644 --- a/Doc/library/traceback.rst +++ b/Doc/library/traceback.rst @@ -20,8 +20,15 @@ The module uses traceback objects --- this is the object type that is stored in the :data:`sys.last_traceback` variable and returned as the third item from :func:`sys.exc_info`. -The module defines the following functions: +.. seealso:: + + Module :mod:`faulthandler` + Used to dump Python tracebacks explicitly, on a fault, after a timeout, or on a user signal. + Module :mod:`pdb` + Interactive source code debugger for Python programs. + +The module defines the following functions: .. function:: print_tb(tb, limit=None, file=None) From webhook-mailer at python.org Fri Feb 3 22:57:03 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 04 Feb 2023 03:57:03 -0000 Subject: [Python-checkins] GH-56426: Add cross-reference to the documentation for faulthandler, traceback, and pdb. (GH-101157) Message-ID: https://github.com/python/cpython/commit/7cbcfbe2fffebfc57c9b50ae937bfbb896401fba commit: 7cbcfbe2fffebfc57c9b50ae937bfbb896401fba branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-03T19:56:57-08:00 summary: GH-56426: Add cross-reference to the documentation for faulthandler, traceback, and pdb. (GH-101157) (cherry picked from commit cef9de62b8bf5e2d11d5a074012dfa81dc4ea935) Co-authored-by: Furkan Onder Co-authored-by: C.A.M. 
Gerlach files: M Doc/library/faulthandler.rst M Doc/library/pdb.rst M Doc/library/traceback.rst diff --git a/Doc/library/faulthandler.rst b/Doc/library/faulthandler.rst index be0912376bd8..b80de69a79a8 100644 --- a/Doc/library/faulthandler.rst +++ b/Doc/library/faulthandler.rst @@ -43,6 +43,13 @@ Python is deadlocked. The :ref:`Python Development Mode ` calls :func:`faulthandler.enable` at Python startup. +.. seealso:: + + Module :mod:`pdb` + Interactive source code debugger for Python programs. + + Module :mod:`traceback` + Standard interface to extract, format and print stack traces of Python programs. Dumping the traceback --------------------- @@ -52,6 +59,8 @@ Dumping the traceback Dump the tracebacks of all threads into *file*. If *all_threads* is ``False``, dump only the current thread. + .. seealso:: :func:`traceback.print_tb`, which can be used to print a traceback object. + .. versionchanged:: 3.5 Added support for passing file descriptor to this function. @@ -178,4 +187,3 @@ handler: File "/home/python/cpython/Lib/ctypes/__init__.py", line 486 in string_at File "", line 1 in Segmentation fault - diff --git a/Doc/library/pdb.rst b/Doc/library/pdb.rst index 383c3adcf289..4ae12a5d03a7 100644 --- a/Doc/library/pdb.rst +++ b/Doc/library/pdb.rst @@ -27,6 +27,15 @@ The debugger is extensible -- it is actually defined as the class :class:`Pdb`. This is currently undocumented but easily understood by reading the source. The extension interface uses the modules :mod:`bdb` and :mod:`cmd`. +.. seealso:: + + Module :mod:`faulthandler` + Used to dump Python tracebacks explicitly, on a fault, after a timeout, + or on a user signal. + + Module :mod:`traceback` + Standard interface to extract, format and print stack traces of Python programs. + The debugger's prompt is ``(Pdb)``. Typical usage to run a program under control of the debugger is:: diff --git a/Doc/library/traceback.rst b/Doc/library/traceback.rst index 2f253dff1de7..a7a015f17193 100644 --- a/Doc/library/traceback.rst +++ b/Doc/library/traceback.rst @@ -20,8 +20,15 @@ The module uses traceback objects --- this is the object type that is stored in the :data:`sys.last_traceback` variable and returned as the third item from :func:`sys.exc_info`. -The module defines the following functions: +.. seealso:: + + Module :mod:`faulthandler` + Used to dump Python tracebacks explicitly, on a fault, after a timeout, or on a user signal. + Module :mod:`pdb` + Interactive source code debugger for Python programs. + +The module defines the following functions: .. function:: print_tb(tb, limit=None, file=None) From webhook-mailer at python.org Fri Feb 3 22:57:39 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 04 Feb 2023 03:57:39 -0000 Subject: [Python-checkins] GH-56426: Add cross-reference to the documentation for faulthandler, traceback, and pdb. (GH-101157) Message-ID: https://github.com/python/cpython/commit/dbdbc796d264b8d63c89c2ca3ea9634bbaee2d2d commit: dbdbc796d264b8d63c89c2ca3ea9634bbaee2d2d branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-03T19:57:34-08:00 summary: GH-56426: Add cross-reference to the documentation for faulthandler, traceback, and pdb. (GH-101157) (cherry picked from commit cef9de62b8bf5e2d11d5a074012dfa81dc4ea935) Co-authored-by: Furkan Onder Co-authored-by: C.A.M. 
Gerlach files: M Doc/library/faulthandler.rst M Doc/library/pdb.rst M Doc/library/traceback.rst diff --git a/Doc/library/faulthandler.rst b/Doc/library/faulthandler.rst index be0912376bd8..b80de69a79a8 100644 --- a/Doc/library/faulthandler.rst +++ b/Doc/library/faulthandler.rst @@ -43,6 +43,13 @@ Python is deadlocked. The :ref:`Python Development Mode ` calls :func:`faulthandler.enable` at Python startup. +.. seealso:: + + Module :mod:`pdb` + Interactive source code debugger for Python programs. + + Module :mod:`traceback` + Standard interface to extract, format and print stack traces of Python programs. Dumping the traceback --------------------- @@ -52,6 +59,8 @@ Dumping the traceback Dump the tracebacks of all threads into *file*. If *all_threads* is ``False``, dump only the current thread. + .. seealso:: :func:`traceback.print_tb`, which can be used to print a traceback object. + .. versionchanged:: 3.5 Added support for passing file descriptor to this function. @@ -178,4 +187,3 @@ handler: File "/home/python/cpython/Lib/ctypes/__init__.py", line 486 in string_at File "", line 1 in Segmentation fault - diff --git a/Doc/library/pdb.rst b/Doc/library/pdb.rst index dcd509de56ad..c87e32779609 100644 --- a/Doc/library/pdb.rst +++ b/Doc/library/pdb.rst @@ -27,6 +27,15 @@ The debugger is extensible -- it is actually defined as the class :class:`Pdb`. This is currently undocumented but easily understood by reading the source. The extension interface uses the modules :mod:`bdb` and :mod:`cmd`. +.. seealso:: + + Module :mod:`faulthandler` + Used to dump Python tracebacks explicitly, on a fault, after a timeout, + or on a user signal. + + Module :mod:`traceback` + Standard interface to extract, format and print stack traces of Python programs. + The debugger's prompt is ``(Pdb)``. Typical usage to run a program under control of the debugger is:: diff --git a/Doc/library/traceback.rst b/Doc/library/traceback.rst index c93e7f49c110..ab51cb9bb2d5 100644 --- a/Doc/library/traceback.rst +++ b/Doc/library/traceback.rst @@ -20,8 +20,15 @@ The module uses traceback objects --- this is the object type that is stored in the :data:`sys.last_traceback` variable and returned as the third item from :func:`sys.exc_info`. -The module defines the following functions: +.. seealso:: + + Module :mod:`faulthandler` + Used to dump Python tracebacks explicitly, on a fault, after a timeout, or on a user signal. + Module :mod:`pdb` + Interactive source code debugger for Python programs. + +The module defines the following functions: .. function:: print_tb(tb, limit=None, file=None) From webhook-mailer at python.org Sat Feb 4 02:55:36 2023 From: webhook-mailer at python.org (corona10) Date: Sat, 04 Feb 2023 07:55:36 -0000 Subject: [Python-checkins] =?utf-8?q?gh-101282=3A_Update_BOLT_--split-fun?= =?utf-8?q?ctions_flag_not_to_use_deprecated_u=E2=80=A6_=28gh-101557=29?= Message-ID: https://github.com/python/cpython/commit/144aaa74bbd77aee822ee92344744dbb05aa2f30 commit: 144aaa74bbd77aee822ee92344744dbb05aa2f30 branch: main author: Dong-hee Na committer: corona10 date: 2023-02-04T16:55:31+09:00 summary: gh-101282: Update BOLT --split-functions flag not to use deprecated u? 
(gh-101557) gh-101282: Update BOLT --split-functions flag not to use deprecated usage files: A Misc/NEWS.d/next/Build/2023-02-04-06-59-07.gh-issue-101282.7sQz5l.rst M Makefile.pre.in diff --git a/Makefile.pre.in b/Makefile.pre.in index 618bb7b5f730..f2f9371e8ac0 100644 --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -647,7 +647,7 @@ bolt-opt: @PREBOLT_RULE@ @LLVM_BOLT@ ./$(BUILDPYTHON) -instrument -instrumentation-file-append-pid -instrumentation-file=$(abspath $(BUILDPYTHON).bolt) -o $(BUILDPYTHON).bolt_inst ./$(BUILDPYTHON).bolt_inst $(PROFILE_TASK) || true @MERGE_FDATA@ $(BUILDPYTHON).*.fdata > $(BUILDPYTHON).fdata - @LLVM_BOLT@ ./$(BUILDPYTHON) -o $(BUILDPYTHON).bolt -data=$(BUILDPYTHON).fdata -update-debug-sections -reorder-blocks=ext-tsp -reorder-functions=hfsort+ -split-functions=3 -icf=1 -inline-all -split-eh -reorder-functions-use-hot-size -peepholes=all -jump-tables=aggressive -inline-ap -indirect-call-promotion=all -dyno-stats -use-gnu-stack -frame-opt=hot + @LLVM_BOLT@ ./$(BUILDPYTHON) -o $(BUILDPYTHON).bolt -data=$(BUILDPYTHON).fdata -update-debug-sections -reorder-blocks=ext-tsp -reorder-functions=hfsort+ -split-functions -icf=1 -inline-all -split-eh -reorder-functions-use-hot-size -peepholes=all -jump-tables=aggressive -inline-ap -indirect-call-promotion=all -dyno-stats -use-gnu-stack -frame-opt=hot rm -f *.fdata rm -f $(BUILDPYTHON).bolt_inst mv $(BUILDPYTHON).bolt $(BUILDPYTHON) diff --git a/Misc/NEWS.d/next/Build/2023-02-04-06-59-07.gh-issue-101282.7sQz5l.rst b/Misc/NEWS.d/next/Build/2023-02-04-06-59-07.gh-issue-101282.7sQz5l.rst new file mode 100644 index 000000000000..49d485646673 --- /dev/null +++ b/Misc/NEWS.d/next/Build/2023-02-04-06-59-07.gh-issue-101282.7sQz5l.rst @@ -0,0 +1,2 @@ +Update BOLT configration not to use depreacted usage of ``--split +functions``. Patch by Dong-hee Na. From webhook-mailer at python.org Sat Feb 4 15:07:36 2023 From: webhook-mailer at python.org (gpshead) Date: Sat, 04 Feb 2023 20:07:36 -0000 Subject: [Python-checkins] gh-101322: Ensure test_zlib.ZlibDecompressorTest runs, fix errors in ZlibDecompressor (#101323) Message-ID: https://github.com/python/cpython/commit/a89e6713c4de99d4be5a1304b134e57a24ab10ac commit: a89e6713c4de99d4be5a1304b134e57a24ab10ac branch: main author: Ruben Vorderman committer: gpshead date: 2023-02-04T12:07:30-08:00 summary: gh-101322: Ensure test_zlib.ZlibDecompressorTest runs, fix errors in ZlibDecompressor (#101323) * Ensure test_zlib.ZlibDecompressorTest actually runs, fix errors in ZlibDecompressor. 
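A minimal sketch of the behaviour this fix restores, assuming the private zlib._ZlibDecompressor API that the adopted test below exercises (the type is new in 3.12 and is not a documented public interface; names here are illustrative only):

    import zlib

    # With the fix, an error hit while decompressing raises zlib.error
    # instead of being silently swallowed inside decompress_buf().
    payload = zlib.compress(b"HAMLET SCENE " * 100)
    zlibd = zlib._ZlibDecompressor()
    assert zlibd.decompress(payload) == b"HAMLET SCENE " * 100

    try:
        zlib._ZlibDecompressor().decompress(b"Not a valid deflate block")
    except zlib.error as exc:
        print("decompression failed as expected:", exc)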
files: A Misc/NEWS.d/next/Library/2023-01-26-06-44-35.gh-issue-101323.h8Hk11.rst M Lib/test/test_zlib.py M Modules/zlibmodule.c diff --git a/Lib/test/test_zlib.py b/Lib/test/test_zlib.py index ae54f6c46891..3dac70eb1285 100644 --- a/Lib/test/test_zlib.py +++ b/Lib/test/test_zlib.py @@ -944,13 +944,18 @@ def choose_lines(source, number, seed=None, generator=random): """ -class ZlibDecompressorTest(): +class ZlibDecompressorTest(unittest.TestCase): # Test adopted from test_bz2.py TEXT = HAMLET_SCENE DATA = zlib.compress(HAMLET_SCENE) BAD_DATA = b"Not a valid deflate block" + BIG_TEXT = DATA * ((128 * 1024 // len(DATA)) + 1) + BIG_DATA = zlib.compress(BIG_TEXT) + def test_Constructor(self): - self.assertRaises(TypeError, zlib._ZlibDecompressor, 42) + self.assertRaises(TypeError, zlib._ZlibDecompressor, "ASDA") + self.assertRaises(TypeError, zlib._ZlibDecompressor, -15, "notbytes") + self.assertRaises(TypeError, zlib._ZlibDecompressor, -15, b"bytes", 5) def testDecompress(self): zlibd = zlib._ZlibDecompressor() diff --git a/Misc/NEWS.d/next/Library/2023-01-26-06-44-35.gh-issue-101323.h8Hk11.rst b/Misc/NEWS.d/next/Library/2023-01-26-06-44-35.gh-issue-101323.h8Hk11.rst new file mode 100644 index 000000000000..f8419e130dbc --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-01-26-06-44-35.gh-issue-101323.h8Hk11.rst @@ -0,0 +1,2 @@ +Fix a bug where errors where not thrown by zlib._ZlibDecompressor if +encountered during decompressing. diff --git a/Modules/zlibmodule.c b/Modules/zlibmodule.c index 1cdfd0132028..e2f7dbaca87a 100644 --- a/Modules/zlibmodule.c +++ b/Modules/zlibmodule.c @@ -1519,6 +1519,7 @@ decompress_buf(ZlibDecompressor *self, Py_ssize_t max_length) } } else if (err != Z_OK && err != Z_BUF_ERROR) { zlib_error(state, self->zst, err, "while decompressing data"); + goto error; } self->avail_in_real += self->zst.avail_in; From webhook-mailer at python.org Sat Feb 4 15:09:35 2023 From: webhook-mailer at python.org (gpshead) Date: Sat, 04 Feb 2023 20:09:35 -0000 Subject: [Python-checkins] [3.10] [3.11] gh-99952: fix refcount issues in ctypes.Structure from_param() result (GH-101339) (#101340) Message-ID: https://github.com/python/cpython/commit/b134978467409eb1a7ed0d5ca4a656fab1927919 commit: b134978467409eb1a7ed0d5ca4a656fab1927919 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: gpshead date: 2023-02-04T12:09:29-08:00 summary: [3.10] [3.11] gh-99952: fix refcount issues in ctypes.Structure from_param() result (GH-101339) (#101340) [3.11] gh-99952: [ctypes] fix refcount issues in from_param() result. (GH-100169) Fixes a reference counting issue with `ctypes.Structure` when a `from_param()` method call is used and the structure size is larger than a C pointer `sizeof(void*)`. This problem existed for a very long time, but became more apparent in 3.8+ by change likely due to garbage collection cleanup timing changes.. (cherry picked from commit dfad678d7024ab86d265d84ed45999e031a03691) (cherry picked from commit fa7c37af4936abfe34aa261d6ed9703bc5842ad4) Co-authored-by: Gregory P. 
Smith Co-authored-by: Yukihiro Nakadaira Co-authored-by: Kumar Aditya <59607654+kumaraditya303 at users.noreply.github.com> files: A Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst M Lib/ctypes/test/test_parameters.py M Modules/_ctypes/_ctypes.c M Modules/_ctypes/_ctypes_test.c diff --git a/Lib/ctypes/test/test_parameters.py b/Lib/ctypes/test/test_parameters.py index 38af7ac13d75..3fdc994e9078 100644 --- a/Lib/ctypes/test/test_parameters.py +++ b/Lib/ctypes/test/test_parameters.py @@ -244,6 +244,58 @@ def test_parameter_repr(self): self.assertRegex(repr(c_wchar_p.from_param('hihi')), r"^$") self.assertRegex(repr(c_void_p.from_param(0x12)), r"^$") + @test.support.cpython_only + def test_from_param_result_refcount(self): + # Issue #99952 + import _ctypes_test + from ctypes import PyDLL, c_int, c_void_p, py_object, Structure + + class X(Structure): + """This struct size is <= sizeof(void*).""" + _fields_ = [("a", c_void_p)] + + def __del__(self): + trace.append(4) + + @classmethod + def from_param(cls, value): + trace.append(2) + return cls() + + PyList_Append = PyDLL(_ctypes_test.__file__)._testfunc_pylist_append + PyList_Append.restype = c_int + PyList_Append.argtypes = [py_object, py_object, X] + + trace = [] + trace.append(1) + PyList_Append(trace, 3, "dummy") + trace.append(5) + + self.assertEqual(trace, [1, 2, 3, 4, 5]) + + class Y(Structure): + """This struct size is > sizeof(void*).""" + _fields_ = [("a", c_void_p), ("b", c_void_p)] + + def __del__(self): + trace.append(4) + + @classmethod + def from_param(cls, value): + trace.append(2) + return cls() + + PyList_Append = PyDLL(_ctypes_test.__file__)._testfunc_pylist_append + PyList_Append.restype = c_int + PyList_Append.argtypes = [py_object, py_object, Y] + + trace = [] + trace.append(1) + PyList_Append(trace, 3, "dummy") + trace.append(5) + + self.assertEqual(trace, [1, 2, 3, 4, 5]) + ################################################################ if __name__ == '__main__': diff --git a/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst b/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst new file mode 100644 index 000000000000..09ec96124953 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst @@ -0,0 +1,2 @@ +Fix a reference undercounting issue in :class:`ctypes.Structure` with ``from_param()`` +results larger than a C pointer. diff --git a/Modules/_ctypes/_ctypes.c b/Modules/_ctypes/_ctypes.c index a534a828d1ee..690258f540b5 100644 --- a/Modules/_ctypes/_ctypes.c +++ b/Modules/_ctypes/_ctypes.c @@ -406,6 +406,7 @@ _ctypes_alloc_format_string_with_shape(int ndim, const Py_ssize_t *shape, typedef struct { PyObject_HEAD void *ptr; + PyObject *keep; // If set, a reference to the original CDataObject. 
} StructParamObject; @@ -413,6 +414,7 @@ static void StructParam_dealloc(PyObject *myself) { StructParamObject *self = (StructParamObject *)myself; + Py_XDECREF(self->keep); PyMem_Free(self->ptr); Py_TYPE(self)->tp_free(myself); } @@ -460,6 +462,7 @@ StructUnionType_paramfunc(CDataObject *self) StructParamObject *struct_param = (StructParamObject *)obj; struct_param->ptr = ptr; + struct_param->keep = Py_NewRef(self); } else { ptr = self->b_ptr; obj = (PyObject *)self; diff --git a/Modules/_ctypes/_ctypes_test.c b/Modules/_ctypes/_ctypes_test.c index a33d15de9c0d..fe631a7e495b 100644 --- a/Modules/_ctypes/_ctypes_test.c +++ b/Modules/_ctypes/_ctypes_test.c @@ -1032,6 +1032,12 @@ EXPORT (HRESULT) KeepObject(IUnknown *punk) #endif +EXPORT(int) +_testfunc_pylist_append(PyObject *list, PyObject *item) +{ + return PyList_Append(list, item); +} + static struct PyModuleDef_Slot _ctypes_test_slots[] = { {0, NULL} }; From webhook-mailer at python.org Sat Feb 4 18:55:04 2023 From: webhook-mailer at python.org (rhettinger) Date: Sat, 04 Feb 2023 23:55:04 -0000 Subject: [Python-checkins] GH-100485: Create an alternative code path when an accurate fma() implementation is not available (#101567) Message-ID: https://github.com/python/cpython/commit/5a2b984568f72f0d7ff7c7b4ee8ce31af9fd1b7e commit: 5a2b984568f72f0d7ff7c7b4ee8ce31af9fd1b7e branch: main author: Raymond Hettinger committer: rhettinger date: 2023-02-04T17:54:44-06:00 summary: GH-100485: Create an alternative code path when an accurate fma() implementation is not available (#101567) files: M Lib/test/test_math.py M Modules/mathmodule.c diff --git a/Lib/test/test_math.py b/Lib/test/test_math.py index 2c84e55ab45c..8d849045b2d1 100644 --- a/Lib/test/test_math.py +++ b/Lib/test/test_math.py @@ -1450,6 +1450,11 @@ def Trial(dotfunc, c, n): n = 20 # Length of vectors c = 1e30 # Target condition number + # If the following test fails, it means that the C math library + # implementation of fma() is not compliant with the C99 standard + # and is inaccurate. To solve this problem, make a new build + # with the symbol UNRELIABLE_FMA defined. That will enable a + # slower but accurate code path that avoids the fma() call. relative_err = median(Trial(math.sumprod, c, n) for i in range(times)) self.assertLess(relative_err, 1e-16) diff --git a/Modules/mathmodule.c b/Modules/mathmodule.c index e6cdb3bae1ec..654336d6d9f4 100644 --- a/Modules/mathmodule.c +++ b/Modules/mathmodule.c @@ -2851,6 +2851,8 @@ dl_sum(double a, double b) return (DoubleLength) {x, y}; } +#ifndef UNRELIABLE_FMA + static DoubleLength dl_mul(double x, double y) { @@ -2860,6 +2862,47 @@ dl_mul(double x, double y) return (DoubleLength) {z, zz}; } +#else + +/* + The default implementation of dl_mul() depends on the C math library + having an accurate fma() function as required by ? 7.12.13.1 of the + C99 standard. + + The UNRELIABLE_FMA option is provided as a slower but accurate + alternative for builds where the fma() function is found wanting. + The speed penalty may be modest (17% slower on an Apple M1 Max), + so don't hesitate to enable this build option. + + The algorithms are from the T. J. Dekker paper: + A Floating-Point Technique for Extending the Available Precision + https://csclub.uwaterloo.ca/~pbarfuss/dekker1971.pdf +*/ + +static DoubleLength +dl_split(double x) { + // Dekker (5.5) and (5.6). 
+ double t = x * 134217729.0; // Veltkamp constant = 2.0 ** 27 + 1 + double hi = t - (t - x); + double lo = x - hi; + return (DoubleLength) {hi, lo}; +} + +static DoubleLength +dl_mul(double x, double y) +{ + // Dekker (5.12) and mul12() + DoubleLength xx = dl_split(x); + DoubleLength yy = dl_split(y); + double p = xx.hi * yy.hi; + double q = xx.hi * yy.lo + xx.lo * yy.hi; + double z = p + q; + double zz = p - z + q + xx.lo * yy.lo; + return (DoubleLength) {z, zz}; +} + +#endif + typedef struct { double hi; double lo; double tiny; } TripleLength; static const TripleLength tl_zero = {0.0, 0.0, 0.0}; From webhook-mailer at python.org Sun Feb 5 02:14:21 2023 From: webhook-mailer at python.org (abalkin) Date: Sun, 05 Feb 2023 07:14:21 -0000 Subject: [Python-checkins] Fix detection of presence of time.tzset (gh-101539) (#101540) Message-ID: https://github.com/python/cpython/commit/ddd619cffa457776a22f224b7111bd39de289d66 commit: ddd619cffa457776a22f224b7111bd39de289d66 branch: main author: Alexander Belopolsky committer: abalkin date: 2023-02-05T11:14:15+04:00 summary: Fix detection of presence of time.tzset (gh-101539) (#101540) Resolves gh-101539 Related to gh-31898 files: M Lib/test/datetimetester.py diff --git a/Lib/test/datetimetester.py b/Lib/test/datetimetester.py index 6a1df174a1b9..570f803918c1 100644 --- a/Lib/test/datetimetester.py +++ b/Lib/test/datetimetester.py @@ -6173,7 +6173,7 @@ def test_gaps(self): self.assertEqual(ldt.fold, 0) @unittest.skipUnless( - hasattr(time, "tzset"), "time module has no attribute tzset" + hasattr(_time, "tzset"), "time module has no attribute tzset" ) def test_system_transitions(self): if ('Riyadh8' in self.zonename or From webhook-mailer at python.org Sun Feb 5 04:45:44 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Sun, 05 Feb 2023 09:45:44 -0000 Subject: [Python-checkins] Add missing preposition in argparse docs (#101548) Message-ID: https://github.com/python/cpython/commit/6e4a521c2ab84c082ad71e540b045699f0dbbc11 commit: 6e4a521c2ab84c082ad71e540b045699f0dbbc11 branch: main author: alnoki <43892045+alnoki at users.noreply.github.com> committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-05T15:15:07+05:30 summary: Add missing preposition in argparse docs (#101548) files: M Doc/library/argparse.rst diff --git a/Doc/library/argparse.rst b/Doc/library/argparse.rst index 475cac70291e..dbaa5d0d9b99 100644 --- a/Doc/library/argparse.rst +++ b/Doc/library/argparse.rst @@ -31,7 +31,7 @@ Core Functionality The :mod:`argparse` module's support for command-line interfaces is built around an instance of :class:`argparse.ArgumentParser`. 
It is a container for -argument specifications and has options that apply the parser as whole:: +argument specifications and has options that apply to the parser as whole:: parser = argparse.ArgumentParser( prog = 'ProgramName', From webhook-mailer at python.org Sun Feb 5 04:55:42 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Sun, 05 Feb 2023 09:55:42 -0000 Subject: [Python-checkins] gh-101221: Add options in the documentation of timeit command (#101222) Message-ID: https://github.com/python/cpython/commit/9b60ee976a6b66fe96c2d39051612999c26561e5 commit: 9b60ee976a6b66fe96c2d39051612999c26561e5 branch: main author: busywhitespace committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-05T15:25:36+05:30 summary: gh-101221: Add options in the documentation of timeit command (#101222) files: M Doc/library/timeit.rst diff --git a/Doc/library/timeit.rst b/Doc/library/timeit.rst index 5437704cec33..32ab565aba0c 100644 --- a/Doc/library/timeit.rst +++ b/Doc/library/timeit.rst @@ -206,7 +206,7 @@ Command-Line Interface When called as a program from the command line, the following form is used:: - python -m timeit [-n N] [-r N] [-u U] [-s S] [-h] [statement ...] + python -m timeit [-n N] [-r N] [-u U] [-s S] [-p] [-v] [-h] [statement ...] Where the following options are understood: From webhook-mailer at python.org Sun Feb 5 05:03:00 2023 From: webhook-mailer at python.org (mdickinson) Date: Sun, 05 Feb 2023 10:03:00 -0000 Subject: [Python-checkins] gh-101266: Fix __sizeof__ for subclasses of int (#101394) Message-ID: https://github.com/python/cpython/commit/39017e04b55d4c110787551dc9a0cb753f27d700 commit: 39017e04b55d4c110787551dc9a0cb753f27d700 branch: main author: Mark Dickinson committer: mdickinson date: 2023-02-05T10:02:53Z summary: gh-101266: Fix __sizeof__ for subclasses of int (#101394) Fix the behaviour of the `__sizeof__` method (and hence the results returned by `sys.getsizeof`) for subclasses of `int`. Previously, `int` subclasses gave identical results to the `int` base class, ignoring the presence of the instance dictionary. * Issue: gh-101266 files: A Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst M Lib/test/test_long.py M Objects/boolobject.c M Objects/longobject.c diff --git a/Lib/test/test_long.py b/Lib/test/test_long.py index 569ab15820e3..d299c34cec07 100644 --- a/Lib/test/test_long.py +++ b/Lib/test/test_long.py @@ -1601,5 +1601,44 @@ def test_square(self): self.assertEqual(n**2, (1 << (2 * bitlen)) - (1 << (bitlen + 1)) + 1) + def test___sizeof__(self): + self.assertEqual(int.__itemsize__, sys.int_info.sizeof_digit) + + # Pairs (test_value, number of allocated digits) + test_values = [ + # We always allocate space for at least one digit, even for + # a value of zero; sys.getsizeof should reflect that. + (0, 1), + (1, 1), + (-1, 1), + (BASE-1, 1), + (1-BASE, 1), + (BASE, 2), + (-BASE, 2), + (BASE*BASE - 1, 2), + (BASE*BASE, 3), + ] + + for value, ndigits in test_values: + with self.subTest(value): + self.assertEqual( + value.__sizeof__(), + int.__basicsize__ + int.__itemsize__ * ndigits + ) + + # Same test for a subclass of int. 
+ class MyInt(int): + pass + + self.assertEqual(MyInt.__itemsize__, sys.int_info.sizeof_digit) + + for value, ndigits in test_values: + with self.subTest(value): + self.assertEqual( + MyInt(value).__sizeof__(), + MyInt.__basicsize__ + MyInt.__itemsize__ * ndigits + ) + + if __name__ == "__main__": unittest.main() diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst new file mode 100644 index 000000000000..51999bacb8de --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst @@ -0,0 +1 @@ +Fix :func:`sys.getsizeof` reporting for :class:`int` subclasses. diff --git a/Objects/boolobject.c b/Objects/boolobject.c index 55b4a4077ab7..a035f4633238 100644 --- a/Objects/boolobject.c +++ b/Objects/boolobject.c @@ -4,6 +4,8 @@ #include "pycore_object.h" // _Py_FatalRefcountError() #include "pycore_runtime.h" // _Py_ID() +#include + /* We define bool_repr to return "False" or "True" */ static PyObject * @@ -153,8 +155,8 @@ bool_dealloc(PyObject* Py_UNUSED(ignore)) PyTypeObject PyBool_Type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) "bool", - sizeof(struct _longobject), - 0, + offsetof(struct _longobject, long_value.ob_digit), /* tp_basicsize */ + sizeof(digit), /* tp_itemsize */ bool_dealloc, /* tp_dealloc */ 0, /* tp_vectorcall_offset */ 0, /* tp_getattr */ diff --git a/Objects/longobject.c b/Objects/longobject.c index 65bf15648b07..8293f133bed2 100644 --- a/Objects/longobject.c +++ b/Objects/longobject.c @@ -5882,13 +5882,10 @@ static Py_ssize_t int___sizeof___impl(PyObject *self) /*[clinic end generated code: output=3303f008eaa6a0a5 input=9b51620c76fc4507]*/ { - Py_ssize_t res; - - res = offsetof(PyLongObject, long_value.ob_digit) - /* using Py_MAX(..., 1) because we always allocate space for at least - one digit, even though the integer zero has a Py_SIZE of 0 */ - + Py_MAX(Py_ABS(Py_SIZE(self)), 1)*sizeof(digit); - return res; + /* using Py_MAX(..., 1) because we always allocate space for at least + one digit, even though the integer zero has a Py_SIZE of 0 */ + Py_ssize_t ndigits = Py_MAX(Py_ABS(Py_SIZE(self)), 1); + return Py_TYPE(self)->tp_basicsize + Py_TYPE(self)->tp_itemsize * ndigits; } /*[clinic input] From webhook-mailer at python.org Sun Feb 5 06:30:53 2023 From: webhook-mailer at python.org (pradyunsg) Date: Sun, 05 Feb 2023 11:30:53 -0000 Subject: [Python-checkins] gh-101570: Update bundled pip version to 23.0 (#101571) Message-ID: https://github.com/python/cpython/commit/19ac43629e6fd73f2dbf9fd5a0b97d791c28acc7 commit: 19ac43629e6fd73f2dbf9fd5a0b97d791c28acc7 branch: main author: Pradyun Gedam committer: pradyunsg date: 2023-02-05T11:30:44Z summary: gh-101570: Update bundled pip version to 23.0 (#101571) Update bundled pip version to 23.0 This is the current latest version of `pip`. 
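For anyone verifying the bump locally, ensurepip exposes the bundled pip version through its public version() helper; a small illustrative check, not part of the change itself:

    import ensurepip

    # Reports the pip wheel that `python -m ensurepip` would install;
    # after this commit it should read "23.0".
    print(ensurepip.version())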
--------- Co-authored-by: Pradyun Gedam Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com> files: A Lib/ensurepip/_bundled/pip-23.0-py3-none-any.whl A Misc/NEWS.d/next/Library/2023-02-04-21-01-49.gh-issue-101570.lbtUsD.rst D Lib/ensurepip/_bundled/pip-22.3.1-py3-none-any.whl M Lib/ensurepip/__init__.py diff --git a/Lib/ensurepip/__init__.py b/Lib/ensurepip/__init__.py index 1a2f57c07ba3..052b7bca28dd 100644 --- a/Lib/ensurepip/__init__.py +++ b/Lib/ensurepip/__init__.py @@ -11,7 +11,7 @@ __all__ = ["version", "bootstrap"] _PACKAGE_NAMES = ('setuptools', 'pip') _SETUPTOOLS_VERSION = "65.5.0" -_PIP_VERSION = "22.3.1" +_PIP_VERSION = "23.0" _PROJECTS = [ ("setuptools", _SETUPTOOLS_VERSION, "py3"), ("pip", _PIP_VERSION, "py3"), diff --git a/Lib/ensurepip/_bundled/pip-22.3.1-py3-none-any.whl b/Lib/ensurepip/_bundled/pip-23.0-py3-none-any.whl similarity index 73% rename from Lib/ensurepip/_bundled/pip-22.3.1-py3-none-any.whl rename to Lib/ensurepip/_bundled/pip-23.0-py3-none-any.whl index c5b7753e757d..bb9aebf474cf 100644 Binary files a/Lib/ensurepip/_bundled/pip-22.3.1-py3-none-any.whl and b/Lib/ensurepip/_bundled/pip-23.0-py3-none-any.whl differ diff --git a/Misc/NEWS.d/next/Library/2023-02-04-21-01-49.gh-issue-101570.lbtUsD.rst b/Misc/NEWS.d/next/Library/2023-02-04-21-01-49.gh-issue-101570.lbtUsD.rst new file mode 100644 index 000000000000..599edab6c071 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-04-21-01-49.gh-issue-101570.lbtUsD.rst @@ -0,0 +1 @@ +Upgrade pip wheel bundled with ensurepip (pip 23.0) From webhook-mailer at python.org Sun Feb 5 07:19:01 2023 From: webhook-mailer at python.org (mdickinson) Date: Sun, 05 Feb 2023 12:19:01 -0000 Subject: [Python-checkins] [3.11] gh-101266: Fix __sizeof__ for subclasses of int (GH-101394) (#101579) Message-ID: https://github.com/python/cpython/commit/cf89c16486a4cc297413e17d32082ec4f389d725 commit: cf89c16486a4cc297413e17d32082ec4f389d725 branch: 3.11 author: Mark Dickinson committer: mdickinson date: 2023-02-05T12:18:56Z summary: [3.11] gh-101266: Fix __sizeof__ for subclasses of int (GH-101394) (#101579) Fix the behaviour of the `__sizeof__` method (and hence the results returned by `sys.getsizeof`) for subclasses of `int`. Previously, `int` subclasses gave identical results to the `int` base class, ignoring the presence of the instance dictionary. (Manual backport of #101394 to the Python 3.11 branch.) files: A Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst M Lib/test/test_long.py M Objects/boolobject.c M Objects/longobject.c diff --git a/Lib/test/test_long.py b/Lib/test/test_long.py index 77b37ca1fa4a..03bed4c41f5a 100644 --- a/Lib/test/test_long.py +++ b/Lib/test/test_long.py @@ -1596,5 +1596,44 @@ def test_square(self): self.assertEqual(n**2, (1 << (2 * bitlen)) - (1 << (bitlen + 1)) + 1) + def test___sizeof__(self): + self.assertEqual(int.__itemsize__, sys.int_info.sizeof_digit) + + # Pairs (test_value, number of allocated digits) + test_values = [ + # We always allocate space for at least one digit, even for + # a value of zero; sys.getsizeof should reflect that. + (0, 1), + (1, 1), + (-1, 1), + (BASE-1, 1), + (1-BASE, 1), + (BASE, 2), + (-BASE, 2), + (BASE*BASE - 1, 2), + (BASE*BASE, 3), + ] + + for value, ndigits in test_values: + with self.subTest(value): + self.assertEqual( + value.__sizeof__(), + int.__basicsize__ + int.__itemsize__ * ndigits + ) + + # Same test for a subclass of int. 
+ class MyInt(int): + pass + + self.assertEqual(MyInt.__itemsize__, sys.int_info.sizeof_digit) + + for value, ndigits in test_values: + with self.subTest(value): + self.assertEqual( + MyInt(value).__sizeof__(), + MyInt.__basicsize__ + MyInt.__itemsize__ * ndigits + ) + + if __name__ == "__main__": unittest.main() diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst new file mode 100644 index 000000000000..51999bacb8de --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst @@ -0,0 +1 @@ +Fix :func:`sys.getsizeof` reporting for :class:`int` subclasses. diff --git a/Objects/boolobject.c b/Objects/boolobject.c index 8a20e368d4a4..01acf0f00b66 100644 --- a/Objects/boolobject.c +++ b/Objects/boolobject.c @@ -4,6 +4,8 @@ #include "pycore_object.h" // _Py_FatalRefcountError() #include "pycore_runtime.h" // _Py_ID() +#include + /* We define bool_repr to return "False" or "True" */ static PyObject * @@ -154,8 +156,8 @@ bool_dealloc(PyObject* Py_UNUSED(ignore)) PyTypeObject PyBool_Type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) "bool", - sizeof(struct _longobject), - 0, + offsetof(struct _longobject, ob_digit), /* tp_basicsize */ + sizeof(digit), /* tp_itemsize */ bool_dealloc, /* tp_dealloc */ 0, /* tp_vectorcall_offset */ 0, /* tp_getattr */ diff --git a/Objects/longobject.c b/Objects/longobject.c index 84c05e8aabdf..39bd72ce5598 100644 --- a/Objects/longobject.c +++ b/Objects/longobject.c @@ -5664,13 +5664,10 @@ static Py_ssize_t int___sizeof___impl(PyObject *self) /*[clinic end generated code: output=3303f008eaa6a0a5 input=9b51620c76fc4507]*/ { - Py_ssize_t res; - - res = offsetof(PyLongObject, ob_digit) - /* using Py_MAX(..., 1) because we always allocate space for at least - one digit, even though the integer zero has a Py_SIZE of 0 */ - + Py_MAX(Py_ABS(Py_SIZE(self)), 1)*sizeof(digit); - return res; + /* using Py_MAX(..., 1) because we always allocate space for at least + one digit, even though the integer zero has a Py_SIZE of 0 */ + Py_ssize_t ndigits = Py_MAX(Py_ABS(Py_SIZE(self)), 1); + return Py_TYPE(self)->tp_basicsize + Py_TYPE(self)->tp_itemsize * ndigits; } /*[clinic input] From webhook-mailer at python.org Sun Feb 5 11:36:39 2023 From: webhook-mailer at python.org (mdickinson) Date: Sun, 05 Feb 2023 16:36:39 -0000 Subject: [Python-checkins] Revert "gh-89381: Fix invalid signatures of math/cmath.log (#101404)" (#101580) Message-ID: https://github.com/python/cpython/commit/0672a6c23b2b72666e10d9c61fc025e66aad9c2b commit: 0672a6c23b2b72666e10d9c61fc025e66aad9c2b branch: main author: Mark Dickinson committer: mdickinson date: 2023-02-05T16:36:33Z summary: Revert "gh-89381: Fix invalid signatures of math/cmath.log (#101404)" (#101580) This reverts commit 0ef92d979311ba82d4c41b22ef38e12e1b08b13d. files: D Misc/NEWS.d/next/Library/2023-01-16-10-42-58.gh-issue-89381.lM2WL0.rst M Doc/library/cmath.rst M Doc/library/math.rst M Lib/test/test_math.py M Modules/clinic/cmathmodule.c.h M Modules/clinic/mathmodule.c.h M Modules/cmathmodule.c M Modules/mathmodule.c diff --git a/Doc/library/cmath.rst b/Doc/library/cmath.rst index c575b90e6461..28cd96b0e12d 100644 --- a/Doc/library/cmath.rst +++ b/Doc/library/cmath.rst @@ -89,7 +89,7 @@ Power and logarithmic functions logarithms. -.. function:: log(x, base=None) +.. function:: log(x[, base]) Returns the logarithm of *x* to the given *base*. 
If the *base* is not specified, returns the natural logarithm of *x*. There is one branch cut, from 0 diff --git a/Doc/library/math.rst b/Doc/library/math.rst index 9da22b6ad056..797f32408eac 100644 --- a/Doc/library/math.rst +++ b/Doc/library/math.rst @@ -393,12 +393,13 @@ Power and logarithmic functions .. versionadded:: 3.2 -.. function:: log(x, base=None) +.. function:: log(x[, base]) - Return the logarithm of *x* to the given *base*. + With one argument, return the natural logarithm of *x* (to base *e*). + + With two arguments, return the logarithm of *x* to the given *base*, + calculated as ``log(x)/log(base)``. - If the *base* is not specified, returns the natural - logarithm (base *e*) of *x*. .. function:: log1p(x) diff --git a/Lib/test/test_math.py b/Lib/test/test_math.py index 8d849045b2d1..433161c2dd41 100644 --- a/Lib/test/test_math.py +++ b/Lib/test/test_math.py @@ -1146,7 +1146,6 @@ def testLog(self): self.ftest('log(1/e)', math.log(1/math.e), -1) self.ftest('log(1)', math.log(1), 0) self.ftest('log(e)', math.log(math.e), 1) - self.ftest('log(e, None)', math.log(math.e, None), 1) self.ftest('log(32,2)', math.log(32,2), 5) self.ftest('log(10**40, 10)', math.log(10**40, 10), 40) self.ftest('log(10**40, 10**20)', math.log(10**40, 10**20), 2) diff --git a/Misc/NEWS.d/next/Library/2023-01-16-10-42-58.gh-issue-89381.lM2WL0.rst b/Misc/NEWS.d/next/Library/2023-01-16-10-42-58.gh-issue-89381.lM2WL0.rst deleted file mode 100644 index 7bffe7d226e3..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-16-10-42-58.gh-issue-89381.lM2WL0.rst +++ /dev/null @@ -1 +0,0 @@ -:func:`~math.log` and :func:`~cmath.log` support default base=None values. diff --git a/Modules/clinic/cmathmodule.c.h b/Modules/clinic/cmathmodule.c.h index bc91c20f373b..b1da9452c61d 100644 --- a/Modules/clinic/cmathmodule.c.h +++ b/Modules/clinic/cmathmodule.c.h @@ -639,13 +639,12 @@ cmath_tanh(PyObject *module, PyObject *arg) } PyDoc_STRVAR(cmath_log__doc__, -"log($module, z, base=None, /)\n" +"log($module, z, base=, /)\n" "--\n" "\n" "log(z[, base]) -> the logarithm of z to the given base.\n" "\n" -"If the base is not specified or is None, returns the\n" -"natural logarithm (base e) of z."); +"If the base not specified, returns the natural logarithm (base e) of z."); #define CMATH_LOG_METHODDEF \ {"log", _PyCFunction_CAST(cmath_log), METH_FASTCALL, cmath_log__doc__}, @@ -658,7 +657,7 @@ cmath_log(PyObject *module, PyObject *const *args, Py_ssize_t nargs) { PyObject *return_value = NULL; Py_complex x; - PyObject *y_obj = Py_None; + PyObject *y_obj = NULL; if (!_PyArg_CheckPositional("log", nargs, 1, 2)) { goto exit; @@ -983,4 +982,4 @@ cmath_isclose(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObjec exit: return return_value; } -/*[clinic end generated code: output=2630f8740909a8f7 input=a9049054013a1b77]*/ +/*[clinic end generated code: output=0146c656e67f5d5f input=a9049054013a1b77]*/ diff --git a/Modules/clinic/mathmodule.c.h b/Modules/clinic/mathmodule.c.h index 0d61fd1be38d..1f9725883b98 100644 --- a/Modules/clinic/mathmodule.c.h +++ b/Modules/clinic/mathmodule.c.h @@ -187,37 +187,43 @@ math_modf(PyObject *module, PyObject *arg) } PyDoc_STRVAR(math_log__doc__, -"log($module, x, base=None, /)\n" -"--\n" -"\n" +"log(x, [base=math.e])\n" "Return the logarithm of x to the given base.\n" "\n" -"If the base is not specified or is None, returns the natural\n" -"logarithm (base e) of x."); +"If the base not specified, returns the natural logarithm (base e) of x."); #define MATH_LOG_METHODDEF \ - {"log", 
_PyCFunction_CAST(math_log), METH_FASTCALL, math_log__doc__}, + {"log", (PyCFunction)math_log, METH_VARARGS, math_log__doc__}, static PyObject * -math_log_impl(PyObject *module, PyObject *x, PyObject *base); +math_log_impl(PyObject *module, PyObject *x, int group_right_1, + PyObject *base); static PyObject * -math_log(PyObject *module, PyObject *const *args, Py_ssize_t nargs) +math_log(PyObject *module, PyObject *args) { PyObject *return_value = NULL; PyObject *x; - PyObject *base = Py_None; + int group_right_1 = 0; + PyObject *base = NULL; - if (!_PyArg_CheckPositional("log", nargs, 1, 2)) { - goto exit; - } - x = args[0]; - if (nargs < 2) { - goto skip_optional; + switch (PyTuple_GET_SIZE(args)) { + case 1: + if (!PyArg_ParseTuple(args, "O:log", &x)) { + goto exit; + } + break; + case 2: + if (!PyArg_ParseTuple(args, "OO:log", &x, &base)) { + goto exit; + } + group_right_1 = 1; + break; + default: + PyErr_SetString(PyExc_TypeError, "math.log requires 1 to 2 arguments"); + goto exit; } - base = args[1]; -skip_optional: - return_value = math_log_impl(module, x, base); + return_value = math_log_impl(module, x, group_right_1, base); exit: return return_value; @@ -948,4 +954,4 @@ math_ulp(PyObject *module, PyObject *arg) exit: return return_value; } -/*[clinic end generated code: output=afec63ebb0da709a input=a9049054013a1b77]*/ +/*[clinic end generated code: output=899211ec70e4506c input=a9049054013a1b77]*/ diff --git a/Modules/cmathmodule.c b/Modules/cmathmodule.c index 62caba031eda..2038ac26e658 100644 --- a/Modules/cmathmodule.c +++ b/Modules/cmathmodule.c @@ -952,24 +952,23 @@ cmath_tanh_impl(PyObject *module, Py_complex z) cmath.log z as x: Py_complex - base as y_obj: object = None + base as y_obj: object = NULL / log(z[, base]) -> the logarithm of z to the given base. -If the base is not specified or is None, returns the -natural logarithm (base e) of z. +If the base not specified, returns the natural logarithm (base e) of z. [clinic start generated code]*/ static PyObject * cmath_log_impl(PyObject *module, Py_complex x, PyObject *y_obj) -/*[clinic end generated code: output=4effdb7d258e0d94 input=e7db51859ebf70bf]*/ +/*[clinic end generated code: output=4effdb7d258e0d94 input=230ed3a71ecd000a]*/ { Py_complex y; errno = 0; x = c_log(x); - if (y_obj != Py_None) { + if (y_obj != NULL) { y = PyComplex_AsCComplex(y_obj); if (PyErr_Occurred()) { return NULL; diff --git a/Modules/mathmodule.c b/Modules/mathmodule.c index 654336d6d9f4..992957e675a7 100644 --- a/Modules/mathmodule.c +++ b/Modules/mathmodule.c @@ -2366,24 +2366,26 @@ loghelper(PyObject* arg, double (*func)(double)) math.log x: object - base: object = None + [ + base: object(c_default="NULL") = math.e + ] / Return the logarithm of x to the given base. -If the base is not specified or is None, returns the natural -logarithm (base e) of x. +If the base not specified, returns the natural logarithm (base e) of x. 
[clinic start generated code]*/ static PyObject * -math_log_impl(PyObject *module, PyObject *x, PyObject *base) -/*[clinic end generated code: output=1dead263cbb1e854 input=ef032cc9837943e1]*/ +math_log_impl(PyObject *module, PyObject *x, int group_right_1, + PyObject *base) +/*[clinic end generated code: output=7b5a39e526b73fc9 input=0f62d5726cbfebbd]*/ { PyObject *num, *den; PyObject *ans; num = loghelper(x, m_log); - if (num == NULL || base == Py_None) + if (num == NULL || base == NULL) return num; den = loghelper(base, m_log); From webhook-mailer at python.org Sun Feb 5 12:10:59 2023 From: webhook-mailer at python.org (mdickinson) Date: Sun, 05 Feb 2023 17:10:59 -0000 Subject: [Python-checkins] gh-76961: Fix the PEP3118 format string for ctypes.Structure (#5561) Message-ID: https://github.com/python/cpython/commit/90d85a9b4136aa1feb02f88aab614a3c29f20ed3 commit: 90d85a9b4136aa1feb02f88aab614a3c29f20ed3 branch: main author: Eric Wieser committer: mdickinson date: 2023-02-05T17:10:53Z summary: gh-76961: Fix the PEP3118 format string for ctypes.Structure (#5561) The summary of this diff is that it: * adds a `_ctypes_alloc_format_padding` function to append strings like `37x` to a format string to indicate 37 padding bytes * removes the branches that amount to "give up on producing a valid format string if the struct is packed" * combines the resulting adjacent `if (isStruct) {`s now that neither is `if (isStruct && !isPacked) {` * invokes `_ctypes_alloc_format_padding` to add padding between structure fields, and after the last structure field. The computation used for the total size is unchanged from ctypes already used. This patch does not affect any existing aligment computation; all it does is use subtraction to deduce the amount of paddnig introduced by the existing code. --- Without this fix, it would never include padding bytes - an assumption that was only valid in the case when `_pack_` was set - and this case was explicitly not implemented. This should allow conversion from ctypes structs to numpy structs Fixes https://github.com/numpy/numpy/issues/10528 files: A Misc/NEWS.d/next/Core and Builtins/2018-02-05-21-54-46.bpo-32780.Dtiz8z.rst M Doc/library/ctypes.rst M Lib/test/test_ctypes/test_pep3118.py M Modules/_ctypes/stgdict.c diff --git a/Doc/library/ctypes.rst b/Doc/library/ctypes.rst index 4de5c820f2c6..0642bbe9f99d 100644 --- a/Doc/library/ctypes.rst +++ b/Doc/library/ctypes.rst @@ -2510,6 +2510,7 @@ fields, or any other data types containing pointer type fields. An optional small integer that allows overriding the alignment of structure fields in the instance. :attr:`_pack_` must already be defined when :attr:`_fields_` is assigned, otherwise it will have no effect. + Setting this attribute to 0 is the same as not setting it at all. .. 
attribute:: _anonymous_ diff --git a/Lib/test/test_ctypes/test_pep3118.py b/Lib/test/test_ctypes/test_pep3118.py index efffc80a66fc..74fdf29fc9ad 100644 --- a/Lib/test/test_ctypes/test_pep3118.py +++ b/Lib/test/test_ctypes/test_pep3118.py @@ -86,6 +86,20 @@ class PackedPoint(Structure): _pack_ = 2 _fields_ = [("x", c_long), ("y", c_long)] +class PointMidPad(Structure): + _fields_ = [("x", c_byte), ("y", c_uint64)] + +class PackedPointMidPad(Structure): + _pack_ = 2 + _fields_ = [("x", c_byte), ("y", c_uint64)] + +class PointEndPad(Structure): + _fields_ = [("x", c_uint64), ("y", c_byte)] + +class PackedPointEndPad(Structure): + _pack_ = 2 + _fields_ = [("x", c_uint64), ("y", c_byte)] + class Point2(Structure): pass Point2._fields_ = [("x", c_long), ("y", c_long)] @@ -185,10 +199,13 @@ class Complete(Structure): ## structures and unions - (Point, "T{ 0); + + if (padding == 1) { + /* Use x instead of 1x, for brevity */ + return _ctypes_alloc_format_string(prefix, "x"); + } + + int ret = PyOS_snprintf(buf, sizeof(buf), "%zdx", padding); + assert(0 <= ret && ret < sizeof(buf)); + return _ctypes_alloc_format_string(prefix, buf); +} + /* Retrieve the (optional) _pack_ attribute from a type, the _fields_ attribute, and create an StgDictObject. Used for Structure and Union subclasses. @@ -346,11 +369,10 @@ PyCStructUnionType_update_stgdict(PyObject *type, PyObject *fields, int isStruct { StgDictObject *stgdict, *basedict; Py_ssize_t len, offset, size, align, i; - Py_ssize_t union_size, total_align; + Py_ssize_t union_size, total_align, aligned_size; Py_ssize_t field_size = 0; int bitofs; PyObject *tmp; - int isPacked; int pack; Py_ssize_t ffi_ofs; int big_endian; @@ -374,7 +396,6 @@ PyCStructUnionType_update_stgdict(PyObject *type, PyObject *fields, int isStruct return -1; } if (tmp) { - isPacked = 1; pack = _PyLong_AsInt(tmp); Py_DECREF(tmp); if (pack < 0) { @@ -389,7 +410,7 @@ PyCStructUnionType_update_stgdict(PyObject *type, PyObject *fields, int isStruct } } else { - isPacked = 0; + /* Setting `_pack_ = 0` amounts to using the default alignment */ pack = 0; } @@ -470,12 +491,10 @@ PyCStructUnionType_update_stgdict(PyObject *type, PyObject *fields, int isStruct } assert(stgdict->format == NULL); - if (isStruct && !isPacked) { + if (isStruct) { stgdict->format = _ctypes_alloc_format_string(NULL, "T{"); } else { - /* PEP3118 doesn't support union, or packed structures (well, - only standard packing, but we don't support the pep for - that). Use 'B' for bytes. */ + /* PEP3118 doesn't support union. Use 'B' for bytes. */ stgdict->format = _ctypes_alloc_format_string(NULL, "B"); } if (stgdict->format == NULL) @@ -543,12 +562,14 @@ PyCStructUnionType_update_stgdict(PyObject *type, PyObject *fields, int isStruct } else bitsize = 0; - if (isStruct && !isPacked) { + if (isStruct) { const char *fieldfmt = dict->format ? 
dict->format : "B"; const char *fieldname = PyUnicode_AsUTF8(name); char *ptr; Py_ssize_t len; char *buf; + Py_ssize_t last_size = size; + Py_ssize_t padding; if (fieldname == NULL) { @@ -556,11 +577,38 @@ PyCStructUnionType_update_stgdict(PyObject *type, PyObject *fields, int isStruct return -1; } + /* construct the field now, as `prop->offset` is `offset` with + corrected alignment */ + prop = PyCField_FromDesc(desc, i, + &field_size, bitsize, &bitofs, + &size, &offset, &align, + pack, big_endian); + if (prop == NULL) { + Py_DECREF(pair); + return -1; + } + + /* number of bytes between the end of the last field and the start + of this one */ + padding = ((CFieldObject *)prop)->offset - last_size; + + if (padding > 0) { + ptr = stgdict->format; + stgdict->format = _ctypes_alloc_format_padding(ptr, padding); + PyMem_Free(ptr); + if (stgdict->format == NULL) { + Py_DECREF(pair); + Py_DECREF(prop); + return -1; + } + } + len = strlen(fieldname) + strlen(fieldfmt); buf = PyMem_Malloc(len + 2 + 1); if (buf == NULL) { Py_DECREF(pair); + Py_DECREF(prop); PyErr_NoMemory(); return -1; } @@ -578,15 +626,9 @@ PyCStructUnionType_update_stgdict(PyObject *type, PyObject *fields, int isStruct if (stgdict->format == NULL) { Py_DECREF(pair); + Py_DECREF(prop); return -1; } - } - - if (isStruct) { - prop = PyCField_FromDesc(desc, i, - &field_size, bitsize, &bitofs, - &size, &offset, &align, - pack, big_endian); } else /* union */ { size = 0; offset = 0; @@ -595,14 +637,14 @@ PyCStructUnionType_update_stgdict(PyObject *type, PyObject *fields, int isStruct &field_size, bitsize, &bitofs, &size, &offset, &align, pack, big_endian); + if (prop == NULL) { + Py_DECREF(pair); + return -1; + } union_size = max(size, union_size); } total_align = max(align, total_align); - if (!prop) { - Py_DECREF(pair); - return -1; - } if (-1 == PyObject_SetAttr(type, name, prop)) { Py_DECREF(prop); Py_DECREF(pair); @@ -612,26 +654,41 @@ PyCStructUnionType_update_stgdict(PyObject *type, PyObject *fields, int isStruct Py_DECREF(prop); } - if (isStruct && !isPacked) { - char *ptr = stgdict->format; + if (!isStruct) { + size = union_size; + } + + /* Adjust the size according to the alignment requirements */ + aligned_size = ((size + total_align - 1) / total_align) * total_align; + + if (isStruct) { + char *ptr; + Py_ssize_t padding; + + /* Pad up to the full size of the struct */ + padding = aligned_size - size; + if (padding > 0) { + ptr = stgdict->format; + stgdict->format = _ctypes_alloc_format_padding(ptr, padding); + PyMem_Free(ptr); + if (stgdict->format == NULL) { + return -1; + } + } + + ptr = stgdict->format; stgdict->format = _ctypes_alloc_format_string(stgdict->format, "}"); PyMem_Free(ptr); if (stgdict->format == NULL) return -1; } - if (!isStruct) - size = union_size; - - /* Adjust the size according to the alignment requirements */ - size = ((size + total_align - 1) / total_align) * total_align; - stgdict->ffi_type_pointer.alignment = Py_SAFE_DOWNCAST(total_align, Py_ssize_t, unsigned short); - stgdict->ffi_type_pointer.size = size; + stgdict->ffi_type_pointer.size = aligned_size; - stgdict->size = size; + stgdict->size = aligned_size; stgdict->align = total_align; stgdict->length = len; /* ADD ffi_ofs? 
*/ From webhook-mailer at python.org Sun Feb 5 12:37:05 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Sun, 05 Feb 2023 17:37:05 -0000 Subject: [Python-checkins] bpo-33591: Add support for path like objects to `ctypes.CDLL` (#7032) Message-ID: https://github.com/python/cpython/commit/f7e9fbacb250ad9df95d0024161b40dfdc9cc39e commit: f7e9fbacb250ad9df95d0024161b40dfdc9cc39e branch: main author: mrh1997 committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-05T23:06:57+05:30 summary: bpo-33591: Add support for path like objects to `ctypes.CDLL` (#7032) Co-authored-by: Oleg Iarygin files: A Misc/NEWS.d/next/Library/2018-05-21-17-18-00.gh-issue-77772.Fhg84L.rst M Doc/library/ctypes.rst M Lib/ctypes/__init__.py M Lib/test/test_ctypes/test_loading.py M Misc/ACKS diff --git a/Doc/library/ctypes.rst b/Doc/library/ctypes.rst index 0642bbe9f99d..8fd681286b81 100644 --- a/Doc/library/ctypes.rst +++ b/Doc/library/ctypes.rst @@ -1380,6 +1380,10 @@ way is to instantiate one of the following classes: DLLs and determine which one is not found using Windows debugging and tracing tools. + .. versionchanged:: 3.12 + + The *name* parameter can now be a :term:`path-like object`. + .. seealso:: `Microsoft DUMPBIN tool `_ @@ -1398,6 +1402,10 @@ way is to instantiate one of the following classes: .. versionchanged:: 3.3 :exc:`WindowsError` used to be raised. + .. versionchanged:: 3.12 + + The *name* parameter can now be a :term:`path-like object`. + .. class:: WinDLL(name, mode=DEFAULT_MODE, handle=None, use_errno=False, use_last_error=False, winmode=None) @@ -1405,6 +1413,10 @@ way is to instantiate one of the following classes: functions in these libraries use the ``stdcall`` calling convention, and are assumed to return :c:expr:`int` by default. + .. versionchanged:: 3.12 + + The *name* parameter can now be a :term:`path-like object`. + The Python :term:`global interpreter lock` is released before calling any function exported by these libraries, and reacquired afterwards. @@ -1418,6 +1430,10 @@ function exported by these libraries, and reacquired afterwards. Thus, this is only useful to call Python C api functions directly. + .. versionchanged:: 3.12 + + The *name* parameter can now be a :term:`path-like object`. + All these classes can be instantiated by calling them with at least one argument, the pathname of the shared library. 
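As a quick illustration of the new behaviour (a minimal sketch, not taken from the patch): because the name argument is now passed through os.fspath(), a pathlib.Path works the same as a plain string.

    import ctypes, ctypes.util, pathlib

    name = ctypes.util.find_library("c")        # may be None, e.g. on Windows
    if name is not None:
        libc = ctypes.CDLL(pathlib.Path(name))  # path-like objects are accepted
        print(libc)
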
If you have an existing handle to an already loaded shared library, it can be passed as the ``handle`` named diff --git a/Lib/ctypes/__init__.py b/Lib/ctypes/__init__.py index 2e9d4c5e7238..95353bab26cc 100644 --- a/Lib/ctypes/__init__.py +++ b/Lib/ctypes/__init__.py @@ -344,6 +344,8 @@ def __init__(self, name, mode=DEFAULT_MODE, handle=None, use_errno=False, use_last_error=False, winmode=None): + if name: + name = _os.fspath(name) self._name = name flags = self._func_flags_ if use_errno: diff --git a/Lib/test/test_ctypes/test_loading.py b/Lib/test/test_ctypes/test_loading.py index 15e365ed267d..f2434926a517 100644 --- a/Lib/test/test_ctypes/test_loading.py +++ b/Lib/test/test_ctypes/test_loading.py @@ -28,10 +28,20 @@ class LoaderTest(unittest.TestCase): unknowndll = "xxrandomnamexx" def test_load(self): - if libc_name is None: - self.skipTest('could not find libc') - CDLL(libc_name) - CDLL(os.path.basename(libc_name)) + if libc_name is not None: + test_lib = libc_name + else: + if os.name == "nt": + import _ctypes_test + test_lib = _ctypes_test.__file__ + else: + self.skipTest('could not find library to load') + CDLL(test_lib) + CDLL(os.path.basename(test_lib)) + class CTypesTestPathLikeCls: + def __fspath__(self): + return test_lib + CDLL(CTypesTestPathLikeCls()) self.assertRaises(OSError, CDLL, self.unknowndll) def test_load_version(self): diff --git a/Misc/ACKS b/Misc/ACKS index 74abcebe21ea..d27d60f5b360 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -754,6 +754,7 @@ Tim Hochberg Benjamin Hodgson Joerg-Cyril Hoehle Douwe Hoekstra +Robert Hoelzl Gregor Hoffleit Chris Hoffman Tim Hoffmann diff --git a/Misc/NEWS.d/next/Library/2018-05-21-17-18-00.gh-issue-77772.Fhg84L.rst b/Misc/NEWS.d/next/Library/2018-05-21-17-18-00.gh-issue-77772.Fhg84L.rst new file mode 100644 index 000000000000..3a7c6d45297b --- /dev/null +++ b/Misc/NEWS.d/next/Library/2018-05-21-17-18-00.gh-issue-77772.Fhg84L.rst @@ -0,0 +1,3 @@ +:class:`ctypes.CDLL`, :class:`ctypes.OleDLL`, :class:`ctypes.WinDLL`, +and :class:`ctypes.PyDLL` now accept :term:`path-like objects +` as their ``name`` argument. Patch by Robert Hoelzl. From webhook-mailer at python.org Sun Feb 5 12:45:02 2023 From: webhook-mailer at python.org (ethanfurman) Date: Sun, 05 Feb 2023 17:45:02 -0000 Subject: [Python-checkins] gh-101334: Don't force USTAR format in test_tarfile. (GH-101572) Message-ID: https://github.com/python/cpython/commit/ffcb8220d7a8c8ca169b467d9e4a752874f68af2 commit: ffcb8220d7a8c8ca169b467d9e4a752874f68af2 branch: main author: Gregory P. Smith committer: ethanfurman date: 2023-02-05T09:44:57-08:00 summary: gh-101334: Don't force USTAR format in test_tarfile. (GH-101572) That causes the test to fail when run using a high UID as that ancient format cannot represent it. The current default (PAX) and the old default (GNU) both support high UIDs. files: A Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst M Lib/test/test_tarfile.py diff --git a/Lib/test/test_tarfile.py b/Lib/test/test_tarfile.py index 213932069201..f15a80097668 100644 --- a/Lib/test/test_tarfile.py +++ b/Lib/test/test_tarfile.py @@ -225,6 +225,11 @@ def test_add_dir_getmember(self): self.add_dir_and_getmember('bar') self.add_dir_and_getmember('a'*101) + @unittest.skipIf( + (hasattr(os, 'getuid') and os.getuid() > 0o777_7777) or + (hasattr(os, 'getgid') and os.getgid() > 0o777_7777), + "uid or gid too high for USTAR format." 
+ ) def add_dir_and_getmember(self, name): with os_helper.temp_cwd(): with tarfile.open(tmpname, 'w') as tar: diff --git a/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst b/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst new file mode 100644 index 000000000000..2a95fd9ae53c --- /dev/null +++ b/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst @@ -0,0 +1 @@ +``test_tarfile`` has been updated to pass when run as a high UID. From webhook-mailer at python.org Sun Feb 5 14:46:56 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 05 Feb 2023 19:46:56 -0000 Subject: [Python-checkins] gh-101334: Don't force USTAR format in test_tarfile. (GH-101572) Message-ID: https://github.com/python/cpython/commit/435fcb07e5e7caea4253e39e1e2de80e58be8ea2 commit: 435fcb07e5e7caea4253e39e1e2de80e58be8ea2 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-05T11:46:50-08:00 summary: gh-101334: Don't force USTAR format in test_tarfile. (GH-101572) That causes the test to fail when run using a high UID as that ancient format cannot represent it. The current default (PAX) and the old default (GNU) both support high UIDs. (cherry picked from commit ffcb8220d7a8c8ca169b467d9e4a752874f68af2) Co-authored-by: Gregory P. Smith files: A Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst M Lib/test/test_tarfile.py diff --git a/Lib/test/test_tarfile.py b/Lib/test/test_tarfile.py index c658cca7a780..cba95e9320ec 100644 --- a/Lib/test/test_tarfile.py +++ b/Lib/test/test_tarfile.py @@ -225,6 +225,11 @@ def test_add_dir_getmember(self): self.add_dir_and_getmember('bar') self.add_dir_and_getmember('a'*101) + @unittest.skipIf( + (hasattr(os, 'getuid') and os.getuid() > 0o777_7777) or + (hasattr(os, 'getgid') and os.getgid() > 0o777_7777), + "uid or gid too high for USTAR format." + ) def add_dir_and_getmember(self, name): with os_helper.temp_cwd(): with tarfile.open(tmpname, 'w') as tar: diff --git a/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst b/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst new file mode 100644 index 000000000000..2a95fd9ae53c --- /dev/null +++ b/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst @@ -0,0 +1 @@ +``test_tarfile`` has been updated to pass when run as a high UID. From webhook-mailer at python.org Sun Feb 5 14:47:17 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 05 Feb 2023 19:47:17 -0000 Subject: [Python-checkins] gh-101334: Don't force USTAR format in test_tarfile. (GH-101572) Message-ID: https://github.com/python/cpython/commit/6ae80323dfec893726d05319f77c65e78b62d2cd commit: 6ae80323dfec893726d05319f77c65e78b62d2cd branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-05T11:47:12-08:00 summary: gh-101334: Don't force USTAR format in test_tarfile. (GH-101572) That causes the test to fail when run using a high UID as that ancient format cannot represent it. The current default (PAX) and the old default (GNU) both support high UIDs. (cherry picked from commit ffcb8220d7a8c8ca169b467d9e4a752874f68af2) Co-authored-by: Gregory P. 
Smith files: A Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst M Lib/test/test_tarfile.py diff --git a/Lib/test/test_tarfile.py b/Lib/test/test_tarfile.py index cdfd426807bc..7a0830f68602 100644 --- a/Lib/test/test_tarfile.py +++ b/Lib/test/test_tarfile.py @@ -225,6 +225,11 @@ def test_add_dir_getmember(self): self.add_dir_and_getmember('bar') self.add_dir_and_getmember('a'*101) + @unittest.skipIf( + (hasattr(os, 'getuid') and os.getuid() > 0o777_7777) or + (hasattr(os, 'getgid') and os.getgid() > 0o777_7777), + "uid or gid too high for USTAR format." + ) def add_dir_and_getmember(self, name): with os_helper.temp_cwd(): with tarfile.open(tmpname, 'w') as tar: diff --git a/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst b/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst new file mode 100644 index 000000000000..2a95fd9ae53c --- /dev/null +++ b/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst @@ -0,0 +1 @@ +``test_tarfile`` has been updated to pass when run as a high UID. From webhook-mailer at python.org Sun Feb 5 21:55:43 2023 From: webhook-mailer at python.org (abalkin) Date: Mon, 06 Feb 2023 02:55:43 -0000 Subject: [Python-checkins] Trivial Change: Remove unhelpful doc in `datetime.timedelta` (#100164) Message-ID: https://github.com/python/cpython/commit/d3e2dd6e71bd8e5482973891926d5df19be687eb commit: d3e2dd6e71bd8e5482973891926d5df19be687eb branch: main author: Matty G committer: abalkin date: 2023-02-06T06:55:37+04:00 summary: Trivial Change: Remove unhelpful doc in `datetime.timedelta` (#100164) files: M Lib/datetime.py diff --git a/Lib/datetime.py b/Lib/datetime.py index 68746de1cabf..637144637485 100644 --- a/Lib/datetime.py +++ b/Lib/datetime.py @@ -587,9 +587,12 @@ class timedelta: returning a timedelta, and addition or subtraction of a datetime and a timedelta giving a datetime. - Representation: (days, seconds, microseconds). Why? Because I - felt like it. + Representation: (days, seconds, microseconds). """ + # The representation of (days, seconds, microseconds) was chosen + # arbitrarily; the exact rationale originally specified in the docstring + # was "Because I felt like it." + __slots__ = '_days', '_seconds', '_microseconds', '_hashcode' def __new__(cls, days=0, seconds=0, microseconds=0, From webhook-mailer at python.org Sun Feb 5 22:29:25 2023 From: webhook-mailer at python.org (ethanfurman) Date: Mon, 06 Feb 2023 03:29:25 -0000 Subject: [Python-checkins] gh-101541: [Enum] create flag psuedo-member without calling original __new__ (GH-101590) Message-ID: https://github.com/python/cpython/commit/ef7c2bfcf1fd618438f981ace64499a99ae9fae0 commit: ef7c2bfcf1fd618438f981ace64499a99ae9fae0 branch: main author: Ethan Furman committer: ethanfurman date: 2023-02-05T19:29:06-08:00 summary: gh-101541: [Enum] create flag psuedo-member without calling original __new__ (GH-101590) files: A Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst M Lib/enum.py M Lib/test/test_enum.py diff --git a/Lib/enum.py b/Lib/enum.py index adb61519abe9..d14e91a9b017 100644 --- a/Lib/enum.py +++ b/Lib/enum.py @@ -1429,12 +1429,11 @@ def _missing_(cls, value): % (cls.__name__, value, unknown, bin(unknown)) ) # normal Flag? 
- __new__ = getattr(cls, '__new_member__', None) - if cls._member_type_ is object and not __new__: + if cls._member_type_ is object: # construct a singleton enum pseudo-member pseudo_member = object.__new__(cls) else: - pseudo_member = (__new__ or cls._member_type_.__new__)(cls, value) + pseudo_member = cls._member_type_.__new__(cls, value) if not hasattr(pseudo_member, '_value_'): pseudo_member._value_ = value if member_value: diff --git a/Lib/test/test_enum.py b/Lib/test/test_enum.py index 1e653e94f6b5..0a2e0c14d268 100644 --- a/Lib/test/test_enum.py +++ b/Lib/test/test_enum.py @@ -2855,6 +2855,46 @@ class NTEnum(Enum): [TTuple(id=0, a=0, blist=[]), TTuple(id=1, a=2, blist=[4]), TTuple(id=2, a=4, blist=[0, 1, 2])], ) + def test_flag_with_custom_new(self): + class FlagFromChar(IntFlag): + def __new__(cls, c): + value = 1 << c + self = int.__new__(cls, value) + self._value_ = value + return self + # + a = ord('a') + # + self.assertEqual(FlagFromChar.a, 158456325028528675187087900672) + self.assertEqual(FlagFromChar.a|1, 158456325028528675187087900673) + # + # + class FlagFromChar(Flag): + def __new__(cls, c): + value = 1 << c + self = object.__new__(cls) + self._value_ = value + return self + # + a = ord('a') + z = 1 + # + self.assertEqual(FlagFromChar.a.value, 158456325028528675187087900672) + self.assertEqual((FlagFromChar.a|FlagFromChar.z).value, 158456325028528675187087900674) + # + # + class FlagFromChar(int, Flag, boundary=KEEP): + def __new__(cls, c): + value = 1 << c + self = int.__new__(cls, value) + self._value_ = value + return self + # + a = ord('a') + # + self.assertEqual(FlagFromChar.a, 158456325028528675187087900672) + self.assertEqual(FlagFromChar.a|1, 158456325028528675187087900673) + class TestOrder(unittest.TestCase): "test usage of the `_order_` attribute" diff --git a/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst b/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst new file mode 100644 index 000000000000..0f149e80dc54 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst @@ -0,0 +1 @@ +[Enum] - fix psuedo-flag creation From webhook-mailer at python.org Sun Feb 5 22:52:28 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 06 Feb 2023 03:52:28 -0000 Subject: [Python-checkins] gh-101541: [Enum] create flag psuedo-member without calling original __new__ (GH-101590) Message-ID: https://github.com/python/cpython/commit/cf8973c63882980ef77043a5bcfefbb75d8de35c commit: cf8973c63882980ef77043a5bcfefbb75d8de35c branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-05T19:52:22-08:00 summary: gh-101541: [Enum] create flag psuedo-member without calling original __new__ (GH-101590) (cherry picked from commit ef7c2bfcf1fd618438f981ace64499a99ae9fae0) Co-authored-by: Ethan Furman files: A Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst M Lib/enum.py M Lib/test/test_enum.py diff --git a/Lib/enum.py b/Lib/enum.py index 7ad599bb1e62..5cff41713ae5 100644 --- a/Lib/enum.py +++ b/Lib/enum.py @@ -1430,12 +1430,11 @@ def _missing_(cls, value): % (cls.__name__, value, unknown, bin(unknown)) ) # normal Flag? 
- __new__ = getattr(cls, '__new_member__', None) - if cls._member_type_ is object and not __new__: + if cls._member_type_ is object: # construct a singleton enum pseudo-member pseudo_member = object.__new__(cls) else: - pseudo_member = (__new__ or cls._member_type_.__new__)(cls, value) + pseudo_member = cls._member_type_.__new__(cls, value) if not hasattr(pseudo_member, '_value_'): pseudo_member._value_ = value if member_value: diff --git a/Lib/test/test_enum.py b/Lib/test/test_enum.py index 0e2da1d64c19..0143e8594d6e 100644 --- a/Lib/test/test_enum.py +++ b/Lib/test/test_enum.py @@ -2741,6 +2741,46 @@ class NTEnum(Enum): [TTuple(id=0, a=0, blist=[]), TTuple(id=1, a=2, blist=[4]), TTuple(id=2, a=4, blist=[0, 1, 2])], ) + def test_flag_with_custom_new(self): + class FlagFromChar(IntFlag): + def __new__(cls, c): + value = 1 << c + self = int.__new__(cls, value) + self._value_ = value + return self + # + a = ord('a') + # + self.assertEqual(FlagFromChar.a, 158456325028528675187087900672) + self.assertEqual(FlagFromChar.a|1, 158456325028528675187087900673) + # + # + class FlagFromChar(Flag): + def __new__(cls, c): + value = 1 << c + self = object.__new__(cls) + self._value_ = value + return self + # + a = ord('a') + z = 1 + # + self.assertEqual(FlagFromChar.a.value, 158456325028528675187087900672) + self.assertEqual((FlagFromChar.a|FlagFromChar.z).value, 158456325028528675187087900674) + # + # + class FlagFromChar(int, Flag, boundary=KEEP): + def __new__(cls, c): + value = 1 << c + self = int.__new__(cls, value) + self._value_ = value + return self + # + a = ord('a') + # + self.assertEqual(FlagFromChar.a, 158456325028528675187087900672) + self.assertEqual(FlagFromChar.a|1, 158456325028528675187087900673) + class TestOrder(unittest.TestCase): "test usage of the `_order_` attribute" diff --git a/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst b/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst new file mode 100644 index 000000000000..0f149e80dc54 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst @@ -0,0 +1 @@ +[Enum] - fix psuedo-flag creation From webhook-mailer at python.org Sun Feb 5 23:58:07 2023 From: webhook-mailer at python.org (corona10) Date: Mon, 06 Feb 2023 04:58:07 -0000 Subject: [Python-checkins] =?utf-8?q?gh-101372=3A_Fix_unicodedata=2Eis=5F?= =?utf-8?q?normalized_to_properly_handle_the_UCD_3=E2=80=A6_=28gh-101388?= =?utf-8?q?=29?= Message-ID: https://github.com/python/cpython/commit/9ef7e75434587fc8f167d73eee5dd9bdca62714b commit: 9ef7e75434587fc8f167d73eee5dd9bdca62714b branch: main author: Dong-hee Na committer: corona10 date: 2023-02-06T13:58:00+09:00 summary: gh-101372: Fix unicodedata.is_normalized to properly handle the UCD 3? (gh-101388) files: A Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst M Modules/unicodedata.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst new file mode 100644 index 000000000000..65a207e3f7e4 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst @@ -0,0 +1,2 @@ +Fix :func:`~unicodedata.is_normalized` to properly handle the UCD 3.2.0 +cases. Patch by Dong-hee Na. 
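To illustrate the behaviour this restores (assuming, as the issue describes, that the UCD 3.2.0 object exposes is_normalized like the module-level function does):

    import unicodedata

    s = "caf\u00e9"                                 # already in NFC form
    print(unicodedata.is_normalized("NFC", s))      # True (current database)
    # With the UCD 3.2.0 database the quick check now answers MAYBE, so a
    # full normalize-and-compare runs; before this fix the call returned
    # False even for strings that are already normalized.
    print(unicodedata.ucd_3_2_0.is_normalized("NFC", s))   # True after the fix
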
diff --git a/Modules/unicodedata.c b/Modules/unicodedata.c index 59fccd4b834d..c108f14871f9 100644 --- a/Modules/unicodedata.c +++ b/Modules/unicodedata.c @@ -800,7 +800,7 @@ is_normalized_quickcheck(PyObject *self, PyObject *input, bool nfc, bool k, { /* UCD 3.2.0 is requested, quickchecks must be disabled. */ if (UCD_Check(self)) { - return NO; + return MAYBE; } if (PyUnicode_IS_ASCII(input)) { From webhook-mailer at python.org Mon Feb 6 05:33:04 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 06 Feb 2023 10:33:04 -0000 Subject: [Python-checkins] =?utf-8?q?gh-101372=3A_Fix_unicodedata=2Eis=5F?= =?utf-8?q?normalized_to_properly_handle_the_UCD_3=E2=80=A6_=28gh-101388?= =?utf-8?q?=29?= Message-ID: https://github.com/python/cpython/commit/9bd000c7b82142a6ebed5b0d5fcb4a7188a43b27 commit: 9bd000c7b82142a6ebed5b0d5fcb4a7188a43b27 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-06T02:32:30-08:00 summary: gh-101372: Fix unicodedata.is_normalized to properly handle the UCD 3? (gh-101388) (cherry picked from commit 9ef7e75434587fc8f167d73eee5dd9bdca62714b) Co-authored-by: Dong-hee Na files: A Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst M Modules/unicodedata.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst new file mode 100644 index 000000000000..65a207e3f7e4 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst @@ -0,0 +1,2 @@ +Fix :func:`~unicodedata.is_normalized` to properly handle the UCD 3.2.0 +cases. Patch by Dong-hee Na. diff --git a/Modules/unicodedata.c b/Modules/unicodedata.c index 64918115283d..3c3937f66c18 100644 --- a/Modules/unicodedata.c +++ b/Modules/unicodedata.c @@ -803,7 +803,7 @@ is_normalized_quickcheck(PyObject *self, PyObject *input, bool nfc, bool k, { /* UCD 3.2.0 is requested, quickchecks must be disabled. */ if (UCD_Check(self)) { - return NO; + return MAYBE; } if (PyUnicode_IS_ASCII(input)) { From webhook-mailer at python.org Mon Feb 6 05:33:05 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 06 Feb 2023 10:33:05 -0000 Subject: [Python-checkins] =?utf-8?q?gh-101372=3A_Fix_unicodedata=2Eis=5F?= =?utf-8?q?normalized_to_properly_handle_the_UCD_3=E2=80=A6_=28gh-101388?= =?utf-8?q?=29?= Message-ID: https://github.com/python/cpython/commit/33250297411d38c16e0a4df1831edd17e5de8616 commit: 33250297411d38c16e0a4df1831edd17e5de8616 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-06T02:32:58-08:00 summary: gh-101372: Fix unicodedata.is_normalized to properly handle the UCD 3? 
(gh-101388) (cherry picked from commit 9ef7e75434587fc8f167d73eee5dd9bdca62714b) Co-authored-by: Dong-hee Na files: A Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst M Modules/unicodedata.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst new file mode 100644 index 000000000000..65a207e3f7e4 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst @@ -0,0 +1,2 @@ +Fix :func:`~unicodedata.is_normalized` to properly handle the UCD 3.2.0 +cases. Patch by Dong-hee Na. diff --git a/Modules/unicodedata.c b/Modules/unicodedata.c index f87eb6087989..517eb4842f86 100644 --- a/Modules/unicodedata.c +++ b/Modules/unicodedata.c @@ -805,7 +805,7 @@ is_normalized_quickcheck(PyObject *self, PyObject *input, bool nfc, bool k, { /* UCD 3.2.0 is requested, quickchecks must be disabled. */ if (UCD_Check(self)) { - return NO; + return MAYBE; } Py_ssize_t i, len; From webhook-mailer at python.org Mon Feb 6 07:25:38 2023 From: webhook-mailer at python.org (mdickinson) Date: Mon, 06 Feb 2023 12:25:38 -0000 Subject: [Python-checkins] gh-76961: Fix buildbot failures in test_pep3118 (#101587) Message-ID: https://github.com/python/cpython/commit/46416b9004b687856eaa73e5d48520cd768bbf82 commit: 46416b9004b687856eaa73e5d48520cd768bbf82 branch: main author: Mark Dickinson committer: mdickinson date: 2023-02-06T12:25:31Z summary: gh-76961: Fix buildbot failures in test_pep3118 (#101587) This PR fixes the buildbot failures introduced by the merge of #5561, by restricting the relevant tests to something that should work on both 32-bit and 64-bit platforms. It also silences some compiler warnings introduced in that PR. files: M Lib/test/test_ctypes/test_pep3118.py M Modules/_ctypes/stgdict.c diff --git a/Lib/test/test_ctypes/test_pep3118.py b/Lib/test/test_ctypes/test_pep3118.py index 74fdf29fc9ad..c8a70e3e3356 100644 --- a/Lib/test/test_ctypes/test_pep3118.py +++ b/Lib/test/test_ctypes/test_pep3118.py @@ -87,14 +87,14 @@ class PackedPoint(Structure): _fields_ = [("x", c_long), ("y", c_long)] class PointMidPad(Structure): - _fields_ = [("x", c_byte), ("y", c_uint64)] + _fields_ = [("x", c_byte), ("y", c_uint)] class PackedPointMidPad(Structure): _pack_ = 2 _fields_ = [("x", c_byte), ("y", c_uint64)] class PointEndPad(Structure): - _fields_ = [("x", c_uint64), ("y", c_byte)] + _fields_ = [("x", c_uint), ("y", c_byte)] class PackedPointEndPad(Structure): _pack_ = 2 @@ -199,14 +199,14 @@ class Complete(Structure): ## structures and unions - (Point2, "T{ https://github.com/python/cpython/commit/7a253103d4c64fcca4c0939a584c2028d8da6829 commit: 7a253103d4c64fcca4c0939a584c2028d8da6829 branch: main author: Steve Dower committer: zooba date: 2023-02-06T15:55:32Z summary: gh-101543: Ensure Windows registry path is only used when stdlib can't be found (GH-101544) files: A Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst M Modules/getpath.py diff --git a/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst b/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst new file mode 100644 index 000000000000..d4e2c6f23013 --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst @@ -0,0 +1,2 @@ +Ensure the install path in the registry is only used when the standard +library hasn't been located in any other way. 
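For context, a rough, hypothetical sketch of the precedence rule this change enforces (the names mirror getpath.py variables, but this is not the real implementation): registry-supplied paths are only consulted when no home is set and the standard library has not already been located some other way.

    def build_pythonpath(env_paths, registry_paths, home, stdlib_dir):
        paths = list(env_paths)
        # Registry core-key entries are a last resort: only when home was
        # not set and the stdlib was not found some other way.
        if not home and not stdlib_dir:
            paths.extend(registry_paths)
        return paths

    # stdlib already located, so the registry entries are ignored:
    print(build_pythonpath(["C:\\work"], ["C:\\Other\\Lib"], "", "C:\\Python\\Lib"))
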
diff --git a/Modules/getpath.py b/Modules/getpath.py index 6a0883878778..9913fcba497d 100644 --- a/Modules/getpath.py +++ b/Modules/getpath.py @@ -582,7 +582,7 @@ def search_up(prefix, *landmarks, test=isfile): # Detect prefix by searching from our executable location for the stdlib_dir if STDLIB_SUBDIR and STDLIB_LANDMARKS and executable_dir and not prefix: prefix = search_up(executable_dir, *STDLIB_LANDMARKS) - if prefix: + if prefix and not stdlib_dir: stdlib_dir = joinpath(prefix, STDLIB_SUBDIR) if PREFIX and not prefix: @@ -631,20 +631,6 @@ def search_up(prefix, *landmarks, test=isfile): warn('Consider setting $PYTHONHOME to [:]') -# If we haven't set [plat]stdlib_dir already, set them now -if not stdlib_dir: - if prefix: - stdlib_dir = joinpath(prefix, STDLIB_SUBDIR) - else: - stdlib_dir = '' - -if not platstdlib_dir: - if exec_prefix: - platstdlib_dir = joinpath(exec_prefix, PLATSTDLIB_LANDMARK) - else: - platstdlib_dir = '' - - # For a venv, update the main prefix/exec_prefix but leave the base ones unchanged # XXX: We currently do not update prefix here, but it happens in site.py #if venv_prefix: @@ -706,8 +692,9 @@ def search_up(prefix, *landmarks, test=isfile): pythonpath.extend(v.split(DELIM)) i += 1 # Paths from the core key get appended last, but only - # when home was not set and we aren't in a build dir - if not home_was_set and not venv_prefix and not build_prefix: + # when home was not set and we haven't found our stdlib + # some other way. + if not home and not stdlib_dir: v = winreg.QueryValue(key, None) if isinstance(v, str): pythonpath.extend(v.split(DELIM)) @@ -722,6 +709,11 @@ def search_up(prefix, *landmarks, test=isfile): pythonpath.append(joinpath(prefix, p)) # Then add stdlib_dir and platstdlib_dir + if not stdlib_dir and prefix: + stdlib_dir = joinpath(prefix, STDLIB_SUBDIR) + if not platstdlib_dir and exec_prefix: + platstdlib_dir = joinpath(exec_prefix, PLATSTDLIB_LANDMARK) + if os_name == 'nt': # QUIRK: Windows generates paths differently if platstdlib_dir: @@ -792,5 +784,6 @@ def search_up(prefix, *landmarks, test=isfile): config['base_exec_prefix'] = base_exec_prefix or exec_prefix config['platlibdir'] = platlibdir -config['stdlib_dir'] = stdlib_dir -config['platstdlib_dir'] = platstdlib_dir +# test_embed expects empty strings, not None +config['stdlib_dir'] = stdlib_dir or '' +config['platstdlib_dir'] = platstdlib_dir or '' From webhook-mailer at python.org Mon Feb 6 12:34:12 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 06 Feb 2023 17:34:12 -0000 Subject: [Python-checkins] gh-101543: Ensure Windows registry path is only used when stdlib can't be found (GH-101544) Message-ID: https://github.com/python/cpython/commit/d003bcc91aa9b79c45fc711ce831b736208f0fb5 commit: d003bcc91aa9b79c45fc711ce831b736208f0fb5 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-06T09:34:05-08:00 summary: gh-101543: Ensure Windows registry path is only used when stdlib can't be found (GH-101544) (cherry picked from commit 7a253103d4c64fcca4c0939a584c2028d8da6829) Co-authored-by: Steve Dower files: A Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst M Modules/getpath.py diff --git a/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst b/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst new file mode 100644 index 000000000000..d4e2c6f23013 --- /dev/null +++ 
b/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst @@ -0,0 +1,2 @@ +Ensure the install path in the registry is only used when the standard +library hasn't been located in any other way. diff --git a/Modules/getpath.py b/Modules/getpath.py index e84714170732..74f918cf28e4 100644 --- a/Modules/getpath.py +++ b/Modules/getpath.py @@ -582,7 +582,7 @@ def search_up(prefix, *landmarks, test=isfile): # Detect prefix by searching from our executable location for the stdlib_dir if STDLIB_SUBDIR and STDLIB_LANDMARKS and executable_dir and not prefix: prefix = search_up(executable_dir, *STDLIB_LANDMARKS) - if prefix: + if prefix and not stdlib_dir: stdlib_dir = joinpath(prefix, STDLIB_SUBDIR) if PREFIX and not prefix: @@ -631,20 +631,6 @@ def search_up(prefix, *landmarks, test=isfile): warn('Consider setting $PYTHONHOME to [:]') -# If we haven't set [plat]stdlib_dir already, set them now -if not stdlib_dir: - if prefix: - stdlib_dir = joinpath(prefix, STDLIB_SUBDIR) - else: - stdlib_dir = '' - -if not platstdlib_dir: - if exec_prefix: - platstdlib_dir = joinpath(exec_prefix, PLATSTDLIB_LANDMARK) - else: - platstdlib_dir = '' - - # For a venv, update the main prefix/exec_prefix but leave the base ones unchanged # XXX: We currently do not update prefix here, but it happens in site.py #if venv_prefix: @@ -706,8 +692,9 @@ def search_up(prefix, *landmarks, test=isfile): pythonpath.extend(v.split(DELIM)) i += 1 # Paths from the core key get appended last, but only - # when home was not set and we aren't in a build dir - if not home_was_set and not venv_prefix and not build_prefix: + # when home was not set and we haven't found our stdlib + # some other way. + if not home and not stdlib_dir: v = winreg.QueryValue(key, None) if isinstance(v, str): pythonpath.extend(v.split(DELIM)) @@ -722,6 +709,11 @@ def search_up(prefix, *landmarks, test=isfile): pythonpath.append(joinpath(prefix, p)) # Then add stdlib_dir and platstdlib_dir + if not stdlib_dir and prefix: + stdlib_dir = joinpath(prefix, STDLIB_SUBDIR) + if not platstdlib_dir and exec_prefix: + platstdlib_dir = joinpath(exec_prefix, PLATSTDLIB_LANDMARK) + if os_name == 'nt': # QUIRK: Windows generates paths differently if platstdlib_dir: @@ -792,5 +784,6 @@ def search_up(prefix, *landmarks, test=isfile): config['base_exec_prefix'] = base_exec_prefix or exec_prefix config['platlibdir'] = platlibdir -config['stdlib_dir'] = stdlib_dir -config['platstdlib_dir'] = platstdlib_dir +# test_embed expects empty strings, not None +config['stdlib_dir'] = stdlib_dir or '' +config['platstdlib_dir'] = platstdlib_dir or '' From webhook-mailer at python.org Mon Feb 6 14:28:34 2023 From: webhook-mailer at python.org (JelleZijlstra) Date: Mon, 06 Feb 2023 19:28:34 -0000 Subject: [Python-checkins] gh-101562: typing: add tests for inheritance with NotRequired & Required in parent fields (#101563) Message-ID: https://github.com/python/cpython/commit/b96b344f251954bb64aeb13c3e0c460350565c2a commit: b96b344f251954bb64aeb13c3e0c460350565c2a branch: main author: Eclips4 <80244920+Eclips4 at users.noreply.github.com> committer: JelleZijlstra date: 2023-02-06T11:28:24-08:00 summary: gh-101562: typing: add tests for inheritance with NotRequired & Required in parent fields (#101563) files: M Lib/test/test_typing.py M Misc/ACKS diff --git a/Lib/test/test_typing.py b/Lib/test/test_typing.py index 5aa49bb0e245..7a460d94469f 100644 --- a/Lib/test/test_typing.py +++ b/Lib/test/test_typing.py @@ -4892,6 +4892,18 @@ class NontotalMovie(TypedDict, total=False): title: 
Required[str] year: int +class ParentNontotalMovie(TypedDict, total=False): + title: Required[str] + +class ChildTotalMovie(ParentNontotalMovie): + year: NotRequired[int] + +class ParentDeeplyAnnotatedMovie(TypedDict): + title: Annotated[Annotated[Required[str], "foobar"], "another level"] + +class ChildDeeplyAnnotatedMovie(ParentDeeplyAnnotatedMovie): + year: NotRequired[Annotated[int, 2000]] + class AnnotatedMovie(TypedDict): title: Annotated[Required[str], "foobar"] year: NotRequired[Annotated[int, 2000]] @@ -5221,6 +5233,17 @@ def test_get_type_hints_typeddict(self): 'a': Annotated[Required[int], "a", "b", "c"] }) + self.assertEqual(get_type_hints(ChildTotalMovie), {"title": str, "year": int}) + self.assertEqual(get_type_hints(ChildTotalMovie, include_extras=True), { + "title": Required[str], "year": NotRequired[int] + }) + + self.assertEqual(get_type_hints(ChildDeeplyAnnotatedMovie), {"title": str, "year": int}) + self.assertEqual(get_type_hints(ChildDeeplyAnnotatedMovie, include_extras=True), { + "title": Annotated[Required[str], "foobar", "another level"], + "year": NotRequired[Annotated[int, 2000]] + }) + def test_get_type_hints_collections_abc_callable(self): # https://github.com/python/cpython/issues/91621 P = ParamSpec('P') @@ -6381,6 +6404,16 @@ def test_required_notrequired_keys(self): self.assertEqual(WeirdlyQuotedMovie.__optional_keys__, frozenset({"year"})) + self.assertEqual(ChildTotalMovie.__required_keys__, + frozenset({"title"})) + self.assertEqual(ChildTotalMovie.__optional_keys__, + frozenset({"year"})) + + self.assertEqual(ChildDeeplyAnnotatedMovie.__required_keys__, + frozenset({"title"})) + self.assertEqual(ChildDeeplyAnnotatedMovie.__optional_keys__, + frozenset({"year"})) + def test_multiple_inheritance(self): class One(TypedDict): one: int diff --git a/Misc/ACKS b/Misc/ACKS index d27d60f5b360..e12cbea0ebd6 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1414,6 +1414,7 @@ Jean-Fran?ois Pi?ronne Oleg Plakhotnyuk Anatoliy Platonov Marcel Plch +Kirill Podoprigora Remi Pointel Jon Poler Ariel Poliak From webhook-mailer at python.org Mon Feb 6 14:53:59 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 06 Feb 2023 19:53:59 -0000 Subject: [Python-checkins] gh-101562: typing: add tests for inheritance with NotRequired & Required in parent fields (GH-101563) Message-ID: https://github.com/python/cpython/commit/9e7acafa14e30a3c0cc20245ff6987cd732bf269 commit: 9e7acafa14e30a3c0cc20245ff6987cd732bf269 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-06T11:53:53-08:00 summary: gh-101562: typing: add tests for inheritance with NotRequired & Required in parent fields (GH-101563) (cherry picked from commit b96b344f251954bb64aeb13c3e0c460350565c2a) Co-authored-by: Eclips4 <80244920+Eclips4 at users.noreply.github.com> files: M Lib/test/test_typing.py M Misc/ACKS diff --git a/Lib/test/test_typing.py b/Lib/test/test_typing.py index 10023af4f788..276f95bacfcf 100644 --- a/Lib/test/test_typing.py +++ b/Lib/test/test_typing.py @@ -4872,6 +4872,18 @@ class NontotalMovie(TypedDict, total=False): title: Required[str] year: int +class ParentNontotalMovie(TypedDict, total=False): + title: Required[str] + +class ChildTotalMovie(ParentNontotalMovie): + year: NotRequired[int] + +class ParentDeeplyAnnotatedMovie(TypedDict): + title: Annotated[Annotated[Required[str], "foobar"], "another level"] + +class 
ChildDeeplyAnnotatedMovie(ParentDeeplyAnnotatedMovie): + year: NotRequired[Annotated[int, 2000]] + class AnnotatedMovie(TypedDict): title: Annotated[Required[str], "foobar"] year: NotRequired[Annotated[int, 2000]] @@ -5201,6 +5213,17 @@ def test_get_type_hints_typeddict(self): 'a': Annotated[Required[int], "a", "b", "c"] }) + self.assertEqual(get_type_hints(ChildTotalMovie), {"title": str, "year": int}) + self.assertEqual(get_type_hints(ChildTotalMovie, include_extras=True), { + "title": Required[str], "year": NotRequired[int] + }) + + self.assertEqual(get_type_hints(ChildDeeplyAnnotatedMovie), {"title": str, "year": int}) + self.assertEqual(get_type_hints(ChildDeeplyAnnotatedMovie, include_extras=True), { + "title": Annotated[Required[str], "foobar", "another level"], + "year": NotRequired[Annotated[int, 2000]] + }) + def test_get_type_hints_collections_abc_callable(self): # https://github.com/python/cpython/issues/91621 P = ParamSpec('P') @@ -6340,6 +6363,16 @@ def test_required_notrequired_keys(self): self.assertEqual(WeirdlyQuotedMovie.__optional_keys__, frozenset({"year"})) + self.assertEqual(ChildTotalMovie.__required_keys__, + frozenset({"title"})) + self.assertEqual(ChildTotalMovie.__optional_keys__, + frozenset({"year"})) + + self.assertEqual(ChildDeeplyAnnotatedMovie.__required_keys__, + frozenset({"title"})) + self.assertEqual(ChildDeeplyAnnotatedMovie.__optional_keys__, + frozenset({"year"})) + def test_multiple_inheritance(self): class One(TypedDict): one: int diff --git a/Misc/ACKS b/Misc/ACKS index dc47e894b764..f674332751c3 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1403,6 +1403,7 @@ Jean-Fran?ois Pi?ronne Oleg Plakhotnyuk Anatoliy Platonov Marcel Plch +Kirill Podoprigora Remi Pointel Jon Poler Ariel Poliak From webhook-mailer at python.org Mon Feb 6 16:05:48 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 06 Feb 2023 21:05:48 -0000 Subject: [Python-checkins] =?utf-8?b?Z2gtMTAxNjA5OiBGaXggIuKAmHN0YXRl4oCZ?= =?utf-8?q?_may_be_used_uninitialized=22_warning_in_=60=5Fxxinterpchannels?= =?utf-8?q?module=60_=28GH-101610=29?= Message-ID: https://github.com/python/cpython/commit/262003fd3297f7f4ee09cebd1abb225066412ce7 commit: 262003fd3297f7f4ee09cebd1abb225066412ce7 branch: main author: Nikita Sobolev committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-06T13:05:41-08:00 summary: gh-101609: Fix "?state? may be used uninitialized" warning in `_xxinterpchannelsmodule` (GH-101610) I went with the easiest solution: just removing the offending line. See the issue description with my reasoning. 
https://github.com/python/cpython/issues/101609 files: M Modules/_xxinterpchannelsmodule.c diff --git a/Modules/_xxinterpchannelsmodule.c b/Modules/_xxinterpchannelsmodule.c index 8601a189e875..60538c318748 100644 --- a/Modules/_xxinterpchannelsmodule.c +++ b/Modules/_xxinterpchannelsmodule.c @@ -174,7 +174,9 @@ static int clear_module_state(module_state *state) { /* heap types */ - (void)_PyCrossInterpreterData_UnregisterClass(state->ChannelIDType); + if (state->ChannelIDType != NULL) { + (void)_PyCrossInterpreterData_UnregisterClass(state->ChannelIDType); + } Py_CLEAR(state->ChannelIDType); /* exceptions */ @@ -2269,7 +2271,6 @@ module_exec(PyObject *mod) return 0; error: - (void)_PyCrossInterpreterData_UnregisterClass(state->ChannelIDType); _globals_fini(); return -1; } From webhook-mailer at python.org Mon Feb 6 16:39:31 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Mon, 06 Feb 2023 21:39:31 -0000 Subject: [Python-checkins] gh-59956: Partial Fix for GILState API Compatibility with Subinterpreters (gh-101431) Message-ID: https://github.com/python/cpython/commit/132b3f8302c021ac31e9c1797a127d57faa1afee commit: 132b3f8302c021ac31e9c1797a127d57faa1afee branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-06T14:39:25-07:00 summary: gh-59956: Partial Fix for GILState API Compatibility with Subinterpreters (gh-101431) The GILState API (PEP 311) implementation from 2003 made the assumption that only one thread state would ever be used for any given OS thread, explicitly disregarding the case of subinterpreters. However, PyThreadState_Swap() still facilitated switching between subinterpreters, meaning the "current" thread state (holding the GIL), and the GILState thread state could end up out of sync, causing problems (including crashes). This change addresses the issue by keeping the two in sync in PyThreadState_Swap(). I verified the fix against gh-99040. Note that the other GILState-subinterpreter incompatibility (with autoInterpreterState) is not resolved here. https://github.com/python/cpython/issues/59956 files: A Misc/NEWS.d/next/Core and Builtins/2023-01-30-11-56-09.gh-issue-59956.7xqnC_.rst M Python/pystate.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-30-11-56-09.gh-issue-59956.7xqnC_.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-30-11-56-09.gh-issue-59956.7xqnC_.rst new file mode 100644 index 000000000000..b3c1896b9493 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-01-30-11-56-09.gh-issue-59956.7xqnC_.rst @@ -0,0 +1,3 @@ +The GILState API is now partially compatible with subinterpreters. +Previously, ``PyThreadState_GET()`` and ``PyGILState_GetThisThreadState()`` +would get out of sync, causing inconsistent behavior and crashes. diff --git a/Python/pystate.c b/Python/pystate.c index 8bb49d954a81..ed8c2e212a55 100644 --- a/Python/pystate.c +++ b/Python/pystate.c @@ -266,10 +266,10 @@ unbind_tstate(PyThreadState *tstate) thread state for an OS level thread is when there are multiple interpreters. - The PyGILState_*() APIs don't work with multiple - interpreters (see bpo-10915 and bpo-15751), so this function - sets TSS only once. Thus, the first thread state created for that - given OS level thread will "win", which seems reasonable behaviour. + Before 3.12, the PyGILState_*() APIs didn't work with multiple + interpreters (see bpo-10915 and bpo-15751), so this function used + to set TSS only once. Thus, the first thread state created for that + given OS level thread would "win", which seemed reasonable behaviour. 
*/ static void @@ -286,10 +286,6 @@ bind_gilstate_tstate(PyThreadState *tstate) assert(tstate != tcur); if (tcur != NULL) { - // The original gilstate implementation only respects the - // first thread state set. - // XXX Skipping like this does not play nice with multiple interpreters. - return; tcur->_status.bound_gilstate = 0; } gilstate_tss_set(runtime, tstate); @@ -1738,20 +1734,7 @@ _PyThreadState_Swap(_PyRuntimeState *runtime, PyThreadState *newts) tstate_activate(newts); } - /* It should not be possible for more than one thread state - to be used for a thread. Check this the best we can in debug - builds. - */ - // XXX The above isn't true when multiple interpreters are involved. #if defined(Py_DEBUG) - if (newts && gilstate_tss_initialized(runtime)) { - PyThreadState *check = gilstate_tss_get(runtime); - if (check != newts) { - if (check && check->interp == newts->interp) { - Py_FatalError("Invalid thread state for this thread"); - } - } - } errno = err; #endif return oldts; From webhook-mailer at python.org Mon Feb 6 16:59:53 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 06 Feb 2023 21:59:53 -0000 Subject: [Python-checkins] GH-101616: Mention the Docs Discourse forum in the "reporting docs issues" (GH-101617) Message-ID: https://github.com/python/cpython/commit/949c58f945b93af5b7bb70c6448e940da669065d commit: 949c58f945b93af5b7bb70c6448e940da669065d branch: main author: Mariatta Wijaya committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-06T13:59:45-08:00 summary: GH-101616: Mention the Docs Discourse forum in the "reporting docs issues" (GH-101617) Fixes https://github.com/python/cpython/issues/101616 files: M Doc/bugs.rst diff --git a/Doc/bugs.rst b/Doc/bugs.rst index 69d7c27410d5..4f30ef19ee4d 100644 --- a/Doc/bugs.rst +++ b/Doc/bugs.rst @@ -19,6 +19,9 @@ If you find a bug in this documentation or would like to propose an improvement, please submit a bug report on the :ref:`tracker `. If you have a suggestion on how to fix it, include that as well. +You can also open a discussion item on our +`Documentation Discourse forum `_. + If you're short on time, you can also email documentation bug reports to docs at python.org (behavioral bugs can be sent to python-list at python.org). 'docs@' is a mailing list run by volunteers; your request will be noticed, From webhook-mailer at python.org Mon Feb 6 17:07:06 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 06 Feb 2023 22:07:06 -0000 Subject: [Python-checkins] GH-101616: Mention the Docs Discourse forum in the "reporting docs issues" (GH-101617) Message-ID: https://github.com/python/cpython/commit/c993ffa4776c9ec00843a3e7fb4cd3444e7fe90e commit: c993ffa4776c9ec00843a3e7fb4cd3444e7fe90e branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-06T14:07:00-08:00 summary: GH-101616: Mention the Docs Discourse forum in the "reporting docs issues" (GH-101617) Fixes https://github.com/python/cpython/issues/101616 (cherry picked from commit 949c58f945b93af5b7bb70c6448e940da669065d) Co-authored-by: Mariatta Wijaya files: M Doc/bugs.rst diff --git a/Doc/bugs.rst b/Doc/bugs.rst index 69d7c27410d5..4f30ef19ee4d 100644 --- a/Doc/bugs.rst +++ b/Doc/bugs.rst @@ -19,6 +19,9 @@ If you find a bug in this documentation or would like to propose an improvement, please submit a bug report on the :ref:`tracker `. 
If you have a suggestion on how to fix it, include that as well. +You can also open a discussion item on our +`Documentation Discourse forum `_. + If you're short on time, you can also email documentation bug reports to docs at python.org (behavioral bugs can be sent to python-list at python.org). 'docs@' is a mailing list run by volunteers; your request will be noticed, From webhook-mailer at python.org Mon Feb 6 17:10:30 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 06 Feb 2023 22:10:30 -0000 Subject: [Python-checkins] GH-101616: Mention the Docs Discourse forum in the "reporting docs issues" (GH-101617) Message-ID: https://github.com/python/cpython/commit/25196d6fb2c4d5d78b9b1c56207246d4de8dc83b commit: 25196d6fb2c4d5d78b9b1c56207246d4de8dc83b branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-06T14:10:23-08:00 summary: GH-101616: Mention the Docs Discourse forum in the "reporting docs issues" (GH-101617) Fixes https://github.com/python/cpython/issues/101616 (cherry picked from commit 949c58f945b93af5b7bb70c6448e940da669065d) Co-authored-by: Mariatta Wijaya files: M Doc/bugs.rst diff --git a/Doc/bugs.rst b/Doc/bugs.rst index 69d7c27410d5..4f30ef19ee4d 100644 --- a/Doc/bugs.rst +++ b/Doc/bugs.rst @@ -19,6 +19,9 @@ If you find a bug in this documentation or would like to propose an improvement, please submit a bug report on the :ref:`tracker `. If you have a suggestion on how to fix it, include that as well. +You can also open a discussion item on our +`Documentation Discourse forum `_. + If you're short on time, you can also email documentation bug reports to docs at python.org (behavioral bugs can be sent to python-list at python.org). 
'docs@' is a mailing list run by volunteers; your request will be noticed, From webhook-mailer at python.org Mon Feb 6 17:45:37 2023 From: webhook-mailer at python.org (iritkatriel) Date: Mon, 06 Feb 2023 22:45:37 -0000 Subject: [Python-checkins] gh-98831: rewrite COPY and SWAP in the instruction definition DSL (#101620) Message-ID: https://github.com/python/cpython/commit/38752760c91c87dd67af16d2cee611a22e647567 commit: 38752760c91c87dd67af16d2cee611a22e647567 branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-06T22:45:18Z summary: gh-98831: rewrite COPY and SWAP in the instruction definition DSL (#101620) files: M Python/bytecodes.c M Python/generated_cases.c.h M Python/opcode_metadata.h diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 8993567ac822..0fc0b3b8280f 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -3098,11 +3098,9 @@ dummy_func( PUSH(result); } - // stack effect: ( -- __0) - inst(COPY) { - assert(oparg != 0); - PyObject *peek = PEEK(oparg); - PUSH(Py_NewRef(peek)); + inst(COPY, (bottom, unused[oparg-1] -- bottom, unused[oparg-1], top)) { + assert(oparg > 0); + top = Py_NewRef(bottom); } inst(BINARY_OP, (unused/1, lhs, rhs -- res)) { @@ -3126,12 +3124,9 @@ dummy_func( ERROR_IF(res == NULL, error); } - // stack effect: ( -- ) - inst(SWAP) { - assert(oparg != 0); - PyObject *top = TOP(); - SET_TOP(PEEK(oparg)); - PEEK(oparg) = top; + inst(SWAP, (bottom, unused[oparg-2], top -- + top, unused[oparg-2], bottom)) { + assert(oparg >= 2); } inst(EXTENDED_ARG, (--)) { diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index e524bfcb99d4..f0f314a143c2 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -3723,9 +3723,12 @@ } TARGET(COPY) { - assert(oparg != 0); - PyObject *peek = PEEK(oparg); - PUSH(Py_NewRef(peek)); + PyObject *bottom = PEEK(1 + (oparg-1)); + PyObject *top; + assert(oparg > 0); + top = Py_NewRef(bottom); + STACK_GROW(1); + POKE(1, top); DISPATCH(); } @@ -3760,10 +3763,11 @@ } TARGET(SWAP) { - assert(oparg != 0); - PyObject *top = TOP(); - SET_TOP(PEEK(oparg)); - PEEK(oparg) = top; + PyObject *top = PEEK(1); + PyObject *bottom = PEEK(2 + (oparg-2)); + assert(oparg >= 2); + POKE(1, bottom); + POKE(2 + (oparg-2), top); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index 857526c35aa5..948d17519e27 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -333,11 +333,11 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case FORMAT_VALUE: return -1; case COPY: - return -1; + return (oparg-1) + 1; case BINARY_OP: return 2; case SWAP: - return -1; + return (oparg-2) + 2; case EXTENDED_ARG: return 0; case CACHE: @@ -679,11 +679,11 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case FORMAT_VALUE: return -1; case COPY: - return -1; + return (oparg-1) + 2; case BINARY_OP: return 1; case SWAP: - return -1; + return (oparg-2) + 2; case EXTENDED_ARG: return 0; case CACHE: From webhook-mailer at python.org Mon Feb 6 17:54:00 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Mon, 06 Feb 2023 22:54:00 -0000 Subject: [Python-checkins] gh-59956: Add a Test to Verify GILState Matches the "Current" Thread State (gh-101625) Message-ID: https://github.com/python/cpython/commit/914f8fd9f7fc5e48b54d938a68c932cc618ef3a6 commit: 914f8fd9f7fc5e48b54d938a68c932cc618ef3a6 branch: main author: Eric Snow committer: ericsnowcurrently 
date: 2023-02-06T15:53:31-07:00 summary: gh-59956: Add a Test to Verify GILState Matches the "Current" Thread State (gh-101625) This test should have been in gh-101431. https://github.com/python/cpython/issues/59956 files: M Lib/test/test_capi/test_misc.py M Modules/_testcapimodule.c diff --git a/Lib/test/test_capi/test_misc.py b/Lib/test/test_capi/test_misc.py index dace37c362e5..03e22d7a2d38 100644 --- a/Lib/test/test_capi/test_misc.py +++ b/Lib/test/test_capi/test_misc.py @@ -1413,6 +1413,9 @@ def callback(): ret = assert_python_ok('-X', 'tracemalloc', '-c', code) self.assertIn(b'callback called', ret.out) + def test_gilstate_matches_current(self): + _testcapi.test_current_tstate_matches() + class Test_testcapi(unittest.TestCase): locals().update((name, getattr(_testcapi, name)) diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index f0d6e404f54a..5e47f4975a2d 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -1534,6 +1534,42 @@ crash_no_current_thread(PyObject *self, PyObject *Py_UNUSED(ignored)) return NULL; } +/* Test that the GILState thread and the "current" thread match. */ +static PyObject * +test_current_tstate_matches(PyObject *self, PyObject *Py_UNUSED(ignored)) +{ + PyThreadState *orig_tstate = PyThreadState_Get(); + + if (orig_tstate != PyGILState_GetThisThreadState()) { + PyErr_SetString(PyExc_RuntimeError, + "current thread state doesn't match GILState"); + return NULL; + } + + const char *err = NULL; + PyThreadState_Swap(NULL); + PyThreadState *substate = Py_NewInterpreter(); + + if (substate != PyThreadState_Get()) { + err = "subinterpreter thread state not current"; + goto finally; + } + if (substate != PyGILState_GetThisThreadState()) { + err = "subinterpreter thread state doesn't match GILState"; + goto finally; + } + +finally: + Py_EndInterpreter(substate); + PyThreadState_Swap(orig_tstate); + + if (err != NULL) { + PyErr_SetString(PyExc_RuntimeError, err); + return NULL; + } + Py_RETURN_NONE; +} + /* To run some code in a sub-interpreter. */ static PyObject * run_in_subinterp(PyObject *self, PyObject *args) @@ -3354,6 +3390,7 @@ static PyMethodDef TestMethods[] = { {"make_memoryview_from_NULL_pointer", make_memoryview_from_NULL_pointer, METH_NOARGS}, {"crash_no_current_thread", crash_no_current_thread, METH_NOARGS}, + {"test_current_tstate_matches", test_current_tstate_matches, METH_NOARGS}, {"run_in_subinterp", run_in_subinterp, METH_VARARGS}, {"run_in_subinterp_with_config", _PyCFunction_CAST(run_in_subinterp_with_config), From webhook-mailer at python.org Mon Feb 6 21:11:13 2023 From: webhook-mailer at python.org (gpshead) Date: Tue, 07 Feb 2023 02:11:13 -0000 Subject: [Python-checkins] gh-99108: Replace SHA2-224 & 256 with verified code from HACL* (#99109) Message-ID: https://github.com/python/cpython/commit/1fcc0efdaa84b3602c236391633b70ff36df149b commit: 1fcc0efdaa84b3602c236391633b70ff36df149b branch: main author: Jonathan Protzenko committer: gpshead date: 2023-02-06T18:11:01-08:00 summary: gh-99108: Replace SHA2-224 & 256 with verified code from HACL* (#99109) replacing hashlib primitives (for the non-OpenSSL case) with verified implementations from HACL*. This is the first PR in the series, and focuses specifically on SHA2-256 and SHA2-224. This PR imports Hacl_Streaming_SHA2 into the Python tree. This is the HACL* implementation of SHA2, which combines a core implementation of SHA2 along with a layer of buffer management that allows updating the digest with any number of bytes. 
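For illustration only, here is a minimal sketch of the streaming usage pattern that this buffer-management layer provides. The function names are taken from the `Hacl_Streaming_SHA2.h` header added further below in this commit; build setup and error checking are omitted, so treat this as an editorial sketch rather than part of the change itself:

```c
#include <stdint.h>
#include <stdio.h>
#include "Hacl_Streaming_SHA2.h"

int main(void)
{
    uint8_t msg[] = "hello world";
    uint8_t digest[32];

    /* Allocate and initialize a SHA2-256 streaming state. */
    Hacl_Streaming_SHA2_state_sha2_256 *st = Hacl_Streaming_SHA2_create_in_256();

    /* Feed the message in arbitrary-sized chunks; the state buffers
       partial blocks internally. */
    Hacl_Streaming_SHA2_update_256(st, msg, 5);       /* "hello"  */
    Hacl_Streaming_SHA2_update_256(st, msg + 5, 6);   /* " world" */

    /* Write the 32-byte digest and release the state. */
    Hacl_Streaming_SHA2_finish_256(st, digest);
    Hacl_Streaming_SHA2_free_256(st);

    for (int i = 0; i < 32; i++) {
        printf("%02x", digest[i]);
    }
    printf("\n");
    return 0;
}
```

Per the doc comments in the header below, `update_256` returns non-zero only if the total input exceeds 2^61-1 bytes, and `finish_256` operates on an internal copy of the state, so the caller-held state remains valid for further updates.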
This supersedes the previous implementation in the tree. @franziskuskiefer was kind enough to benchmark the changes: in addition to being verified (thus providing significant safety and security improvements), this implementation also provides a sizeable performance boost! ``` --------------------------------------------------------------- Benchmark Time CPU Iterations --------------------------------------------------------------- Sha2_256_Streaming 3163 ns 3160 ns 219353 // this PR LibTomCrypt_Sha2_256 5057 ns 5056 ns 136234 // library used by Python currently ``` The changes in this PR are as follows: - import the subset of HACL* that covers SHA2-256/224 into `Modules/_hacl` - rewire sha256module.c to use the HACL* implementation Co-authored-by: Gregory P. Smith [Google LLC] Co-authored-by: Erlend E. Aasland files: A Misc/NEWS.d/next/Security/2022-11-08-12-06-52.gh-issue-99108.4Wrsuh.rst A Modules/_hacl/Hacl_Streaming_SHA2.c A Modules/_hacl/Hacl_Streaming_SHA2.h A Modules/_hacl/README.md A Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h A Modules/_hacl/include/krml/internal/target.h A Modules/_hacl/include/krml/lowstar_endianness.h A Modules/_hacl/include/python_hacl_namespaces.h A Modules/_hacl/internal/Hacl_SHA2_Generic.h A Modules/_hacl/refresh.sh M Lib/test/test_hashlib.py M Makefile.pre.in M Modules/Setup.stdlib.in M Modules/sha256module.c M PCbuild/pythoncore.vcxproj M PCbuild/pythoncore.vcxproj.filters M configure M configure.ac diff --git a/Lib/test/test_hashlib.py b/Lib/test/test_hashlib.py index 450dc4933f47..9c92b4e9c280 100644 --- a/Lib/test/test_hashlib.py +++ b/Lib/test/test_hashlib.py @@ -22,6 +22,7 @@ from test.support import _4G, bigmemtest from test.support.import_helper import import_fresh_module from test.support import os_helper +from test.support import requires_resource from test.support import threading_helper from test.support import warnings_helper from http.client import HTTPException @@ -354,6 +355,15 @@ def test_large_update(self): self.assertEqual(m1.digest(*args), m4_copy.digest(*args)) self.assertEqual(m4.digest(*args), m4_digest) + @requires_resource('cpu') + def test_sha256_update_over_4gb(self): + zero_1mb = b"\0" * 1024 * 1024 + h = hashlib.sha256() + for i in range(0, 4096): + h.update(zero_1mb) + h.update(b"hello world") + self.assertEqual(h.hexdigest(), "a5364f7a52ebe2e25f1838a4ca715a893b6fd7a23f2a0d9e9762120da8b1bf53") + def check(self, name, data, hexdigest, shake=False, **kwargs): length = len(hexdigest)//2 hexdigest = hexdigest.lower() diff --git a/Makefile.pre.in b/Makefile.pre.in index f2f9371e8ac0..3641c4eeebee 100644 --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -2612,7 +2612,7 @@ MODULE__HASHLIB_DEPS=$(srcdir)/Modules/hashlib.h MODULE__IO_DEPS=$(srcdir)/Modules/_io/_iomodule.h MODULE__MD5_DEPS=$(srcdir)/Modules/hashlib.h MODULE__SHA1_DEPS=$(srcdir)/Modules/hashlib.h -MODULE__SHA256_DEPS=$(srcdir)/Modules/hashlib.h +MODULE__SHA256_DEPS=$(srcdir)/Modules/hashlib.h $(srcdir)/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h $(srcdir)/Modules/_hacl/include/krml/lowstar_endianness.h $(srcdir)/Modules/_hacl/include/krml/internal/target.h $(srcdir)/Modules/_hacl/Hacl_Streaming_SHA2.h MODULE__SHA3_DEPS=$(srcdir)/Modules/_sha3/sha3.c $(srcdir)/Modules/_sha3/sha3.h $(srcdir)/Modules/hashlib.h MODULE__SHA512_DEPS=$(srcdir)/Modules/hashlib.h MODULE__SOCKET_DEPS=$(srcdir)/Modules/socketmodule.h $(srcdir)/Modules/addrinfo.h $(srcdir)/Modules/getaddrinfo.c $(srcdir)/Modules/getnameinfo.c diff --git 
a/Misc/NEWS.d/next/Security/2022-11-08-12-06-52.gh-issue-99108.4Wrsuh.rst b/Misc/NEWS.d/next/Security/2022-11-08-12-06-52.gh-issue-99108.4Wrsuh.rst new file mode 100644 index 000000000000..64acc09c482e --- /dev/null +++ b/Misc/NEWS.d/next/Security/2022-11-08-12-06-52.gh-issue-99108.4Wrsuh.rst @@ -0,0 +1,4 @@ +Replace the builtin :mod:`hashlib` implementations of SHA2-224 and SHA2-256 +originally from LibTomCrypt with formally verified, side-channel resistant +code from the `HACL* `_ project. The +builtins remain a fallback only used when OpenSSL does not provide them. diff --git a/Modules/Setup.stdlib.in b/Modules/Setup.stdlib.in index 3fd591c70d49..f72783810f94 100644 --- a/Modules/Setup.stdlib.in +++ b/Modules/Setup.stdlib.in @@ -79,7 +79,7 @@ # hashing builtins, can be disabled with --without-builtin-hashlib-hashes @MODULE__MD5_TRUE at _md5 md5module.c @MODULE__SHA1_TRUE at _sha1 sha1module.c - at MODULE__SHA256_TRUE@_sha256 sha256module.c + at MODULE__SHA256_TRUE@_sha256 sha256module.c _hacl/Hacl_Streaming_SHA2.c @MODULE__SHA512_TRUE at _sha512 sha512module.c @MODULE__SHA3_TRUE at _sha3 _sha3/sha3module.c @MODULE__BLAKE2_TRUE at _blake2 _blake2/blake2module.c _blake2/blake2b_impl.c _blake2/blake2s_impl.c diff --git a/Modules/_hacl/Hacl_Streaming_SHA2.c b/Modules/_hacl/Hacl_Streaming_SHA2.c new file mode 100644 index 000000000000..84566571792a --- /dev/null +++ b/Modules/_hacl/Hacl_Streaming_SHA2.c @@ -0,0 +1,682 @@ +/* MIT License + * + * Copyright (c) 2016-2022 INRIA, CMU and Microsoft Corporation + * Copyright (c) 2022-2023 HACL* Contributors + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. 
+ */ + + +#include "Hacl_Streaming_SHA2.h" + +#include "internal/Hacl_SHA2_Generic.h" + + +static inline void sha256_init(uint32_t *hash) +{ + KRML_MAYBE_FOR8(i, + (uint32_t)0U, + (uint32_t)8U, + (uint32_t)1U, + uint32_t *os = hash; + uint32_t x = Hacl_Impl_SHA2_Generic_h256[i]; + os[i] = x;); +} + +static inline void sha256_update0(uint8_t *b, uint32_t *hash) +{ + uint32_t hash_old[8U] = { 0U }; + uint32_t ws[16U] = { 0U }; + memcpy(hash_old, hash, (uint32_t)8U * sizeof (uint32_t)); + uint8_t *b10 = b; + uint32_t u = load32_be(b10); + ws[0U] = u; + uint32_t u0 = load32_be(b10 + (uint32_t)4U); + ws[1U] = u0; + uint32_t u1 = load32_be(b10 + (uint32_t)8U); + ws[2U] = u1; + uint32_t u2 = load32_be(b10 + (uint32_t)12U); + ws[3U] = u2; + uint32_t u3 = load32_be(b10 + (uint32_t)16U); + ws[4U] = u3; + uint32_t u4 = load32_be(b10 + (uint32_t)20U); + ws[5U] = u4; + uint32_t u5 = load32_be(b10 + (uint32_t)24U); + ws[6U] = u5; + uint32_t u6 = load32_be(b10 + (uint32_t)28U); + ws[7U] = u6; + uint32_t u7 = load32_be(b10 + (uint32_t)32U); + ws[8U] = u7; + uint32_t u8 = load32_be(b10 + (uint32_t)36U); + ws[9U] = u8; + uint32_t u9 = load32_be(b10 + (uint32_t)40U); + ws[10U] = u9; + uint32_t u10 = load32_be(b10 + (uint32_t)44U); + ws[11U] = u10; + uint32_t u11 = load32_be(b10 + (uint32_t)48U); + ws[12U] = u11; + uint32_t u12 = load32_be(b10 + (uint32_t)52U); + ws[13U] = u12; + uint32_t u13 = load32_be(b10 + (uint32_t)56U); + ws[14U] = u13; + uint32_t u14 = load32_be(b10 + (uint32_t)60U); + ws[15U] = u14; + KRML_MAYBE_FOR4(i0, + (uint32_t)0U, + (uint32_t)4U, + (uint32_t)1U, + KRML_MAYBE_FOR16(i, + (uint32_t)0U, + (uint32_t)16U, + (uint32_t)1U, + uint32_t k_t = Hacl_Impl_SHA2_Generic_k224_256[(uint32_t)16U * i0 + i]; + uint32_t ws_t = ws[i]; + uint32_t a0 = hash[0U]; + uint32_t b0 = hash[1U]; + uint32_t c0 = hash[2U]; + uint32_t d0 = hash[3U]; + uint32_t e0 = hash[4U]; + uint32_t f0 = hash[5U]; + uint32_t g0 = hash[6U]; + uint32_t h02 = hash[7U]; + uint32_t k_e_t = k_t; + uint32_t + t1 = + h02 + + + ((e0 << (uint32_t)26U | e0 >> (uint32_t)6U) + ^ + ((e0 << (uint32_t)21U | e0 >> (uint32_t)11U) + ^ (e0 << (uint32_t)7U | e0 >> (uint32_t)25U))) + + ((e0 & f0) ^ (~e0 & g0)) + + k_e_t + + ws_t; + uint32_t + t2 = + ((a0 << (uint32_t)30U | a0 >> (uint32_t)2U) + ^ + ((a0 << (uint32_t)19U | a0 >> (uint32_t)13U) + ^ (a0 << (uint32_t)10U | a0 >> (uint32_t)22U))) + + ((a0 & b0) ^ ((a0 & c0) ^ (b0 & c0))); + uint32_t a1 = t1 + t2; + uint32_t b1 = a0; + uint32_t c1 = b0; + uint32_t d1 = c0; + uint32_t e1 = d0 + t1; + uint32_t f1 = e0; + uint32_t g1 = f0; + uint32_t h12 = g0; + hash[0U] = a1; + hash[1U] = b1; + hash[2U] = c1; + hash[3U] = d1; + hash[4U] = e1; + hash[5U] = f1; + hash[6U] = g1; + hash[7U] = h12;); + if (i0 < (uint32_t)3U) + { + KRML_MAYBE_FOR16(i, + (uint32_t)0U, + (uint32_t)16U, + (uint32_t)1U, + uint32_t t16 = ws[i]; + uint32_t t15 = ws[(i + (uint32_t)1U) % (uint32_t)16U]; + uint32_t t7 = ws[(i + (uint32_t)9U) % (uint32_t)16U]; + uint32_t t2 = ws[(i + (uint32_t)14U) % (uint32_t)16U]; + uint32_t + s1 = + (t2 << (uint32_t)15U | t2 >> (uint32_t)17U) + ^ ((t2 << (uint32_t)13U | t2 >> (uint32_t)19U) ^ t2 >> (uint32_t)10U); + uint32_t + s0 = + (t15 << (uint32_t)25U | t15 >> (uint32_t)7U) + ^ ((t15 << (uint32_t)14U | t15 >> (uint32_t)18U) ^ t15 >> (uint32_t)3U); + ws[i] = s1 + t7 + s0 + t16;); + }); + KRML_MAYBE_FOR8(i, + (uint32_t)0U, + (uint32_t)8U, + (uint32_t)1U, + uint32_t *os = hash; + uint32_t x = hash[i] + hash_old[i]; + os[i] = x;); +} + +static inline void sha256_update_nblocks(uint32_t len, uint8_t *b, 
uint32_t *st) +{ + uint32_t blocks = len / (uint32_t)64U; + for (uint32_t i = (uint32_t)0U; i < blocks; i++) + { + uint8_t *b0 = b; + uint8_t *mb = b0 + i * (uint32_t)64U; + sha256_update0(mb, st); + } +} + +static inline void +sha256_update_last(uint64_t totlen, uint32_t len, uint8_t *b, uint32_t *hash) +{ + uint32_t blocks; + if (len + (uint32_t)8U + (uint32_t)1U <= (uint32_t)64U) + { + blocks = (uint32_t)1U; + } + else + { + blocks = (uint32_t)2U; + } + uint32_t fin = blocks * (uint32_t)64U; + uint8_t last[128U] = { 0U }; + uint8_t totlen_buf[8U] = { 0U }; + uint64_t total_len_bits = totlen << (uint32_t)3U; + store64_be(totlen_buf, total_len_bits); + uint8_t *b0 = b; + memcpy(last, b0, len * sizeof (uint8_t)); + last[len] = (uint8_t)0x80U; + memcpy(last + fin - (uint32_t)8U, totlen_buf, (uint32_t)8U * sizeof (uint8_t)); + uint8_t *last00 = last; + uint8_t *last10 = last + (uint32_t)64U; + uint8_t *l0 = last00; + uint8_t *l1 = last10; + uint8_t *lb0 = l0; + uint8_t *lb1 = l1; + uint8_t *last0 = lb0; + uint8_t *last1 = lb1; + sha256_update0(last0, hash); + if (blocks > (uint32_t)1U) + { + sha256_update0(last1, hash); + return; + } +} + +static inline void sha256_finish(uint32_t *st, uint8_t *h) +{ + uint8_t hbuf[32U] = { 0U }; + KRML_MAYBE_FOR8(i, + (uint32_t)0U, + (uint32_t)8U, + (uint32_t)1U, + store32_be(hbuf + i * (uint32_t)4U, st[i]);); + memcpy(h, hbuf, (uint32_t)32U * sizeof (uint8_t)); +} + +static inline void sha224_init(uint32_t *hash) +{ + KRML_MAYBE_FOR8(i, + (uint32_t)0U, + (uint32_t)8U, + (uint32_t)1U, + uint32_t *os = hash; + uint32_t x = Hacl_Impl_SHA2_Generic_h224[i]; + os[i] = x;); +} + +static inline void sha224_update_nblocks(uint32_t len, uint8_t *b, uint32_t *st) +{ + sha256_update_nblocks(len, b, st); +} + +static void sha224_update_last(uint64_t totlen, uint32_t len, uint8_t *b, uint32_t *st) +{ + sha256_update_last(totlen, len, b, st); +} + +static inline void sha224_finish(uint32_t *st, uint8_t *h) +{ + uint8_t hbuf[32U] = { 0U }; + KRML_MAYBE_FOR8(i, + (uint32_t)0U, + (uint32_t)8U, + (uint32_t)1U, + store32_be(hbuf + i * (uint32_t)4U, st[i]);); + memcpy(h, hbuf, (uint32_t)28U * sizeof (uint8_t)); +} + +/** +Allocate initial state for the SHA2_256 hash. The state is to be freed by +calling `free_256`. +*/ +Hacl_Streaming_SHA2_state_sha2_224 *Hacl_Streaming_SHA2_create_in_256(void) +{ + uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)64U, sizeof (uint8_t)); + uint32_t *block_state = (uint32_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint32_t)); + Hacl_Streaming_SHA2_state_sha2_224 + s = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + Hacl_Streaming_SHA2_state_sha2_224 + *p = + (Hacl_Streaming_SHA2_state_sha2_224 *)KRML_HOST_MALLOC(sizeof ( + Hacl_Streaming_SHA2_state_sha2_224 + )); + p[0U] = s; + sha256_init(block_state); + return p; +} + +/** +Copies the state passed as argument into a newly allocated state (deep copy). +The state is to be freed by calling `free_256`. Cloning the state this way is +useful, for instance, if your control-flow diverges and you need to feed +more (different) data into the hash in each branch. 
+*/ +Hacl_Streaming_SHA2_state_sha2_224 +*Hacl_Streaming_SHA2_copy_256(Hacl_Streaming_SHA2_state_sha2_224 *s0) +{ + Hacl_Streaming_SHA2_state_sha2_224 scrut = *s0; + uint32_t *block_state0 = scrut.block_state; + uint8_t *buf0 = scrut.buf; + uint64_t total_len0 = scrut.total_len; + uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)64U, sizeof (uint8_t)); + memcpy(buf, buf0, (uint32_t)64U * sizeof (uint8_t)); + uint32_t *block_state = (uint32_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint32_t)); + memcpy(block_state, block_state0, (uint32_t)8U * sizeof (uint32_t)); + Hacl_Streaming_SHA2_state_sha2_224 + s = { .block_state = block_state, .buf = buf, .total_len = total_len0 }; + Hacl_Streaming_SHA2_state_sha2_224 + *p = + (Hacl_Streaming_SHA2_state_sha2_224 *)KRML_HOST_MALLOC(sizeof ( + Hacl_Streaming_SHA2_state_sha2_224 + )); + p[0U] = s; + return p; +} + +/** +Reset an existing state to the initial hash state with empty data. +*/ +void Hacl_Streaming_SHA2_init_256(Hacl_Streaming_SHA2_state_sha2_224 *s) +{ + Hacl_Streaming_SHA2_state_sha2_224 scrut = *s; + uint8_t *buf = scrut.buf; + uint32_t *block_state = scrut.block_state; + sha256_init(block_state); + Hacl_Streaming_SHA2_state_sha2_224 + tmp = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + s[0U] = tmp; +} + +static inline uint32_t +update_224_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *data, uint32_t len) +{ + Hacl_Streaming_SHA2_state_sha2_224 s = *p; + uint64_t total_len = s.total_len; + if ((uint64_t)len > (uint64_t)2305843009213693951U - total_len) + { + return (uint32_t)1U; + } + uint32_t sz; + if (total_len % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len > (uint64_t)0U) + { + sz = (uint32_t)64U; + } + else + { + sz = (uint32_t)(total_len % (uint64_t)(uint32_t)64U); + } + if (len <= (uint32_t)64U - sz) + { + Hacl_Streaming_SHA2_state_sha2_224 s1 = *p; + uint32_t *block_state1 = s1.block_state; + uint8_t *buf = s1.buf; + uint64_t total_len1 = s1.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)64U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)64U); + } + uint8_t *buf2 = buf + sz1; + memcpy(buf2, data, len * sizeof (uint8_t)); + uint64_t total_len2 = total_len1 + (uint64_t)len; + *p + = + ( + (Hacl_Streaming_SHA2_state_sha2_224){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len2 + } + ); + } + else if (sz == (uint32_t)0U) + { + Hacl_Streaming_SHA2_state_sha2_224 s1 = *p; + uint32_t *block_state1 = s1.block_state; + uint8_t *buf = s1.buf; + uint64_t total_len1 = s1.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)64U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)64U); + } + if (!(sz1 == (uint32_t)0U)) + { + sha256_update_nblocks((uint32_t)64U, buf, block_state1); + } + uint32_t ite; + if ((uint64_t)len % (uint64_t)(uint32_t)64U == (uint64_t)0U && (uint64_t)len > (uint64_t)0U) + { + ite = (uint32_t)64U; + } + else + { + ite = (uint32_t)((uint64_t)len % (uint64_t)(uint32_t)64U); + } + uint32_t n_blocks = (len - ite) / (uint32_t)64U; + uint32_t data1_len = n_blocks * (uint32_t)64U; + uint32_t data2_len = len - data1_len; + uint8_t *data1 = data; + uint8_t *data2 = data + data1_len; + sha256_update_nblocks(data1_len, data1, block_state1); + uint8_t *dst = buf; + memcpy(dst, data2, data2_len * sizeof (uint8_t)); + *p + = + ( + 
(Hacl_Streaming_SHA2_state_sha2_224){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len1 + (uint64_t)len + } + ); + } + else + { + uint32_t diff = (uint32_t)64U - sz; + uint8_t *data1 = data; + uint8_t *data2 = data + diff; + Hacl_Streaming_SHA2_state_sha2_224 s1 = *p; + uint32_t *block_state10 = s1.block_state; + uint8_t *buf0 = s1.buf; + uint64_t total_len10 = s1.total_len; + uint32_t sz10; + if (total_len10 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len10 > (uint64_t)0U) + { + sz10 = (uint32_t)64U; + } + else + { + sz10 = (uint32_t)(total_len10 % (uint64_t)(uint32_t)64U); + } + uint8_t *buf2 = buf0 + sz10; + memcpy(buf2, data1, diff * sizeof (uint8_t)); + uint64_t total_len2 = total_len10 + (uint64_t)diff; + *p + = + ( + (Hacl_Streaming_SHA2_state_sha2_224){ + .block_state = block_state10, + .buf = buf0, + .total_len = total_len2 + } + ); + Hacl_Streaming_SHA2_state_sha2_224 s10 = *p; + uint32_t *block_state1 = s10.block_state; + uint8_t *buf = s10.buf; + uint64_t total_len1 = s10.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)64U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)64U); + } + if (!(sz1 == (uint32_t)0U)) + { + sha256_update_nblocks((uint32_t)64U, buf, block_state1); + } + uint32_t ite; + if + ( + (uint64_t)(len - diff) + % (uint64_t)(uint32_t)64U + == (uint64_t)0U + && (uint64_t)(len - diff) > (uint64_t)0U + ) + { + ite = (uint32_t)64U; + } + else + { + ite = (uint32_t)((uint64_t)(len - diff) % (uint64_t)(uint32_t)64U); + } + uint32_t n_blocks = (len - diff - ite) / (uint32_t)64U; + uint32_t data1_len = n_blocks * (uint32_t)64U; + uint32_t data2_len = len - diff - data1_len; + uint8_t *data11 = data2; + uint8_t *data21 = data2 + data1_len; + sha256_update_nblocks(data1_len, data11, block_state1); + uint8_t *dst = buf; + memcpy(dst, data21, data2_len * sizeof (uint8_t)); + *p + = + ( + (Hacl_Streaming_SHA2_state_sha2_224){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len1 + (uint64_t)(len - diff) + } + ); + } + return (uint32_t)0U; +} + +/** +Feed an arbitrary amount of data into the hash. This function returns 0 for +success, or 1 if the combined length of all of the data passed to `update_256` +(since the last call to `init_256`) exceeds 2^61-1 bytes. + +This function is identical to the update function for SHA2_224. +*/ +uint32_t +Hacl_Streaming_SHA2_update_256( + Hacl_Streaming_SHA2_state_sha2_224 *p, + uint8_t *input, + uint32_t input_len +) +{ + return update_224_256(p, input, input_len); +} + +/** +Write the resulting hash into `dst`, an array of 32 bytes. The state remains +valid after a call to `finish_256`, meaning the user may feed more data into +the hash via `update_256`. (The finish_256 function operates on an internal copy of +the state and therefore does not invalidate the client-held state `p`.) 
+*/ +void Hacl_Streaming_SHA2_finish_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *dst) +{ + Hacl_Streaming_SHA2_state_sha2_224 scrut = *p; + uint32_t *block_state = scrut.block_state; + uint8_t *buf_ = scrut.buf; + uint64_t total_len = scrut.total_len; + uint32_t r; + if (total_len % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len > (uint64_t)0U) + { + r = (uint32_t)64U; + } + else + { + r = (uint32_t)(total_len % (uint64_t)(uint32_t)64U); + } + uint8_t *buf_1 = buf_; + uint32_t tmp_block_state[8U] = { 0U }; + memcpy(tmp_block_state, block_state, (uint32_t)8U * sizeof (uint32_t)); + uint32_t ite; + if (r % (uint32_t)64U == (uint32_t)0U && r > (uint32_t)0U) + { + ite = (uint32_t)64U; + } + else + { + ite = r % (uint32_t)64U; + } + uint8_t *buf_last = buf_1 + r - ite; + uint8_t *buf_multi = buf_1; + sha256_update_nblocks((uint32_t)0U, buf_multi, tmp_block_state); + uint64_t prev_len_last = total_len - (uint64_t)r; + sha256_update_last(prev_len_last + (uint64_t)r, r, buf_last, tmp_block_state); + sha256_finish(tmp_block_state, dst); +} + +/** +Free a state allocated with `create_in_256`. + +This function is identical to the free function for SHA2_224. +*/ +void Hacl_Streaming_SHA2_free_256(Hacl_Streaming_SHA2_state_sha2_224 *s) +{ + Hacl_Streaming_SHA2_state_sha2_224 scrut = *s; + uint8_t *buf = scrut.buf; + uint32_t *block_state = scrut.block_state; + KRML_HOST_FREE(block_state); + KRML_HOST_FREE(buf); + KRML_HOST_FREE(s); +} + +/** +Hash `input`, of len `input_len`, into `dst`, an array of 32 bytes. +*/ +void Hacl_Streaming_SHA2_sha256(uint8_t *input, uint32_t input_len, uint8_t *dst) +{ + uint8_t *ib = input; + uint8_t *rb = dst; + uint32_t st[8U] = { 0U }; + sha256_init(st); + uint32_t rem = input_len % (uint32_t)64U; + uint64_t len_ = (uint64_t)input_len; + sha256_update_nblocks(input_len, ib, st); + uint32_t rem1 = input_len % (uint32_t)64U; + uint8_t *b0 = ib; + uint8_t *lb = b0 + input_len - rem1; + sha256_update_last(len_, rem, lb, st); + sha256_finish(st, rb); +} + +Hacl_Streaming_SHA2_state_sha2_224 *Hacl_Streaming_SHA2_create_in_224(void) +{ + uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)64U, sizeof (uint8_t)); + uint32_t *block_state = (uint32_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint32_t)); + Hacl_Streaming_SHA2_state_sha2_224 + s = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + Hacl_Streaming_SHA2_state_sha2_224 + *p = + (Hacl_Streaming_SHA2_state_sha2_224 *)KRML_HOST_MALLOC(sizeof ( + Hacl_Streaming_SHA2_state_sha2_224 + )); + p[0U] = s; + sha224_init(block_state); + return p; +} + +void Hacl_Streaming_SHA2_init_224(Hacl_Streaming_SHA2_state_sha2_224 *s) +{ + Hacl_Streaming_SHA2_state_sha2_224 scrut = *s; + uint8_t *buf = scrut.buf; + uint32_t *block_state = scrut.block_state; + sha224_init(block_state); + Hacl_Streaming_SHA2_state_sha2_224 + tmp = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + s[0U] = tmp; +} + +uint32_t +Hacl_Streaming_SHA2_update_224( + Hacl_Streaming_SHA2_state_sha2_224 *p, + uint8_t *input, + uint32_t input_len +) +{ + return update_224_256(p, input, input_len); +} + +/** +Write the resulting hash into `dst`, an array of 28 bytes. The state remains +valid after a call to `finish_224`, meaning the user may feed more data into +the hash via `update_224`. 
+*/ +void Hacl_Streaming_SHA2_finish_224(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *dst) +{ + Hacl_Streaming_SHA2_state_sha2_224 scrut = *p; + uint32_t *block_state = scrut.block_state; + uint8_t *buf_ = scrut.buf; + uint64_t total_len = scrut.total_len; + uint32_t r; + if (total_len % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len > (uint64_t)0U) + { + r = (uint32_t)64U; + } + else + { + r = (uint32_t)(total_len % (uint64_t)(uint32_t)64U); + } + uint8_t *buf_1 = buf_; + uint32_t tmp_block_state[8U] = { 0U }; + memcpy(tmp_block_state, block_state, (uint32_t)8U * sizeof (uint32_t)); + uint32_t ite; + if (r % (uint32_t)64U == (uint32_t)0U && r > (uint32_t)0U) + { + ite = (uint32_t)64U; + } + else + { + ite = r % (uint32_t)64U; + } + uint8_t *buf_last = buf_1 + r - ite; + uint8_t *buf_multi = buf_1; + sha224_update_nblocks((uint32_t)0U, buf_multi, tmp_block_state); + uint64_t prev_len_last = total_len - (uint64_t)r; + sha224_update_last(prev_len_last + (uint64_t)r, r, buf_last, tmp_block_state); + sha224_finish(tmp_block_state, dst); +} + +void Hacl_Streaming_SHA2_free_224(Hacl_Streaming_SHA2_state_sha2_224 *p) +{ + Hacl_Streaming_SHA2_free_256(p); +} + +/** +Hash `input`, of len `input_len`, into `dst`, an array of 28 bytes. +*/ +void Hacl_Streaming_SHA2_sha224(uint8_t *input, uint32_t input_len, uint8_t *dst) +{ + uint8_t *ib = input; + uint8_t *rb = dst; + uint32_t st[8U] = { 0U }; + sha224_init(st); + uint32_t rem = input_len % (uint32_t)64U; + uint64_t len_ = (uint64_t)input_len; + sha224_update_nblocks(input_len, ib, st); + uint32_t rem1 = input_len % (uint32_t)64U; + uint8_t *b0 = ib; + uint8_t *lb = b0 + input_len - rem1; + sha224_update_last(len_, rem, lb, st); + sha224_finish(st, rb); +} + diff --git a/Modules/_hacl/Hacl_Streaming_SHA2.h b/Modules/_hacl/Hacl_Streaming_SHA2.h new file mode 100644 index 000000000000..c83a835afe70 --- /dev/null +++ b/Modules/_hacl/Hacl_Streaming_SHA2.h @@ -0,0 +1,136 @@ +/* MIT License + * + * Copyright (c) 2016-2022 INRIA, CMU and Microsoft Corporation + * Copyright (c) 2022-2023 HACL* Contributors + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. 
+ */ + + +#ifndef __Hacl_Streaming_SHA2_H +#define __Hacl_Streaming_SHA2_H + +#if defined(__cplusplus) +extern "C" { +#endif + +#include +#include "python_hacl_namespaces.h" +#include "krml/FStar_UInt_8_16_32_64.h" +#include "krml/lowstar_endianness.h" +#include "krml/internal/target.h" + + + + +typedef struct Hacl_Streaming_SHA2_state_sha2_224_s +{ + uint32_t *block_state; + uint8_t *buf; + uint64_t total_len; +} +Hacl_Streaming_SHA2_state_sha2_224; + +typedef Hacl_Streaming_SHA2_state_sha2_224 Hacl_Streaming_SHA2_state_sha2_256; + +/** +Allocate initial state for the SHA2_256 hash. The state is to be freed by +calling `free_256`. +*/ +Hacl_Streaming_SHA2_state_sha2_224 *Hacl_Streaming_SHA2_create_in_256(void); + +/** +Copies the state passed as argument into a newly allocated state (deep copy). +The state is to be freed by calling `free_256`. Cloning the state this way is +useful, for instance, if your control-flow diverges and you need to feed +more (different) data into the hash in each branch. +*/ +Hacl_Streaming_SHA2_state_sha2_224 +*Hacl_Streaming_SHA2_copy_256(Hacl_Streaming_SHA2_state_sha2_224 *s0); + +/** +Reset an existing state to the initial hash state with empty data. +*/ +void Hacl_Streaming_SHA2_init_256(Hacl_Streaming_SHA2_state_sha2_224 *s); + +/** +Feed an arbitrary amount of data into the hash. This function returns 0 for +success, or 1 if the combined length of all of the data passed to `update_256` +(since the last call to `init_256`) exceeds 2^61-1 bytes. + +This function is identical to the update function for SHA2_224. +*/ +uint32_t +Hacl_Streaming_SHA2_update_256( + Hacl_Streaming_SHA2_state_sha2_224 *p, + uint8_t *input, + uint32_t input_len +); + +/** +Write the resulting hash into `dst`, an array of 32 bytes. The state remains +valid after a call to `finish_256`, meaning the user may feed more data into +the hash via `update_256`. (The finish_256 function operates on an internal copy of +the state and therefore does not invalidate the client-held state `p`.) +*/ +void Hacl_Streaming_SHA2_finish_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *dst); + +/** +Free a state allocated with `create_in_256`. + +This function is identical to the free function for SHA2_224. +*/ +void Hacl_Streaming_SHA2_free_256(Hacl_Streaming_SHA2_state_sha2_224 *s); + +/** +Hash `input`, of len `input_len`, into `dst`, an array of 32 bytes. +*/ +void Hacl_Streaming_SHA2_sha256(uint8_t *input, uint32_t input_len, uint8_t *dst); + +Hacl_Streaming_SHA2_state_sha2_224 *Hacl_Streaming_SHA2_create_in_224(void); + +void Hacl_Streaming_SHA2_init_224(Hacl_Streaming_SHA2_state_sha2_224 *s); + +uint32_t +Hacl_Streaming_SHA2_update_224( + Hacl_Streaming_SHA2_state_sha2_224 *p, + uint8_t *input, + uint32_t input_len +); + +/** +Write the resulting hash into `dst`, an array of 28 bytes. The state remains +valid after a call to `finish_224`, meaning the user may feed more data into +the hash via `update_224`. +*/ +void Hacl_Streaming_SHA2_finish_224(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *dst); + +void Hacl_Streaming_SHA2_free_224(Hacl_Streaming_SHA2_state_sha2_224 *p); + +/** +Hash `input`, of len `input_len`, into `dst`, an array of 28 bytes. 
+*/ +void Hacl_Streaming_SHA2_sha224(uint8_t *input, uint32_t input_len, uint8_t *dst); + +#if defined(__cplusplus) +} +#endif + +#define __Hacl_Streaming_SHA2_H_DEFINED +#endif diff --git a/Modules/_hacl/README.md b/Modules/_hacl/README.md new file mode 100644 index 000000000000..e6a156a54b3c --- /dev/null +++ b/Modules/_hacl/README.md @@ -0,0 +1,29 @@ +# Algorithm implementations used by the `hashlib` module. + +This code comes from the +[HACL\*](https://github.com/hacl-star/hacl-star/) project. + +HACL\* is a cryptographic library that has been formally verified for memory +safety, functional correctness, and secret independence. + +## Updating HACL* + +Use the `refresh.sh` script in this directory to pull in a new upstream code +version. The upstream git hash used for the most recent code pull is recorded +in the script. Modify the script as needed to bring in more if changes are +needed based on upstream code refactoring. + +Never manually edit HACL\* files. Always add transformation shell code to the +`refresh.sh` script to perform any necessary edits. If there are serious code +changes needed, work with the upstream repository. + +## Local files + +1. `./include/python_hacl_namespaces.h` +1. `./README.md` +1. `./refresh.sh` + +## ACKS + +* Jonathan Protzenko aka [@msprotz on Github](https://github.com/msprotz) +contributed our HACL\* based builtin code. diff --git a/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h b/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h new file mode 100644 index 000000000000..3e2e4b32b22f --- /dev/null +++ b/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h @@ -0,0 +1,109 @@ +/* + Copyright (c) INRIA and Microsoft Corporation. All rights reserved. + Licensed under the Apache 2.0 License. +*/ + + +#ifndef __FStar_UInt_8_16_32_64_H +#define __FStar_UInt_8_16_32_64_H + + + + +#include +#include + +#include "krml/lowstar_endianness.h" +#include "krml/FStar_UInt_8_16_32_64.h" +#include "krml/internal/target.h" +static inline uint64_t FStar_UInt64_eq_mask(uint64_t a, uint64_t b) +{ + uint64_t x = a ^ b; + uint64_t minus_x = ~x + (uint64_t)1U; + uint64_t x_or_minus_x = x | minus_x; + uint64_t xnx = x_or_minus_x >> (uint32_t)63U; + return xnx - (uint64_t)1U; +} + +static inline uint64_t FStar_UInt64_gte_mask(uint64_t a, uint64_t b) +{ + uint64_t x = a; + uint64_t y = b; + uint64_t x_xor_y = x ^ y; + uint64_t x_sub_y = x - y; + uint64_t x_sub_y_xor_y = x_sub_y ^ y; + uint64_t q = x_xor_y | x_sub_y_xor_y; + uint64_t x_xor_q = x ^ q; + uint64_t x_xor_q_ = x_xor_q >> (uint32_t)63U; + return x_xor_q_ - (uint64_t)1U; +} + +static inline uint32_t FStar_UInt32_eq_mask(uint32_t a, uint32_t b) +{ + uint32_t x = a ^ b; + uint32_t minus_x = ~x + (uint32_t)1U; + uint32_t x_or_minus_x = x | minus_x; + uint32_t xnx = x_or_minus_x >> (uint32_t)31U; + return xnx - (uint32_t)1U; +} + +static inline uint32_t FStar_UInt32_gte_mask(uint32_t a, uint32_t b) +{ + uint32_t x = a; + uint32_t y = b; + uint32_t x_xor_y = x ^ y; + uint32_t x_sub_y = x - y; + uint32_t x_sub_y_xor_y = x_sub_y ^ y; + uint32_t q = x_xor_y | x_sub_y_xor_y; + uint32_t x_xor_q = x ^ q; + uint32_t x_xor_q_ = x_xor_q >> (uint32_t)31U; + return x_xor_q_ - (uint32_t)1U; +} + +static inline uint16_t FStar_UInt16_eq_mask(uint16_t a, uint16_t b) +{ + uint16_t x = a ^ b; + uint16_t minus_x = ~x + (uint16_t)1U; + uint16_t x_or_minus_x = x | minus_x; + uint16_t xnx = x_or_minus_x >> (uint32_t)15U; + return xnx - (uint16_t)1U; +} + +static inline uint16_t FStar_UInt16_gte_mask(uint16_t a, uint16_t b) +{ + uint16_t x = 
a; + uint16_t y = b; + uint16_t x_xor_y = x ^ y; + uint16_t x_sub_y = x - y; + uint16_t x_sub_y_xor_y = x_sub_y ^ y; + uint16_t q = x_xor_y | x_sub_y_xor_y; + uint16_t x_xor_q = x ^ q; + uint16_t x_xor_q_ = x_xor_q >> (uint32_t)15U; + return x_xor_q_ - (uint16_t)1U; +} + +static inline uint8_t FStar_UInt8_eq_mask(uint8_t a, uint8_t b) +{ + uint8_t x = a ^ b; + uint8_t minus_x = ~x + (uint8_t)1U; + uint8_t x_or_minus_x = x | minus_x; + uint8_t xnx = x_or_minus_x >> (uint32_t)7U; + return xnx - (uint8_t)1U; +} + +static inline uint8_t FStar_UInt8_gte_mask(uint8_t a, uint8_t b) +{ + uint8_t x = a; + uint8_t y = b; + uint8_t x_xor_y = x ^ y; + uint8_t x_sub_y = x - y; + uint8_t x_sub_y_xor_y = x_sub_y ^ y; + uint8_t q = x_xor_y | x_sub_y_xor_y; + uint8_t x_xor_q = x ^ q; + uint8_t x_xor_q_ = x_xor_q >> (uint32_t)7U; + return x_xor_q_ - (uint8_t)1U; +} + + +#define __FStar_UInt_8_16_32_64_H_DEFINED +#endif diff --git a/Modules/_hacl/include/krml/internal/target.h b/Modules/_hacl/include/krml/internal/target.h new file mode 100644 index 000000000000..9ef59859a554 --- /dev/null +++ b/Modules/_hacl/include/krml/internal/target.h @@ -0,0 +1,218 @@ +/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. + Licensed under the Apache 2.0 License. */ + +#ifndef __KRML_TARGET_H +#define __KRML_TARGET_H + +#include +#include +#include +#include +#include +#include +#include + +/* Since KaRaMeL emits the inline keyword unconditionally, we follow the + * guidelines at https://gcc.gnu.org/onlinedocs/gcc/Inline.html and make this + * __inline__ to ensure the code compiles with -std=c90 and earlier. */ +#ifdef __GNUC__ +# define inline __inline__ +#endif + +#ifndef KRML_HOST_MALLOC +# define KRML_HOST_MALLOC malloc +#endif + +#ifndef KRML_HOST_CALLOC +# define KRML_HOST_CALLOC calloc +#endif + +#ifndef KRML_HOST_FREE +# define KRML_HOST_FREE free +#endif + +/* Macros for prettier unrolling of loops */ +#define KRML_LOOP1(i, n, x) { \ + x \ + i += n; \ +} + +#define KRML_LOOP2(i, n, x) \ + KRML_LOOP1(i, n, x) \ + KRML_LOOP1(i, n, x) + +#define KRML_LOOP3(i, n, x) \ + KRML_LOOP2(i, n, x) \ + KRML_LOOP1(i, n, x) + +#define KRML_LOOP4(i, n, x) \ + KRML_LOOP2(i, n, x) \ + KRML_LOOP2(i, n, x) + +#define KRML_LOOP5(i, n, x) \ + KRML_LOOP4(i, n, x) \ + KRML_LOOP1(i, n, x) + +#define KRML_LOOP6(i, n, x) \ + KRML_LOOP4(i, n, x) \ + KRML_LOOP2(i, n, x) + +#define KRML_LOOP7(i, n, x) \ + KRML_LOOP4(i, n, x) \ + KRML_LOOP3(i, n, x) + +#define KRML_LOOP8(i, n, x) \ + KRML_LOOP4(i, n, x) \ + KRML_LOOP4(i, n, x) + +#define KRML_LOOP9(i, n, x) \ + KRML_LOOP8(i, n, x) \ + KRML_LOOP1(i, n, x) + +#define KRML_LOOP10(i, n, x) \ + KRML_LOOP8(i, n, x) \ + KRML_LOOP2(i, n, x) + +#define KRML_LOOP11(i, n, x) \ + KRML_LOOP8(i, n, x) \ + KRML_LOOP3(i, n, x) + +#define KRML_LOOP12(i, n, x) \ + KRML_LOOP8(i, n, x) \ + KRML_LOOP4(i, n, x) + +#define KRML_LOOP13(i, n, x) \ + KRML_LOOP8(i, n, x) \ + KRML_LOOP5(i, n, x) + +#define KRML_LOOP14(i, n, x) \ + KRML_LOOP8(i, n, x) \ + KRML_LOOP6(i, n, x) + +#define KRML_LOOP15(i, n, x) \ + KRML_LOOP8(i, n, x) \ + KRML_LOOP7(i, n, x) + +#define KRML_LOOP16(i, n, x) \ + KRML_LOOP8(i, n, x) \ + KRML_LOOP8(i, n, x) + +#define KRML_UNROLL_FOR(i, z, n, k, x) do { \ + uint32_t i = z; \ + KRML_LOOP##n(i, k, x) \ +} while (0) + +#define KRML_ACTUAL_FOR(i, z, n, k, x) \ + do { \ + for (uint32_t i = z; i < n; i += k) { \ + x \ + } \ + } while (0) + +#ifndef KRML_UNROLL_MAX +#define KRML_UNROLL_MAX 16 +#endif + +/* 1 is the number of loop iterations, i.e. 
(n - z)/k as evaluated by krml */ +#if 0 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR0(i, z, n, k, x) +#else +#define KRML_MAYBE_FOR0(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 1 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR1(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 1, k, x) +#else +#define KRML_MAYBE_FOR1(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 2 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR2(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 2, k, x) +#else +#define KRML_MAYBE_FOR2(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 3 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR3(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 3, k, x) +#else +#define KRML_MAYBE_FOR3(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 4 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR4(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 4, k, x) +#else +#define KRML_MAYBE_FOR4(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 5 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR5(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 5, k, x) +#else +#define KRML_MAYBE_FOR5(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 6 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR6(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 6, k, x) +#else +#define KRML_MAYBE_FOR6(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 7 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR7(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 7, k, x) +#else +#define KRML_MAYBE_FOR7(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 8 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR8(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 8, k, x) +#else +#define KRML_MAYBE_FOR8(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 9 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR9(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 9, k, x) +#else +#define KRML_MAYBE_FOR9(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 10 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR10(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 10, k, x) +#else +#define KRML_MAYBE_FOR10(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 11 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR11(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 11, k, x) +#else +#define KRML_MAYBE_FOR11(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 12 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR12(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 12, k, x) +#else +#define KRML_MAYBE_FOR12(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 13 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR13(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 13, k, x) +#else +#define KRML_MAYBE_FOR13(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 14 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR14(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 14, k, x) +#else +#define KRML_MAYBE_FOR14(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 15 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR15(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 15, k, x) +#else +#define KRML_MAYBE_FOR15(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif + +#if 16 <= KRML_UNROLL_MAX +#define KRML_MAYBE_FOR16(i, z, n, k, x) KRML_UNROLL_FOR(i, z, 16, k, x) +#else +#define KRML_MAYBE_FOR16(i, z, n, k, x) KRML_ACTUAL_FOR(i, z, n, k, x) +#endif +#endif diff --git a/Modules/_hacl/include/krml/lowstar_endianness.h b/Modules/_hacl/include/krml/lowstar_endianness.h new file mode 100644 index 000000000000..32a7391e817e --- /dev/null +++ b/Modules/_hacl/include/krml/lowstar_endianness.h @@ -0,0 +1,230 @@ +/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. + Licensed under the Apache 2.0 License. 
*/ + +#ifndef __LOWSTAR_ENDIANNESS_H +#define __LOWSTAR_ENDIANNESS_H + +#include +#include + +/******************************************************************************/ +/* Implementing C.fst (part 2: endian-ness macros) */ +/******************************************************************************/ + +/* ... for Linux */ +#if defined(__linux__) || defined(__CYGWIN__) || defined (__USE_SYSTEM_ENDIAN_H__) || defined(__GLIBC__) +# include + +/* ... for OSX */ +#elif defined(__APPLE__) +# include +# define htole64(x) OSSwapHostToLittleInt64(x) +# define le64toh(x) OSSwapLittleToHostInt64(x) +# define htobe64(x) OSSwapHostToBigInt64(x) +# define be64toh(x) OSSwapBigToHostInt64(x) + +# define htole16(x) OSSwapHostToLittleInt16(x) +# define le16toh(x) OSSwapLittleToHostInt16(x) +# define htobe16(x) OSSwapHostToBigInt16(x) +# define be16toh(x) OSSwapBigToHostInt16(x) + +# define htole32(x) OSSwapHostToLittleInt32(x) +# define le32toh(x) OSSwapLittleToHostInt32(x) +# define htobe32(x) OSSwapHostToBigInt32(x) +# define be32toh(x) OSSwapBigToHostInt32(x) + +/* ... for Solaris */ +#elif defined(__sun__) +# include +# define htole64(x) LE_64(x) +# define le64toh(x) LE_64(x) +# define htobe64(x) BE_64(x) +# define be64toh(x) BE_64(x) + +# define htole16(x) LE_16(x) +# define le16toh(x) LE_16(x) +# define htobe16(x) BE_16(x) +# define be16toh(x) BE_16(x) + +# define htole32(x) LE_32(x) +# define le32toh(x) LE_32(x) +# define htobe32(x) BE_32(x) +# define be32toh(x) BE_32(x) + +/* ... for the BSDs */ +#elif defined(__FreeBSD__) || defined(__NetBSD__) || defined(__DragonFly__) +# include +#elif defined(__OpenBSD__) +# include + +/* ... for Windows (MSVC)... not targeting XBOX 360! */ +#elif defined(_MSC_VER) + +# include +# define htobe16(x) _byteswap_ushort(x) +# define htole16(x) (x) +# define be16toh(x) _byteswap_ushort(x) +# define le16toh(x) (x) + +# define htobe32(x) _byteswap_ulong(x) +# define htole32(x) (x) +# define be32toh(x) _byteswap_ulong(x) +# define le32toh(x) (x) + +# define htobe64(x) _byteswap_uint64(x) +# define htole64(x) (x) +# define be64toh(x) _byteswap_uint64(x) +# define le64toh(x) (x) + +/* ... for Windows (GCC-like, e.g. mingw or clang) */ +#elif (defined(_WIN32) || defined(_WIN64)) && \ + (defined(__GNUC__) || defined(__clang__)) + +# define htobe16(x) __builtin_bswap16(x) +# define htole16(x) (x) +# define be16toh(x) __builtin_bswap16(x) +# define le16toh(x) (x) + +# define htobe32(x) __builtin_bswap32(x) +# define htole32(x) (x) +# define be32toh(x) __builtin_bswap32(x) +# define le32toh(x) (x) + +# define htobe64(x) __builtin_bswap64(x) +# define htole64(x) (x) +# define be64toh(x) __builtin_bswap64(x) +# define le64toh(x) (x) + +/* ... 
generic big-endian fallback code */ +#elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ + +/* byte swapping code inspired by: + * https://github.com/rweather/arduinolibs/blob/master/libraries/Crypto/utility/EndianUtil.h + * */ + +# define htobe32(x) (x) +# define be32toh(x) (x) +# define htole32(x) \ + (__extension__({ \ + uint32_t _temp = (x); \ + ((_temp >> 24) & 0x000000FF) | ((_temp >> 8) & 0x0000FF00) | \ + ((_temp << 8) & 0x00FF0000) | ((_temp << 24) & 0xFF000000); \ + })) +# define le32toh(x) (htole32((x))) + +# define htobe64(x) (x) +# define be64toh(x) (x) +# define htole64(x) \ + (__extension__({ \ + uint64_t __temp = (x); \ + uint32_t __low = htobe32((uint32_t)__temp); \ + uint32_t __high = htobe32((uint32_t)(__temp >> 32)); \ + (((uint64_t)__low) << 32) | __high; \ + })) +# define le64toh(x) (htole64((x))) + +/* ... generic little-endian fallback code */ +#elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + +# define htole32(x) (x) +# define le32toh(x) (x) +# define htobe32(x) \ + (__extension__({ \ + uint32_t _temp = (x); \ + ((_temp >> 24) & 0x000000FF) | ((_temp >> 8) & 0x0000FF00) | \ + ((_temp << 8) & 0x00FF0000) | ((_temp << 24) & 0xFF000000); \ + })) +# define be32toh(x) (htobe32((x))) + +# define htole64(x) (x) +# define le64toh(x) (x) +# define htobe64(x) \ + (__extension__({ \ + uint64_t __temp = (x); \ + uint32_t __low = htobe32((uint32_t)__temp); \ + uint32_t __high = htobe32((uint32_t)(__temp >> 32)); \ + (((uint64_t)__low) << 32) | __high; \ + })) +# define be64toh(x) (htobe64((x))) + +/* ... couldn't determine endian-ness of the target platform */ +#else +# error "Please define __BYTE_ORDER__!" + +#endif /* defined(__linux__) || ... */ + +/* Loads and stores. These avoid undefined behavior due to unaligned memory + * accesses, via memcpy. */ + +inline static uint16_t load16(uint8_t *b) { + uint16_t x; + memcpy(&x, b, 2); + return x; +} + +inline static uint32_t load32(uint8_t *b) { + uint32_t x; + memcpy(&x, b, 4); + return x; +} + +inline static uint64_t load64(uint8_t *b) { + uint64_t x; + memcpy(&x, b, 8); + return x; +} + +inline static void store16(uint8_t *b, uint16_t i) { + memcpy(b, &i, 2); +} + +inline static void store32(uint8_t *b, uint32_t i) { + memcpy(b, &i, 4); +} + +inline static void store64(uint8_t *b, uint64_t i) { + memcpy(b, &i, 8); +} + +/* Legacy accessors so that this header can serve as an implementation of + * C.Endianness */ +#define load16_le(b) (le16toh(load16(b))) +#define store16_le(b, i) (store16(b, htole16(i))) +#define load16_be(b) (be16toh(load16(b))) +#define store16_be(b, i) (store16(b, htobe16(i))) + +#define load32_le(b) (le32toh(load32(b))) +#define store32_le(b, i) (store32(b, htole32(i))) +#define load32_be(b) (be32toh(load32(b))) +#define store32_be(b, i) (store32(b, htobe32(i))) + +#define load64_le(b) (le64toh(load64(b))) +#define store64_le(b, i) (store64(b, htole64(i))) +#define load64_be(b) (be64toh(load64(b))) +#define store64_be(b, i) (store64(b, htobe64(i))) + +/* Co-existence of LowStar.Endianness and FStar.Endianness generates name + * conflicts, because of course both insist on having no prefixes. Until a + * prefix is added, or until we truly retire FStar.Endianness, solve this issue + * in an elegant way. 
*/ +#define load16_le0 load16_le +#define store16_le0 store16_le +#define load16_be0 load16_be +#define store16_be0 store16_be + +#define load32_le0 load32_le +#define store32_le0 store32_le +#define load32_be0 load32_be +#define store32_be0 store32_be + +#define load64_le0 load64_le +#define store64_le0 store64_le +#define load64_be0 load64_be +#define store64_be0 store64_be + +#define load128_le0 load128_le +#define store128_le0 store128_le +#define load128_be0 load128_be +#define store128_be0 store128_be + +#endif diff --git a/Modules/_hacl/include/python_hacl_namespaces.h b/Modules/_hacl/include/python_hacl_namespaces.h new file mode 100644 index 000000000000..af390459311f --- /dev/null +++ b/Modules/_hacl/include/python_hacl_namespaces.h @@ -0,0 +1,28 @@ +#ifndef _PYTHON_HACL_NAMESPACES_H +#define _PYTHON_HACL_NAMESPACES_H + +/* + * C's excuse for namespaces: Use globally unique names to avoid linkage + * conflicts with builds linking or dynamically loading other code potentially + * using HACL* libraries. + */ + +#define Hacl_Streaming_SHA2_state_sha2_224_s python_hashlib_Hacl_Streaming_SHA2_state_sha2_224_s +#define Hacl_Streaming_SHA2_state_sha2_224 python_hashlib_Hacl_Streaming_SHA2_state_sha2_224 +#define Hacl_Streaming_SHA2_state_sha2_256 python_hashlib_Hacl_Streaming_SHA2_state_sha2_256 +#define Hacl_Streaming_SHA2_create_in_256 python_hashlib_Hacl_Streaming_SHA2_create_in_256 +#define Hacl_Streaming_SHA2_create_in_224 python_hashlib_Hacl_Streaming_SHA2_create_in_224 +#define Hacl_Streaming_SHA2_copy_256 python_hashlib_Hacl_Streaming_SHA2_copy_256 +#define Hacl_Streaming_SHA2_copy_224 python_hashlib_Hacl_Streaming_SHA2_copy_224 +#define Hacl_Streaming_SHA2_init_256 python_hashlib_Hacl_Streaming_SHA2_init_256 +#define Hacl_Streaming_SHA2_init_224 python_hashlib_Hacl_Streaming_SHA2_init_224 +#define Hacl_Streaming_SHA2_update_256 python_hashlib_Hacl_Streaming_SHA2_update_256 +#define Hacl_Streaming_SHA2_update_224 python_hashlib_Hacl_Streaming_SHA2_update_224 +#define Hacl_Streaming_SHA2_finish_256 python_hashlib_Hacl_Streaming_SHA2_finish_256 +#define Hacl_Streaming_SHA2_finish_224 python_hashlib_Hacl_Streaming_SHA2_finish_224 +#define Hacl_Streaming_SHA2_free_256 python_hashlib_Hacl_Streaming_SHA2_free_256 +#define Hacl_Streaming_SHA2_free_224 python_hashlib_Hacl_Streaming_SHA2_free_224 +#define Hacl_Streaming_SHA2_sha256 python_hashlib_Hacl_Streaming_SHA2_sha256 +#define Hacl_Streaming_SHA2_sha224 python_hashlib_Hacl_Streaming_SHA2_sha224 + +#endif // _PYTHON_HACL_NAMESPACES_H diff --git a/Modules/_hacl/internal/Hacl_SHA2_Generic.h b/Modules/_hacl/internal/Hacl_SHA2_Generic.h new file mode 100644 index 000000000000..23f7cea1eb38 --- /dev/null +++ b/Modules/_hacl/internal/Hacl_SHA2_Generic.h @@ -0,0 +1,135 @@ +/* MIT License + * + * Copyright (c) 2016-2022 INRIA, CMU and Microsoft Corporation + * Copyright (c) 2022-2023 HACL* Contributors + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. 
+ * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. + */ + + +#ifndef __internal_Hacl_SHA2_Generic_H +#define __internal_Hacl_SHA2_Generic_H + +#if defined(__cplusplus) +extern "C" { +#endif + +#include +#include "krml/FStar_UInt_8_16_32_64.h" +#include "krml/lowstar_endianness.h" +#include "krml/internal/target.h" + + + + +static const +uint32_t +Hacl_Impl_SHA2_Generic_h224[8U] = + { + (uint32_t)0xc1059ed8U, (uint32_t)0x367cd507U, (uint32_t)0x3070dd17U, (uint32_t)0xf70e5939U, + (uint32_t)0xffc00b31U, (uint32_t)0x68581511U, (uint32_t)0x64f98fa7U, (uint32_t)0xbefa4fa4U + }; + +static const +uint32_t +Hacl_Impl_SHA2_Generic_h256[8U] = + { + (uint32_t)0x6a09e667U, (uint32_t)0xbb67ae85U, (uint32_t)0x3c6ef372U, (uint32_t)0xa54ff53aU, + (uint32_t)0x510e527fU, (uint32_t)0x9b05688cU, (uint32_t)0x1f83d9abU, (uint32_t)0x5be0cd19U + }; + +static const +uint64_t +Hacl_Impl_SHA2_Generic_h384[8U] = + { + (uint64_t)0xcbbb9d5dc1059ed8U, (uint64_t)0x629a292a367cd507U, (uint64_t)0x9159015a3070dd17U, + (uint64_t)0x152fecd8f70e5939U, (uint64_t)0x67332667ffc00b31U, (uint64_t)0x8eb44a8768581511U, + (uint64_t)0xdb0c2e0d64f98fa7U, (uint64_t)0x47b5481dbefa4fa4U + }; + +static const +uint64_t +Hacl_Impl_SHA2_Generic_h512[8U] = + { + (uint64_t)0x6a09e667f3bcc908U, (uint64_t)0xbb67ae8584caa73bU, (uint64_t)0x3c6ef372fe94f82bU, + (uint64_t)0xa54ff53a5f1d36f1U, (uint64_t)0x510e527fade682d1U, (uint64_t)0x9b05688c2b3e6c1fU, + (uint64_t)0x1f83d9abfb41bd6bU, (uint64_t)0x5be0cd19137e2179U + }; + +static const +uint32_t +Hacl_Impl_SHA2_Generic_k224_256[64U] = + { + (uint32_t)0x428a2f98U, (uint32_t)0x71374491U, (uint32_t)0xb5c0fbcfU, (uint32_t)0xe9b5dba5U, + (uint32_t)0x3956c25bU, (uint32_t)0x59f111f1U, (uint32_t)0x923f82a4U, (uint32_t)0xab1c5ed5U, + (uint32_t)0xd807aa98U, (uint32_t)0x12835b01U, (uint32_t)0x243185beU, (uint32_t)0x550c7dc3U, + (uint32_t)0x72be5d74U, (uint32_t)0x80deb1feU, (uint32_t)0x9bdc06a7U, (uint32_t)0xc19bf174U, + (uint32_t)0xe49b69c1U, (uint32_t)0xefbe4786U, (uint32_t)0x0fc19dc6U, (uint32_t)0x240ca1ccU, + (uint32_t)0x2de92c6fU, (uint32_t)0x4a7484aaU, (uint32_t)0x5cb0a9dcU, (uint32_t)0x76f988daU, + (uint32_t)0x983e5152U, (uint32_t)0xa831c66dU, (uint32_t)0xb00327c8U, (uint32_t)0xbf597fc7U, + (uint32_t)0xc6e00bf3U, (uint32_t)0xd5a79147U, (uint32_t)0x06ca6351U, (uint32_t)0x14292967U, + (uint32_t)0x27b70a85U, (uint32_t)0x2e1b2138U, (uint32_t)0x4d2c6dfcU, (uint32_t)0x53380d13U, + (uint32_t)0x650a7354U, (uint32_t)0x766a0abbU, (uint32_t)0x81c2c92eU, (uint32_t)0x92722c85U, + (uint32_t)0xa2bfe8a1U, (uint32_t)0xa81a664bU, (uint32_t)0xc24b8b70U, (uint32_t)0xc76c51a3U, + (uint32_t)0xd192e819U, (uint32_t)0xd6990624U, (uint32_t)0xf40e3585U, (uint32_t)0x106aa070U, + (uint32_t)0x19a4c116U, (uint32_t)0x1e376c08U, (uint32_t)0x2748774cU, (uint32_t)0x34b0bcb5U, + (uint32_t)0x391c0cb3U, (uint32_t)0x4ed8aa4aU, (uint32_t)0x5b9cca4fU, (uint32_t)0x682e6ff3U, + (uint32_t)0x748f82eeU, (uint32_t)0x78a5636fU, (uint32_t)0x84c87814U, (uint32_t)0x8cc70208U, + (uint32_t)0x90befffaU, (uint32_t)0xa4506cebU, (uint32_t)0xbef9a3f7U, (uint32_t)0xc67178f2U + }; + +static const +uint64_t 
+Hacl_Impl_SHA2_Generic_k384_512[80U] = + { + (uint64_t)0x428a2f98d728ae22U, (uint64_t)0x7137449123ef65cdU, (uint64_t)0xb5c0fbcfec4d3b2fU, + (uint64_t)0xe9b5dba58189dbbcU, (uint64_t)0x3956c25bf348b538U, (uint64_t)0x59f111f1b605d019U, + (uint64_t)0x923f82a4af194f9bU, (uint64_t)0xab1c5ed5da6d8118U, (uint64_t)0xd807aa98a3030242U, + (uint64_t)0x12835b0145706fbeU, (uint64_t)0x243185be4ee4b28cU, (uint64_t)0x550c7dc3d5ffb4e2U, + (uint64_t)0x72be5d74f27b896fU, (uint64_t)0x80deb1fe3b1696b1U, (uint64_t)0x9bdc06a725c71235U, + (uint64_t)0xc19bf174cf692694U, (uint64_t)0xe49b69c19ef14ad2U, (uint64_t)0xefbe4786384f25e3U, + (uint64_t)0x0fc19dc68b8cd5b5U, (uint64_t)0x240ca1cc77ac9c65U, (uint64_t)0x2de92c6f592b0275U, + (uint64_t)0x4a7484aa6ea6e483U, (uint64_t)0x5cb0a9dcbd41fbd4U, (uint64_t)0x76f988da831153b5U, + (uint64_t)0x983e5152ee66dfabU, (uint64_t)0xa831c66d2db43210U, (uint64_t)0xb00327c898fb213fU, + (uint64_t)0xbf597fc7beef0ee4U, (uint64_t)0xc6e00bf33da88fc2U, (uint64_t)0xd5a79147930aa725U, + (uint64_t)0x06ca6351e003826fU, (uint64_t)0x142929670a0e6e70U, (uint64_t)0x27b70a8546d22ffcU, + (uint64_t)0x2e1b21385c26c926U, (uint64_t)0x4d2c6dfc5ac42aedU, (uint64_t)0x53380d139d95b3dfU, + (uint64_t)0x650a73548baf63deU, (uint64_t)0x766a0abb3c77b2a8U, (uint64_t)0x81c2c92e47edaee6U, + (uint64_t)0x92722c851482353bU, (uint64_t)0xa2bfe8a14cf10364U, (uint64_t)0xa81a664bbc423001U, + (uint64_t)0xc24b8b70d0f89791U, (uint64_t)0xc76c51a30654be30U, (uint64_t)0xd192e819d6ef5218U, + (uint64_t)0xd69906245565a910U, (uint64_t)0xf40e35855771202aU, (uint64_t)0x106aa07032bbd1b8U, + (uint64_t)0x19a4c116b8d2d0c8U, (uint64_t)0x1e376c085141ab53U, (uint64_t)0x2748774cdf8eeb99U, + (uint64_t)0x34b0bcb5e19b48a8U, (uint64_t)0x391c0cb3c5c95a63U, (uint64_t)0x4ed8aa4ae3418acbU, + (uint64_t)0x5b9cca4f7763e373U, (uint64_t)0x682e6ff3d6b2b8a3U, (uint64_t)0x748f82ee5defb2fcU, + (uint64_t)0x78a5636f43172f60U, (uint64_t)0x84c87814a1f0ab72U, (uint64_t)0x8cc702081a6439ecU, + (uint64_t)0x90befffa23631e28U, (uint64_t)0xa4506cebde82bde9U, (uint64_t)0xbef9a3f7b2c67915U, + (uint64_t)0xc67178f2e372532bU, (uint64_t)0xca273eceea26619cU, (uint64_t)0xd186b8c721c0c207U, + (uint64_t)0xeada7dd6cde0eb1eU, (uint64_t)0xf57d4f7fee6ed178U, (uint64_t)0x06f067aa72176fbaU, + (uint64_t)0x0a637dc5a2c898a6U, (uint64_t)0x113f9804bef90daeU, (uint64_t)0x1b710b35131c471bU, + (uint64_t)0x28db77f523047d84U, (uint64_t)0x32caab7b40c72493U, (uint64_t)0x3c9ebe0a15c9bebcU, + (uint64_t)0x431d67c49c100d4cU, (uint64_t)0x4cc5d4becb3e42b6U, (uint64_t)0x597f299cfc657e2aU, + (uint64_t)0x5fcb6fab3ad6faecU, (uint64_t)0x6c44198c4a475817U + }; + +#if defined(__cplusplus) +} +#endif + +#define __internal_Hacl_SHA2_Generic_H_DEFINED +#endif diff --git a/Modules/_hacl/refresh.sh b/Modules/_hacl/refresh.sh new file mode 100755 index 000000000000..594873862a2d --- /dev/null +++ b/Modules/_hacl/refresh.sh @@ -0,0 +1,132 @@ +#!/usr/bin/env bash +# +# Use this script to update the HACL generated hash algorithm implementation +# code from a local checkout of the upstream hacl-star repository. +# + +set -e +set -o pipefail + +if [[ "${BASH_VERSINFO[0]}" -lt 4 ]]; then + echo "A bash version >= 4 required. Got: $BASH_VERSION" >&2 + exit 1 +fi + +if [[ $1 == "" ]]; then + echo "Usage: $0 path-to-hacl-directory" + echo "" + echo " path-to-hacl-directory should be a local git checkout of a" + echo " https://github.com/hacl-star/hacl-star/ repo." + exit 1 +fi + +# Update this when updating to a new version after verifying that the changes +# the update brings in are good. 
+expected_hacl_star_rev=94aabbb4cf71347d3779a8db486c761403c6d036 + +hacl_dir="$(realpath "$1")" +cd "$(dirname "$0")" +actual_rev=$(cd "$hacl_dir" && git rev-parse HEAD) + +if [[ "$actual_rev" != "$expected_hacl_star_rev" ]]; then + echo "WARNING: HACL* in '$hacl_dir' is at revision:" >&2 + echo " $actual_rev" >&2 + echo "but expected revision:" >&2 + echo " $expected_hacl_star_rev" >&2 + echo "Edit the expected rev if the changes pulled in are what you want." +fi + +# Step 1: copy files + +declare -a dist_files +dist_files=( + Hacl_Streaming_SHA2.h + internal/Hacl_SHA2_Generic.h + Hacl_Streaming_SHA2.c +) + +declare -a include_files +include_files=( + include/krml/lowstar_endianness.h + include/krml/internal/target.h +) + +declare -a lib_files +lib_files=( + krmllib/dist/minimal/FStar_UInt_8_16_32_64.h +) + +# C files for the algorithms themselves: current directory +(cd "$hacl_dir/dist/gcc-compatible" && tar cf - "${dist_files[@]}") | tar xf - + +# Support header files (e.g. endianness macros): stays in include/ +(cd "$hacl_dir/dist/karamel" && tar cf - "${include_files[@]}") | tar xf - + +# Special treatment: we don't bother with an extra directory and move krmllib +# files to the same include directory +for f in "${lib_files[@]}"; do + cp "$hacl_dir/dist/karamel/$f" include/krml/ +done + +# Step 2: some in-place modifications to keep things simple and minimal + +# This is basic, but refreshes of the vendored HACL code are infrequent, so +# let's not over-engineer this. +if [[ $(uname) == "Darwin" ]]; then + # You're already running with homebrew or macports to satisfy the + # bash>=4 requirement, so requiring GNU sed is entirely reasonable. + sed=gsed +else + sed=sed +fi + +readarray -t all_files < <(find . -name '*.h' -or -name '*.c') + +# types.h is a simple wrapper that defines the uint128 type then proceeds to +# include FStar_UInt_8_16_32_64.h; we jump the types.h step since our current +# selection of algorithms does not necessitate the use of uint128 +$sed -i 's!#include.*types.h"!#include "krml/FStar_UInt_8_16_32_64.h"!g' "${all_files[@]}" +$sed -i 's!#include.*compat.h"!!g' "${all_files[@]}" + +# FStar_UInt_8_16_32_64 contains definitions useful in the general case, but not +# for us; trim! +$sed -i -z 's!\(extern\|typedef\)[^;]*;\n\n!!g' include/krml/FStar_UInt_8_16_32_64.h + +# This contains static inline prototypes that are defined in +# FStar_UInt_8_16_32_64; they are by default repeated for safety of separate +# compilation, but this is not necessary. +$sed -i 's!#include.*Hacl_Krmllib.h"!!g' "${all_files[@]}" + +# This header is useful for *other* algorithms that refer to SHA2, e.g. Ed25519 +# which needs to compute a digest of a message before signing it. Here, since no +# other algorithm builds upon SHA2, this internal header is useless (and is not +# included in $dist_files). +$sed -i 's!#include.*internal/Hacl_Streaming_SHA2.h"!#include "Hacl_Streaming_SHA2.h"!g' "${all_files[@]}" + +# The SHA2 file contains all variants of SHA2. We strip 384 and 512 for the time +# being, to be included later. +# This regexp matches a separator (two new lines), followed by: +# +# * +# ... 384 or 512 ... { +# * +# } +# +# The first non-empty lines are the comment block. The second ... may spill over +# the next following lines if the arguments are printed in one-per-line mode. 
+$sed -i -z 's/\n\n\([^\n]\+\n\)*[^\n]*\(384\|512\)[^{]*{\n\?\( [^\n]*\n\)*}//g' Hacl_Streaming_SHA2.c + +# Same thing with function prototypes +$sed -i -z 's/\n\n\([^\n]\+\n\)*[^\n]*\(384\|512\)[^;]*;//g' Hacl_Streaming_SHA2.h + +# Use globally unique names for the Hacl_ C APIs to avoid linkage conflicts. +$sed -i -z 's!#include \n!#include \n#include "python_hacl_namespaces.h"\n!' Hacl_Streaming_SHA2.h + +# Finally, we remove a bunch of ifdefs from target.h that are, again, useful in +# the general case, but not exercised by the subset of HACL* that we vendor. +$sed -z -i 's!#ifndef KRML_\(HOST_PRINTF\|HOST_EXIT\|PRE_ALIGN\|POST_ALIGN\|ALIGNED_MALLOC\|ALIGNED_FREE\|HOST_TIME\)\n\(\n\|# [^\n]*\n\|[^#][^\n]*\n\)*#endif\n\n!!g' include/krml/internal/target.h +$sed -z -i 's!\n\n\([^#][^\n]*\n\)*#define KRML_\(EABORT\|EXIT\|CHECK_SIZE\)[^\n]*\(\n [^\n]*\)*!!g' include/krml/internal/target.h +$sed -z -i 's!\n\n\([^#][^\n]*\n\)*#if [^\n]*\n\( [^\n]*\n\)*#define KRML_\(EABORT\|EXIT\|CHECK_SIZE\)[^\n]*\(\n [^\n]*\)*!!g' include/krml/internal/target.h +$sed -z -i 's!\n\n\([^#][^\n]*\n\)*#if [^\n]*\n\( [^\n]*\n\)*# define _\?KRML_\(DEPRECATED\|CHECK_SIZE_PRAGMA\|HOST_EPRINTF\|HOST_SNPRINTF\)[^\n]*\n\([^#][^\n]*\n\|#el[^\n]*\n\|# [^\n]*\n\)*#endif!!g' include/krml/internal/target.h + +echo "Updated; verify all is okay using git diff and git status." diff --git a/Modules/sha256module.c b/Modules/sha256module.c index 17ee86683b7a..630e4bf03bbe 100644 --- a/Modules/sha256module.c +++ b/Modules/sha256module.c @@ -31,29 +31,33 @@ class SHA256Type "SHAobject *" "&PyType_Type" [clinic start generated code]*/ /*[clinic end generated code: output=da39a3ee5e6b4b0d input=71a39174d4f0a744]*/ -/* Some useful types */ -typedef unsigned char SHA_BYTE; -typedef uint32_t SHA_INT32; /* 32-bit integer */ - -/* The SHA block size and message digest sizes, in bytes */ +/* The SHA block size and maximum message digest sizes, in bytes */ #define SHA_BLOCKSIZE 64 -#define SHA_DIGESTSIZE 32 +#define SHA_DIGESTSIZE 32 + +/* The SHA2-224 and SHA2-256 implementations defer to the HACL* verified + * library. */ -/* The structure for storing SHA info */ +#include "_hacl/Hacl_Streaming_SHA2.h" typedef struct { - PyObject_HEAD - SHA_INT32 digest[8]; /* Message digest */ - SHA_INT32 count_lo, count_hi; /* 64-bit bit count */ - SHA_BYTE data[SHA_BLOCKSIZE]; /* SHA data buffer */ - int local; /* unprocessed amount in data */ - int digestsize; + PyObject_HEAD + // Even though one could conceivably perform run-type checks to tell apart a + // sha224_type from a sha256_type (and thus deduce the digest size), we must + // keep this field because it's exposed as a member field on the underlying + // python object. + // TODO: could we transform this into a getter and get rid of the redundant + // field? + int digestsize; + Hacl_Streaming_SHA2_state_sha2_256 *state; } SHAobject; #include "clinic/sha256module.c.h" +/* We shall use run-time type information in the remainder of this module to + * tell apart SHA2-224 and SHA2-256 */ typedef struct { PyTypeObject* sha224_type; PyTypeObject* sha256_type; @@ -67,321 +71,12 @@ _sha256_get_state(PyObject *module) return (_sha256_state *)state; } -/* When run on a little-endian CPU we need to perform byte reversal on an - array of longwords. 
*/ - -#if PY_LITTLE_ENDIAN -static void longReverse(SHA_INT32 *buffer, int byteCount) -{ - byteCount /= sizeof(*buffer); - for (; byteCount--; buffer++) { - *buffer = _Py_bswap32(*buffer); - } -} -#endif - static void SHAcopy(SHAobject *src, SHAobject *dest) { - dest->local = src->local; dest->digestsize = src->digestsize; - dest->count_lo = src->count_lo; - dest->count_hi = src->count_hi; - memcpy(dest->digest, src->digest, sizeof(src->digest)); - memcpy(dest->data, src->data, sizeof(src->data)); -} - - -/* ------------------------------------------------------------------------ - * - * This code for the SHA-256 algorithm was noted as public domain. The - * original headers are pasted below. - * - * Several changes have been made to make it more compatible with the - * Python environment and desired interface. - * - */ - -/* LibTomCrypt, modular cryptographic library -- Tom St Denis - * - * LibTomCrypt is a library that provides various cryptographic - * algorithms in a highly modular and flexible manner. - * - * The library is free for all purposes without any express - * guarantee it works. - * - * Tom St Denis, tomstdenis at iahu.ca, https://www.libtom.net - */ - - -/* SHA256 by Tom St Denis */ - -/* Various logical functions */ -#define ROR(x, y)\ -( ((((unsigned long)(x)&0xFFFFFFFFUL)>>(unsigned long)((y)&31)) | \ -((unsigned long)(x)<<(unsigned long)(32-((y)&31)))) & 0xFFFFFFFFUL) -#define Ch(x,y,z) (z ^ (x & (y ^ z))) -#define Maj(x,y,z) (((x | y) & z) | (x & y)) -#define S(x, n) ROR((x),(n)) -#define R(x, n) (((x)&0xFFFFFFFFUL)>>(n)) -#define Sigma0(x) (S(x, 2) ^ S(x, 13) ^ S(x, 22)) -#define Sigma1(x) (S(x, 6) ^ S(x, 11) ^ S(x, 25)) -#define Gamma0(x) (S(x, 7) ^ S(x, 18) ^ R(x, 3)) -#define Gamma1(x) (S(x, 17) ^ S(x, 19) ^ R(x, 10)) - - -static void -sha_transform(SHAobject *sha_info) -{ - int i; - SHA_INT32 S[8], W[64], t0, t1; - - memcpy(W, sha_info->data, sizeof(sha_info->data)); -#if PY_LITTLE_ENDIAN - longReverse(W, (int)sizeof(sha_info->data)); -#endif - - for (i = 16; i < 64; ++i) { - W[i] = Gamma1(W[i - 2]) + W[i - 7] + Gamma0(W[i - 15]) + W[i - 16]; - } - for (i = 0; i < 8; ++i) { - S[i] = sha_info->digest[i]; - } - - /* Compress */ -#define RND(a,b,c,d,e,f,g,h,i,ki) \ - t0 = h + Sigma1(e) + Ch(e, f, g) + ki + W[i]; \ - t1 = Sigma0(a) + Maj(a, b, c); \ - d += t0; \ - h = t0 + t1; - - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],0,0x428a2f98); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],1,0x71374491); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],2,0xb5c0fbcf); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],3,0xe9b5dba5); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],4,0x3956c25b); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],5,0x59f111f1); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],6,0x923f82a4); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],7,0xab1c5ed5); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],8,0xd807aa98); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],9,0x12835b01); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],10,0x243185be); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],11,0x550c7dc3); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],12,0x72be5d74); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],13,0x80deb1fe); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],14,0x9bdc06a7); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],15,0xc19bf174); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],16,0xe49b69c1); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],17,0xefbe4786); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],18,0x0fc19dc6); - 
RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],19,0x240ca1cc); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],20,0x2de92c6f); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],21,0x4a7484aa); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],22,0x5cb0a9dc); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],23,0x76f988da); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],24,0x983e5152); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],25,0xa831c66d); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],26,0xb00327c8); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],27,0xbf597fc7); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],28,0xc6e00bf3); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],29,0xd5a79147); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],30,0x06ca6351); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],31,0x14292967); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],32,0x27b70a85); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],33,0x2e1b2138); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],34,0x4d2c6dfc); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],35,0x53380d13); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],36,0x650a7354); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],37,0x766a0abb); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],38,0x81c2c92e); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],39,0x92722c85); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],40,0xa2bfe8a1); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],41,0xa81a664b); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],42,0xc24b8b70); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],43,0xc76c51a3); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],44,0xd192e819); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],45,0xd6990624); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],46,0xf40e3585); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],47,0x106aa070); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],48,0x19a4c116); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],49,0x1e376c08); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],50,0x2748774c); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],51,0x34b0bcb5); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],52,0x391c0cb3); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],53,0x4ed8aa4a); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],54,0x5b9cca4f); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],55,0x682e6ff3); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],56,0x748f82ee); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],57,0x78a5636f); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],58,0x84c87814); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],59,0x8cc70208); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],60,0x90befffa); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],61,0xa4506ceb); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],62,0xbef9a3f7); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],63,0xc67178f2); - -#undef RND - - /* feedback */ - for (i = 0; i < 8; i++) { - sha_info->digest[i] = sha_info->digest[i] + S[i]; - } - -} - - - -/* initialize the SHA digest */ - -static void -sha_init(SHAobject *sha_info) -{ - sha_info->digest[0] = 0x6A09E667L; - sha_info->digest[1] = 0xBB67AE85L; - sha_info->digest[2] = 0x3C6EF372L; - sha_info->digest[3] = 0xA54FF53AL; - sha_info->digest[4] = 0x510E527FL; - sha_info->digest[5] = 0x9B05688CL; - sha_info->digest[6] = 0x1F83D9ABL; - sha_info->digest[7] = 0x5BE0CD19L; - sha_info->count_lo = 0L; - sha_info->count_hi = 0L; - sha_info->local = 0; - sha_info->digestsize = 32; + dest->state = Hacl_Streaming_SHA2_copy_256(src->state); } -static void -sha224_init(SHAobject *sha_info) -{ - sha_info->digest[0] = 0xc1059ed8L; - 
sha_info->digest[1] = 0x367cd507L; - sha_info->digest[2] = 0x3070dd17L; - sha_info->digest[3] = 0xf70e5939L; - sha_info->digest[4] = 0xffc00b31L; - sha_info->digest[5] = 0x68581511L; - sha_info->digest[6] = 0x64f98fa7L; - sha_info->digest[7] = 0xbefa4fa4L; - sha_info->count_lo = 0L; - sha_info->count_hi = 0L; - sha_info->local = 0; - sha_info->digestsize = 28; -} - - -/* update the SHA digest */ - -static void -sha_update(SHAobject *sha_info, SHA_BYTE *buffer, Py_ssize_t count) -{ - Py_ssize_t i; - SHA_INT32 clo; - - clo = sha_info->count_lo + ((SHA_INT32) count << 3); - if (clo < sha_info->count_lo) { - ++sha_info->count_hi; - } - sha_info->count_lo = clo; - sha_info->count_hi += (SHA_INT32) count >> 29; - if (sha_info->local) { - i = SHA_BLOCKSIZE - sha_info->local; - if (i > count) { - i = count; - } - memcpy(((SHA_BYTE *) sha_info->data) + sha_info->local, buffer, i); - count -= i; - buffer += i; - sha_info->local += (int)i; - if (sha_info->local == SHA_BLOCKSIZE) { - sha_transform(sha_info); - } - else { - return; - } - } - while (count >= SHA_BLOCKSIZE) { - memcpy(sha_info->data, buffer, SHA_BLOCKSIZE); - buffer += SHA_BLOCKSIZE; - count -= SHA_BLOCKSIZE; - sha_transform(sha_info); - } - memcpy(sha_info->data, buffer, count); - sha_info->local = (int)count; -} - -/* finish computing the SHA digest */ - -static void -sha_final(unsigned char digest[SHA_DIGESTSIZE], SHAobject *sha_info) -{ - int count; - SHA_INT32 lo_bit_count, hi_bit_count; - - lo_bit_count = sha_info->count_lo; - hi_bit_count = sha_info->count_hi; - count = (int) ((lo_bit_count >> 3) & 0x3f); - ((SHA_BYTE *) sha_info->data)[count++] = 0x80; - if (count > SHA_BLOCKSIZE - 8) { - memset(((SHA_BYTE *) sha_info->data) + count, 0, - SHA_BLOCKSIZE - count); - sha_transform(sha_info); - memset((SHA_BYTE *) sha_info->data, 0, SHA_BLOCKSIZE - 8); - } - else { - memset(((SHA_BYTE *) sha_info->data) + count, 0, - SHA_BLOCKSIZE - 8 - count); - } - - /* GJS: note that we add the hi/lo in big-endian. sha_transform will - swap these values into host-order. 
*/ - sha_info->data[56] = (hi_bit_count >> 24) & 0xff; - sha_info->data[57] = (hi_bit_count >> 16) & 0xff; - sha_info->data[58] = (hi_bit_count >> 8) & 0xff; - sha_info->data[59] = (hi_bit_count >> 0) & 0xff; - sha_info->data[60] = (lo_bit_count >> 24) & 0xff; - sha_info->data[61] = (lo_bit_count >> 16) & 0xff; - sha_info->data[62] = (lo_bit_count >> 8) & 0xff; - sha_info->data[63] = (lo_bit_count >> 0) & 0xff; - sha_transform(sha_info); - digest[ 0] = (unsigned char) ((sha_info->digest[0] >> 24) & 0xff); - digest[ 1] = (unsigned char) ((sha_info->digest[0] >> 16) & 0xff); - digest[ 2] = (unsigned char) ((sha_info->digest[0] >> 8) & 0xff); - digest[ 3] = (unsigned char) ((sha_info->digest[0] ) & 0xff); - digest[ 4] = (unsigned char) ((sha_info->digest[1] >> 24) & 0xff); - digest[ 5] = (unsigned char) ((sha_info->digest[1] >> 16) & 0xff); - digest[ 6] = (unsigned char) ((sha_info->digest[1] >> 8) & 0xff); - digest[ 7] = (unsigned char) ((sha_info->digest[1] ) & 0xff); - digest[ 8] = (unsigned char) ((sha_info->digest[2] >> 24) & 0xff); - digest[ 9] = (unsigned char) ((sha_info->digest[2] >> 16) & 0xff); - digest[10] = (unsigned char) ((sha_info->digest[2] >> 8) & 0xff); - digest[11] = (unsigned char) ((sha_info->digest[2] ) & 0xff); - digest[12] = (unsigned char) ((sha_info->digest[3] >> 24) & 0xff); - digest[13] = (unsigned char) ((sha_info->digest[3] >> 16) & 0xff); - digest[14] = (unsigned char) ((sha_info->digest[3] >> 8) & 0xff); - digest[15] = (unsigned char) ((sha_info->digest[3] ) & 0xff); - digest[16] = (unsigned char) ((sha_info->digest[4] >> 24) & 0xff); - digest[17] = (unsigned char) ((sha_info->digest[4] >> 16) & 0xff); - digest[18] = (unsigned char) ((sha_info->digest[4] >> 8) & 0xff); - digest[19] = (unsigned char) ((sha_info->digest[4] ) & 0xff); - digest[20] = (unsigned char) ((sha_info->digest[5] >> 24) & 0xff); - digest[21] = (unsigned char) ((sha_info->digest[5] >> 16) & 0xff); - digest[22] = (unsigned char) ((sha_info->digest[5] >> 8) & 0xff); - digest[23] = (unsigned char) ((sha_info->digest[5] ) & 0xff); - digest[24] = (unsigned char) ((sha_info->digest[6] >> 24) & 0xff); - digest[25] = (unsigned char) ((sha_info->digest[6] >> 16) & 0xff); - digest[26] = (unsigned char) ((sha_info->digest[6] >> 8) & 0xff); - digest[27] = (unsigned char) ((sha_info->digest[6] ) & 0xff); - digest[28] = (unsigned char) ((sha_info->digest[7] >> 24) & 0xff); - digest[29] = (unsigned char) ((sha_info->digest[7] >> 16) & 0xff); - digest[30] = (unsigned char) ((sha_info->digest[7] >> 8) & 0xff); - digest[31] = (unsigned char) ((sha_info->digest[7] ) & 0xff); -} - -/* - * End of copied SHA code. - * - * ------------------------------------------------------------------------ - */ - - static SHAobject * newSHA224object(_sha256_state *state) { @@ -409,14 +104,31 @@ SHA_traverse(PyObject *ptr, visitproc visit, void *arg) } static void -SHA_dealloc(PyObject *ptr) +SHA_dealloc(SHAobject *ptr) { + Hacl_Streaming_SHA2_free_256(ptr->state); PyTypeObject *tp = Py_TYPE(ptr); PyObject_GC_UnTrack(ptr); PyObject_GC_Del(ptr); Py_DECREF(tp); } +/* HACL* takes a uint32_t for the length of its parameter, but Py_ssize_t can be + * 64 bits. */ +static void update_256(Hacl_Streaming_SHA2_state_sha2_256 *state, uint8_t *buf, Py_ssize_t len) { + /* Note: we explicitly ignore the error code on the basis that it would take > + * 1 billion years to overflow the maximum admissible length for SHA2-256 + * (namely, 2^61-1 bytes). 
*/ + while (len > UINT32_MAX) { + Hacl_Streaming_SHA2_update_256(state, buf, UINT32_MAX); + len -= UINT32_MAX; + buf += UINT32_MAX; + } + /* Cast to uint32_t is safe: upon exiting the loop, len <= UINT32_MAX, and + * therefore fits in a uint32_t */ + Hacl_Streaming_SHA2_update_256(state, buf, (uint32_t) len); +} + /* External methods for a hash object */ @@ -458,11 +170,10 @@ static PyObject * SHA256Type_digest_impl(SHAobject *self) /*[clinic end generated code: output=46616a5e909fbc3d input=f1f4cfea5cbde35c]*/ { - unsigned char digest[SHA_DIGESTSIZE]; - SHAobject temp; - - SHAcopy(self, &temp); - sha_final(digest, &temp); + uint8_t digest[SHA_DIGESTSIZE]; + // HACL performs copies under the hood so that self->state remains valid + // after this call. + Hacl_Streaming_SHA2_finish_256(self->state, digest); return PyBytes_FromStringAndSize((const char *)digest, self->digestsize); } @@ -476,13 +187,8 @@ static PyObject * SHA256Type_hexdigest_impl(SHAobject *self) /*[clinic end generated code: output=725f8a7041ae97f3 input=0cc4c714693010d1]*/ { - unsigned char digest[SHA_DIGESTSIZE]; - SHAobject temp; - - /* Get the raw (binary) digest value */ - SHAcopy(self, &temp); - sha_final(digest, &temp); - + uint8_t digest[SHA_DIGESTSIZE]; + Hacl_Streaming_SHA2_finish_256(self->state, digest); return _Py_strhex((const char *)digest, self->digestsize); } @@ -503,7 +209,7 @@ SHA256Type_update(SHAobject *self, PyObject *obj) GET_BUFFER_VIEW_OR_ERROUT(obj, &buf); - sha_update(self, buf.buf, buf.len); + update_256(self->state, buf.buf, buf.len); PyBuffer_Release(&buf); Py_RETURN_NONE; @@ -524,12 +230,12 @@ SHA256_get_block_size(PyObject *self, void *closure) } static PyObject * -SHA256_get_name(PyObject *self, void *closure) +SHA256_get_name(SHAobject *self, void *closure) { - if (((SHAobject *)self)->digestsize == 32) - return PyUnicode_FromStringAndSize("sha256", 6); - else + if (self->digestsize == 28) { return PyUnicode_FromStringAndSize("sha224", 6); + } + return PyUnicode_FromStringAndSize("sha256", 6); } static PyGetSetDef SHA_getseters[] = { @@ -606,7 +312,8 @@ _sha256_sha256_impl(PyObject *module, PyObject *string, int usedforsecurity) return NULL; } - sha_init(new); + new->state = Hacl_Streaming_SHA2_create_in_256(); + new->digestsize = 32; if (PyErr_Occurred()) { Py_DECREF(new); @@ -616,7 +323,7 @@ _sha256_sha256_impl(PyObject *module, PyObject *string, int usedforsecurity) return NULL; } if (string) { - sha_update(new, buf.buf, buf.len); + update_256(new->state, buf.buf, buf.len); PyBuffer_Release(&buf); } @@ -651,7 +358,8 @@ _sha256_sha224_impl(PyObject *module, PyObject *string, int usedforsecurity) return NULL; } - sha224_init(new); + new->state = Hacl_Streaming_SHA2_create_in_224(); + new->digestsize = 28; if (PyErr_Occurred()) { Py_DECREF(new); @@ -661,7 +369,7 @@ _sha256_sha224_impl(PyObject *module, PyObject *string, int usedforsecurity) return NULL; } if (string) { - sha_update(new, buf.buf, buf.len); + update_256(new->state, buf.buf, buf.len); PyBuffer_Release(&buf); } diff --git a/PCbuild/pythoncore.vcxproj b/PCbuild/pythoncore.vcxproj index 397d22abe235..e8e9ff01e306 100644 --- a/PCbuild/pythoncore.vcxproj +++ b/PCbuild/pythoncore.vcxproj @@ -100,7 +100,7 @@ /Zm200 %(AdditionalOptions) - $(PySourcePath)Python;%(AdditionalIncludeDirectories) + $(PySourcePath)Modules\_hacl\include;$(PySourcePath)Modules\_hacl\internal;$(PySourcePath)Python;%(AdditionalIncludeDirectories) $(zlibDir);%(AdditionalIncludeDirectories) 
_USRDLL;Py_BUILD_CORE;Py_BUILD_CORE_BUILTIN;Py_ENABLE_SHARED;MS_DLL_ID="$(SysWinVer)";%(PreprocessorDefinitions) _Py_HAVE_ZLIB;%(PreprocessorDefinitions) @@ -407,6 +407,7 @@ + diff --git a/PCbuild/pythoncore.vcxproj.filters b/PCbuild/pythoncore.vcxproj.filters index bcbedcc3235b..4820db6f2c32 100644 --- a/PCbuild/pythoncore.vcxproj.filters +++ b/PCbuild/pythoncore.vcxproj.filters @@ -866,6 +866,9 @@ Modules + + Modules + Modules diff --git a/configure b/configure index aef8103085bc..97694c602d1c 100755 --- a/configure +++ b/configure @@ -24426,6 +24426,7 @@ SRCDIRS="\ Modules/_ctypes \ Modules/_decimal \ Modules/_decimal/libmpdec \ + Modules/_hacl \ Modules/_io \ Modules/_multiprocessing \ Modules/_sha3 \ @@ -26966,7 +26967,7 @@ fi as_fn_append MODULE_BLOCK "MODULE__SHA256_STATE=$py_cv_module__sha256$as_nl" if test "x$py_cv_module__sha256" = xyes; then : - + as_fn_append MODULE_BLOCK "MODULE__SHA256_CFLAGS=-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE$as_nl" fi diff --git a/configure.ac b/configure.ac index 010bca855f00..09369b985b33 100644 --- a/configure.ac +++ b/configure.ac @@ -6508,6 +6508,7 @@ SRCDIRS="\ Modules/_ctypes \ Modules/_decimal \ Modules/_decimal/libmpdec \ + Modules/_hacl \ Modules/_io \ Modules/_multiprocessing \ Modules/_sha3 \ @@ -7197,7 +7198,9 @@ dnl By default we always compile these even when OpenSSL is available dnl (issue #14693). The modules are small. PY_STDLIB_MOD([_md5], [test "$with_builtin_md5" = yes]) PY_STDLIB_MOD([_sha1], [test "$with_builtin_sha1" = yes]) -PY_STDLIB_MOD([_sha256], [test "$with_builtin_sha256" = yes]) +PY_STDLIB_MOD([_sha256], + [test "$with_builtin_sha256" = yes], [], + [-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE]) PY_STDLIB_MOD([_sha512], [test "$with_builtin_sha512" = yes]) PY_STDLIB_MOD([_sha3], [test "$with_builtin_sha3" = yes]) PY_STDLIB_MOD([_blake2], From webhook-mailer at python.org Tue Feb 7 00:04:05 2023 From: webhook-mailer at python.org (gvanrossum) Date: Tue, 07 Feb 2023 05:04:05 -0000 Subject: [Python-checkins] gh-98831: Move DSL documentation here from ideas repo (#101629) Message-ID: https://github.com/python/cpython/commit/694e346a01f407c7f78138331db006dea79f82a8 commit: 694e346a01f407c7f78138331db006dea79f82a8 branch: main author: Guido van Rossum committer: gvanrossum date: 2023-02-06T21:03:58-08:00 summary: gh-98831: Move DSL documentation here from ideas repo (#101629) files: A Tools/cases_generator/interpreter_definition.md M Tools/cases_generator/README.md diff --git a/Tools/cases_generator/README.md b/Tools/cases_generator/README.md index dc055ead1941..c595a932ac44 100644 --- a/Tools/cases_generator/README.md +++ b/Tools/cases_generator/README.md @@ -1,5 +1,8 @@ # Tooling to generate interpreters +Documentation for the instruction definitions in `Python/bytecodes.c` +("the DSL") is [here](interpreter_definition.md). + What's currently here: - `lexer.py`: lexer for C, originally written by Mark Shannon @@ -7,10 +10,10 @@ What's currently here: - `parser.py`: Parser for instruction definition DSL; main class `Parser` - `generate_cases.py`: driver script to read `Python/bytecodes.c` and write `Python/generated_cases.c.h` +- `test_generator.py`: tests, require manual running using `pytest` -The DSL for the instruction definitions in `Python/bytecodes.c` is described -[here](https://github.com/faster-cpython/ideas/blob/main/3.12/interpreter_definition.md). 
-Note that there is some dummy C code at the top and bottom of the file +Note that there is some dummy C code at the top and bottom of +`Python/bytecodes.c` to fool text editors like VS Code into believing this is valid C code. ## A bit about the parser diff --git a/Tools/cases_generator/interpreter_definition.md b/Tools/cases_generator/interpreter_definition.md new file mode 100644 index 000000000000..c7bd38d32ff4 --- /dev/null +++ b/Tools/cases_generator/interpreter_definition.md @@ -0,0 +1,409 @@ +# A higher level definition of the bytecode interpreter + +## Abstract + +The CPython interpreter is defined in C, meaning that the semantics of the +bytecode instructions, the dispatching mechanism, error handling, and +tracing and instrumentation are all intermixed. + +This document proposes defining a custom C-like DSL for defining the +instruction semantics and tools for generating the code deriving from +the instruction definitions. + +These tools would be used to: +* Generate the main interpreter (done) +* Generate the tier 2 interpreter +* Generate documentation for instructions +* Generate metadata about instructions, such as stack use (done). + +Having a single definition file ensures that there is a single source +of truth for bytecode semantics. + +Other tools that operate on bytecodes, like `frame.setlineno` +and the `dis` module, will be derived from the common semantic +definition, reducing errors. + +## Motivation + +The bytecode interpreter of CPython has traditionally been defined as standard +C code, but with a lot of macros. +The presence of these macros and the nature of bytecode interpreters means +that the interpreter is effectively defined in a domain specific language (DSL). + +Rather than using an ad-hoc DSL embedded in the C code for the interpreter, +a custom DSL should be defined and the semantics of the bytecode instructions, +and the instructions defined in that DSL. + +Generating the interpreter decouples low-level details of dispatching +and error handling from the semantics of the instructions, resulting +in more maintainable code and a potentially faster interpreter. + +It also provides the ability to create and check optimizers and optimization +passes from the semantic definition, reducing errors. + +## Rationale + +As we improve the performance of CPython, we need to optimize larger regions +of code, use more complex optimizations and, ultimately, translate to machine +code. + +All of these steps introduce the possibility of more bugs, and require more code +to be written. One way to mitigate this is through the use of code generators. +Code generators decouple the debugging of the code (the generator) from checking +the correctness (the DSL input). + +For example, we are likely to want a new interpreter for the tier 2 optimizer +to be added in 3.12. That interpreter will have a different API, a different +set of instructions and potentially different dispatching mechanism. +But the instructions it will interpret will be built from the same building +blocks as the instructions for the tier 1 (PEP 659) interpreter. + +Rewriting all the instructions is tedious and error-prone, and changing the +instructions is a maintenance headache as both versions need to be kept in sync. + +By using a code generator and using a common source for the instructions, or +parts of instructions, we can reduce the potential for errors considerably. + + +## Specification + +This specification is at an early stage and is likely to change considerably. 
+ +Syntax +------ + +Each op definition has a kind, a name, a stack and instruction stream effect, +and a piece of C code describing its semantics:: + +``` + file: + (definition | family)+ + + definition: + "inst" "(" NAME ["," stack_effect] ")" "{" C-code "}" + | + "op" "(" NAME "," stack_effect ")" "{" C-code "}" + | + "macro" "(" NAME ")" "=" uop ("+" uop)* ";" + | + "super" "(" NAME ")" "=" NAME ("+" NAME)* ";" + + stack_effect: + "(" [inputs] "--" [outputs] ")" + + inputs: + input ("," input)* + + outputs: + output ("," output)* + + input: + object | stream | array + + output: + object | array + + uop: + NAME | stream + + object: + NAME [":" type] [ "if" "(" C-expression ")" ] + + type: + NAME + + stream: + NAME "/" size + + size: + INTEGER + + array: + object "[" C-expression "]" + + family: + "family" "(" NAME ")" = "{" NAME ("," NAME)+ "}" ";" +``` + +The following definitions may occur: + +* `inst`: A normal instruction, as previously defined by `TARGET(NAME)` in `ceval.c`. +* `op`: A part instruction from which macros can be constructed. +* `macro`: A bytecode instruction constructed from ops and cache effects. +* `super`: A super-instruction, such as `LOAD_FAST__LOAD_FAST`, constructed from + normal or macro instructions. + +`NAME` can be any ASCII identifier that is a C identifier and not a C or Python keyword. +`foo_1` is legal. `$` is not legal, nor is `struct` or `class`. + +The optional `type` in an `object` is the C type. It defaults to `PyObject *`. +The objects before the "--" are the objects on top of the the stack at the start +of the instruction. Those after the "--" are the objects on top of the the stack +at the end of the instruction. + +An `inst` without `stack_effect` is a transitional form to allow the original C code +definitions to be copied. It lacks information to generate anything other than the +interpreter, but is useful for initial porting of code. + +Stack effect names may be `unused`, indicating the space is to be reserved +but no use of it will be made in the instruction definition. +This is useful to ensure that all instructions in a family have the same +stack effect. + +The number in a `stream` define how many codeunits are consumed from the +instruction stream. It returns a 16, 32 or 64 bit value. +If the name is `unused` the size can be any value and that many codeunits +will be skipped in the instruction stream. + +By convention cache effects (`stream`) must precede the input effects. + +The name `oparg` is pre-defined as a 32 bit value fetched from the instruction stream. + +The C code may include special functions that are understood by the tools as +part of the DSL. + +Those functions include: + +* `DEOPT_IF(cond, instruction)`. Deoptimize if `cond` is met. +* `ERROR_IF(cond, label)`. Jump to error handler if `cond` is true. +* `DECREF_INPUTS()`. Generate `Py_DECREF()` calls for the input stack effects. + +Variables can either be defined in the input, output, or in the C code. +Variables defined in the input may not be assigned in the C code. +If an `ERROR_IF` occurs, all values will be removed from the stack; +they must already be `DECREF`'ed by the code block. +If a `DEOPT_IF` occurs, no values will be removed from the stack or +the instruction stream; no values must have been `DECREF`'ed or created. + +These requirements result in the following constraints on the use of +`DEOPT_IF` and `ERROR_IF` in any instruction's code block: + +1. Until the last `DEOPT_IF`, no objects may be allocated, `INCREF`ed, + or `DECREF`ed. +2. 
Before the first `ERROR_IF`, all input values must be `DECREF`ed, + and no objects may be allocated or `INCREF`ed, with the exception + of attempting to create an object and checking for success using + `ERROR_IF(result == NULL, label)`. (TODO: Unclear what to do with + intermediate results.) +3. No `DEOPT_IF` may follow an `ERROR_IF` in the same block. + +Semantics +--------- + +The underlying execution model is a stack machine. +Operations pop values from the stack, and push values to the stack. +They also can look at, and consume, values from the instruction stream. + +All members of a family must have the same stack and instruction stream effect. + +Examples +-------- + +(Another source of examples can be found in the [tests](test_generator.py).) + +Some examples: + +### Output stack effect +```C + inst ( LOAD_FAST, (-- value) ) { + value = frame->f_localsplus[oparg]; + Py_INCREF(value); + } +``` +This would generate: +```C + TARGET(LOAD_FAST) { + PyObject *value; + value = frame->f_localsplus[oparg]; + Py_INCREF(value); + PUSH(value); + DISPATCH(); + } +``` + +### Input stack effect +```C + inst ( STORE_FAST, (value --) ) { + SETLOCAL(oparg, value); + } +``` +This would generate: +```C + TARGET(STORE_FAST) { + PyObject *value = PEEK(1); + SETLOCAL(oparg, value); + STACK_SHRINK(1); + DISPATCH(); + } +``` + +### Super-instruction definition + +```C + super ( LOAD_FAST__LOAD_FAST ) = LOAD_FAST + LOAD_FAST ; +``` +This might get translated into the following: +```C + TARGET(LOAD_FAST__LOAD_FAST) { + PyObject *value; + value = frame->f_localsplus[oparg]; + Py_INCREF(value); + PUSH(value); + NEXTOPARG(); + next_instr++; + value = frame->f_localsplus[oparg]; + Py_INCREF(value); + PUSH(value); + DISPATCH(); + } +``` + +### Input stack effect and cache effect +```C + op ( CHECK_OBJECT_TYPE, (owner, type_version/2 -- owner) ) { + PyTypeObject *tp = Py_TYPE(owner); + assert(type_version != 0); + DEOPT_IF(tp->tp_version_tag != type_version); + } +``` +This might become (if it was an instruction): +```C + TARGET(CHECK_OBJECT_TYPE) { + PyObject *owner = PEEK(1); + uint32 type_version = read32(next_instr); + PyTypeObject *tp = Py_TYPE(owner); + assert(type_version != 0); + DEOPT_IF(tp->tp_version_tag != type_version); + next_instr += 2; + DISPATCH(); + } +``` + +### More examples + +For explanations see "Generating the interpreter" below.) 
+```C + op ( CHECK_HAS_INSTANCE_VALUES, (owner -- owner) ) { + PyDictOrValues dorv = *_PyObject_DictOrValuesPointer(owner); + DEOPT_IF(!_PyDictOrValues_IsValues(dorv)); + } +``` +```C + op ( LOAD_INSTANCE_VALUE, (owner, index/1 -- null if (oparg & 1), res) ) { + res = _PyDictOrValues_GetValues(dorv)->values[index]; + DEOPT_IF(res == NULL); + Py_INCREF(res); + null = NULL; + Py_DECREF(owner); + } +``` +```C + macro ( LOAD_ATTR_INSTANCE_VALUE ) = + counter/1 + CHECK_OBJECT_TYPE + CHECK_HAS_INSTANCE_VALUES + + LOAD_INSTANCE_VALUE + unused/4 ; +``` +```C + op ( LOAD_SLOT, (owner, index/1 -- null if (oparg & 1), res) ) { + char *addr = (char *)owner + index; + res = *(PyObject **)addr; + DEOPT_IF(res == NULL); + Py_INCREF(res); + null = NULL; + Py_DECREF(owner); + } +``` +```C + macro ( LOAD_ATTR_SLOT ) = counter/1 + CHECK_OBJECT_TYPE + LOAD_SLOT + unused/4; +``` +```C + inst ( BUILD_TUPLE, (items[oparg] -- tuple) ) { + tuple = _PyTuple_FromArraySteal(items, oparg); + ERROR_IF(tuple == NULL, error); + } +``` +```C + inst ( PRINT_EXPR ) { + PyObject *value = POP(); + PyObject *hook = _PySys_GetAttr(tstate, &_Py_ID(displayhook)); + PyObject *res; + if (hook == NULL) { + _PyErr_SetString(tstate, PyExc_RuntimeError, + "lost sys.displayhook"); + Py_DECREF(value); + goto error; + } + res = PyObject_CallOneArg(hook, value); + Py_DECREF(value); + ERROR_IF(res == NULL); + Py_DECREF(res); + } +``` + +### Define an instruction family +These opcodes all share the same instruction format): +```C + family(load_attr) = { LOAD_ATTR, LOAD_ATTR_INSTANCE_VALUE, LOAD_SLOT } ; +``` + +Generating the interpreter +========================== + +The generated C code for a single instruction includes a preamble and dispatch at the end +which can be easily inserted. What is more complex is ensuring the correct stack effects +and not generating excess pops and pushes. + +For example, in `CHECK_HAS_INSTANCE_VALUES`, `owner` occurs in the input, so it cannot be +redefined. Thus it doesn't need to written and can be read without adjusting the stack pointer. +The C code generated for `CHECK_HAS_INSTANCE_VALUES` would look something like: + +```C + { + PyObject *owner = stack_pointer[-1]; + PyDictOrValues dorv = *_PyObject_DictOrValuesPointer(owner); + DEOPT_IF(!_PyDictOrValues_IsValues(dorv)); + } +``` + +When combining ops together to form instructions, temporary values should be used, +rather than popping and pushing, such that `LOAD_ATTR_SLOT` would look something like: + +```C + case LOAD_ATTR_SLOT: { + PyObject *s1 = stack_pointer[-1]; + /* CHECK_OBJECT_TYPE */ + { + PyObject *owner = s1; + uint32_t type_version = read32(next_instr + 1); + PyTypeObject *tp = Py_TYPE(owner); + assert(type_version != 0); + if (tp->tp_version_tag != type_version) goto deopt; + } + /* LOAD_SLOT */ + { + PyObject *owner = s1; + uint16_t index = *(next_instr + 1 + 2); + char *addr = (char *)owner + index; + PyObject *null; + PyObject *res = *(PyObject **)addr; + if (res == NULL) goto deopt; + Py_INCREF(res); + null = NULL; + Py_DECREF(owner); + if (oparg & 1) { + stack_pointer[0] = null; + stack_pointer += 1; + } + s1 = res; + } + next_instr += (1 + 1 + 2 + 1 + 4); + stack_pointer[-1] = s1; + DISPATCH(); + } +``` + +Other tools +=========== + +From the instruction definitions we can generate the stack marking code used in `frame.set_lineno()`, +and the tables for use by disassemblers. 
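To make the "other tools" idea above concrete, here is a minimal sketch — not the actual `Tools/cases_generator` code — of how a net stack-effect table could be derived from parsed input/output effects. The `StackEffect` shape, its field names, and the treatment of array effects are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class StackEffect:
    name: str
    size: str = ""   # non-empty for array effects such as items[oparg]

def net_stack_effect(inputs, outputs):
    """Net stack effect of an instruction, or None if it depends on oparg."""
    def count(effects):
        n = 0
        for e in effects:
            if e.size:          # array effect: length known only at run time
                return None
            n += 1
        return n
    popped = count(inputs)
    pushed = count(outputs)
    if popped is None or pushed is None:
        return None
    return pushed - popped

# STORE_FAST has effect (value --): net -1
print(net_stack_effect([StackEffect("value")], []))
# BUILD_TUPLE has effect (items[oparg] -- tuple): depends on oparg
print(net_stack_effect([StackEffect("items", "oparg")],
                       [StackEffect("tuple")]))
```

A table built this way (instruction name mapped to a fixed net effect or "variable") is the kind of metadata the paragraph above says can feed `frame.set_lineno()` and disassemblers.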
+ From webhook-mailer at python.org Tue Feb 7 00:25:49 2023 From: webhook-mailer at python.org (gvanrossum) Date: Tue, 07 Feb 2023 05:25:49 -0000 Subject: [Python-checkins] gh-85747: Active voice & suggested edits, 'running/stopping loop' & 'callbacks' subsections of asyncio-eventloop.rst (#100270) Message-ID: https://github.com/python/cpython/commit/c4de6b1d52304a0a9cdfafc1dad5098993710404 commit: c4de6b1d52304a0a9cdfafc1dad5098993710404 branch: main author: Brian Skinn committer: gvanrossum date: 2023-02-06T21:25:42-08:00 summary: gh-85747: Active voice & suggested edits, 'running/stopping loop' & 'callbacks' subsections of asyncio-eventloop.rst (#100270) Co-authored-by: C.A.M. Gerlach Co-authored-by: Terry Jan Reedy files: M Doc/library/asyncio-eventloop.rst diff --git a/Doc/library/asyncio-eventloop.rst b/Doc/library/asyncio-eventloop.rst index db63a5dd11ad..f86e78428802 100644 --- a/Doc/library/asyncio-eventloop.rst +++ b/Doc/library/asyncio-eventloop.rst @@ -186,19 +186,24 @@ Running and stopping the loop .. coroutinemethod:: loop.shutdown_default_executor(timeout=None) Schedule the closure of the default executor and wait for it to join all of - the threads in the :class:`ThreadPoolExecutor`. After calling this method, a - :exc:`RuntimeError` will be raised if :meth:`loop.run_in_executor` is called - while using the default executor. + the threads in the :class:`~concurrent.futures.ThreadPoolExecutor`. + Once this method has been called, + using the default executor with :meth:`loop.run_in_executor` + will raise a :exc:`RuntimeError`. - The *timeout* parameter specifies the amount of time the executor will - be given to finish joining. The default value is ``None``, which means the - executor will be given an unlimited amount of time. + The *timeout* parameter specifies the amount of time + (in :class:`float` seconds) the executor will be given to finish joining. + With the default, ``None``, + the executor is allowed an unlimited amount of time. - If the timeout duration is reached, a warning is emitted and executor is - terminated without waiting for its threads to finish joining. + If the *timeout* is reached, a :exc:`RuntimeWarning` is emitted + and the default executor is terminated + without waiting for its threads to finish joining. - Note that there is no need to call this function when - :func:`asyncio.run` is used. + .. note:: + + Do not call this method when using :func:`asyncio.run`, + as the latter handles default executor shutdown automatically. .. versionadded:: 3.9 @@ -213,22 +218,23 @@ Scheduling callbacks Schedule the *callback* :term:`callback` to be called with *args* arguments at the next iteration of the event loop. + Return an instance of :class:`asyncio.Handle`, + which can be used later to cancel the callback. + Callbacks are called in the order in which they are registered. Each callback will be called exactly once. - An optional keyword-only *context* argument allows specifying a + The optional keyword-only *context* argument specifies a custom :class:`contextvars.Context` for the *callback* to run in. - The current context is used when no *context* is provided. - - An instance of :class:`asyncio.Handle` is returned, which can be - used later to cancel the callback. + Callbacks use the current context when no *context* is provided. - This method is not thread-safe. + Unlike :meth:`call_soon_threadsafe`, this method is not thread-safe. .. method:: loop.call_soon_threadsafe(callback, *args, context=None) - A thread-safe variant of :meth:`call_soon`. 
Must be used to - schedule callbacks *from another thread*. + A thread-safe variant of :meth:`call_soon`. When scheduling callbacks from + another thread, this function *must* be used, since :meth:`call_soon` is not + thread-safe. Raises :exc:`RuntimeError` if called on a loop that's been closed. This can happen on a secondary thread when the main application is From webhook-mailer at python.org Tue Feb 7 04:34:48 2023 From: webhook-mailer at python.org (ambv) Date: Tue, 07 Feb 2023 09:34:48 -0000 Subject: [Python-checkins] gh-101072: support default and kw default in PyEval_EvalCodeEx for 3.11+ (#101127) Message-ID: https://github.com/python/cpython/commit/ae62bddaf81176a7e55f95f18d55621c9c46c23d commit: ae62bddaf81176a7e55f95f18d55621c9c46c23d branch: main author: Matthieu Dartiailh committer: ambv date: 2023-02-07T10:34:21+01:00 summary: gh-101072: support default and kw default in PyEval_EvalCodeEx for 3.11+ (#101127) Co-authored-by: ?ukasz Langa files: A Lib/test/test_capi/test_eval_code_ex.py A Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst M Modules/_testcapimodule.c M Objects/funcobject.c M Python/ceval.c diff --git a/Lib/test/test_capi/test_eval_code_ex.py b/Lib/test/test_capi/test_eval_code_ex.py new file mode 100644 index 000000000000..2d28e5289eff --- /dev/null +++ b/Lib/test/test_capi/test_eval_code_ex.py @@ -0,0 +1,56 @@ +import unittest + +from test.support import import_helper + + +# Skip this test if the _testcapi module isn't available. +_testcapi = import_helper.import_module('_testcapi') + + +class PyEval_EvalCodeExTests(unittest.TestCase): + + def test_simple(self): + def f(): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, dict(a=1)), 1) + + # Need to force the compiler to use LOAD_NAME + # def test_custom_locals(self): + # def f(): + # return + + def test_with_args(self): + def f(a, b, c): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, {}, {}, (1, 2, 3)), 1) + + def test_with_kwargs(self): + def f(a, b, c): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, {}, {}, (), dict(a=1, b=2, c=3)), 1) + + def test_with_default(self): + def f(a): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, {}, {}, (), {}, (1,)), 1) + + def test_with_kwarg_default(self): + def f(*, a): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, {}, {}, (), {}, (), dict(a=1)), 1) + + def test_with_closure(self): + a = 1 + def f(): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, {}, {}, (), {}, (), {}, f.__closure__), 1) + + +if __name__ == "__main__": + unittest.main() diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst new file mode 100644 index 000000000000..2d991f6ca21b --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst @@ -0,0 +1,8 @@ +macOS #.. section: IDLE #.. section: Tools/Demos #.. section: C API + +# Write your Misc/NEWS entry below. It should be a simple ReST paragraph. # +Don't start with "- Issue #: " or "- gh-issue-: " or that sort of +stuff. +########################################################################### + +Fix the ``defs`` and ``kwdefs`` arguments to :c:func:`PyEval_EvalCodeEx`. 
diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index 5e47f4975a2d..5a6097ef0ac5 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -2237,7 +2237,7 @@ dict_get_version(PyObject *self, PyObject *args) return NULL; _Py_COMP_DIAG_PUSH - _Py_COMP_DIAG_IGNORE_DEPR_DECLS + _Py_COMP_DIAG_IGNORE_DEPR_DECLS version = dict->ma_version_tag; _Py_COMP_DIAG_POP @@ -3064,6 +3064,144 @@ eval_get_func_desc(PyObject *self, PyObject *func) return PyUnicode_FromString(PyEval_GetFuncDesc(func)); } +static PyObject * +eval_eval_code_ex(PyObject *mod, PyObject *pos_args) +{ + PyObject *result = NULL; + PyObject *code; + PyObject *globals; + PyObject *locals = NULL; + PyObject *args = NULL; + PyObject *kwargs = NULL; + PyObject *defaults = NULL; + PyObject *kw_defaults = NULL; + PyObject *closure = NULL; + + PyObject **c_kwargs = NULL; + + if (!PyArg_UnpackTuple(pos_args, + "eval_code_ex", + 2, + 8, + &code, + &globals, + &locals, + &args, + &kwargs, + &defaults, + &kw_defaults, + &closure)) + { + goto exit; + } + + if (!PyCode_Check(code)) { + PyErr_SetString(PyExc_TypeError, + "code must be a Python code object"); + goto exit; + } + + if (!PyDict_Check(globals)) { + PyErr_SetString(PyExc_TypeError, "globals must be a dict"); + goto exit; + } + + if (locals && !PyMapping_Check(locals)) { + PyErr_SetString(PyExc_TypeError, "locals must be a mapping"); + goto exit; + } + if (locals == Py_None) { + locals = NULL; + } + + PyObject **c_args = NULL; + Py_ssize_t c_args_len = 0; + + if (args) + { + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_TypeError, "args must be a tuple"); + goto exit; + } else { + c_args = &PyTuple_GET_ITEM(args, 0); + c_args_len = PyTuple_Size(args); + } + } + + Py_ssize_t c_kwargs_len = 0; + + if (kwargs) + { + if (!PyDict_Check(kwargs)) { + PyErr_SetString(PyExc_TypeError, "keywords must be a dict"); + goto exit; + } else { + c_kwargs_len = PyDict_Size(kwargs); + if (c_kwargs_len > 0) { + c_kwargs = PyMem_NEW(PyObject*, 2 * c_kwargs_len); + if (!c_kwargs) { + PyErr_NoMemory(); + goto exit; + } + + Py_ssize_t i = 0; + Py_ssize_t pos = 0; + + while (PyDict_Next(kwargs, + &pos, + &c_kwargs[i], + &c_kwargs[i + 1])) + { + i += 2; + } + c_kwargs_len = i / 2; + /* XXX This is broken if the caller deletes dict items! 
*/ + } + } + } + + + PyObject **c_defaults = NULL; + Py_ssize_t c_defaults_len = 0; + + if (defaults && PyTuple_Check(defaults)) { + c_defaults = &PyTuple_GET_ITEM(defaults, 0); + c_defaults_len = PyTuple_Size(defaults); + } + + if (kw_defaults && !PyDict_Check(kw_defaults)) { + PyErr_SetString(PyExc_TypeError, "kw_defaults must be a dict"); + goto exit; + } + + if (closure && !PyTuple_Check(closure)) { + PyErr_SetString(PyExc_TypeError, "closure must be a tuple of cells"); + goto exit; + } + + + result = PyEval_EvalCodeEx( + code, + globals, + locals, + c_args, + c_args_len, + c_kwargs, + c_kwargs_len, + c_defaults, + c_defaults_len, + kw_defaults, + closure + ); + +exit: + if (c_kwargs) { + PyMem_DEL(c_kwargs); + } + + return result; +} + static PyObject * get_feature_macros(PyObject *self, PyObject *Py_UNUSED(args)) { @@ -3385,6 +3523,7 @@ static PyMethodDef TestMethods[] = { {"set_exc_info", test_set_exc_info, METH_VARARGS}, {"argparsing", argparsing, METH_VARARGS}, {"code_newempty", code_newempty, METH_VARARGS}, + {"eval_code_ex", eval_eval_code_ex, METH_VARARGS}, {"make_exception_with_doc", _PyCFunction_CAST(make_exception_with_doc), METH_VARARGS | METH_KEYWORDS}, {"make_memoryview_from_NULL_pointer", make_memoryview_from_NULL_pointer, diff --git a/Objects/funcobject.c b/Objects/funcobject.c index baa360381a77..91a6b3dd40a2 100644 --- a/Objects/funcobject.c +++ b/Objects/funcobject.c @@ -87,8 +87,8 @@ _PyFunction_FromConstructor(PyFrameConstructor *constr) op->func_name = Py_NewRef(constr->fc_name); op->func_qualname = Py_NewRef(constr->fc_qualname); op->func_code = Py_NewRef(constr->fc_code); - op->func_defaults = NULL; - op->func_kwdefaults = NULL; + op->func_defaults = Py_XNewRef(constr->fc_defaults); + op->func_kwdefaults = Py_XNewRef(constr->fc_kwdefaults); op->func_closure = Py_XNewRef(constr->fc_closure); op->func_doc = Py_NewRef(Py_None); op->func_dict = NULL; diff --git a/Python/ceval.c b/Python/ceval.c index 2e6fed580ded..ecb5bf965555 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -1761,9 +1761,6 @@ PyEval_EvalCodeEx(PyObject *_co, PyObject *globals, PyObject *locals, } allargs = newargs; } - for (int i = 0; i < kwcount; i++) { - PyTuple_SET_ITEM(kwnames, i, Py_NewRef(kws[2*i])); - } PyFrameConstructor constr = { .fc_globals = globals, .fc_builtins = builtins, From webhook-mailer at python.org Tue Feb 7 04:52:45 2023 From: webhook-mailer at python.org (ambv) Date: Tue, 07 Feb 2023 09:52:45 -0000 Subject: [Python-checkins] [gh-101072] Fix Blurb for GH-101127 Message-ID: https://github.com/python/cpython/commit/79903240480429a6e545177416a7b782b0e5b9bd commit: 79903240480429a6e545177416a7b782b0e5b9bd branch: main author: ?ukasz Langa committer: ambv date: 2023-02-07T10:50:39+01:00 summary: [gh-101072] Fix Blurb for GH-101127 files: M Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst index 2d991f6ca21b..6b98aac2a465 100644 --- a/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst @@ -1,8 +1,2 @@ -macOS #.. section: IDLE #.. section: Tools/Demos #.. section: C API - -# Write your Misc/NEWS entry below. It should be a simple ReST paragraph. # -Don't start with "- Issue #: " or "- gh-issue-: " or that sort of -stuff. 
-########################################################################### - -Fix the ``defs`` and ``kwdefs`` arguments to :c:func:`PyEval_EvalCodeEx`. +Fix the ``defs`` and ``kwdefs`` arguments to :c:func:`PyEval_EvalCodeEx` +and a reference leak in that function. From webhook-mailer at python.org Tue Feb 7 06:44:50 2023 From: webhook-mailer at python.org (mdickinson) Date: Tue, 07 Feb 2023 11:44:50 -0000 Subject: [Python-checkins] [3.11] gh-101266: Revert fix __sizeof__ for subclasses of int (#101638) Message-ID: https://github.com/python/cpython/commit/358b02dac478b9ce379ab5982400bb8ecf11a2ee commit: 358b02dac478b9ce379ab5982400bb8ecf11a2ee branch: 3.11 author: Mark Dickinson committer: mdickinson date: 2023-02-07T11:44:43Z summary: [3.11] gh-101266: Revert fix __sizeof__ for subclasses of int (#101638) Revert "[3.11] gh-101266: Fix __sizeof__ for subclasses of int (GH-101394) (#101579)" This reverts commit cf89c16486a4cc297413e17d32082ec4f389d725. files: D Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst M Lib/test/test_long.py M Objects/boolobject.c M Objects/longobject.c diff --git a/Lib/test/test_long.py b/Lib/test/test_long.py index 03bed4c41f5a..77b37ca1fa4a 100644 --- a/Lib/test/test_long.py +++ b/Lib/test/test_long.py @@ -1596,44 +1596,5 @@ def test_square(self): self.assertEqual(n**2, (1 << (2 * bitlen)) - (1 << (bitlen + 1)) + 1) - def test___sizeof__(self): - self.assertEqual(int.__itemsize__, sys.int_info.sizeof_digit) - - # Pairs (test_value, number of allocated digits) - test_values = [ - # We always allocate space for at least one digit, even for - # a value of zero; sys.getsizeof should reflect that. - (0, 1), - (1, 1), - (-1, 1), - (BASE-1, 1), - (1-BASE, 1), - (BASE, 2), - (-BASE, 2), - (BASE*BASE - 1, 2), - (BASE*BASE, 3), - ] - - for value, ndigits in test_values: - with self.subTest(value): - self.assertEqual( - value.__sizeof__(), - int.__basicsize__ + int.__itemsize__ * ndigits - ) - - # Same test for a subclass of int. - class MyInt(int): - pass - - self.assertEqual(MyInt.__itemsize__, sys.int_info.sizeof_digit) - - for value, ndigits in test_values: - with self.subTest(value): - self.assertEqual( - MyInt(value).__sizeof__(), - MyInt.__basicsize__ + MyInt.__itemsize__ * ndigits - ) - - if __name__ == "__main__": unittest.main() diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst deleted file mode 100644 index 51999bacb8de..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst +++ /dev/null @@ -1 +0,0 @@ -Fix :func:`sys.getsizeof` reporting for :class:`int` subclasses. 
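The removed test above exercised the relationship between an int's digit count and its reported size. A quick way to observe that relationship interactively — independent of the reverted subclass fix, and not part of this patch — is a sketch along these lines:

```python
import sys

bits = sys.int_info.bits_per_digit      # typically 30
digit = sys.int_info.sizeof_digit       # typically 4 bytes

for value in (0, 1, 2**bits - 1, 2**bits, 2**(2 * bits)):
    # Print the size reported for each value; exact byte counts depend on
    # the interpreter's header layout, which this revert restores to its
    # previous form.
    print(f"{value.bit_length():>3} bits -> {sys.getsizeof(value)} bytes "
          f"({digit}-byte digits)")
```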
diff --git a/Objects/boolobject.c b/Objects/boolobject.c index 01acf0f00b66..8a20e368d4a4 100644 --- a/Objects/boolobject.c +++ b/Objects/boolobject.c @@ -4,8 +4,6 @@ #include "pycore_object.h" // _Py_FatalRefcountError() #include "pycore_runtime.h" // _Py_ID() -#include - /* We define bool_repr to return "False" or "True" */ static PyObject * @@ -156,8 +154,8 @@ bool_dealloc(PyObject* Py_UNUSED(ignore)) PyTypeObject PyBool_Type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) "bool", - offsetof(struct _longobject, ob_digit), /* tp_basicsize */ - sizeof(digit), /* tp_itemsize */ + sizeof(struct _longobject), + 0, bool_dealloc, /* tp_dealloc */ 0, /* tp_vectorcall_offset */ 0, /* tp_getattr */ diff --git a/Objects/longobject.c b/Objects/longobject.c index 39bd72ce5598..84c05e8aabdf 100644 --- a/Objects/longobject.c +++ b/Objects/longobject.c @@ -5664,10 +5664,13 @@ static Py_ssize_t int___sizeof___impl(PyObject *self) /*[clinic end generated code: output=3303f008eaa6a0a5 input=9b51620c76fc4507]*/ { - /* using Py_MAX(..., 1) because we always allocate space for at least - one digit, even though the integer zero has a Py_SIZE of 0 */ - Py_ssize_t ndigits = Py_MAX(Py_ABS(Py_SIZE(self)), 1); - return Py_TYPE(self)->tp_basicsize + Py_TYPE(self)->tp_itemsize * ndigits; + Py_ssize_t res; + + res = offsetof(PyLongObject, ob_digit) + /* using Py_MAX(..., 1) because we always allocate space for at least + one digit, even though the integer zero has a Py_SIZE of 0 */ + + Py_MAX(Py_ABS(Py_SIZE(self)), 1)*sizeof(digit); + return res; } /*[clinic input] From webhook-mailer at python.org Tue Feb 7 08:36:43 2023 From: webhook-mailer at python.org (pablogsal) Date: Tue, 07 Feb 2023 13:36:43 -0000 Subject: [Python-checkins] [3.11] gh-101072: support default and kw default in PyEval_EvalCodeEx for 3.11+ (GH-101127) (#101636) Message-ID: https://github.com/python/cpython/commit/955ba2839bc5875424ae745bfab53e880a9ace49 commit: 955ba2839bc5875424ae745bfab53e880a9ace49 branch: 3.11 author: ?ukasz Langa committer: pablogsal date: 2023-02-07T13:36:35Z summary: [3.11] gh-101072: support default and kw default in PyEval_EvalCodeEx for 3.11+ (GH-101127) (#101636) Co-authored-by: ?ukasz Langa Co-authored-by: Matthieu Dartiailh files: A Lib/test/test_capi/test_eval_code_ex.py A Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst M Modules/_testcapimodule.c M Objects/funcobject.c M Python/ceval.c diff --git a/Lib/test/test_capi/test_eval_code_ex.py b/Lib/test/test_capi/test_eval_code_ex.py new file mode 100644 index 000000000000..2d28e5289eff --- /dev/null +++ b/Lib/test/test_capi/test_eval_code_ex.py @@ -0,0 +1,56 @@ +import unittest + +from test.support import import_helper + + +# Skip this test if the _testcapi module isn't available. 
+_testcapi = import_helper.import_module('_testcapi') + + +class PyEval_EvalCodeExTests(unittest.TestCase): + + def test_simple(self): + def f(): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, dict(a=1)), 1) + + # Need to force the compiler to use LOAD_NAME + # def test_custom_locals(self): + # def f(): + # return + + def test_with_args(self): + def f(a, b, c): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, {}, {}, (1, 2, 3)), 1) + + def test_with_kwargs(self): + def f(a, b, c): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, {}, {}, (), dict(a=1, b=2, c=3)), 1) + + def test_with_default(self): + def f(a): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, {}, {}, (), {}, (1,)), 1) + + def test_with_kwarg_default(self): + def f(*, a): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, {}, {}, (), {}, (), dict(a=1)), 1) + + def test_with_closure(self): + a = 1 + def f(): + return a + + self.assertEqual(_testcapi.eval_code_ex(f.__code__, {}, {}, (), {}, (), {}, f.__closure__), 1) + + +if __name__ == "__main__": + unittest.main() diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst new file mode 100644 index 000000000000..6b98aac2a465 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst @@ -0,0 +1,2 @@ +Fix the ``defs`` and ``kwdefs`` arguments to :c:func:`PyEval_EvalCodeEx` +and a reference leak in that function. diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index dce948a8f0f1..daf2df4ffdde 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -6058,6 +6058,144 @@ eval_get_func_desc(PyObject *self, PyObject *func) return PyUnicode_FromString(PyEval_GetFuncDesc(func)); } +static PyObject * +eval_eval_code_ex(PyObject *mod, PyObject *pos_args) +{ + PyObject *result = NULL; + PyObject *code; + PyObject *globals; + PyObject *locals = NULL; + PyObject *args = NULL; + PyObject *kwargs = NULL; + PyObject *defaults = NULL; + PyObject *kw_defaults = NULL; + PyObject *closure = NULL; + + PyObject **c_kwargs = NULL; + + if (!PyArg_UnpackTuple(pos_args, + "eval_code_ex", + 2, + 8, + &code, + &globals, + &locals, + &args, + &kwargs, + &defaults, + &kw_defaults, + &closure)) + { + goto exit; + } + + if (!PyCode_Check(code)) { + PyErr_SetString(PyExc_TypeError, + "code must be a Python code object"); + goto exit; + } + + if (!PyDict_Check(globals)) { + PyErr_SetString(PyExc_TypeError, "globals must be a dict"); + goto exit; + } + + if (locals && !PyMapping_Check(locals)) { + PyErr_SetString(PyExc_TypeError, "locals must be a mapping"); + goto exit; + } + if (locals == Py_None) { + locals = NULL; + } + + PyObject **c_args = NULL; + Py_ssize_t c_args_len = 0; + + if (args) + { + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_TypeError, "args must be a tuple"); + goto exit; + } else { + c_args = &PyTuple_GET_ITEM(args, 0); + c_args_len = PyTuple_Size(args); + } + } + + Py_ssize_t c_kwargs_len = 0; + + if (kwargs) + { + if (!PyDict_Check(kwargs)) { + PyErr_SetString(PyExc_TypeError, "keywords must be a dict"); + goto exit; + } else { + c_kwargs_len = PyDict_Size(kwargs); + if (c_kwargs_len > 0) { + c_kwargs = PyMem_NEW(PyObject*, 2 * c_kwargs_len); + if (!c_kwargs) { + PyErr_NoMemory(); + goto exit; + } + + Py_ssize_t i = 0; + Py_ssize_t pos = 0; + + while (PyDict_Next(kwargs, + &pos, + &c_kwargs[i], + 
&c_kwargs[i + 1])) + { + i += 2; + } + c_kwargs_len = i / 2; + /* XXX This is broken if the caller deletes dict items! */ + } + } + } + + + PyObject **c_defaults = NULL; + Py_ssize_t c_defaults_len = 0; + + if (defaults && PyTuple_Check(defaults)) { + c_defaults = &PyTuple_GET_ITEM(defaults, 0); + c_defaults_len = PyTuple_Size(defaults); + } + + if (kw_defaults && !PyDict_Check(kw_defaults)) { + PyErr_SetString(PyExc_TypeError, "kw_defaults must be a dict"); + goto exit; + } + + if (closure && !PyTuple_Check(closure)) { + PyErr_SetString(PyExc_TypeError, "closure must be a tuple of cells"); + goto exit; + } + + + result = PyEval_EvalCodeEx( + code, + globals, + locals, + c_args, + c_args_len, + c_kwargs, + c_kwargs_len, + c_defaults, + c_defaults_len, + kw_defaults, + closure + ); + +exit: + if (c_kwargs) { + PyMem_DEL(c_kwargs); + } + + return result; +} + static PyObject * get_feature_macros(PyObject *self, PyObject *Py_UNUSED(args)) { @@ -6402,6 +6540,7 @@ static PyMethodDef TestMethods[] = { {"set_exc_info", test_set_exc_info, METH_VARARGS}, {"argparsing", argparsing, METH_VARARGS}, {"code_newempty", code_newempty, METH_VARARGS}, + {"eval_code_ex", eval_eval_code_ex, METH_VARARGS}, {"make_exception_with_doc", _PyCFunction_CAST(make_exception_with_doc), METH_VARARGS | METH_KEYWORDS}, {"make_memoryview_from_NULL_pointer", make_memoryview_from_NULL_pointer, diff --git a/Objects/funcobject.c b/Objects/funcobject.c index 63074634747f..57600c97e6aa 100644 --- a/Objects/funcobject.c +++ b/Objects/funcobject.c @@ -27,8 +27,10 @@ _PyFunction_FromConstructor(PyFrameConstructor *constr) op->func_qualname = constr->fc_qualname; Py_INCREF(constr->fc_code); op->func_code = constr->fc_code; - op->func_defaults = NULL; - op->func_kwdefaults = NULL; + Py_XINCREF(constr->fc_defaults); + op->func_defaults = constr->fc_defaults; + Py_XINCREF(constr->fc_kwdefaults); + op->func_kwdefaults = constr->fc_kwdefaults; Py_XINCREF(constr->fc_closure); op->func_closure = constr->fc_closure; Py_INCREF(Py_None); diff --git a/Python/ceval.c b/Python/ceval.c index a34e4ffb72f8..01ac2618df0d 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -6489,10 +6489,6 @@ PyEval_EvalCodeEx(PyObject *_co, PyObject *globals, PyObject *locals, } allargs = newargs; } - for (int i = 0; i < kwcount; i++) { - Py_INCREF(kws[2*i]); - PyTuple_SET_ITEM(kwnames, i, kws[2*i]); - } PyFrameConstructor constr = { .fc_globals = globals, .fc_builtins = builtins, From webhook-mailer at python.org Tue Feb 7 08:59:33 2023 From: webhook-mailer at python.org (ambv) Date: Tue, 07 Feb 2023 13:59:33 -0000 Subject: [Python-checkins] Doctest: Pin sphinxext-opengraph==0.7.5 to prevent importing NumPy (#101642) Message-ID: https://github.com/python/cpython/commit/a757d721271974c45e2feacef739af4e86ec7350 commit: a757d721271974c45e2feacef739af4e86ec7350 branch: main author: Hugo van Kemenade committer: ambv date: 2023-02-07T14:59:26+01:00 summary: Doctest: Pin sphinxext-opengraph==0.7.5 to prevent importing NumPy (#101642) files: M Doc/requirements.txt diff --git a/Doc/requirements.txt b/Doc/requirements.txt index 134f39d6d7b3..71d3cd61e538 100644 --- a/Doc/requirements.txt +++ b/Doc/requirements.txt @@ -8,7 +8,7 @@ sphinx==4.5.0 blurb sphinx-lint==0.6.7 -sphinxext-opengraph>=0.7.1 +sphinxext-opengraph==0.7.5 # The theme used by the documentation is stored separately, so we need # to install that as well. 
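The gh-101072 change above (two messages up) restores handling of the ``defs`` and ``kwdefs`` arguments in PyEval_EvalCodeEx. A minimal sketch of exercising that code path from Python, mirroring the new Lib/test/test_capi/test_eval_code_ex.py; it assumes a CPython source build in which the ``_testcapi`` extension module is importable (it is not shipped with regular installations):

    # Sketch only: drives PyEval_EvalCodeEx through _testcapi.eval_code_ex,
    # using the argument order from the new test file:
    # (code, globals, locals, args, kwargs, defaults, kw_defaults, closure)
    import _testcapi  # available in CPython development/test builds

    def f(a):        # positional parameter, filled from the `defaults` tuple
        return a

    def g(*, a):     # keyword-only parameter, filled from the `kw_defaults` dict
        return a

    assert _testcapi.eval_code_ex(f.__code__, {}, {}, (), {}, (1,)) == 1
    assert _testcapi.eval_code_ex(g.__code__, {}, {}, (), {}, (), dict(a=1)) == 1
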
From webhook-mailer at python.org Tue Feb 7 09:54:53 2023 From: webhook-mailer at python.org (hugovk) Date: Tue, 07 Feb 2023 14:54:53 -0000 Subject: [Python-checkins] Fix nesting of 'Pending Removal in Python 3.14' (#101637) Message-ID: https://github.com/python/cpython/commit/a687ae9eb5c0aea06c52de1e426904b79f767f4e commit: a687ae9eb5c0aea06c52de1e426904b79f767f4e branch: main author: Oleg Iarygin committer: hugovk date: 2023-02-07T16:54:47+02:00 summary: Fix nesting of 'Pending Removal in Python 3.14' (#101637) files: M Doc/whatsnew/3.12.rst diff --git a/Doc/whatsnew/3.12.rst b/Doc/whatsnew/3.12.rst index 0c5a70b64574..b723b70154f0 100644 --- a/Doc/whatsnew/3.12.rst +++ b/Doc/whatsnew/3.12.rst @@ -479,7 +479,7 @@ APIs: * :class:`webbrowser.MacOSX` (:gh:`86421`) Pending Removal in Python 3.14 -============================== +------------------------------ * Deprecated the following :mod:`importlib.abc` classes, scheduled for removal in Python 3.14: From webhook-mailer at python.org Tue Feb 7 10:21:21 2023 From: webhook-mailer at python.org (ned-deily) Date: Tue, 07 Feb 2023 15:21:21 -0000 Subject: [Python-checkins] [3.7] gh-95778: add doc missing in some places (GH-100627) (GH-101631) Message-ID: https://github.com/python/cpython/commit/ecfed4f9cc711b6843f6b0fc46b241ba0d3fd0d1 commit: ecfed4f9cc711b6843f6b0fc46b241ba0d3fd0d1 branch: 3.7 author: Éric committer: ned-deily date: 2023-02-07T10:21:12-05:00 summary: [3.7] gh-95778: add doc missing in some places (GH-100627) (GH-101631) (cherry picked from commit 46521826cb1883e29e4640f94089dd92c57efc5b) files: M Misc/python.man diff --git a/Misc/python.man b/Misc/python.man index 67a5bb4cf849..8803f8452552 100644 --- a/Misc/python.man +++ b/Misc/python.man @@ -309,6 +309,10 @@ Set implementation specific option. The following options are available: locale-aware mode. -X utf8=0 explicitly disables UTF-8 mode (even when it would otherwise activate automatically). See PYTHONUTF8 for more details + -X int_max_str_digits=number: limit the size of int<->str conversions. + This helps avoid denial of service attacks when parsing untrusted data. + The default is sys.int_info.default_max_str_digits. 0 disables. + .TP .B \-x Skip the first line of the source. This is intended for a DOS @@ -476,6 +480,11 @@ values. The integer must be a decimal number in the range [0,4294967295]. Specifying the value 0 will disable hash randomization. +.IP PYTHONINTMAXSTRDIGITS +Limit the maximum digit characters in an int value +when converting from a string and when converting an int back to a str. +A value of 0 disables the limit. Conversions to or from bases 2, 4, 8, +16, and 32 are never limited. .IP PYTHONMALLOC Set the Python memory allocators and/or install debug hooks.
The available memory allocators are From webhook-mailer at python.org Tue Feb 7 11:29:11 2023 From: webhook-mailer at python.org (gvanrossum) Date: Tue, 07 Feb 2023 16:29:11 -0000 Subject: [Python-checkins] gh-98831: Modernize the FOR_ITER family of instructions (#101626) Message-ID: https://github.com/python/cpython/commit/d54b8d8fbd76c05e9006175ab26d737c4b055dfb commit: d54b8d8fbd76c05e9006175ab26d737c4b055dfb branch: main author: Guido van Rossum committer: gvanrossum date: 2023-02-07T08:28:28-08:00 summary: gh-98831: Modernize the FOR_ITER family of instructions (#101626) Co-authored-by: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> files: M Python/bytecodes.c M Python/generated_cases.c.h M Python/opcode_metadata.h diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 0fc0b3b8280f..ec0439af98e6 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -2066,27 +2066,35 @@ dummy_func( PREDICT(LOAD_CONST); } - // stack effect: ( -- __0) - inst(FOR_ITER) { + // Most members of this family are "secretly" super-instructions. + // When the loop is exhausted, they jump, and the jump target is + // always END_FOR, which pops two values off the stack. + // This is optimized by skipping that instruction and combining + // its effect (popping 'iter' instead of pushing 'next'.) + + family(for_iter, INLINE_CACHE_ENTRIES_FOR_ITER) = { + FOR_ITER, + FOR_ITER_LIST, + FOR_ITER_TUPLE, + FOR_ITER_RANGE, + FOR_ITER_GEN, + }; + + inst(FOR_ITER, (unused/1, iter -- iter, next)) { #if ENABLE_SPECIALIZATION _PyForIterCache *cache = (_PyForIterCache *)next_instr; if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { assert(cframe.use_tracing == 0); next_instr--; - _Py_Specialize_ForIter(TOP(), next_instr, oparg); + _Py_Specialize_ForIter(iter, next_instr, oparg); DISPATCH_SAME_OPARG(); } STAT_INC(FOR_ITER, deferred); DECREMENT_ADAPTIVE_COUNTER(cache->counter); #endif /* ENABLE_SPECIALIZATION */ - /* before: [iter]; after: [iter, iter()] *or* [] */ - PyObject *iter = TOP(); - PyObject *next = (*Py_TYPE(iter)->tp_iternext)(iter); - if (next != NULL) { - PUSH(next); - JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER); - } - else { + /* before: [iter]; after: [iter, iter()] *or* [] (and jump over END_FOR.) 
*/ + next = (*Py_TYPE(iter)->tp_iternext)(iter); + if (next == NULL) { if (_PyErr_Occurred(tstate)) { if (!_PyErr_ExceptionMatches(tstate, PyExc_StopIteration)) { goto error; @@ -2098,63 +2106,66 @@ dummy_func( } /* iterator ended normally */ assert(_Py_OPCODE(next_instr[INLINE_CACHE_ENTRIES_FOR_ITER + oparg]) == END_FOR); - STACK_SHRINK(1); Py_DECREF(iter); - /* Skip END_FOR */ + STACK_SHRINK(1); + /* Jump forward oparg, then skip following END_FOR instruction */ JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg + 1); + DISPATCH(); } + // Common case: no jump, leave it to the code generator } - // stack effect: ( -- __0) - inst(FOR_ITER_LIST) { + inst(FOR_ITER_LIST, (unused/1, iter -- iter, next)) { assert(cframe.use_tracing == 0); - _PyListIterObject *it = (_PyListIterObject *)TOP(); - DEOPT_IF(Py_TYPE(it) != &PyListIter_Type, FOR_ITER); + DEOPT_IF(Py_TYPE(iter) != &PyListIter_Type, FOR_ITER); + _PyListIterObject *it = (_PyListIterObject *)iter; STAT_INC(FOR_ITER, hit); PyListObject *seq = it->it_seq; if (seq) { if (it->it_index < PyList_GET_SIZE(seq)) { - PyObject *next = PyList_GET_ITEM(seq, it->it_index++); - PUSH(Py_NewRef(next)); - JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER); + next = Py_NewRef(PyList_GET_ITEM(seq, it->it_index++)); goto end_for_iter_list; // End of this instruction } it->it_seq = NULL; Py_DECREF(seq); } + Py_DECREF(iter); STACK_SHRINK(1); - Py_DECREF(it); + /* Jump forward oparg, then skip following END_FOR instruction */ JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg + 1); + DISPATCH(); end_for_iter_list: + // Common case: no jump, leave it to the code generator } - // stack effect: ( -- __0) - inst(FOR_ITER_TUPLE) { + inst(FOR_ITER_TUPLE, (unused/1, iter -- iter, next)) { assert(cframe.use_tracing == 0); - _PyTupleIterObject *it = (_PyTupleIterObject *)TOP(); + _PyTupleIterObject *it = (_PyTupleIterObject *)iter; DEOPT_IF(Py_TYPE(it) != &PyTupleIter_Type, FOR_ITER); STAT_INC(FOR_ITER, hit); PyTupleObject *seq = it->it_seq; if (seq) { if (it->it_index < PyTuple_GET_SIZE(seq)) { - PyObject *next = PyTuple_GET_ITEM(seq, it->it_index++); - PUSH(Py_NewRef(next)); - JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER); + next = Py_NewRef(PyTuple_GET_ITEM(seq, it->it_index++)); goto end_for_iter_tuple; // End of this instruction } it->it_seq = NULL; Py_DECREF(seq); } + Py_DECREF(iter); STACK_SHRINK(1); - Py_DECREF(it); + /* Jump forward oparg, then skip following END_FOR instruction */ JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg + 1); + DISPATCH(); end_for_iter_tuple: + // Common case: no jump, leave it to the code generator } - // stack effect: ( -- __0) - inst(FOR_ITER_RANGE) { + // This is slightly different, when the loop isn't terminated we + // jump over the immediately following STORE_FAST instruction. + inst(FOR_ITER_RANGE, (unused/1, iter -- iter, unused)) { assert(cframe.use_tracing == 0); - _PyRangeIterObject *r = (_PyRangeIterObject *)TOP(); + _PyRangeIterObject *r = (_PyRangeIterObject *)iter; DEOPT_IF(Py_TYPE(r) != &PyRangeIter_Type, FOR_ITER); STAT_INC(FOR_ITER, hit); _Py_CODEUNIT next = next_instr[INLINE_CACHE_ENTRIES_FOR_ITER]; @@ -2162,6 +2173,7 @@ dummy_func( if (r->len <= 0) { STACK_SHRINK(1); Py_DECREF(r); + // Jump over END_FOR instruction. JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg + 1); } else { @@ -2174,11 +2186,13 @@ dummy_func( // The STORE_FAST is already done. JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + 1); } + DISPATCH(); } - inst(FOR_ITER_GEN) { + // This is *not* a super-instruction, unique in the family. 
+ inst(FOR_ITER_GEN, (unused/1, iter -- iter, unused)) { assert(cframe.use_tracing == 0); - PyGenObject *gen = (PyGenObject *)TOP(); + PyGenObject *gen = (PyGenObject *)iter; DEOPT_IF(Py_TYPE(gen) != &PyGen_Type, FOR_ITER); DEOPT_IF(gen->gi_frame_state >= FRAME_EXECUTING, FOR_ITER); STAT_INC(FOR_ITER, hit); @@ -3168,9 +3182,6 @@ family(call, INLINE_CACHE_ENTRIES_CALL) = { CALL_NO_KW_LIST_APPEND, CALL_NO_KW_METHOD_DESCRIPTOR_FAST, CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS, CALL_NO_KW_METHOD_DESCRIPTOR_O, CALL_NO_KW_STR_1, CALL_NO_KW_TUPLE_1, CALL_NO_KW_TYPE_1 }; -family(for_iter, INLINE_CACHE_ENTRIES_FOR_ITER) = { - FOR_ITER, FOR_ITER_LIST, - FOR_ITER_RANGE }; family(store_fast) = { STORE_FAST, STORE_FAST__LOAD_FAST, STORE_FAST__STORE_FAST }; family(unpack_sequence, INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE) = { UNPACK_SEQUENCE, UNPACK_SEQUENCE_LIST, diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index f0f314a143c2..4e511f43d950 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -2619,25 +2619,23 @@ TARGET(FOR_ITER) { PREDICTED(FOR_ITER); + static_assert(INLINE_CACHE_ENTRIES_FOR_ITER == 1, "incorrect cache size"); + PyObject *iter = PEEK(1); + PyObject *next; #if ENABLE_SPECIALIZATION _PyForIterCache *cache = (_PyForIterCache *)next_instr; if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { assert(cframe.use_tracing == 0); next_instr--; - _Py_Specialize_ForIter(TOP(), next_instr, oparg); + _Py_Specialize_ForIter(iter, next_instr, oparg); DISPATCH_SAME_OPARG(); } STAT_INC(FOR_ITER, deferred); DECREMENT_ADAPTIVE_COUNTER(cache->counter); #endif /* ENABLE_SPECIALIZATION */ - /* before: [iter]; after: [iter, iter()] *or* [] */ - PyObject *iter = TOP(); - PyObject *next = (*Py_TYPE(iter)->tp_iternext)(iter); - if (next != NULL) { - PUSH(next); - JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER); - } - else { + /* before: [iter]; after: [iter, iter()] *or* [] (and jump over END_FOR.) 
*/ + next = (*Py_TYPE(iter)->tp_iternext)(iter); + if (next == NULL) { if (_PyErr_Occurred(tstate)) { if (!_PyErr_ExceptionMatches(tstate, PyExc_StopIteration)) { goto error; @@ -2649,63 +2647,81 @@ } /* iterator ended normally */ assert(_Py_OPCODE(next_instr[INLINE_CACHE_ENTRIES_FOR_ITER + oparg]) == END_FOR); - STACK_SHRINK(1); Py_DECREF(iter); - /* Skip END_FOR */ + STACK_SHRINK(1); + /* Jump forward oparg, then skip following END_FOR instruction */ JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg + 1); + DISPATCH(); } + // Common case: no jump, leave it to the code generator + STACK_GROW(1); + POKE(1, next); + JUMPBY(1); DISPATCH(); } TARGET(FOR_ITER_LIST) { + PyObject *iter = PEEK(1); + PyObject *next; assert(cframe.use_tracing == 0); - _PyListIterObject *it = (_PyListIterObject *)TOP(); - DEOPT_IF(Py_TYPE(it) != &PyListIter_Type, FOR_ITER); + DEOPT_IF(Py_TYPE(iter) != &PyListIter_Type, FOR_ITER); + _PyListIterObject *it = (_PyListIterObject *)iter; STAT_INC(FOR_ITER, hit); PyListObject *seq = it->it_seq; if (seq) { if (it->it_index < PyList_GET_SIZE(seq)) { - PyObject *next = PyList_GET_ITEM(seq, it->it_index++); - PUSH(Py_NewRef(next)); - JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER); + next = Py_NewRef(PyList_GET_ITEM(seq, it->it_index++)); goto end_for_iter_list; // End of this instruction } it->it_seq = NULL; Py_DECREF(seq); } + Py_DECREF(iter); STACK_SHRINK(1); - Py_DECREF(it); + /* Jump forward oparg, then skip following END_FOR instruction */ JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg + 1); + DISPATCH(); end_for_iter_list: + // Common case: no jump, leave it to the code generator + STACK_GROW(1); + POKE(1, next); + JUMPBY(1); DISPATCH(); } TARGET(FOR_ITER_TUPLE) { + PyObject *iter = PEEK(1); + PyObject *next; assert(cframe.use_tracing == 0); - _PyTupleIterObject *it = (_PyTupleIterObject *)TOP(); + _PyTupleIterObject *it = (_PyTupleIterObject *)iter; DEOPT_IF(Py_TYPE(it) != &PyTupleIter_Type, FOR_ITER); STAT_INC(FOR_ITER, hit); PyTupleObject *seq = it->it_seq; if (seq) { if (it->it_index < PyTuple_GET_SIZE(seq)) { - PyObject *next = PyTuple_GET_ITEM(seq, it->it_index++); - PUSH(Py_NewRef(next)); - JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER); + next = Py_NewRef(PyTuple_GET_ITEM(seq, it->it_index++)); goto end_for_iter_tuple; // End of this instruction } it->it_seq = NULL; Py_DECREF(seq); } + Py_DECREF(iter); STACK_SHRINK(1); - Py_DECREF(it); + /* Jump forward oparg, then skip following END_FOR instruction */ JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg + 1); + DISPATCH(); end_for_iter_tuple: + // Common case: no jump, leave it to the code generator + STACK_GROW(1); + POKE(1, next); + JUMPBY(1); DISPATCH(); } TARGET(FOR_ITER_RANGE) { + PyObject *iter = PEEK(1); assert(cframe.use_tracing == 0); - _PyRangeIterObject *r = (_PyRangeIterObject *)TOP(); + _PyRangeIterObject *r = (_PyRangeIterObject *)iter; DEOPT_IF(Py_TYPE(r) != &PyRangeIter_Type, FOR_ITER); STAT_INC(FOR_ITER, hit); _Py_CODEUNIT next = next_instr[INLINE_CACHE_ENTRIES_FOR_ITER]; @@ -2713,6 +2729,7 @@ if (r->len <= 0) { STACK_SHRINK(1); Py_DECREF(r); + // Jump over END_FOR instruction. 
JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg + 1); } else { @@ -2729,8 +2746,9 @@ } TARGET(FOR_ITER_GEN) { + PyObject *iter = PEEK(1); assert(cframe.use_tracing == 0); - PyGenObject *gen = (PyGenObject *)TOP(); + PyGenObject *gen = (PyGenObject *)iter; DEOPT_IF(Py_TYPE(gen) != &PyGen_Type, FOR_ITER); DEOPT_IF(gen->gi_frame_state >= FRAME_EXECUTING, FOR_ITER); STAT_INC(FOR_ITER, hit); diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index 948d17519e27..ed26ff00c7b1 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -261,15 +261,15 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case GET_YIELD_FROM_ITER: return 1; case FOR_ITER: - return -1; + return 1; case FOR_ITER_LIST: - return -1; + return 1; case FOR_ITER_TUPLE: - return -1; + return 1; case FOR_ITER_RANGE: - return -1; + return 1; case FOR_ITER_GEN: - return -1; + return 1; case BEFORE_ASYNC_WITH: return 1; case BEFORE_WITH: @@ -607,15 +607,15 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case GET_YIELD_FROM_ITER: return 1; case FOR_ITER: - return -1; + return 2; case FOR_ITER_LIST: - return -1; + return 2; case FOR_ITER_TUPLE: - return -1; + return 2; case FOR_ITER_RANGE: - return -1; + return 2; case FOR_ITER_GEN: - return -1; + return 2; case BEFORE_ASYNC_WITH: return 2; case BEFORE_WITH: @@ -829,11 +829,11 @@ struct opcode_metadata { [MATCH_KEYS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [GET_ITER] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [GET_YIELD_FROM_ITER] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, - [FOR_ITER] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [FOR_ITER_LIST] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [FOR_ITER_TUPLE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [FOR_ITER_RANGE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [FOR_ITER_GEN] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, + [FOR_ITER] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, + [FOR_ITER_LIST] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, + [FOR_ITER_TUPLE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, + [FOR_ITER_RANGE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, + [FOR_ITER_GEN] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, [BEFORE_ASYNC_WITH] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [BEFORE_WITH] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [WITH_EXCEPT_START] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, From webhook-mailer at python.org Tue Feb 7 12:23:05 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 07 Feb 2023 17:23:05 -0000 Subject: [Python-checkins] Make use of TESTFN_ASCII in test_fileio (GH-101645) Message-ID: https://github.com/python/cpython/commit/6fd5eb640af19b535f4f2ba27b1b61b8d17f02e9 commit: 6fd5eb640af19b535f4f2ba27b1b61b8d17f02e9 branch: main author: Zachary Ware committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-07T09:22:58-08:00 summary: Make use of TESTFN_ASCII in test_fileio (GH-101645) testBytesOpen requires an ASCII filename, but TESTFN usually isn't ASCII. 
Automerge-Triggered-By: GH:zware files: M Lib/test/test_fileio.py diff --git a/Lib/test/test_fileio.py b/Lib/test/test_fileio.py index 2263604ed1f9..ebfcffd18291 100644 --- a/Lib/test/test_fileio.py +++ b/Lib/test/test_fileio.py @@ -12,7 +12,9 @@ from test.support import ( cpython_only, swap_attr, gc_collect, is_emscripten, is_wasi ) -from test.support.os_helper import (TESTFN, TESTFN_UNICODE, make_bad_fd) +from test.support.os_helper import ( + TESTFN, TESTFN_ASCII, TESTFN_UNICODE, make_bad_fd, + ) from test.support.warnings_helper import check_warnings from collections import UserList @@ -431,18 +433,15 @@ def testUnicodeOpen(self): def testBytesOpen(self): # Opening a bytes filename - try: - fn = TESTFN.encode("ascii") - except UnicodeEncodeError: - self.skipTest('could not encode %r to ascii' % TESTFN) + fn = TESTFN_ASCII.encode("ascii") f = self.FileIO(fn, "w") try: f.write(b"abc") f.close() - with open(TESTFN, "rb") as f: + with open(TESTFN_ASCII, "rb") as f: self.assertEqual(f.read(), b"abc") finally: - os.unlink(TESTFN) + os.unlink(TESTFN_ASCII) @unittest.skipIf(sys.getfilesystemencoding() != 'utf-8', "test only works for utf-8 filesystems") From webhook-mailer at python.org Tue Feb 7 13:04:40 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Tue, 07 Feb 2023 18:04:40 -0000 Subject: [Python-checkins] gh-97725: Fix documentation for the default file of `asyncio.Task.print_stack` (#101652) Message-ID: https://github.com/python/cpython/commit/f87f6e23964d7a4c38b655089cda65538a24ec36 commit: f87f6e23964d7a4c38b655089cda65538a24ec36 branch: main author: Oleg Iarygin committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-07T23:34:31+05:30 summary: gh-97725: Fix documentation for the default file of `asyncio.Task.print_stack` (#101652) files: A Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst M Doc/library/asyncio-task.rst diff --git a/Doc/library/asyncio-task.rst b/Doc/library/asyncio-task.rst index 39112580285c..9b9842432822 100644 --- a/Doc/library/asyncio-task.rst +++ b/Doc/library/asyncio-task.rst @@ -1097,7 +1097,7 @@ Task Object The *limit* argument is passed to :meth:`get_stack` directly. The *file* argument is an I/O stream to which the output - is written; by default output is written to :data:`sys.stderr`. + is written; by default output is written to :data:`sys.stdout`. .. method:: get_coro() diff --git a/Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst b/Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst new file mode 100644 index 000000000000..fd9ea049c239 --- /dev/null +++ b/Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst @@ -0,0 +1,2 @@ +Fix :meth:`asyncio.Task.print_stack` description for ``file=None``. +Patch by Oleg Iarygin. 
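Per the corrected description above, ``Task.print_stack()`` writes to ``sys.stdout`` when *file* is omitted. A small illustrative sketch of both call forms (the coroutine and task names here are made up for the example; the output destinations follow the corrected documentation):

    import asyncio
    import sys

    async def worker():
        await asyncio.sleep(10)

    async def main():
        task = asyncio.create_task(worker(), name="demo-task")
        await asyncio.sleep(0)                # let the task reach its first await
        task.print_stack()                    # file omitted: sys.stdout per the corrected docs
        task.print_stack(file=sys.stderr)     # an explicit stream still works
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass

    asyncio.run(main())
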
From webhook-mailer at python.org Tue Feb 7 15:37:52 2023 From: webhook-mailer at python.org (iritkatriel) Date: Tue, 07 Feb 2023 20:37:52 -0000 Subject: [Python-checkins] gh-98831: rewrite UNPACK_EX, UNPACK_SEQUENCE, UNPACK_SEQUENCE_TWO_TUPLE in the instruction definition DSL (#101641) Message-ID: https://github.com/python/cpython/commit/dec1ab03879e959f7efb910a723caf4a9ce453cf commit: dec1ab03879e959f7efb910a723caf4a9ce453cf branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-07T20:37:43Z summary: gh-98831: rewrite UNPACK_EX, UNPACK_SEQUENCE, UNPACK_SEQUENCE_TWO_TUPLE in the instruction definition DSL (#101641) files: M Python/bytecodes.c M Python/generated_cases.c.h M Python/opcode_metadata.h diff --git a/Python/bytecodes.c b/Python/bytecodes.c index ec0439af98e6..b43625fd283c 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -859,13 +859,11 @@ dummy_func( } } - // stack effect: (__0 -- __array[oparg]) - inst(UNPACK_SEQUENCE) { + inst(UNPACK_SEQUENCE, (unused/1, seq -- unused[oparg])) { #if ENABLE_SPECIALIZATION _PyUnpackSequenceCache *cache = (_PyUnpackSequenceCache *)next_instr; if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { assert(cframe.use_tracing == 0); - PyObject *seq = TOP(); next_instr--; _Py_Specialize_UnpackSequence(seq, next_instr, oparg); DISPATCH_SAME_OPARG(); @@ -873,27 +871,19 @@ dummy_func( STAT_INC(UNPACK_SEQUENCE, deferred); DECREMENT_ADAPTIVE_COUNTER(cache->counter); #endif /* ENABLE_SPECIALIZATION */ - PyObject *seq = POP(); - PyObject **top = stack_pointer + oparg; - if (!unpack_iterable(tstate, seq, oparg, -1, top)) { - Py_DECREF(seq); - goto error; - } - STACK_GROW(oparg); + PyObject **top = stack_pointer + oparg - 1; + int res = unpack_iterable(tstate, seq, oparg, -1, top); Py_DECREF(seq); - JUMPBY(INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE); + ERROR_IF(res == 0, error); } - // stack effect: (__0 -- __array[oparg]) - inst(UNPACK_SEQUENCE_TWO_TUPLE) { - PyObject *seq = TOP(); + inst(UNPACK_SEQUENCE_TWO_TUPLE, (unused/1, seq -- v1, v0)) { DEOPT_IF(!PyTuple_CheckExact(seq), UNPACK_SEQUENCE); DEOPT_IF(PyTuple_GET_SIZE(seq) != 2, UNPACK_SEQUENCE); STAT_INC(UNPACK_SEQUENCE, hit); - SET_TOP(Py_NewRef(PyTuple_GET_ITEM(seq, 1))); - PUSH(Py_NewRef(PyTuple_GET_ITEM(seq, 0))); + v1 = Py_NewRef(PyTuple_GET_ITEM(seq, 1)); + v0 = Py_NewRef(PyTuple_GET_ITEM(seq, 0)); Py_DECREF(seq); - JUMPBY(INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE); } // stack effect: (__0 -- __array[oparg]) @@ -926,17 +916,12 @@ dummy_func( JUMPBY(INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE); } - // error: UNPACK_EX has irregular stack effect - inst(UNPACK_EX) { + inst(UNPACK_EX, (seq -- unused[oparg & 0xFF], unused, unused[oparg >> 8])) { int totalargs = 1 + (oparg & 0xFF) + (oparg >> 8); - PyObject *seq = POP(); - PyObject **top = stack_pointer + totalargs; - if (!unpack_iterable(tstate, seq, oparg & 0xFF, oparg >> 8, top)) { - Py_DECREF(seq); - goto error; - } - STACK_GROW(totalargs); + PyObject **top = stack_pointer + totalargs - 1; + int res = unpack_iterable(tstate, seq, oparg & 0xFF, oparg >> 8, top); Py_DECREF(seq); + ERROR_IF(res == 0, error); } family(store_attr, INLINE_CACHE_ENTRIES_STORE_ATTR) = { diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 4e511f43d950..ab19f4317106 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -1108,11 +1108,11 @@ TARGET(UNPACK_SEQUENCE) { PREDICTED(UNPACK_SEQUENCE); + PyObject *seq = PEEK(1); #if 
ENABLE_SPECIALIZATION _PyUnpackSequenceCache *cache = (_PyUnpackSequenceCache *)next_instr; if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { assert(cframe.use_tracing == 0); - PyObject *seq = TOP(); next_instr--; _Py_Specialize_UnpackSequence(seq, next_instr, oparg); DISPATCH_SAME_OPARG(); @@ -1120,27 +1120,30 @@ STAT_INC(UNPACK_SEQUENCE, deferred); DECREMENT_ADAPTIVE_COUNTER(cache->counter); #endif /* ENABLE_SPECIALIZATION */ - PyObject *seq = POP(); - PyObject **top = stack_pointer + oparg; - if (!unpack_iterable(tstate, seq, oparg, -1, top)) { - Py_DECREF(seq); - goto error; - } - STACK_GROW(oparg); + PyObject **top = stack_pointer + oparg - 1; + int res = unpack_iterable(tstate, seq, oparg, -1, top); Py_DECREF(seq); - JUMPBY(INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE); + if (res == 0) goto pop_1_error; + STACK_SHRINK(1); + STACK_GROW(oparg); + JUMPBY(1); DISPATCH(); } TARGET(UNPACK_SEQUENCE_TWO_TUPLE) { - PyObject *seq = TOP(); + PyObject *seq = PEEK(1); + PyObject *v1; + PyObject *v0; DEOPT_IF(!PyTuple_CheckExact(seq), UNPACK_SEQUENCE); DEOPT_IF(PyTuple_GET_SIZE(seq) != 2, UNPACK_SEQUENCE); STAT_INC(UNPACK_SEQUENCE, hit); - SET_TOP(Py_NewRef(PyTuple_GET_ITEM(seq, 1))); - PUSH(Py_NewRef(PyTuple_GET_ITEM(seq, 0))); + v1 = Py_NewRef(PyTuple_GET_ITEM(seq, 1)); + v0 = Py_NewRef(PyTuple_GET_ITEM(seq, 0)); Py_DECREF(seq); - JUMPBY(INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE); + STACK_GROW(1); + POKE(1, v0); + POKE(2, v1); + JUMPBY(1); DISPATCH(); } @@ -1175,15 +1178,13 @@ } TARGET(UNPACK_EX) { + PyObject *seq = PEEK(1); int totalargs = 1 + (oparg & 0xFF) + (oparg >> 8); - PyObject *seq = POP(); - PyObject **top = stack_pointer + totalargs; - if (!unpack_iterable(tstate, seq, oparg & 0xFF, oparg >> 8, top)) { - Py_DECREF(seq); - goto error; - } - STACK_GROW(totalargs); + PyObject **top = stack_pointer + totalargs - 1; + int res = unpack_iterable(tstate, seq, oparg & 0xFF, oparg >> 8, top); Py_DECREF(seq); + if (res == 0) goto pop_1_error; + STACK_GROW((oparg & 0xFF) + (oparg >> 8)); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index ed26ff00c7b1..d2585351f69f 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -121,15 +121,15 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case DELETE_NAME: return 0; case UNPACK_SEQUENCE: - return -1; + return 1; case UNPACK_SEQUENCE_TWO_TUPLE: - return -1; + return 1; case UNPACK_SEQUENCE_TUPLE: return -1; case UNPACK_SEQUENCE_LIST: return -1; case UNPACK_EX: - return -1; + return 1; case STORE_ATTR: return 2; case DELETE_ATTR: @@ -467,15 +467,15 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case DELETE_NAME: return 0; case UNPACK_SEQUENCE: - return -1; + return oparg; case UNPACK_SEQUENCE_TWO_TUPLE: - return -1; + return 2; case UNPACK_SEQUENCE_TUPLE: return -1; case UNPACK_SEQUENCE_LIST: return -1; case UNPACK_EX: - return -1; + return (oparg & 0xFF) + (oparg >> 8) + 1; case STORE_ATTR: return 0; case DELETE_ATTR: @@ -759,8 +759,8 @@ struct opcode_metadata { [LOAD_BUILD_CLASS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [STORE_NAME] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [DELETE_NAME] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [UNPACK_SEQUENCE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [UNPACK_SEQUENCE_TWO_TUPLE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, + [UNPACK_SEQUENCE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, + [UNPACK_SEQUENCE_TWO_TUPLE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IXC }, 
[UNPACK_SEQUENCE_TUPLE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [UNPACK_SEQUENCE_LIST] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [UNPACK_EX] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, From webhook-mailer at python.org Tue Feb 7 16:44:06 2023 From: webhook-mailer at python.org (ambv) Date: Tue, 07 Feb 2023 21:44:06 -0000 Subject: [Python-checkins] gh-101656: Fix "conversion from Py_ssize_t to int" warning in `_testcapimodule` (#101657) Message-ID: https://github.com/python/cpython/commit/acc2f3b19d28d4bf3f8fb32357f581cba5ba24c7 commit: acc2f3b19d28d4bf3f8fb32357f581cba5ba24c7 branch: main author: Nikita Sobolev committer: ambv date: 2023-02-07T22:43:33+01:00 summary: gh-101656: Fix "conversion from Py_ssize_t to int" warning in `_testcapimodule` (#101657) files: M Modules/_testcapimodule.c diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index 5a6097ef0ac5..8b2ce1a2cfd4 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -3185,11 +3185,11 @@ eval_eval_code_ex(PyObject *mod, PyObject *pos_args) globals, locals, c_args, - c_args_len, + (int)c_args_len, c_kwargs, - c_kwargs_len, + (int)c_kwargs_len, c_defaults, - c_defaults_len, + (int)c_defaults_len, kw_defaults, closure ); From webhook-mailer at python.org Tue Feb 7 17:16:14 2023 From: webhook-mailer at python.org (Yhg1s) Date: Tue, 07 Feb 2023 22:16:14 -0000 Subject: [Python-checkins] Python 3.12.0a5 Message-ID: https://github.com/python/cpython/commit/3c67ec394faac79d260804d569a18fab43018af0 commit: 3c67ec394faac79d260804d569a18fab43018af0 branch: main author: Thomas Wouters committer: Yhg1s date: 2023-02-07T13:21:15+01:00 summary: Python 3.12.0a5 files: A Misc/NEWS.d/3.12.0a5.rst D Misc/NEWS.d/next/Build/2022-08-30-10-16-31.gh-issue-96305.274i8B.rst D Misc/NEWS.d/next/Build/2022-10-25-11-53-55.gh-issue-98636.e0RPAr.rst D Misc/NEWS.d/next/Build/2022-10-27-09-57-12.gh-issue-98705.H11XmR.rst D Misc/NEWS.d/next/Build/2023-01-15-11-22-15.gh-issue-101060.0mYk9E.rst D Misc/NEWS.d/next/Build/2023-01-17-21-32-51.gh-issue-100340.i9zRGM.rst D Misc/NEWS.d/next/Build/2023-01-21-10-31-35.gh-issue-101152.xvM8pL.rst D Misc/NEWS.d/next/Build/2023-01-26-19-02-11.gh-issue-77532.cXD8bg.rst D Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst D Misc/NEWS.d/next/Build/2023-02-04-06-59-07.gh-issue-101282.7sQz5l.rst D Misc/NEWS.d/next/Core and Builtins/2018-02-05-21-54-46.bpo-32780.Dtiz8z.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-03-14-33-23.gh-issue-100712.po6xyB.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-03-20-59-20.gh-issue-100726.W9huFl.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-06-09-22-21.gh-issue-91351.iq2vZ_.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-10-16-59-33.gh-issue-100923.ypJAX-.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-12-13-46-49.gh-issue-100982.mJ234s.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-13-12-56-20.gh-issue-100762.YvHaQJ.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-14-17-03-08.gh-issue-101037.9ATNuf.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-24-17-13-32.gh-issue-101291.Yr6u_c.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst 
D Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-30-11-56-09.gh-issue-59956.7xqnC_.rst D Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst D Misc/NEWS.d/next/Documentation/2022-06-19-22-04-47.gh-issue-88324.GHhSQ1.rst D Misc/NEWS.d/next/Library/2018-05-21-17-18-00.gh-issue-77772.Fhg84L.rst D Misc/NEWS.d/next/Library/2020-04-18-17-45-03.bpo-29847.Uxtbq0.rst D Misc/NEWS.d/next/Library/2020-11-20-21-06-08.bpo-40077.M-iZq3.rst D Misc/NEWS.d/next/Library/2022-02-05-12-01-58.bpo-38941.8IhvyG.rst D Misc/NEWS.d/next/Library/2022-07-22-13-38-37.gh-issue-94518._ZP0cz.rst D Misc/NEWS.d/next/Library/2022-09-26-21-18-47.gh-issue-60580.0hBgde.rst D Misc/NEWS.d/next/Library/2022-11-14-03-06-03.gh-issue-88597.EYJA-Q.rst D Misc/NEWS.d/next/Library/2022-11-15-23-30-39.gh-issue-86682.gK9i1N.rst D Misc/NEWS.d/next/Library/2022-11-24-21-52-31.gh-issue-99266.88GcV9.rst D Misc/NEWS.d/next/Library/2022-12-10-15-30-17.gh-issue-67790.P9YUZM.rst D Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst D Misc/NEWS.d/next/Library/2022-12-19-23-19-26.gh-issue-96290.qFjsi6.rst D Misc/NEWS.d/next/Library/2022-12-21-17-49-50.gh-issue-100160.N0NHRj.rst D Misc/NEWS.d/next/Library/2023-01-04-14-42-59.gh-issue-100750.iFJs5Y.rst D Misc/NEWS.d/next/Library/2023-01-08-00-12-44.gh-issue-39615.gn4PhB.rst D Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst D Misc/NEWS.d/next/Library/2023-01-12-21-22-20.gh-issue-101000.wz4Xgc.rst D Misc/NEWS.d/next/Library/2023-01-14-12-58-21.gh-issue-101015.stWFid.rst D Misc/NEWS.d/next/Library/2023-01-15-09-11-30.gh-issue-94518.jvxtxm.rst D Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst D Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst D Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst D Misc/NEWS.d/next/Library/2023-01-24-12-53-59.gh-issue-92123.jf6TO5.rst D Misc/NEWS.d/next/Library/2023-01-25-18-07-20.gh-issue-101326.KL4SFv.rst D Misc/NEWS.d/next/Library/2023-01-26-01-25-56.gh-issue-101317.vWaS1x.rst D Misc/NEWS.d/next/Library/2023-01-26-06-44-35.gh-issue-101323.h8Hk11.rst D Misc/NEWS.d/next/Library/2023-02-04-21-01-49.gh-issue-101570.lbtUsD.rst D Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst D Misc/NEWS.d/next/Security/2022-11-08-12-06-52.gh-issue-99108.4Wrsuh.rst D Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst D Misc/NEWS.d/next/Windows/2023-01-11-14-42-11.gh-issue-100247.YfEmSz.rst D Misc/NEWS.d/next/Windows/2023-01-11-16-28-09.gh-issue-100320.2DU2it.rst D Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst D Misc/NEWS.d/next/Windows/2023-01-18-18-25-18.gh-issue-101135.HF9VlG.rst D Misc/NEWS.d/next/Windows/2023-01-25-00-23-31.gh-issue-99834.WN41lc.rst D Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst D Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst M Include/patchlevel.h M Lib/pydoc_data/topics.py M README.rst diff --git a/Include/patchlevel.h b/Include/patchlevel.h index 3a3e40c2e092..df6098be8bbc 100644 --- a/Include/patchlevel.h +++ b/Include/patchlevel.h @@ -20,10 +20,10 @@ #define PY_MINOR_VERSION 12 #define PY_MICRO_VERSION 0 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_ALPHA -#define PY_RELEASE_SERIAL 4 +#define PY_RELEASE_SERIAL 5 /* Version as a string */ -#define PY_VERSION "3.12.0a4+" +#define PY_VERSION "3.12.0a5" /*--end constants--*/ /* Version as a 
single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2. diff --git a/Lib/pydoc_data/topics.py b/Lib/pydoc_data/topics.py index 11b75037e78b..e7f403d3ffbf 100644 --- a/Lib/pydoc_data/topics.py +++ b/Lib/pydoc_data/topics.py @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -# Autogenerated by Sphinx on Tue Jan 10 13:08:32 2023 +# Autogenerated by Sphinx on Tue Feb 7 13:18:04 2023 topics = {'assert': 'The "assert" statement\n' '**********************\n' '\n' @@ -4647,6 +4647,18 @@ 'the source. The extension interface uses the modules "bdb" and ' '"cmd".\n' '\n' + 'See also:\n' + '\n' + ' Module "faulthandler"\n' + ' Used to dump Python tracebacks explicitly, on a fault, ' + 'after a\n' + ' timeout, or on a user signal.\n' + '\n' + ' Module "traceback"\n' + ' Standard interface to extract, format and print stack ' + 'traces of\n' + ' Python programs.\n' + '\n' 'The debugger?s prompt is "(Pdb)". Typical usage to run a program ' 'under\n' 'control of the debugger is:\n' diff --git a/Misc/NEWS.d/3.12.0a5.rst b/Misc/NEWS.d/3.12.0a5.rst new file mode 100644 index 000000000000..f6f8de46cf70 --- /dev/null +++ b/Misc/NEWS.d/3.12.0a5.rst @@ -0,0 +1,664 @@ +.. date: 2022-11-08-12-06-52 +.. gh-issue: 99108 +.. nonce: 4Wrsuh +.. release date: 2023-02-07 +.. section: Security + +Replace the builtin :mod:`hashlib` implementations of SHA2-224 and SHA2-256 +originally from LibTomCrypt with formally verified, side-channel resistant +code from the `HACL* `_ project. +The builtins remain a fallback only used when OpenSSL does not provide them. + +.. + +.. date: 2023-02-06-20-13-36 +.. gh-issue: 92173 +.. nonce: RQE0mk +.. section: Core and Builtins + +Fix the ``defs`` and ``kwdefs`` arguments to :c:func:`PyEval_EvalCodeEx` and +a reference leak in that function. + +.. + +.. date: 2023-01-30-11-56-09 +.. gh-issue: 59956 +.. nonce: 7xqnC_ +.. section: Core and Builtins + +The GILState API is now partially compatible with subinterpreters. +Previously, ``PyThreadState_GET()`` and ``PyGILState_GetThisThreadState()`` +would get out of sync, causing inconsistent behavior and crashes. + +.. + +.. date: 2023-01-30-08-59-47 +.. gh-issue: 101400 +.. nonce: Di_ZFm +.. section: Core and Builtins + +Fix wrong lineno in exception message on :keyword:`continue` or +:keyword:`break` which are not in a loop. Patch by Dong-hee Na. + +.. + +.. date: 2023-01-28-20-31-42 +.. gh-issue: 101372 +.. nonce: 8BcpCC +.. section: Core and Builtins + +Fix :func:`~unicodedata.is_normalized` to properly handle the UCD 3.2.0 +cases. Patch by Dong-hee Na. + +.. + +.. date: 2023-01-28-13-11-52 +.. gh-issue: 101266 +.. nonce: AxV3OF +.. section: Core and Builtins + +Fix :func:`sys.getsizeof` reporting for :class:`int` subclasses. + +.. + +.. date: 2023-01-24-17-13-32 +.. gh-issue: 101291 +.. nonce: Yr6u_c +.. section: Core and Builtins + +Refactor the ``PyLongObject`` struct into a normal Python object header and +a ``PyLongValue`` struct. + +.. + +.. date: 2023-01-15-03-26-04 +.. gh-issue: 101046 +.. nonce: g2CM4S +.. section: Core and Builtins + +Fix a possible memory leak in the parser when raising :exc:`MemoryError`. +Patch by Pablo Galindo + +.. + +.. date: 2023-01-14-17-03-08 +.. gh-issue: 101037 +.. nonce: 9ATNuf +.. section: Core and Builtins + +Fix potential memory underallocation issue for instances of :class:`int` +subclasses with value zero. + +.. + +.. date: 2023-01-13-12-56-20 +.. gh-issue: 100762 +.. nonce: YvHaQJ +.. section: Core and Builtins + +Record the (virtual) exception block depth in the oparg of +:opcode:`YIELD_VALUE`. 
Use this to avoid the expensive ``throw()`` when +closing generators (and coroutines) that can be closed trivially. + +.. + +.. date: 2023-01-12-13-46-49 +.. gh-issue: 100982 +.. nonce: mJ234s +.. section: Core and Builtins + +Adds a new :opcode:`COMPARE_AND_BRANCH` instruction. This is a bit more +efficient when performing a comparison immediately followed by a branch, and +restores the design intent of PEP 659 that specializations are local to a +single instruction. + +.. + +.. date: 2023-01-11-22-52-19 +.. gh-issue: 100942 +.. nonce: ontOy_ +.. section: Core and Builtins + +Fixed segfault in property.getter/setter/deleter that occurred when a +property subclass overrode the ``__new__`` method to return a non-property +instance. + +.. + +.. date: 2023-01-10-16-59-33 +.. gh-issue: 100923 +.. nonce: ypJAX- +.. section: Core and Builtins + +Remove the ``mask`` cache entry for the :opcode:`COMPARE_OP` instruction and +embed the mask into the oparg. + +.. + +.. date: 2023-01-10-14-11-17 +.. gh-issue: 100892 +.. nonce: qfBVYI +.. section: Core and Builtins + +Fix race while iterating over thread states in clearing +:class:`threading.local`. Patch by Kumar Aditya. + +.. + +.. date: 2023-01-06-09-22-21 +.. gh-issue: 91351 +.. nonce: iq2vZ_ +.. section: Core and Builtins + +Fix a case where re-entrant imports could corrupt the import deadlock +detection code and cause a :exc:`KeyError` to be raised out of +:mod:`importlib/_bootstrap`. In addition to the straightforward cases, this +could also happen when garbage collection leads to a warning being emitted +-- as happens when it collects an open socket or file) + +.. + +.. date: 2023-01-03-20-59-20 +.. gh-issue: 100726 +.. nonce: W9huFl +.. section: Core and Builtins + +Optimize construction of ``range`` object for medium size integers. + +.. + +.. date: 2023-01-03-14-33-23 +.. gh-issue: 100712 +.. nonce: po6xyB +.. section: Core and Builtins + +Added option to build cpython with specialization disabled, by setting +``ENABLE_SPECIALIZATION=False`` in :mod:`opcode`, followed by ``make +regen-all``. + +.. + +.. bpo: 32780 +.. date: 2018-02-05-21-54-46 +.. nonce: Dtiz8z +.. section: Core and Builtins + +Inter-field padding is now inserted into the PEP3118 format strings obtained +from :class:`ctypes.Structure` objects, reflecting their true representation +in memory. + +.. + +.. date: 2023-02-05-14-39-49 +.. gh-issue: 101541 +.. nonce: Mo3ppp +.. section: Library + +[Enum] - fix psuedo-flag creation + +.. + +.. date: 2023-02-04-21-01-49 +.. gh-issue: 101570 +.. nonce: lbtUsD +.. section: Library + +Upgrade pip wheel bundled with ensurepip (pip 23.0) + +.. + +.. date: 2023-01-26-06-44-35 +.. gh-issue: 101323 +.. nonce: h8Hk11 +.. section: Library + +Fix a bug where errors where not thrown by zlib._ZlibDecompressor if +encountered during decompressing. + +.. + +.. date: 2023-01-26-01-25-56 +.. gh-issue: 101317 +.. nonce: vWaS1x +.. section: Library + +Add *ssl_shutdown_timeout* parameter for +:meth:`asyncio.StreamWriter.start_tls`. + +.. + +.. date: 2023-01-25-18-07-20 +.. gh-issue: 101326 +.. nonce: KL4SFv +.. section: Library + +Fix regression when passing ``None`` as second or third argument to +``FutureIter.throw``. + +.. + +.. date: 2023-01-24-12-53-59 +.. gh-issue: 92123 +.. nonce: jf6TO5 +.. section: Library + +Adapt the ``_elementtree`` extension module to multi-phase init +(:pep:`489`). Patches by Erlend E. Aasland. + +.. + +.. date: 2023-01-21-16-50-22 +.. gh-issue: 100795 +.. nonce: NPMZf7 +.. 
section: Library + +Avoid potential unexpected ``freeaddrinfo`` call (double free) in +:mod:`socket` when when a libc ``getaddrinfo()`` implementation leaves +garbage in an output pointer when returning an error. Original patch by +Sergey G. Brester. + +.. + +.. date: 2023-01-20-10-46-59 +.. gh-issue: 101143 +.. nonce: hJo8hu +.. section: Library + +Remove unused references to :class:`~asyncio.TimerHandle` in +``asyncio.base_events.BaseEventLoop._add_callback``. + +.. + +.. date: 2023-01-18-17-58-50 +.. gh-issue: 101144 +.. nonce: FHd8Un +.. section: Library + +Make :func:`zipfile.Path.open` and :func:`zipfile.Path.read_text` also +accept ``encoding`` as a positional argument. This was the behavior in +Python 3.9 and earlier. 3.10 introduced a regression where supplying it as +a positional argument would lead to a :exc:`TypeError`. + +.. + +.. date: 2023-01-15-09-11-30 +.. gh-issue: 94518 +.. nonce: jvxtxm +.. section: Library + +Group-related variables of ``_posixsubprocess`` module are renamed to stress +that supplimentary group affinity is added to a fork, not replace the +inherited ones. Patch by Oleg Iarygin. + +.. + +.. date: 2023-01-14-12-58-21 +.. gh-issue: 101015 +.. nonce: stWFid +.. section: Library + +Fix :func:`typing.get_type_hints` on ``'*tuple[...]'`` and ``*tuple[...]``. +It must not drop the ``Unpack`` part. + +.. + +.. date: 2023-01-12-21-22-20 +.. gh-issue: 101000 +.. nonce: wz4Xgc +.. section: Library + +Add :func:`os.path.splitroot()`, which splits a path into a 3-item tuple +``(drive, root, tail)``. This new function is used by :mod:`pathlib` to +improve the performance of path construction by up to a third. + +.. + +.. date: 2023-01-12-01-18-13 +.. gh-issue: 100573 +.. nonce: KDskqo +.. section: Library + +Fix a Windows :mod:`asyncio` bug with named pipes where a client doing +``os.stat()`` on the pipe would cause an error in the server that disabled +serving future requests. + +.. + +.. date: 2023-01-08-00-12-44 +.. gh-issue: 39615 +.. nonce: gn4PhB +.. section: Library + +:func:`warnings.warn` now has the ability to skip stack frames based on code +filename prefix rather than only a numeric ``stacklevel`` via the new +``skip_file_prefixes`` keyword argument. + +.. + +.. date: 2023-01-04-14-42-59 +.. gh-issue: 100750 +.. nonce: iFJs5Y +.. section: Library + +pass encoding kwarg to subprocess in platform + +.. + +.. date: 2022-12-21-17-49-50 +.. gh-issue: 100160 +.. nonce: N0NHRj +.. section: Library + +Emit a deprecation warning in +:meth:`asyncio.DefaultEventLoopPolicy.get_event_loop` if there is no current +event loop set and it decides to create one. + +.. + +.. date: 2022-12-19-23-19-26 +.. gh-issue: 96290 +.. nonce: qFjsi6 +.. section: Library + +Fix handling of partial and invalid UNC drives in ``ntpath.splitdrive()``, +and in ``ntpath.normpath()`` on non-Windows systems. Paths such as +'\\server' and '\\' are now considered by ``splitdrive()`` to contain only a +drive, and consequently are not modified by ``normpath()`` on non-Windows +systems. The behaviour of ``normpath()`` on Windows systems is unaffected, +as native OS APIs are used. Patch by Eryk Sun, with contributions by Barney +Gale. + +.. + +.. date: 2022-12-11-14-38-59 +.. gh-issue: 99952 +.. nonce: IYGLzr +.. section: Library + +Fix a reference undercounting issue in :class:`ctypes.Structure` with +``from_param()`` results larger than a C pointer. + +.. + +.. date: 2022-12-10-15-30-17 +.. gh-issue: 67790 +.. nonce: P9YUZM +.. 
section: Library + +Add float-style formatting support for :class:`fractions.Fraction` +instances. + +.. + +.. date: 2022-11-24-21-52-31 +.. gh-issue: 99266 +.. nonce: 88GcV9 +.. section: Library + +Preserve more detailed error messages in :mod:`ctypes`. + +.. + +.. date: 2022-11-15-23-30-39 +.. gh-issue: 86682 +.. nonce: gK9i1N +.. section: Library + +Ensure runtime-created collections have the correct module name using the +newly added (internal) :func:`sys._getframemodulename`. + +.. + +.. date: 2022-11-14-03-06-03 +.. gh-issue: 88597 +.. nonce: EYJA-Q +.. section: Library + +:mod:`uuid` now has a command line interface. Try ``python -m uuid -h``. + +.. + +.. date: 2022-09-26-21-18-47 +.. gh-issue: 60580 +.. nonce: 0hBgde +.. section: Library + +:data:`ctypes.wintypes.BYTE` definition changed from :data:`~ctypes.c_byte` +to :data:`~ctypes.c_ubyte` to match Windows SDK. Patch by Anatoly Techtonik +and Oleg Iarygin. + +.. + +.. date: 2022-07-22-13-38-37 +.. gh-issue: 94518 +.. nonce: _ZP0cz +.. section: Library + +``_posixsubprocess`` now initializes all UID and GID variables using a +reserved ``-1`` value instead of a separate flag. Patch by Oleg Iarygin. + +.. + +.. bpo: 38941 +.. date: 2022-02-05-12-01-58 +.. nonce: 8IhvyG +.. section: Library + +The :mod:`xml.etree.ElementTree` module now emits :exc:`DeprecationWarning` +when testing the truth value of an :class:`xml.etree.ElementTree.Element`. +Before, the Python implementation emitted :exc:`FutureWarning`, and the C +implementation emitted nothing. + +.. + +.. bpo: 40077 +.. date: 2020-11-20-21-06-08 +.. nonce: M-iZq3 +.. section: Library + +Convert :mod:`elementtree` types to heap types. Patch by Erlend E. Aasland. + +.. + +.. bpo: 29847 +.. date: 2020-04-18-17-45-03 +.. nonce: Uxtbq0 +.. section: Library + +Fix a bug where :class:`pathlib.Path` accepted and ignored keyword +arguments. Patch provided by Yurii Karabas. + +.. + +.. date: 2018-05-21-17-18-00 +.. gh-issue: 77772 +.. nonce: Fhg84L +.. section: Library + +:class:`ctypes.CDLL`, :class:`ctypes.OleDLL`, :class:`ctypes.WinDLL`, and +:class:`ctypes.PyDLL` now accept :term:`path-like objects ` as their ``name`` argument. Patch by Robert Hoelzl. + +.. + +.. date: 2022-06-19-22-04-47 +.. gh-issue: 88324 +.. nonce: GHhSQ1 +.. section: Documentation + +Reword :mod:`subprocess` to emphasize default behavior of *stdin*, *stdout*, +and *stderr* arguments. Remove inaccurate statement about child file handle +inheritance. + +.. + +.. date: 2023-02-04-17-24-33 +.. gh-issue: 101334 +.. nonce: _yOqwg +.. section: Tests + +``test_tarfile`` has been updated to pass when run as a high UID. + +.. + +.. date: 2023-02-04-06-59-07 +.. gh-issue: 101282 +.. nonce: 7sQz5l +.. section: Build + +Update BOLT configration not to use depreacted usage of ``--split +functions``. Patch by Dong-hee Na. + +.. + +.. date: 2023-02-02-23-43-46 +.. gh-issue: 101522 +.. nonce: lnUDta +.. section: Build + +Allow overriding Windows dependencies versions and paths using MSBuild +properties. + +.. + +.. date: 2023-01-26-19-02-11 +.. gh-issue: 77532 +.. nonce: cXD8bg +.. section: Build + +Minor fixes to allow building with ``PlatformToolset=ClangCL`` on Windows. + +.. + +.. date: 2023-01-21-10-31-35 +.. gh-issue: 101152 +.. nonce: xvM8pL +.. section: Build + +In accordance with :PEP:`699`, the ``ma_version_tag`` field in +:c:type:`PyDictObject` is deprecated for extension modules. Accessing this +field will generate a compiler warning at compile time. This field will be +removed in Python 3.14. + +.. + +.. 
date: 2023-01-17-21-32-51 +.. gh-issue: 100340 +.. nonce: i9zRGM +.. section: Build + +Allows -Wno-int-conversion for wasm-sdk 17 and onwards, thus enables +building WASI builds once against the latest sdk. + +.. + +.. date: 2023-01-15-11-22-15 +.. gh-issue: 101060 +.. nonce: 0mYk9E +.. section: Build + +Conditionally add ``-fno-reorder-blocks-and-partition`` in configure. +Effectively fixes ``--enable-bolt`` when using Clang, as this appears to be +a GCC-only flag. + +.. + +.. date: 2022-10-27-09-57-12 +.. gh-issue: 98705 +.. nonce: H11XmR +.. section: Build + +``__bool__`` is defined in AIX system header files which breaks the build in +AIX, so undefine it. + +.. + +.. date: 2022-10-25-11-53-55 +.. gh-issue: 98636 +.. nonce: e0RPAr +.. section: Build + +Fix a regression in detecting ``gdbm_compat`` library for the ``_gdbm`` +module build. + +.. + +.. date: 2022-08-30-10-16-31 +.. gh-issue: 96305 +.. nonce: 274i8B +.. section: Build + +``_aix_support`` now uses a simple code to get platform details rather than +the now non-existent ``_bootsubprocess`` during bootstrap. + +.. + +.. date: 2023-02-03-17-53-06 +.. gh-issue: 101543 +.. nonce: cORAT4 +.. section: Windows + +Ensure the install path in the registry is only used when the standard +library hasn't been located in any other way. + +.. + +.. date: 2023-01-31-16-50-07 +.. gh-issue: 101467 +.. nonce: ye9t-L +.. section: Windows + +The ``py.exe`` launcher now correctly filters when only a single runtime is +installed. It also correctly handles prefix matches on tags so that ``-3.1`` +does not match ``3.11``, but would still match ``3.1-32``. + +.. + +.. date: 2023-01-25-00-23-31 +.. gh-issue: 99834 +.. nonce: WN41lc +.. section: Windows + +Updates bundled copy of Tcl/Tk to 8.6.13.0 + +.. + +.. date: 2023-01-18-18-25-18 +.. gh-issue: 101135 +.. nonce: HF9VlG +.. section: Windows + +Restore ability to launch older 32-bit versions from the :file:`py.exe` +launcher when both 32-bit and 64-bit installs of the same version are +available. + +.. + +.. date: 2023-01-17-18-17-58 +.. gh-issue: 82052 +.. nonce: mWyysT +.. section: Windows + +Fixed an issue where writing more than 32K of Unicode output to the console +screen in one go can result in mojibake. + +.. + +.. date: 2023-01-11-16-28-09 +.. gh-issue: 100320 +.. nonce: 2DU2it +.. section: Windows + +Ensures the ``PythonPath`` registry key from an install is used when +launching from a different copy of Python that relies on an existing install +to provide a copy of its modules and standard library. + +.. + +.. date: 2023-01-11-14-42-11 +.. gh-issue: 100247 +.. nonce: YfEmSz +.. section: Windows + +Restores support for the :file:`py.exe` launcher finding shebang commands in +its configuration file using the full command name. diff --git a/Misc/NEWS.d/next/Build/2022-08-30-10-16-31.gh-issue-96305.274i8B.rst b/Misc/NEWS.d/next/Build/2022-08-30-10-16-31.gh-issue-96305.274i8B.rst deleted file mode 100644 index 64a48da658f2..000000000000 --- a/Misc/NEWS.d/next/Build/2022-08-30-10-16-31.gh-issue-96305.274i8B.rst +++ /dev/null @@ -1,2 +0,0 @@ -``_aix_support`` now uses a simple code to get platform details rather than -the now non-existent ``_bootsubprocess`` during bootstrap. 
diff --git a/Misc/NEWS.d/next/Build/2022-10-25-11-53-55.gh-issue-98636.e0RPAr.rst b/Misc/NEWS.d/next/Build/2022-10-25-11-53-55.gh-issue-98636.e0RPAr.rst deleted file mode 100644 index 26a7cc8acaf2..000000000000 --- a/Misc/NEWS.d/next/Build/2022-10-25-11-53-55.gh-issue-98636.e0RPAr.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix a regression in detecting ``gdbm_compat`` library for the ``_gdbm`` -module build. diff --git a/Misc/NEWS.d/next/Build/2022-10-27-09-57-12.gh-issue-98705.H11XmR.rst b/Misc/NEWS.d/next/Build/2022-10-27-09-57-12.gh-issue-98705.H11XmR.rst deleted file mode 100644 index 4519853cdbad..000000000000 --- a/Misc/NEWS.d/next/Build/2022-10-27-09-57-12.gh-issue-98705.H11XmR.rst +++ /dev/null @@ -1,2 +0,0 @@ -``__bool__`` is defined in AIX system header files which breaks the build in -AIX, so undefine it. diff --git a/Misc/NEWS.d/next/Build/2023-01-15-11-22-15.gh-issue-101060.0mYk9E.rst b/Misc/NEWS.d/next/Build/2023-01-15-11-22-15.gh-issue-101060.0mYk9E.rst deleted file mode 100644 index bebbf8c898d5..000000000000 --- a/Misc/NEWS.d/next/Build/2023-01-15-11-22-15.gh-issue-101060.0mYk9E.rst +++ /dev/null @@ -1,3 +0,0 @@ -Conditionally add ``-fno-reorder-blocks-and-partition`` in configure. -Effectively fixes ``--enable-bolt`` when using Clang, as this appears to be -a GCC-only flag. diff --git a/Misc/NEWS.d/next/Build/2023-01-17-21-32-51.gh-issue-100340.i9zRGM.rst b/Misc/NEWS.d/next/Build/2023-01-17-21-32-51.gh-issue-100340.i9zRGM.rst deleted file mode 100644 index 3a37f798dc6c..000000000000 --- a/Misc/NEWS.d/next/Build/2023-01-17-21-32-51.gh-issue-100340.i9zRGM.rst +++ /dev/null @@ -1,2 +0,0 @@ -Allows -Wno-int-conversion for wasm-sdk 17 and onwards, thus enables -building WASI builds once against the latest sdk. diff --git a/Misc/NEWS.d/next/Build/2023-01-21-10-31-35.gh-issue-101152.xvM8pL.rst b/Misc/NEWS.d/next/Build/2023-01-21-10-31-35.gh-issue-101152.xvM8pL.rst deleted file mode 100644 index e35b6178aa4c..000000000000 --- a/Misc/NEWS.d/next/Build/2023-01-21-10-31-35.gh-issue-101152.xvM8pL.rst +++ /dev/null @@ -1,3 +0,0 @@ -In accordance with :PEP:`699`, the ``ma_version_tag`` field in :c:type:`PyDictObject` -is deprecated for extension modules. Accessing this field will generate a compiler -warning at compile time. This field will be removed in Python 3.14. diff --git a/Misc/NEWS.d/next/Build/2023-01-26-19-02-11.gh-issue-77532.cXD8bg.rst b/Misc/NEWS.d/next/Build/2023-01-26-19-02-11.gh-issue-77532.cXD8bg.rst deleted file mode 100644 index 5a746dca2e7d..000000000000 --- a/Misc/NEWS.d/next/Build/2023-01-26-19-02-11.gh-issue-77532.cXD8bg.rst +++ /dev/null @@ -1 +0,0 @@ -Minor fixes to allow building with ``PlatformToolset=ClangCL`` on Windows. diff --git a/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst b/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst deleted file mode 100644 index 2e7f9029e9ee..000000000000 --- a/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst +++ /dev/null @@ -1,2 +0,0 @@ -Allow overriding Windows dependencies versions and paths using MSBuild -properties. diff --git a/Misc/NEWS.d/next/Build/2023-02-04-06-59-07.gh-issue-101282.7sQz5l.rst b/Misc/NEWS.d/next/Build/2023-02-04-06-59-07.gh-issue-101282.7sQz5l.rst deleted file mode 100644 index 49d485646673..000000000000 --- a/Misc/NEWS.d/next/Build/2023-02-04-06-59-07.gh-issue-101282.7sQz5l.rst +++ /dev/null @@ -1,2 +0,0 @@ -Update BOLT configration not to use depreacted usage of ``--split -functions``. Patch by Dong-hee Na. 
diff --git a/Misc/NEWS.d/next/Core and Builtins/2018-02-05-21-54-46.bpo-32780.Dtiz8z.rst b/Misc/NEWS.d/next/Core and Builtins/2018-02-05-21-54-46.bpo-32780.Dtiz8z.rst deleted file mode 100644 index 8996d471159a..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2018-02-05-21-54-46.bpo-32780.Dtiz8z.rst +++ /dev/null @@ -1,3 +0,0 @@ -Inter-field padding is now inserted into the PEP3118 format strings obtained -from :class:`ctypes.Structure` objects, reflecting their true representation in -memory. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-03-14-33-23.gh-issue-100712.po6xyB.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-03-14-33-23.gh-issue-100712.po6xyB.rst deleted file mode 100644 index 3ebee0dd2aa4..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-03-14-33-23.gh-issue-100712.po6xyB.rst +++ /dev/null @@ -1 +0,0 @@ -Added option to build cpython with specialization disabled, by setting ``ENABLE_SPECIALIZATION=False`` in :mod:`opcode`, followed by ``make regen-all``. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-03-20-59-20.gh-issue-100726.W9huFl.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-03-20-59-20.gh-issue-100726.W9huFl.rst deleted file mode 100644 index 2c93098b347a..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-03-20-59-20.gh-issue-100726.W9huFl.rst +++ /dev/null @@ -1 +0,0 @@ -Optimize construction of ``range`` object for medium size integers. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-06-09-22-21.gh-issue-91351.iq2vZ_.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-06-09-22-21.gh-issue-91351.iq2vZ_.rst deleted file mode 100644 index 19de1f8d0fb3..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-06-09-22-21.gh-issue-91351.iq2vZ_.rst +++ /dev/null @@ -1,5 +0,0 @@ -Fix a case where re-entrant imports could corrupt the import deadlock -detection code and cause a :exc:`KeyError` to be raised out of -:mod:`importlib/_bootstrap`. In addition to the straightforward cases, this -could also happen when garbage collection leads to a warning being emitted -- -as happens when it collects an open socket or file) diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst deleted file mode 100644 index f2576becc2fc..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst +++ /dev/null @@ -1 +0,0 @@ -Fix race while iterating over thread states in clearing :class:`threading.local`. Patch by Kumar Aditya. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-10-16-59-33.gh-issue-100923.ypJAX-.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-10-16-59-33.gh-issue-100923.ypJAX-.rst deleted file mode 100644 index b6b3f1d0c58f..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-10-16-59-33.gh-issue-100923.ypJAX-.rst +++ /dev/null @@ -1,2 +0,0 @@ -Remove the ``mask`` cache entry for the :opcode:`COMPARE_OP` instruction and -embed the mask into the oparg. 
diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst deleted file mode 100644 index daccea255b16..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fixed segfault in property.getter/setter/deleter that occurred when a property -subclass overrode the ``__new__`` method to return a non-property instance. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-12-13-46-49.gh-issue-100982.mJ234s.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-12-13-46-49.gh-issue-100982.mJ234s.rst deleted file mode 100644 index 4f43e783cd6a..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-12-13-46-49.gh-issue-100982.mJ234s.rst +++ /dev/null @@ -1,4 +0,0 @@ -Adds a new :opcode:`COMPARE_AND_BRANCH` instruction. This is a bit more -efficient when performing a comparison immediately followed by a branch, and -restores the design intent of PEP 659 that specializations are local to a -single instruction. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-13-12-56-20.gh-issue-100762.YvHaQJ.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-13-12-56-20.gh-issue-100762.YvHaQJ.rst deleted file mode 100644 index 2f6b121439a9..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-13-12-56-20.gh-issue-100762.YvHaQJ.rst +++ /dev/null @@ -1,3 +0,0 @@ -Record the (virtual) exception block depth in the oparg of -:opcode:`YIELD_VALUE`. Use this to avoid the expensive ``throw()`` when -closing generators (and coroutines) that can be closed trivially. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-14-17-03-08.gh-issue-101037.9ATNuf.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-14-17-03-08.gh-issue-101037.9ATNuf.rst deleted file mode 100644 index a48756657a29..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-14-17-03-08.gh-issue-101037.9ATNuf.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix potential memory underallocation issue for instances of :class:`int` -subclasses with value zero. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst deleted file mode 100644 index f600473620f1..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix a possible memory leak in the parser when raising :exc:`MemoryError`. -Patch by Pablo Galindo diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-24-17-13-32.gh-issue-101291.Yr6u_c.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-24-17-13-32.gh-issue-101291.Yr6u_c.rst deleted file mode 100644 index b585ff5a817e..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-24-17-13-32.gh-issue-101291.Yr6u_c.rst +++ /dev/null @@ -1,2 +0,0 @@ -Refactor the ``PyLongObject`` struct into a normal Python object header and -a ``PyLongValue`` struct. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst deleted file mode 100644 index 51999bacb8de..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-13-11-52.gh-issue-101266.AxV3OF.rst +++ /dev/null @@ -1 +0,0 @@ -Fix :func:`sys.getsizeof` reporting for :class:`int` subclasses. 
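As a quick illustration of the :func:`sys.getsizeof` entry just above (an editorial sketch, not taken from the patch): ``MyInt`` is a hypothetical subclass and the exact byte counts are build- and platform-dependent; the point is only that instances of ``int`` subclasses should report a size that grows with the value, just as plain ``int`` does::

    import sys

    class MyInt(int):
        pass

    # Sizes are implementation details; compare the subclass against plain int.
    print(sys.getsizeof(1), sys.getsizeof(MyInt(1)))
    print(sys.getsizeof(2**100), sys.getsizeof(MyInt(2**100)))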
diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst deleted file mode 100644 index 65a207e3f7e4..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix :func:`~unicodedata.is_normalized` to properly handle the UCD 3.2.0 -cases. Patch by Dong-hee Na. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst deleted file mode 100644 index f3dd783c01e7..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix wrong lineno in exception message on :keyword:`continue` or -:keyword:`break` which are not in a loop. Patch by Dong-hee Na. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-30-11-56-09.gh-issue-59956.7xqnC_.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-30-11-56-09.gh-issue-59956.7xqnC_.rst deleted file mode 100644 index b3c1896b9493..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-30-11-56-09.gh-issue-59956.7xqnC_.rst +++ /dev/null @@ -1,3 +0,0 @@ -The GILState API is now partially compatible with subinterpreters. -Previously, ``PyThreadState_GET()`` and ``PyGILState_GetThisThreadState()`` -would get out of sync, causing inconsistent behavior and crashes. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst deleted file mode 100644 index 6b98aac2a465..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix the ``defs`` and ``kwdefs`` arguments to :c:func:`PyEval_EvalCodeEx` -and a reference leak in that function. diff --git a/Misc/NEWS.d/next/Documentation/2022-06-19-22-04-47.gh-issue-88324.GHhSQ1.rst b/Misc/NEWS.d/next/Documentation/2022-06-19-22-04-47.gh-issue-88324.GHhSQ1.rst deleted file mode 100644 index 6c8d192daa79..000000000000 --- a/Misc/NEWS.d/next/Documentation/2022-06-19-22-04-47.gh-issue-88324.GHhSQ1.rst +++ /dev/null @@ -1,3 +0,0 @@ -Reword :mod:`subprocess` to emphasize default behavior of *stdin*, *stdout*, -and *stderr* arguments. Remove inaccurate statement about child file handle -inheritance. diff --git a/Misc/NEWS.d/next/Library/2018-05-21-17-18-00.gh-issue-77772.Fhg84L.rst b/Misc/NEWS.d/next/Library/2018-05-21-17-18-00.gh-issue-77772.Fhg84L.rst deleted file mode 100644 index 3a7c6d45297b..000000000000 --- a/Misc/NEWS.d/next/Library/2018-05-21-17-18-00.gh-issue-77772.Fhg84L.rst +++ /dev/null @@ -1,3 +0,0 @@ -:class:`ctypes.CDLL`, :class:`ctypes.OleDLL`, :class:`ctypes.WinDLL`, -and :class:`ctypes.PyDLL` now accept :term:`path-like objects -` as their ``name`` argument. Patch by Robert Hoelzl. diff --git a/Misc/NEWS.d/next/Library/2020-04-18-17-45-03.bpo-29847.Uxtbq0.rst b/Misc/NEWS.d/next/Library/2020-04-18-17-45-03.bpo-29847.Uxtbq0.rst deleted file mode 100644 index 010d775a0d98..000000000000 --- a/Misc/NEWS.d/next/Library/2020-04-18-17-45-03.bpo-29847.Uxtbq0.rst +++ /dev/null @@ -1 +0,0 @@ -Fix a bug where :class:`pathlib.Path` accepted and ignored keyword arguments. Patch provided by Yurii Karabas. 
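To make the bpo-29847 entry just above concrete: after the fix, stray keyword arguments to :class:`pathlib.Path` are rejected instead of being silently dropped. A minimal sketch, assuming Python 3.12 (``unknown`` is just a made-up keyword)::

    import pathlib

    p = pathlib.Path("spam", "eggs")      # positional parts work as before
    try:
        pathlib.Path("spam", unknown=1)   # keyword arguments are now rejected
    except TypeError as exc:
        print("rejected:", exc)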
diff --git a/Misc/NEWS.d/next/Library/2020-11-20-21-06-08.bpo-40077.M-iZq3.rst b/Misc/NEWS.d/next/Library/2020-11-20-21-06-08.bpo-40077.M-iZq3.rst deleted file mode 100644 index 8a74477a4b35..000000000000 --- a/Misc/NEWS.d/next/Library/2020-11-20-21-06-08.bpo-40077.M-iZq3.rst +++ /dev/null @@ -1 +0,0 @@ -Convert :mod:`elementtree` types to heap types. Patch by Erlend E. Aasland. diff --git a/Misc/NEWS.d/next/Library/2022-02-05-12-01-58.bpo-38941.8IhvyG.rst b/Misc/NEWS.d/next/Library/2022-02-05-12-01-58.bpo-38941.8IhvyG.rst deleted file mode 100644 index 5f996042260d..000000000000 --- a/Misc/NEWS.d/next/Library/2022-02-05-12-01-58.bpo-38941.8IhvyG.rst +++ /dev/null @@ -1,4 +0,0 @@ -The :mod:`xml.etree.ElementTree` module now emits :exc:`DeprecationWarning` -when testing the truth value of an :class:`xml.etree.ElementTree.Element`. -Before, the Python implementation emitted :exc:`FutureWarning`, and the C -implementation emitted nothing. diff --git a/Misc/NEWS.d/next/Library/2022-07-22-13-38-37.gh-issue-94518._ZP0cz.rst b/Misc/NEWS.d/next/Library/2022-07-22-13-38-37.gh-issue-94518._ZP0cz.rst deleted file mode 100644 index a9d6d69f7eff..000000000000 --- a/Misc/NEWS.d/next/Library/2022-07-22-13-38-37.gh-issue-94518._ZP0cz.rst +++ /dev/null @@ -1,2 +0,0 @@ -``_posixsubprocess`` now initializes all UID and GID variables using a -reserved ``-1`` value instead of a separate flag. Patch by Oleg Iarygin. diff --git a/Misc/NEWS.d/next/Library/2022-09-26-21-18-47.gh-issue-60580.0hBgde.rst b/Misc/NEWS.d/next/Library/2022-09-26-21-18-47.gh-issue-60580.0hBgde.rst deleted file mode 100644 index 630e56cd2f7b..000000000000 --- a/Misc/NEWS.d/next/Library/2022-09-26-21-18-47.gh-issue-60580.0hBgde.rst +++ /dev/null @@ -1,3 +0,0 @@ -:data:`ctypes.wintypes.BYTE` definition changed from -:data:`~ctypes.c_byte` to :data:`~ctypes.c_ubyte` to match Windows -SDK. Patch by Anatoly Techtonik and Oleg Iarygin. diff --git a/Misc/NEWS.d/next/Library/2022-11-14-03-06-03.gh-issue-88597.EYJA-Q.rst b/Misc/NEWS.d/next/Library/2022-11-14-03-06-03.gh-issue-88597.EYJA-Q.rst deleted file mode 100644 index a98e1ab4d157..000000000000 --- a/Misc/NEWS.d/next/Library/2022-11-14-03-06-03.gh-issue-88597.EYJA-Q.rst +++ /dev/null @@ -1 +0,0 @@ -:mod:`uuid` now has a command line interface. Try ``python -m uuid -h``. diff --git a/Misc/NEWS.d/next/Library/2022-11-15-23-30-39.gh-issue-86682.gK9i1N.rst b/Misc/NEWS.d/next/Library/2022-11-15-23-30-39.gh-issue-86682.gK9i1N.rst deleted file mode 100644 index 64ef42a9a1c0..000000000000 --- a/Misc/NEWS.d/next/Library/2022-11-15-23-30-39.gh-issue-86682.gK9i1N.rst +++ /dev/null @@ -1,2 +0,0 @@ -Ensure runtime-created collections have the correct module name using -the newly added (internal) :func:`sys._getframemodulename`. diff --git a/Misc/NEWS.d/next/Library/2022-11-24-21-52-31.gh-issue-99266.88GcV9.rst b/Misc/NEWS.d/next/Library/2022-11-24-21-52-31.gh-issue-99266.88GcV9.rst deleted file mode 100644 index 97e9569e40a9..000000000000 --- a/Misc/NEWS.d/next/Library/2022-11-24-21-52-31.gh-issue-99266.88GcV9.rst +++ /dev/null @@ -1 +0,0 @@ -Preserve more detailed error messages in :mod:`ctypes`. diff --git a/Misc/NEWS.d/next/Library/2022-12-10-15-30-17.gh-issue-67790.P9YUZM.rst b/Misc/NEWS.d/next/Library/2022-12-10-15-30-17.gh-issue-67790.P9YUZM.rst deleted file mode 100644 index ba0db774f8b3..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-10-15-30-17.gh-issue-67790.P9YUZM.rst +++ /dev/null @@ -1,2 +0,0 @@ -Add float-style formatting support for :class:`fractions.Fraction` -instances. 
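The gh-issue 67790 entry just above refers to float-style presentation types for :class:`fractions.Fraction`. A small sketch of the feature, assuming Python 3.12 or later (earlier versions do not support these format specs)::

    from fractions import Fraction

    x = Fraction(355, 113)
    print(format(x, ".6f"))   # 3.141593
    print(f"{x:.3e}")         # float-style scientific notation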
diff --git a/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst b/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst deleted file mode 100644 index 09ec96124953..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix a reference undercounting issue in :class:`ctypes.Structure` with ``from_param()`` -results larger than a C pointer. diff --git a/Misc/NEWS.d/next/Library/2022-12-19-23-19-26.gh-issue-96290.qFjsi6.rst b/Misc/NEWS.d/next/Library/2022-12-19-23-19-26.gh-issue-96290.qFjsi6.rst deleted file mode 100644 index 33f98602bd1b..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-19-23-19-26.gh-issue-96290.qFjsi6.rst +++ /dev/null @@ -1,5 +0,0 @@ -Fix handling of partial and invalid UNC drives in ``ntpath.splitdrive()``, and in -``ntpath.normpath()`` on non-Windows systems. Paths such as '\\server' and '\\' are now considered -by ``splitdrive()`` to contain only a drive, and consequently are not modified by ``normpath()`` on -non-Windows systems. The behaviour of ``normpath()`` on Windows systems is unaffected, as native -OS APIs are used. Patch by Eryk Sun, with contributions by Barney Gale. diff --git a/Misc/NEWS.d/next/Library/2022-12-21-17-49-50.gh-issue-100160.N0NHRj.rst b/Misc/NEWS.d/next/Library/2022-12-21-17-49-50.gh-issue-100160.N0NHRj.rst deleted file mode 100644 index d5cc785722d7..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-21-17-49-50.gh-issue-100160.N0NHRj.rst +++ /dev/null @@ -1,3 +0,0 @@ -Emit a deprecation warning in -:meth:`asyncio.DefaultEventLoopPolicy.get_event_loop` if there is no current -event loop set and it decides to create one. diff --git a/Misc/NEWS.d/next/Library/2023-01-04-14-42-59.gh-issue-100750.iFJs5Y.rst b/Misc/NEWS.d/next/Library/2023-01-04-14-42-59.gh-issue-100750.iFJs5Y.rst deleted file mode 100644 index be351532822c..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-04-14-42-59.gh-issue-100750.iFJs5Y.rst +++ /dev/null @@ -1 +0,0 @@ -pass encoding kwarg to subprocess in platform diff --git a/Misc/NEWS.d/next/Library/2023-01-08-00-12-44.gh-issue-39615.gn4PhB.rst b/Misc/NEWS.d/next/Library/2023-01-08-00-12-44.gh-issue-39615.gn4PhB.rst deleted file mode 100644 index 1d04cc2cd54b..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-08-00-12-44.gh-issue-39615.gn4PhB.rst +++ /dev/null @@ -1,3 +0,0 @@ -:func:`warnings.warn` now has the ability to skip stack frames based on code -filename prefix rather than only a numeric ``stacklevel`` via the new -``skip_file_prefixes`` keyword argument. diff --git a/Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst b/Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst deleted file mode 100644 index 97b95d18d1e4..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst +++ /dev/null @@ -1 +0,0 @@ -Fix a Windows :mod:`asyncio` bug with named pipes where a client doing ``os.stat()`` on the pipe would cause an error in the server that disabled serving future requests. diff --git a/Misc/NEWS.d/next/Library/2023-01-12-21-22-20.gh-issue-101000.wz4Xgc.rst b/Misc/NEWS.d/next/Library/2023-01-12-21-22-20.gh-issue-101000.wz4Xgc.rst deleted file mode 100644 index 2082361c41d6..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-12-21-22-20.gh-issue-101000.wz4Xgc.rst +++ /dev/null @@ -1,3 +0,0 @@ -Add :func:`os.path.splitroot()`, which splits a path into a 3-item tuple -``(drive, root, tail)``. 
This new function is used by :mod:`pathlib` to -improve the performance of path construction by up to a third. diff --git a/Misc/NEWS.d/next/Library/2023-01-14-12-58-21.gh-issue-101015.stWFid.rst b/Misc/NEWS.d/next/Library/2023-01-14-12-58-21.gh-issue-101015.stWFid.rst deleted file mode 100644 index b9d73ff98552..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-14-12-58-21.gh-issue-101015.stWFid.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix :func:`typing.get_type_hints` on ``'*tuple[...]'`` and ``*tuple[...]``. -It must not drop the ``Unpack`` part. diff --git a/Misc/NEWS.d/next/Library/2023-01-15-09-11-30.gh-issue-94518.jvxtxm.rst b/Misc/NEWS.d/next/Library/2023-01-15-09-11-30.gh-issue-94518.jvxtxm.rst deleted file mode 100644 index 77563090464d..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-15-09-11-30.gh-issue-94518.jvxtxm.rst +++ /dev/null @@ -1,3 +0,0 @@ -Group-related variables of ``_posixsubprocess`` module are renamed to -stress that supplementary group affinity is added to a fork, not -replace the inherited ones. Patch by Oleg Iarygin. diff --git a/Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst b/Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst deleted file mode 100644 index 297652259949..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst +++ /dev/null @@ -1,4 +0,0 @@ -Make :func:`zipfile.Path.open` and :func:`zipfile.Path.read_text` also accept -``encoding`` as a positional argument. This was the behavior in Python 3.9 and -earlier. 3.10 introduced a regression where supplying it as a positional -argument would lead to a :exc:`TypeError`. diff --git a/Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst b/Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst deleted file mode 100644 index d14b9e25a691..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst +++ /dev/null @@ -1,2 +0,0 @@ -Remove unused references to :class:`~asyncio.TimerHandle` in -``asyncio.base_events.BaseEventLoop._add_callback``. diff --git a/Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst b/Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst deleted file mode 100644 index 4cb56ea0f0e5..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst +++ /dev/null @@ -1,3 +0,0 @@ -Avoid potential unexpected ``freeaddrinfo`` call (double free) in :mod:`socket` -when a libc ``getaddrinfo()`` implementation leaves garbage in an output -pointer when returning an error. Original patch by Sergey G. Brester. diff --git a/Misc/NEWS.d/next/Library/2023-01-24-12-53-59.gh-issue-92123.jf6TO5.rst b/Misc/NEWS.d/next/Library/2023-01-24-12-53-59.gh-issue-92123.jf6TO5.rst deleted file mode 100644 index 4b4443a55fdb..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-24-12-53-59.gh-issue-92123.jf6TO5.rst +++ /dev/null @@ -1,2 +0,0 @@ -Adapt the ``_elementtree`` extension module to multi-phase init (:pep:`489`). -Patches by Erlend E. Aasland. diff --git a/Misc/NEWS.d/next/Library/2023-01-25-18-07-20.gh-issue-101326.KL4SFv.rst b/Misc/NEWS.d/next/Library/2023-01-25-18-07-20.gh-issue-101326.KL4SFv.rst deleted file mode 100644 index 54b69b943091..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-25-18-07-20.gh-issue-101326.KL4SFv.rst +++ /dev/null @@ -1 +0,0 @@ -Fix regression when passing ``None`` as second or third argument to ``FutureIter.throw``.
diff --git a/Misc/NEWS.d/next/Library/2023-01-26-01-25-56.gh-issue-101317.vWaS1x.rst b/Misc/NEWS.d/next/Library/2023-01-26-01-25-56.gh-issue-101317.vWaS1x.rst deleted file mode 100644 index f1ce0e0a5276..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-26-01-25-56.gh-issue-101317.vWaS1x.rst +++ /dev/null @@ -1,2 +0,0 @@ -Add *ssl_shutdown_timeout* parameter for :meth:`asyncio.StreamWriter.start_tls`. - diff --git a/Misc/NEWS.d/next/Library/2023-01-26-06-44-35.gh-issue-101323.h8Hk11.rst b/Misc/NEWS.d/next/Library/2023-01-26-06-44-35.gh-issue-101323.h8Hk11.rst deleted file mode 100644 index f8419e130dbc..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-26-06-44-35.gh-issue-101323.h8Hk11.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix a bug where errors were not thrown by zlib._ZlibDecompressor if -encountered during decompressing. diff --git a/Misc/NEWS.d/next/Library/2023-02-04-21-01-49.gh-issue-101570.lbtUsD.rst b/Misc/NEWS.d/next/Library/2023-02-04-21-01-49.gh-issue-101570.lbtUsD.rst deleted file mode 100644 index 599edab6c071..000000000000 --- a/Misc/NEWS.d/next/Library/2023-02-04-21-01-49.gh-issue-101570.lbtUsD.rst +++ /dev/null @@ -1 +0,0 @@ -Upgrade pip wheel bundled with ensurepip (pip 23.0) diff --git a/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst b/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst deleted file mode 100644 index 0f149e80dc54..000000000000 --- a/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst +++ /dev/null @@ -1 +0,0 @@ -[Enum] - fix pseudo-flag creation diff --git a/Misc/NEWS.d/next/Security/2022-11-08-12-06-52.gh-issue-99108.4Wrsuh.rst b/Misc/NEWS.d/next/Security/2022-11-08-12-06-52.gh-issue-99108.4Wrsuh.rst deleted file mode 100644 index 64acc09c482e..000000000000 --- a/Misc/NEWS.d/next/Security/2022-11-08-12-06-52.gh-issue-99108.4Wrsuh.rst +++ /dev/null @@ -1,4 +0,0 @@ -Replace the builtin :mod:`hashlib` implementations of SHA2-224 and SHA2-256 -originally from LibTomCrypt with formally verified, side-channel resistant -code from the `HACL* `_ project. The -builtins remain a fallback only used when OpenSSL does not provide them. diff --git a/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst b/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst deleted file mode 100644 index 2a95fd9ae53c..000000000000 --- a/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst +++ /dev/null @@ -1 +0,0 @@ -``test_tarfile`` has been updated to pass when run as a high UID. diff --git a/Misc/NEWS.d/next/Windows/2023-01-11-14-42-11.gh-issue-100247.YfEmSz.rst b/Misc/NEWS.d/next/Windows/2023-01-11-14-42-11.gh-issue-100247.YfEmSz.rst deleted file mode 100644 index 7bfcbd7ddecf..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-11-14-42-11.gh-issue-100247.YfEmSz.rst +++ /dev/null @@ -1,2 +0,0 @@ -Restores support for the :file:`py.exe` launcher finding shebang commands in -its configuration file using the full command name. diff --git a/Misc/NEWS.d/next/Windows/2023-01-11-16-28-09.gh-issue-100320.2DU2it.rst b/Misc/NEWS.d/next/Windows/2023-01-11-16-28-09.gh-issue-100320.2DU2it.rst deleted file mode 100644 index c206fc8520a5..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-11-16-28-09.gh-issue-100320.2DU2it.rst +++ /dev/null @@ -1,3 +0,0 @@ -Ensures the ``PythonPath`` registry key from an install is used when -launching from a different copy of Python that relies on an existing install -to provide a copy of its modules and standard library.
diff --git a/Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst b/Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst deleted file mode 100644 index 4f7ab200b85c..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst +++ /dev/null @@ -1 +0,0 @@ -Fixed an issue where writing more than 32K of Unicode output to the console screen in one go can result in mojibake. diff --git a/Misc/NEWS.d/next/Windows/2023-01-18-18-25-18.gh-issue-101135.HF9VlG.rst b/Misc/NEWS.d/next/Windows/2023-01-18-18-25-18.gh-issue-101135.HF9VlG.rst deleted file mode 100644 index 2e6d6371340d..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-18-18-25-18.gh-issue-101135.HF9VlG.rst +++ /dev/null @@ -1,3 +0,0 @@ -Restore ability to launch older 32-bit versions from the :file:`py.exe` -launcher when both 32-bit and 64-bit installs of the same version are -available. diff --git a/Misc/NEWS.d/next/Windows/2023-01-25-00-23-31.gh-issue-99834.WN41lc.rst b/Misc/NEWS.d/next/Windows/2023-01-25-00-23-31.gh-issue-99834.WN41lc.rst deleted file mode 100644 index d3894fa4ea30..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-25-00-23-31.gh-issue-99834.WN41lc.rst +++ /dev/null @@ -1 +0,0 @@ -Updates bundled copy of Tcl/Tk to 8.6.13.0 diff --git a/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst b/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst deleted file mode 100644 index 4d4da05afa2d..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst +++ /dev/null @@ -1,3 +0,0 @@ -The ``py.exe`` launcher now correctly filters when only a single runtime is -installed. It also correctly handles prefix matches on tags so that ``-3.1`` -does not match ``3.11``, but would still match ``3.1-32``. diff --git a/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst b/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst deleted file mode 100644 index d4e2c6f23013..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst +++ /dev/null @@ -1,2 +0,0 @@ -Ensure the install path in the registry is only used when the standard -library hasn't been located in any other way. diff --git a/README.rst b/README.rst index 814efef83a35..b1756e20c141 100644 --- a/README.rst +++ b/README.rst @@ -1,4 +1,4 @@ -This is Python version 3.12.0 alpha 4 +This is Python version 3.12.0 alpha 5 ===================================== .. 
image:: https://github.com/python/cpython/workflows/Tests/badge.svg From webhook-mailer at python.org Tue Feb 7 17:32:28 2023 From: webhook-mailer at python.org (iritkatriel) Date: Tue, 07 Feb 2023 22:32:28 -0000 Subject: [Python-checkins] gh-101632: Add the new RETURN_CONST opcode (#101633) Message-ID: https://github.com/python/cpython/commit/753fc8a5d64369cd228c3e43fef1811ac3cfde83 commit: 753fc8a5d64369cd228c3e43fef1811ac3cfde83 branch: main author: penguin_wwy <940375606 at qq.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-07T22:32:21Z summary: gh-101632: Add the new RETURN_CONST opcode (#101633) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-07-14-56-43.gh-issue-101632.Fd1yxk.rst M Doc/library/dis.rst M Include/internal/pycore_opcode.h M Include/opcode.h M Lib/dis.py M Lib/importlib/_bootstrap_external.py M Lib/opcode.py M Lib/test/test_ast.py M Lib/test/test_code.py M Lib/test/test_compile.py M Lib/test/test_dis.py M Lib/test/test_peepholer.py M Objects/frameobject.c M Programs/test_frozenmain.h M Python/bytecodes.c M Python/compile.c M Python/generated_cases.c.h M Python/opcode_metadata.h M Python/opcode_targets.h diff --git a/Doc/library/dis.rst b/Doc/library/dis.rst index 1fe2d5d6227d..b1e61d7e77b2 100644 --- a/Doc/library/dis.rst +++ b/Doc/library/dis.rst @@ -694,6 +694,13 @@ iterations of the loop. Returns with ``STACK[-1]`` to the caller of the function. +.. opcode:: RETURN_CONST (consti) + + Returns with ``co_consts[consti]`` to the caller of the function. + + .. versionadded:: 3.12 + + .. opcode:: YIELD_VALUE Yields ``STACK.pop()`` from a :term:`generator`. diff --git a/Include/internal/pycore_opcode.h b/Include/internal/pycore_opcode.h index 05c0485b0641..47c847213351 100644 --- a/Include/internal/pycore_opcode.h +++ b/Include/internal/pycore_opcode.h @@ -192,6 +192,7 @@ const uint8_t _PyOpcode_Deopt[256] = { [RAISE_VARARGS] = RAISE_VARARGS, [RERAISE] = RERAISE, [RESUME] = RESUME, + [RETURN_CONST] = RETURN_CONST, [RETURN_GENERATOR] = RETURN_GENERATOR, [RETURN_VALUE] = RETURN_VALUE, [SEND] = SEND, @@ -349,7 +350,7 @@ static const char *const _PyOpcode_OpName[263] = { [CONTAINS_OP] = "CONTAINS_OP", [RERAISE] = "RERAISE", [COPY] = "COPY", - [STORE_FAST__LOAD_FAST] = "STORE_FAST__LOAD_FAST", + [RETURN_CONST] = "RETURN_CONST", [BINARY_OP] = "BINARY_OP", [SEND] = "SEND", [LOAD_FAST] = "LOAD_FAST", @@ -371,7 +372,7 @@ static const char *const _PyOpcode_OpName[263] = { [JUMP_BACKWARD] = "JUMP_BACKWARD", [COMPARE_AND_BRANCH] = "COMPARE_AND_BRANCH", [CALL_FUNCTION_EX] = "CALL_FUNCTION_EX", - [STORE_FAST__STORE_FAST] = "STORE_FAST__STORE_FAST", + [STORE_FAST__LOAD_FAST] = "STORE_FAST__LOAD_FAST", [EXTENDED_ARG] = "EXTENDED_ARG", [LIST_APPEND] = "LIST_APPEND", [SET_ADD] = "SET_ADD", @@ -381,15 +382,15 @@ static const char *const _PyOpcode_OpName[263] = { [YIELD_VALUE] = "YIELD_VALUE", [RESUME] = "RESUME", [MATCH_CLASS] = "MATCH_CLASS", + [STORE_FAST__STORE_FAST] = "STORE_FAST__STORE_FAST", [STORE_SUBSCR_DICT] = "STORE_SUBSCR_DICT", - [STORE_SUBSCR_LIST_INT] = "STORE_SUBSCR_LIST_INT", [FORMAT_VALUE] = "FORMAT_VALUE", [BUILD_CONST_KEY_MAP] = "BUILD_CONST_KEY_MAP", [BUILD_STRING] = "BUILD_STRING", + [STORE_SUBSCR_LIST_INT] = "STORE_SUBSCR_LIST_INT", [UNPACK_SEQUENCE_LIST] = "UNPACK_SEQUENCE_LIST", [UNPACK_SEQUENCE_TUPLE] = "UNPACK_SEQUENCE_TUPLE", [UNPACK_SEQUENCE_TWO_TUPLE] = "UNPACK_SEQUENCE_TWO_TUPLE", - [161] = "<161>", [LIST_EXTEND] = "LIST_EXTEND", [SET_UPDATE] = "SET_UPDATE", [DICT_MERGE] = "DICT_MERGE", @@ -495,7 +496,6 @@ static 
const char *const _PyOpcode_OpName[263] = { #endif #define EXTRA_CASES \ - case 161: \ case 166: \ case 167: \ case 168: \ diff --git a/Include/opcode.h b/Include/opcode.h index 827f9931beb3..77ad7c22440d 100644 --- a/Include/opcode.h +++ b/Include/opcode.h @@ -76,6 +76,7 @@ extern "C" { #define CONTAINS_OP 118 #define RERAISE 119 #define COPY 120 +#define RETURN_CONST 121 #define BINARY_OP 122 #define SEND 123 #define LOAD_FAST 124 @@ -179,13 +180,13 @@ extern "C" { #define STORE_ATTR_INSTANCE_VALUE 86 #define STORE_ATTR_SLOT 87 #define STORE_ATTR_WITH_HINT 113 -#define STORE_FAST__LOAD_FAST 121 -#define STORE_FAST__STORE_FAST 143 -#define STORE_SUBSCR_DICT 153 -#define STORE_SUBSCR_LIST_INT 154 -#define UNPACK_SEQUENCE_LIST 158 -#define UNPACK_SEQUENCE_TUPLE 159 -#define UNPACK_SEQUENCE_TWO_TUPLE 160 +#define STORE_FAST__LOAD_FAST 143 +#define STORE_FAST__STORE_FAST 153 +#define STORE_SUBSCR_DICT 154 +#define STORE_SUBSCR_LIST_INT 158 +#define UNPACK_SEQUENCE_LIST 159 +#define UNPACK_SEQUENCE_TUPLE 160 +#define UNPACK_SEQUENCE_TWO_TUPLE 161 #define DO_TRACING 255 #define HAS_ARG(op) ((((op) >= HAVE_ARGUMENT) && (!IS_PSEUDO_OPCODE(op)))\ @@ -196,6 +197,7 @@ extern "C" { #define HAS_CONST(op) (false\ || ((op) == LOAD_CONST) \ + || ((op) == RETURN_CONST) \ || ((op) == KW_NAMES) \ ) diff --git a/Lib/dis.py b/Lib/dis.py index 72ab9536a2bf..a6921008d9d0 100644 --- a/Lib/dis.py +++ b/Lib/dis.py @@ -34,6 +34,7 @@ MAKE_FUNCTION_FLAGS = ('defaults', 'kwdefaults', 'annotations', 'closure') LOAD_CONST = opmap['LOAD_CONST'] +RETURN_CONST = opmap['RETURN_CONST'] LOAD_GLOBAL = opmap['LOAD_GLOBAL'] BINARY_OP = opmap['BINARY_OP'] JUMP_BACKWARD = opmap['JUMP_BACKWARD'] @@ -363,7 +364,7 @@ def _get_const_value(op, arg, co_consts): assert op in hasconst argval = UNKNOWN - if op == LOAD_CONST: + if op == LOAD_CONST or op == RETURN_CONST: if co_consts is not None: argval = co_consts[arg] return argval diff --git a/Lib/importlib/_bootstrap_external.py b/Lib/importlib/_bootstrap_external.py index e760fbb15759..933c8c7d7e05 100644 --- a/Lib/importlib/_bootstrap_external.py +++ b/Lib/importlib/_bootstrap_external.py @@ -431,6 +431,7 @@ def _write_atomic(path, data, mode=0o666): # Python 3.12a5 3515 (Embed jump mask in COMPARE_OP oparg) # Python 3.12a5 3516 (Add COMPARE_AND_BRANCH instruction) # Python 3.12a5 3517 (Change YIELD_VALUE oparg to exception block depth) +# Python 3.12a5 3518 (Add RETURN_CONST instruction) # Python 3.13 will start with 3550 @@ -443,7 +444,7 @@ def _write_atomic(path, data, mode=0o666): # Whenever MAGIC_NUMBER is changed, the ranges in the magic_values array # in PC/launcher.c must also be updated. 
-MAGIC_NUMBER = (3517).to_bytes(2, 'little') + b'\r\n' +MAGIC_NUMBER = (3518).to_bytes(2, 'little') + b'\r\n' _RAW_MAGIC_NUMBER = int.from_bytes(MAGIC_NUMBER, 'little') # For import.c diff --git a/Lib/opcode.py b/Lib/opcode.py index c317e23beae6..5f163d2ccb80 100644 --- a/Lib/opcode.py +++ b/Lib/opcode.py @@ -164,6 +164,8 @@ def pseudo_op(name, op, real_ops): def_op('CONTAINS_OP', 118) def_op('RERAISE', 119) def_op('COPY', 120) +def_op('RETURN_CONST', 121) +hasconst.append(121) def_op('BINARY_OP', 122) jrel_op('SEND', 123) # Number of bytes to skip def_op('LOAD_FAST', 124) # Local variable number, no null check diff --git a/Lib/test/test_ast.py b/Lib/test/test_ast.py index 98d9b603bbc1..7c9a57c685df 100644 --- a/Lib/test/test_ast.py +++ b/Lib/test/test_ast.py @@ -1900,7 +1900,7 @@ def get_load_const(self, tree): co = compile(tree, '', 'exec') consts = [] for instr in dis.get_instructions(co): - if instr.opname == 'LOAD_CONST': + if instr.opname == 'LOAD_CONST' or instr.opname == 'RETURN_CONST': consts.append(instr.argval) return consts diff --git a/Lib/test/test_code.py b/Lib/test/test_code.py index 67ed1694205c..9c2ac83e1b69 100644 --- a/Lib/test/test_code.py +++ b/Lib/test/test_code.py @@ -723,6 +723,7 @@ def f(): pass PY_CODE_LOCATION_INFO_NO_COLUMNS = 13 f.__code__ = f.__code__.replace( + co_stacksize=1, co_firstlineno=42, co_code=bytes( [ diff --git a/Lib/test/test_compile.py b/Lib/test/test_compile.py index 05a5ed1fa9a6..90b067bcf309 100644 --- a/Lib/test/test_compile.py +++ b/Lib/test/test_compile.py @@ -741,7 +741,7 @@ def unused_code_at_end(): # RETURN_VALUE opcode. This does not always crash an interpreter. # When you build with the clang memory sanitizer it reliably aborts. self.assertEqual( - 'RETURN_VALUE', + 'RETURN_CONST', list(dis.get_instructions(unused_code_at_end))[-1].opname) def test_dont_merge_constants(self): @@ -822,10 +822,9 @@ def unused_block_while_else(): for func in funcs: opcodes = list(dis.get_instructions(func)) - self.assertLessEqual(len(opcodes), 4) - self.assertEqual('LOAD_CONST', opcodes[-2].opname) - self.assertEqual(None, opcodes[-2].argval) - self.assertEqual('RETURN_VALUE', opcodes[-1].opname) + self.assertLessEqual(len(opcodes), 3) + self.assertEqual('RETURN_CONST', opcodes[-1].opname) + self.assertEqual(None, opcodes[-1].argval) def test_false_while_loop(self): def break_in_while(): @@ -841,10 +840,9 @@ def continue_in_while(): # Check that we did not raise but we also don't generate bytecode for func in funcs: opcodes = list(dis.get_instructions(func)) - self.assertEqual(3, len(opcodes)) - self.assertEqual('LOAD_CONST', opcodes[1].opname) + self.assertEqual(2, len(opcodes)) + self.assertEqual('RETURN_CONST', opcodes[1].opname) self.assertEqual(None, opcodes[1].argval) - self.assertEqual('RETURN_VALUE', opcodes[2].opname) def test_consts_in_conditionals(self): def and_true(x): @@ -1311,7 +1309,7 @@ def test_multiline_generator_expression(self): line=1, end_line=2, column=1, end_column=8, occurrence=1) self.assertOpcodeSourcePositionIs(compiled_code, 'JUMP_BACKWARD', line=1, end_line=2, column=1, end_column=8, occurrence=1) - self.assertOpcodeSourcePositionIs(compiled_code, 'RETURN_VALUE', + self.assertOpcodeSourcePositionIs(compiled_code, 'RETURN_CONST', line=1, end_line=6, column=0, end_column=32, occurrence=1) def test_multiline_async_generator_expression(self): @@ -1328,7 +1326,7 @@ def test_multiline_async_generator_expression(self): self.assertIsInstance(compiled_code, types.CodeType) self.assertOpcodeSourcePositionIs(compiled_code, 
'YIELD_VALUE', line=1, end_line=2, column=1, end_column=8, occurrence=2) - self.assertOpcodeSourcePositionIs(compiled_code, 'RETURN_VALUE', + self.assertOpcodeSourcePositionIs(compiled_code, 'RETURN_CONST', line=1, end_line=6, column=0, end_column=32, occurrence=1) def test_multiline_list_comprehension(self): diff --git a/Lib/test/test_dis.py b/Lib/test/test_dis.py index bdf48c153092..1050b15e16ea 100644 --- a/Lib/test/test_dis.py +++ b/Lib/test/test_dis.py @@ -49,8 +49,7 @@ def cm(cls, x): COMPARE_OP 32 (==) LOAD_FAST 0 (self) STORE_ATTR 0 (x) - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) """ % (_C.__init__.__code__.co_firstlineno, _C.__init__.__code__.co_firstlineno + 1,) dis_c_instance_method_bytes = """\ @@ -60,8 +59,7 @@ def cm(cls, x): COMPARE_OP 32 (==) LOAD_FAST 0 STORE_ATTR 0 - LOAD_CONST 0 - RETURN_VALUE + RETURN_CONST 0 """ dis_c_class_method = """\ @@ -72,8 +70,7 @@ def cm(cls, x): COMPARE_OP 32 (==) LOAD_FAST 0 (cls) STORE_ATTR 0 (x) - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) """ % (_C.cm.__code__.co_firstlineno, _C.cm.__code__.co_firstlineno + 2,) dis_c_static_method = """\ @@ -83,8 +80,7 @@ def cm(cls, x): LOAD_CONST 1 (1) COMPARE_OP 32 (==) STORE_FAST 0 (x) - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) """ % (_C.sm.__code__.co_firstlineno, _C.sm.__code__.co_firstlineno + 2,) # Class disassembling info has an extra newline at end. @@ -111,8 +107,7 @@ def _f(a): CALL 1 POP_TOP -%3d LOAD_CONST 1 (1) - RETURN_VALUE +%3d RETURN_CONST 1 (1) """ % (_f.__code__.co_firstlineno, _f.__code__.co_firstlineno + 1, _f.__code__.co_firstlineno + 2) @@ -124,8 +119,7 @@ def _f(a): LOAD_FAST 0 CALL 1 POP_TOP - LOAD_CONST 1 - RETURN_VALUE + RETURN_CONST 1 """ @@ -150,8 +144,7 @@ def bug708901(): %3d JUMP_BACKWARD 4 (to 30) %3d >> END_FOR - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) """ % (bug708901.__code__.co_firstlineno, bug708901.__code__.co_firstlineno + 1, bug708901.__code__.co_firstlineno + 2, @@ -198,8 +191,7 @@ def bug42562(): dis_bug42562 = """\ RESUME 0 - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) """ # Extended arg followed by NOP @@ -240,8 +232,7 @@ def bug42562(): %3d LOAD_GLOBAL 0 (spam) POP_TOP - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) """ _BIG_LINENO_FORMAT2 = """\ @@ -249,20 +240,17 @@ def bug42562(): %4d LOAD_GLOBAL 0 (spam) POP_TOP - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) """ dis_module_expected_results = """\ Disassembly of f: 4 RESUME 0 - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) Disassembly of g: 5 RESUME 0 - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) """ @@ -286,8 +274,7 @@ def bug42562(): LOAD_CONST 0 (1) BINARY_OP 0 (+) STORE_NAME 0 (x) - LOAD_CONST 1 (None) - RETURN_VALUE + RETURN_CONST 1 (None) """ annot_stmt_str = """\ @@ -326,8 +313,7 @@ def bug42562(): STORE_SUBSCR LOAD_NAME 1 (int) POP_TOP - LOAD_CONST 4 (None) - RETURN_VALUE + RETURN_CONST 4 (None) """ compound_stmt_str = """\ @@ -447,12 +433,11 @@ def _with(c): %3d LOAD_CONST 2 (2) STORE_FAST 2 (y) - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) %3d >> PUSH_EXC_INFO WITH_EXCEPT_START - POP_JUMP_IF_TRUE 1 (to 46) + POP_JUMP_IF_TRUE 1 (to 44) RERAISE 2 >> POP_TOP POP_EXCEPT @@ -461,8 +446,7 @@ def _with(c): %3d LOAD_CONST 2 (2) STORE_FAST 2 (y) - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) >> COPY 3 POP_EXCEPT RERAISE 1 @@ -514,23 +498,22 @@ async def _asyncwith(c): %3d LOAD_CONST 2 (2) STORE_FAST 2 (y) - LOAD_CONST 0 (None) - 
RETURN_VALUE + RETURN_CONST 0 (None) %3d >> CLEANUP_THROW - JUMP_BACKWARD 24 (to 22) + JUMP_BACKWARD 23 (to 22) >> CLEANUP_THROW - JUMP_BACKWARD 9 (to 56) + JUMP_BACKWARD 8 (to 56) >> PUSH_EXC_INFO WITH_EXCEPT_START GET_AWAITABLE 2 LOAD_CONST 0 (None) - >> SEND 4 (to 92) + >> SEND 4 (to 90) YIELD_VALUE 3 RESUME 3 - JUMP_BACKWARD_NO_INTERRUPT 4 (to 82) + JUMP_BACKWARD_NO_INTERRUPT 4 (to 80) >> CLEANUP_THROW - >> POP_JUMP_IF_TRUE 1 (to 96) + >> POP_JUMP_IF_TRUE 1 (to 94) RERAISE 2 >> POP_TOP POP_EXCEPT @@ -539,8 +522,7 @@ async def _asyncwith(c): %3d LOAD_CONST 2 (2) STORE_FAST 2 (y) - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) >> COPY 3 POP_EXCEPT RERAISE 1 @@ -610,8 +592,7 @@ def _tryfinallyconst(b): LOAD_FAST 0 (b) CALL 0 POP_TOP - LOAD_CONST 1 (1) - RETURN_VALUE + RETURN_CONST 1 (1) PUSH_EXC_INFO PUSH_NULL LOAD_FAST 0 (b) @@ -754,8 +735,7 @@ def loop_test(): JUMP_BACKWARD 17 (to 16) %3d >> END_FOR - LOAD_CONST 0 (None) - RETURN_VALUE + RETURN_CONST 0 (None) """ % (loop_test.__code__.co_firstlineno, loop_test.__code__.co_firstlineno + 1, loop_test.__code__.co_firstlineno + 2, @@ -772,8 +752,7 @@ def extended_arg_quick(): 6 UNPACK_EX 256 8 STORE_FAST 0 (_) 10 STORE_FAST 0 (_) - 12 LOAD_CONST 0 (None) - 14 RETURN_VALUE + 12 RETURN_CONST 0 (None) """% (extended_arg_quick.__code__.co_firstlineno, extended_arg_quick.__code__.co_firstlineno + 1,) @@ -1549,8 +1528,7 @@ def _prepare_test_cases(): Instruction(opname='LOAD_FAST', opcode=124, arg=1, argval='f', argrepr='f', offset=26, starts_line=None, is_jump_target=False, positions=None), Instruction(opname='CALL', opcode=171, arg=6, argval=6, argrepr='', offset=28, starts_line=None, is_jump_target=False, positions=None), Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=38, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='LOAD_CONST', opcode=100, arg=0, argval=None, argrepr='None', offset=40, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='RETURN_VALUE', opcode=83, arg=None, argval=None, argrepr='', offset=42, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='RETURN_CONST', opcode=121, arg=0, argval=None, argrepr='None', offset=40, starts_line=None, is_jump_target=False, positions=None) ] expected_opinfo_jumpy = [ @@ -1630,52 +1608,50 @@ def _prepare_test_cases(): Instruction(opname='LOAD_CONST', opcode=100, arg=10, argval="OK, now we're done", argrepr='"OK, now we\'re done"', offset=286, starts_line=None, is_jump_target=False, positions=None), Instruction(opname='CALL', opcode=171, arg=1, argval=1, argrepr='', offset=288, starts_line=None, is_jump_target=False, positions=None), Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=298, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='LOAD_CONST', opcode=100, arg=0, argval=None, argrepr='None', offset=300, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='RETURN_VALUE', opcode=83, arg=None, argval=None, argrepr='', offset=302, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='PUSH_EXC_INFO', opcode=35, arg=None, argval=None, argrepr='', offset=304, starts_line=25, is_jump_target=False, positions=None), - Instruction(opname='WITH_EXCEPT_START', opcode=49, arg=None, argval=None, argrepr='', offset=306, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='POP_JUMP_IF_TRUE', opcode=115, arg=1, argval=312, argrepr='to 312', 
offset=308, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='RERAISE', opcode=119, arg=2, argval=2, argrepr='', offset=310, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=312, starts_line=None, is_jump_target=True, positions=None), - Instruction(opname='POP_EXCEPT', opcode=89, arg=None, argval=None, argrepr='', offset=314, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='RETURN_CONST', opcode=121, arg=0, argval=None, argrepr='None', offset=300, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='PUSH_EXC_INFO', opcode=35, arg=None, argval=None, argrepr='', offset=302, starts_line=25, is_jump_target=False, positions=None), + Instruction(opname='WITH_EXCEPT_START', opcode=49, arg=None, argval=None, argrepr='', offset=304, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='POP_JUMP_IF_TRUE', opcode=115, arg=1, argval=310, argrepr='to 310', offset=306, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='RERAISE', opcode=119, arg=2, argval=2, argrepr='', offset=308, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=310, starts_line=None, is_jump_target=True, positions=None), + Instruction(opname='POP_EXCEPT', opcode=89, arg=None, argval=None, argrepr='', offset=312, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=314, starts_line=None, is_jump_target=False, positions=None), Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=316, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=318, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='JUMP_BACKWARD', opcode=140, arg=24, argval=274, argrepr='to 274', offset=320, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='COPY', opcode=120, arg=3, argval=3, argrepr='', offset=322, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='POP_EXCEPT', opcode=89, arg=None, argval=None, argrepr='', offset=324, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='RERAISE', opcode=119, arg=1, argval=1, argrepr='', offset=326, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='PUSH_EXC_INFO', opcode=35, arg=None, argval=None, argrepr='', offset=328, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='LOAD_GLOBAL', opcode=116, arg=4, argval='ZeroDivisionError', argrepr='ZeroDivisionError', offset=330, starts_line=22, is_jump_target=False, positions=None), - Instruction(opname='CHECK_EXC_MATCH', opcode=36, arg=None, argval=None, argrepr='', offset=342, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='POP_JUMP_IF_FALSE', opcode=114, arg=16, argval=378, argrepr='to 378', offset=344, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=346, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='LOAD_GLOBAL', opcode=116, arg=3, argval='print', argrepr='NULL + print', offset=348, starts_line=23, is_jump_target=False, positions=None), - 
Instruction(opname='LOAD_CONST', opcode=100, arg=9, argval='Here we go, here we go, here we go...', argrepr="'Here we go, here we go, here we go...'", offset=360, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='CALL', opcode=171, arg=1, argval=1, argrepr='', offset=362, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=372, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='POP_EXCEPT', opcode=89, arg=None, argval=None, argrepr='', offset=374, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='JUMP_BACKWARD', opcode=140, arg=52, argval=274, argrepr='to 274', offset=376, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='RERAISE', opcode=119, arg=0, argval=0, argrepr='', offset=378, starts_line=22, is_jump_target=True, positions=None), - Instruction(opname='COPY', opcode=120, arg=3, argval=3, argrepr='', offset=380, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='POP_EXCEPT', opcode=89, arg=None, argval=None, argrepr='', offset=382, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='RERAISE', opcode=119, arg=1, argval=1, argrepr='', offset=384, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='PUSH_EXC_INFO', opcode=35, arg=None, argval=None, argrepr='', offset=386, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='LOAD_GLOBAL', opcode=116, arg=3, argval='print', argrepr='NULL + print', offset=388, starts_line=28, is_jump_target=False, positions=None), - Instruction(opname='LOAD_CONST', opcode=100, arg=10, argval="OK, now we're done", argrepr='"OK, now we\'re done"', offset=400, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='CALL', opcode=171, arg=1, argval=1, argrepr='', offset=402, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=412, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='RERAISE', opcode=119, arg=0, argval=0, argrepr='', offset=414, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='COPY', opcode=120, arg=3, argval=3, argrepr='', offset=416, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='POP_EXCEPT', opcode=89, arg=None, argval=None, argrepr='', offset=418, starts_line=None, is_jump_target=False, positions=None), - Instruction(opname='RERAISE', opcode=119, arg=1, argval=1, argrepr='', offset=420, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='JUMP_BACKWARD', opcode=140, arg=23, argval=274, argrepr='to 274', offset=318, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='COPY', opcode=120, arg=3, argval=3, argrepr='', offset=320, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='POP_EXCEPT', opcode=89, arg=None, argval=None, argrepr='', offset=322, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='RERAISE', opcode=119, arg=1, argval=1, argrepr='', offset=324, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='PUSH_EXC_INFO', opcode=35, arg=None, argval=None, argrepr='', offset=326, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='LOAD_GLOBAL', opcode=116, arg=4, 
argval='ZeroDivisionError', argrepr='ZeroDivisionError', offset=328, starts_line=22, is_jump_target=False, positions=None), + Instruction(opname='CHECK_EXC_MATCH', opcode=36, arg=None, argval=None, argrepr='', offset=340, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='POP_JUMP_IF_FALSE', opcode=114, arg=16, argval=376, argrepr='to 376', offset=342, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=344, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='LOAD_GLOBAL', opcode=116, arg=3, argval='print', argrepr='NULL + print', offset=346, starts_line=23, is_jump_target=False, positions=None), + Instruction(opname='LOAD_CONST', opcode=100, arg=9, argval='Here we go, here we go, here we go...', argrepr="'Here we go, here we go, here we go...'", offset=358, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='CALL', opcode=171, arg=1, argval=1, argrepr='', offset=360, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=370, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='POP_EXCEPT', opcode=89, arg=None, argval=None, argrepr='', offset=372, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='JUMP_BACKWARD', opcode=140, arg=51, argval=274, argrepr='to 274', offset=374, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='RERAISE', opcode=119, arg=0, argval=0, argrepr='', offset=376, starts_line=22, is_jump_target=True, positions=None), + Instruction(opname='COPY', opcode=120, arg=3, argval=3, argrepr='', offset=378, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='POP_EXCEPT', opcode=89, arg=None, argval=None, argrepr='', offset=380, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='RERAISE', opcode=119, arg=1, argval=1, argrepr='', offset=382, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='PUSH_EXC_INFO', opcode=35, arg=None, argval=None, argrepr='', offset=384, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='LOAD_GLOBAL', opcode=116, arg=3, argval='print', argrepr='NULL + print', offset=386, starts_line=28, is_jump_target=False, positions=None), + Instruction(opname='LOAD_CONST', opcode=100, arg=10, argval="OK, now we're done", argrepr='"OK, now we\'re done"', offset=398, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='CALL', opcode=171, arg=1, argval=1, argrepr='', offset=400, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='POP_TOP', opcode=1, arg=None, argval=None, argrepr='', offset=410, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='RERAISE', opcode=119, arg=0, argval=0, argrepr='', offset=412, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='COPY', opcode=120, arg=3, argval=3, argrepr='', offset=414, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='POP_EXCEPT', opcode=89, arg=None, argval=None, argrepr='', offset=416, starts_line=None, is_jump_target=False, positions=None), + Instruction(opname='RERAISE', opcode=119, arg=1, argval=1, argrepr='', offset=418, starts_line=None, is_jump_target=False, positions=None) ] # One last piece of inspect fodder to check the default line 
number handling def simple(): pass expected_opinfo_simple = [ Instruction(opname='RESUME', opcode=151, arg=0, argval=0, argrepr='', offset=0, starts_line=simple.__code__.co_firstlineno, is_jump_target=False, positions=None), - Instruction(opname='LOAD_CONST', opcode=100, arg=0, argval=None, argrepr='None', offset=2, starts_line=None, is_jump_target=False), - Instruction(opname='RETURN_VALUE', opcode=83, arg=None, argval=None, argrepr='', offset=4, starts_line=None, is_jump_target=False) + Instruction(opname='RETURN_CONST', opcode=121, arg=0, argval=None, argrepr='None', offset=2, starts_line=None, is_jump_target=False), ] @@ -1736,7 +1712,6 @@ def test_co_positions(self): (2, 2, 8, 9), (1, 3, 0, 1), (1, 3, 0, 1), - (1, 3, 0, 1), (1, 3, 0, 1) ] self.assertEqual(positions, expected) diff --git a/Lib/test/test_peepholer.py b/Lib/test/test_peepholer.py index 239c9d03fd9d..707ff821b31a 100644 --- a/Lib/test/test_peepholer.py +++ b/Lib/test/test_peepholer.py @@ -117,7 +117,7 @@ def f(): return None self.assertNotInBytecode(f, 'LOAD_GLOBAL') - self.assertInBytecode(f, 'LOAD_CONST', None) + self.assertInBytecode(f, 'RETURN_CONST', None) self.check_lnotab(f) def test_while_one(self): @@ -134,7 +134,7 @@ def f(): def test_pack_unpack(self): for line, elem in ( - ('a, = a,', 'LOAD_CONST',), + ('a, = a,', 'RETURN_CONST',), ('a, b = a, b', 'SWAP',), ('a, b, c = a, b, c', 'SWAP',), ): @@ -165,7 +165,7 @@ def test_folding_of_tuples_of_constants(self): # One LOAD_CONST for the tuple, one for the None return value load_consts = [instr for instr in dis.get_instructions(code) if instr.opname == 'LOAD_CONST'] - self.assertEqual(len(load_consts), 2) + self.assertEqual(len(load_consts), 1) self.check_lnotab(code) # Bug 1053819: Tuple of constants misidentified when presented with: diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-07-14-56-43.gh-issue-101632.Fd1yxk.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-07-14-56-43.gh-issue-101632.Fd1yxk.rst new file mode 100644 index 000000000000..136909ca6999 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-07-14-56-43.gh-issue-101632.Fd1yxk.rst @@ -0,0 +1 @@ +Adds a new :opcode:`RETURN_CONST` instruction. 
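(Illustrative aside, not part of the patch: the effect of the new opcode is easy to see from Python. On an interpreter that includes this change, a function returning a constant compiles to a single RETURN_CONST instead of LOAD_CONST followed by RETURN_VALUE; exact offsets and opcode numbers vary by build, so treat the disassembly below as a sketch.)

    import dis

    def simple():
        pass

    dis.dis(simple)
    # Roughly, on a build with this change:
    #   RESUME        0
    #   RETURN_CONST  0 (None)
    # Older builds instead end with:
    #   LOAD_CONST    0 (None)
    #   RETURN_VALUE
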
diff --git a/Objects/frameobject.c b/Objects/frameobject.c index 6bc04bc8e848..0e52a3e2399c 100644 --- a/Objects/frameobject.c +++ b/Objects/frameobject.c @@ -397,6 +397,8 @@ mark_stacks(PyCodeObject *code_obj, int len) assert(pop_value(next_stack) == EMPTY_STACK); assert(top_of_stack(next_stack) == Object); break; + case RETURN_CONST: + break; case RAISE_VARARGS: break; case RERAISE: diff --git a/Programs/test_frozenmain.h b/Programs/test_frozenmain.h index 95f78b19e65e..8e5055bd7bce 100644 --- a/Programs/test_frozenmain.h +++ b/Programs/test_frozenmain.h @@ -1,7 +1,7 @@ // Auto-generated by Programs/freeze_test_frozenmain.py unsigned char M_test_frozenmain[] = { 227,0,0,0,0,0,0,0,0,0,0,0,0,8,0,0, - 0,0,0,0,0,243,184,0,0,0,151,0,100,0,100,1, + 0,0,0,0,0,243,182,0,0,0,151,0,100,0,100,1, 108,0,90,0,100,0,100,1,108,1,90,1,2,0,101,2, 100,2,171,1,0,0,0,0,0,0,0,0,1,0,2,0, 101,2,100,3,101,0,106,6,0,0,0,0,0,0,0,0, @@ -12,28 +12,28 @@ unsigned char M_test_frozenmain[] = { 0,0,0,0,90,5,100,5,68,0,93,23,0,0,90,6, 2,0,101,2,100,6,101,6,155,0,100,7,101,5,101,6, 25,0,0,0,0,0,0,0,0,0,155,0,157,4,171,1, - 0,0,0,0,0,0,0,0,1,0,140,25,4,0,100,1, - 83,0,41,8,233,0,0,0,0,78,122,18,70,114,111,122, - 101,110,32,72,101,108,108,111,32,87,111,114,108,100,122,8, - 115,121,115,46,97,114,103,118,218,6,99,111,110,102,105,103, - 41,5,218,12,112,114,111,103,114,97,109,95,110,97,109,101, - 218,10,101,120,101,99,117,116,97,98,108,101,218,15,117,115, - 101,95,101,110,118,105,114,111,110,109,101,110,116,218,17,99, - 111,110,102,105,103,117,114,101,95,99,95,115,116,100,105,111, - 218,14,98,117,102,102,101,114,101,100,95,115,116,100,105,111, - 122,7,99,111,110,102,105,103,32,122,2,58,32,41,7,218, - 3,115,121,115,218,17,95,116,101,115,116,105,110,116,101,114, - 110,97,108,99,97,112,105,218,5,112,114,105,110,116,218,4, - 97,114,103,118,218,11,103,101,116,95,99,111,110,102,105,103, - 115,114,3,0,0,0,218,3,107,101,121,169,0,243,0,0, - 0,0,250,18,116,101,115,116,95,102,114,111,122,101,110,109, - 97,105,110,46,112,121,250,8,60,109,111,100,117,108,101,62, - 114,18,0,0,0,1,0,0,0,115,100,0,0,0,240,3, - 1,1,1,243,8,0,1,11,219,0,24,225,0,5,208,6, - 26,213,0,27,217,0,5,128,106,144,35,151,40,145,40,213, - 0,27,216,9,38,208,9,26,215,9,38,209,9,38,212,9, - 40,168,24,212,9,50,128,6,240,2,6,12,2,242,0,7, - 1,42,128,67,241,14,0,5,10,208,10,40,144,67,209,10, - 40,152,54,160,35,156,59,209,10,40,214,4,41,242,15,7, - 1,42,114,16,0,0,0, + 0,0,0,0,0,0,0,0,1,0,140,25,4,0,121,1, + 41,8,233,0,0,0,0,78,122,18,70,114,111,122,101,110, + 32,72,101,108,108,111,32,87,111,114,108,100,122,8,115,121, + 115,46,97,114,103,118,218,6,99,111,110,102,105,103,41,5, + 218,12,112,114,111,103,114,97,109,95,110,97,109,101,218,10, + 101,120,101,99,117,116,97,98,108,101,218,15,117,115,101,95, + 101,110,118,105,114,111,110,109,101,110,116,218,17,99,111,110, + 102,105,103,117,114,101,95,99,95,115,116,100,105,111,218,14, + 98,117,102,102,101,114,101,100,95,115,116,100,105,111,122,7, + 99,111,110,102,105,103,32,122,2,58,32,41,7,218,3,115, + 121,115,218,17,95,116,101,115,116,105,110,116,101,114,110,97, + 108,99,97,112,105,218,5,112,114,105,110,116,218,4,97,114, + 103,118,218,11,103,101,116,95,99,111,110,102,105,103,115,114, + 3,0,0,0,218,3,107,101,121,169,0,243,0,0,0,0, + 250,18,116,101,115,116,95,102,114,111,122,101,110,109,97,105, + 110,46,112,121,250,8,60,109,111,100,117,108,101,62,114,18, + 0,0,0,1,0,0,0,115,100,0,0,0,240,3,1,1, + 1,243,8,0,1,11,219,0,24,225,0,5,208,6,26,213, + 0,27,217,0,5,128,106,144,35,151,40,145,40,213,0,27, + 
216,9,38,208,9,26,215,9,38,209,9,38,212,9,40,168, + 24,212,9,50,128,6,240,2,6,12,2,242,0,7,1,42, + 128,67,241,14,0,5,10,208,10,40,144,67,209,10,40,152, + 54,160,35,156,59,209,10,40,214,4,41,241,15,7,1,42, + 114,16,0,0,0, }; diff --git a/Python/bytecodes.c b/Python/bytecodes.c index b43625fd283c..0d7d922816ce 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -555,6 +555,23 @@ dummy_func( goto resume_frame; } + inst(RETURN_CONST, (--)) { + PyObject *retval = GETITEM(consts, oparg); + Py_INCREF(retval); + assert(EMPTY()); + _PyFrame_SetStackPointer(frame, stack_pointer); + TRACE_FUNCTION_EXIT(); + DTRACE_FUNCTION_EXIT(); + _Py_LeaveRecursiveCallPy(tstate); + assert(frame != &entry_frame); + // GH-99729: We need to unlink the frame *before* clearing it: + _PyInterpreterFrame *dying = frame; + frame = cframe.current_frame = dying->previous; + _PyEvalFrameClearAndPop(tstate, dying); + _PyFrame_StackPush(frame, retval); + goto resume_frame; + } + inst(GET_AITER, (obj -- iter)) { unaryfunc getter = NULL; PyTypeObject *type = Py_TYPE(obj); diff --git a/Python/compile.c b/Python/compile.c index d9ec68958972..df2dffb95bbd 100644 --- a/Python/compile.c +++ b/Python/compile.c @@ -124,6 +124,7 @@ #define IS_SCOPE_EXIT_OPCODE(opcode) \ ((opcode) == RETURN_VALUE || \ + (opcode) == RETURN_CONST || \ (opcode) == RAISE_VARARGS || \ (opcode) == RERAISE) @@ -354,7 +355,7 @@ basicblock_last_instr(const basicblock *b) { static inline int basicblock_returns(const basicblock *b) { struct instr *last = basicblock_last_instr(b); - return last && last->i_opcode == RETURN_VALUE; + return last && (last->i_opcode == RETURN_VALUE || last->i_opcode == RETURN_CONST); } static inline int @@ -1119,6 +1120,8 @@ stack_effect(int opcode, int oparg, int jump) case RETURN_VALUE: return -1; + case RETURN_CONST: + return 0; case SETUP_ANNOTATIONS: return 0; case YIELD_VALUE: @@ -9261,6 +9264,10 @@ optimize_basic_block(PyObject *const_cache, basicblock *bb, PyObject *consts) } Py_DECREF(cnt); break; + case RETURN_VALUE: + INSTR_SET_OP1(inst, RETURN_CONST, oparg); + INSTR_SET_OP0(&bb->b_instr[i + 1], NOP); + break; } break; } @@ -9723,9 +9730,7 @@ remove_unused_consts(basicblock *entryblock, PyObject *consts) /* mark used consts */ for (basicblock *b = entryblock; b != NULL; b = b->b_next) { for (int i = 0; i < b->b_iused; i++) { - if (b->b_instr[i].i_opcode == LOAD_CONST || - b->b_instr[i].i_opcode == KW_NAMES) { - + if (HAS_CONST(b->b_instr[i].i_opcode)) { int index = b->b_instr[i].i_oparg; index_map[index] = index; } @@ -9780,9 +9785,7 @@ remove_unused_consts(basicblock *entryblock, PyObject *consts) for (basicblock *b = entryblock; b != NULL; b = b->b_next) { for (int i = 0; i < b->b_iused; i++) { - if (b->b_instr[i].i_opcode == LOAD_CONST || - b->b_instr[i].i_opcode == KW_NAMES) { - + if (HAS_CONST(b->b_instr[i].i_opcode)) { int index = b->b_instr[i].i_oparg; assert(reverse_index_map[index] >= 0); assert(reverse_index_map[index] < n_used_consts); diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index ab19f4317106..de98b1a4f2ed 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -742,6 +742,23 @@ goto resume_frame; } + TARGET(RETURN_CONST) { + PyObject *retval = GETITEM(consts, oparg); + Py_INCREF(retval); + assert(EMPTY()); + _PyFrame_SetStackPointer(frame, stack_pointer); + TRACE_FUNCTION_EXIT(); + DTRACE_FUNCTION_EXIT(); + _Py_LeaveRecursiveCallPy(tstate); + assert(frame != &entry_frame); + // GH-99729: We need to unlink the frame *before* clearing it: + _PyInterpreterFrame 
*dying = frame; + frame = cframe.current_frame = dying->previous; + _PyEvalFrameClearAndPop(tstate, dying); + _PyFrame_StackPush(frame, retval); + goto resume_frame; + } + TARGET(GET_AITER) { PyObject *obj = PEEK(1); PyObject *iter; diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index d2585351f69f..bae5492c0496 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -92,6 +92,8 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { return 1; case RETURN_VALUE: return 1; + case RETURN_CONST: + return 0; case GET_AITER: return 1; case GET_ANEXT: @@ -438,6 +440,8 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { return 0; case RETURN_VALUE: return 0; + case RETURN_CONST: + return 0; case GET_AITER: return 1; case GET_ANEXT: @@ -745,6 +749,7 @@ struct opcode_metadata { [RAISE_VARARGS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [INTERPRETER_EXIT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [RETURN_VALUE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, + [RETURN_CONST] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [GET_AITER] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [GET_ANEXT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [GET_AWAITABLE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, diff --git a/Python/opcode_targets.h b/Python/opcode_targets.h index f1c3f3e0c4ee..eceb246fac49 100644 --- a/Python/opcode_targets.h +++ b/Python/opcode_targets.h @@ -120,7 +120,7 @@ static void *opcode_targets[256] = { &&TARGET_CONTAINS_OP, &&TARGET_RERAISE, &&TARGET_COPY, - &&TARGET_STORE_FAST__LOAD_FAST, + &&TARGET_RETURN_CONST, &&TARGET_BINARY_OP, &&TARGET_SEND, &&TARGET_LOAD_FAST, @@ -142,7 +142,7 @@ static void *opcode_targets[256] = { &&TARGET_JUMP_BACKWARD, &&TARGET_COMPARE_AND_BRANCH, &&TARGET_CALL_FUNCTION_EX, - &&TARGET_STORE_FAST__STORE_FAST, + &&TARGET_STORE_FAST__LOAD_FAST, &&TARGET_EXTENDED_ARG, &&TARGET_LIST_APPEND, &&TARGET_SET_ADD, @@ -152,15 +152,15 @@ static void *opcode_targets[256] = { &&TARGET_YIELD_VALUE, &&TARGET_RESUME, &&TARGET_MATCH_CLASS, + &&TARGET_STORE_FAST__STORE_FAST, &&TARGET_STORE_SUBSCR_DICT, - &&TARGET_STORE_SUBSCR_LIST_INT, &&TARGET_FORMAT_VALUE, &&TARGET_BUILD_CONST_KEY_MAP, &&TARGET_BUILD_STRING, + &&TARGET_STORE_SUBSCR_LIST_INT, &&TARGET_UNPACK_SEQUENCE_LIST, &&TARGET_UNPACK_SEQUENCE_TUPLE, &&TARGET_UNPACK_SEQUENCE_TWO_TUPLE, - &&_unknown_opcode, &&TARGET_LIST_EXTEND, &&TARGET_SET_UPDATE, &&TARGET_DICT_MERGE, From webhook-mailer at python.org Tue Feb 7 18:44:43 2023 From: webhook-mailer at python.org (gvanrossum) Date: Tue, 07 Feb 2023 23:44:43 -0000 Subject: [Python-checkins] gh-98831: Finish the UNPACK_SEQUENCE family (#101666) Message-ID: https://github.com/python/cpython/commit/aacbdb0c650492756738b044e6ddf8b72f89a1a2 commit: aacbdb0c650492756738b044e6ddf8b72f89a1a2 branch: main author: Guido van Rossum committer: gvanrossum date: 2023-02-07T15:44:37-08:00 summary: gh-98831: Finish the UNPACK_SEQUENCE family (#101666) New generator feature: Generate useful glue for output arrays, so you can just write values to the output array (no bounds checking). Rewrote UNPACK_SEQUENCE_TWO_TUPLE to use this, and also UNPACK_SEQUENCE_{TUPLE,LIST}. 
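(Illustrative aside, not part of the patch: the specialized forms can be observed from Python with the adaptive disassembly mode. The function name and warm-up count below are arbitrary, and the specialized name only shows up on builds with the specializing interpreter enabled, after the code has run enough times to be quickened.)

    import dis

    def swap(pair):
        a, b = pair              # compiles to UNPACK_SEQUENCE 2
        return b, a

    for _ in range(1000):        # warm up so the instruction can specialize
        swap((1, 2))

    # With this change the specialized instruction writes both values
    # directly into the output array; adaptive=True shows the specialized
    # name, e.g. UNPACK_SEQUENCE_TWO_TUPLE.
    dis.dis(swap, adaptive=True)
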
files: M Python/bytecodes.c M Python/generated_cases.c.h M Python/opcode_metadata.h M Tools/cases_generator/generate_cases.py M Tools/cases_generator/test_generator.py diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 0d7d922816ce..c6c00a7ab9b0 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -876,6 +876,13 @@ dummy_func( } } + family(unpack_sequence, INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE) = { + UNPACK_SEQUENCE, + UNPACK_SEQUENCE_TWO_TUPLE, + UNPACK_SEQUENCE_TUPLE, + UNPACK_SEQUENCE_LIST, + }; + inst(UNPACK_SEQUENCE, (unused/1, seq -- unused[oparg])) { #if ENABLE_SPECIALIZATION _PyUnpackSequenceCache *cache = (_PyUnpackSequenceCache *)next_instr; @@ -894,43 +901,36 @@ dummy_func( ERROR_IF(res == 0, error); } - inst(UNPACK_SEQUENCE_TWO_TUPLE, (unused/1, seq -- v1, v0)) { + inst(UNPACK_SEQUENCE_TWO_TUPLE, (unused/1, seq -- values[oparg])) { DEOPT_IF(!PyTuple_CheckExact(seq), UNPACK_SEQUENCE); DEOPT_IF(PyTuple_GET_SIZE(seq) != 2, UNPACK_SEQUENCE); + assert(oparg == 2); STAT_INC(UNPACK_SEQUENCE, hit); - v1 = Py_NewRef(PyTuple_GET_ITEM(seq, 1)); - v0 = Py_NewRef(PyTuple_GET_ITEM(seq, 0)); + values[0] = Py_NewRef(PyTuple_GET_ITEM(seq, 1)); + values[1] = Py_NewRef(PyTuple_GET_ITEM(seq, 0)); Py_DECREF(seq); } - // stack effect: (__0 -- __array[oparg]) - inst(UNPACK_SEQUENCE_TUPLE) { - PyObject *seq = TOP(); + inst(UNPACK_SEQUENCE_TUPLE, (unused/1, seq -- values[oparg])) { DEOPT_IF(!PyTuple_CheckExact(seq), UNPACK_SEQUENCE); DEOPT_IF(PyTuple_GET_SIZE(seq) != oparg, UNPACK_SEQUENCE); STAT_INC(UNPACK_SEQUENCE, hit); - STACK_SHRINK(1); PyObject **items = _PyTuple_ITEMS(seq); - while (oparg--) { - PUSH(Py_NewRef(items[oparg])); + for (int i = oparg; --i >= 0; ) { + *values++ = Py_NewRef(items[i]); } Py_DECREF(seq); - JUMPBY(INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE); } - // stack effect: (__0 -- __array[oparg]) - inst(UNPACK_SEQUENCE_LIST) { - PyObject *seq = TOP(); + inst(UNPACK_SEQUENCE_LIST, (unused/1, seq -- values[oparg])) { DEOPT_IF(!PyList_CheckExact(seq), UNPACK_SEQUENCE); DEOPT_IF(PyList_GET_SIZE(seq) != oparg, UNPACK_SEQUENCE); STAT_INC(UNPACK_SEQUENCE, hit); - STACK_SHRINK(1); PyObject **items = _PyList_ITEMS(seq); - while (oparg--) { - PUSH(Py_NewRef(items[oparg])); + for (int i = oparg; --i >= 0; ) { + *values++ = Py_NewRef(items[i]); } Py_DECREF(seq); - JUMPBY(INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE); } inst(UNPACK_EX, (seq -- unused[oparg & 0xFF], unused, unused[oparg >> 8])) { @@ -3185,6 +3185,3 @@ family(call, INLINE_CACHE_ENTRIES_CALL) = { CALL_NO_KW_METHOD_DESCRIPTOR_O, CALL_NO_KW_STR_1, CALL_NO_KW_TUPLE_1, CALL_NO_KW_TYPE_1 }; family(store_fast) = { STORE_FAST, STORE_FAST__LOAD_FAST, STORE_FAST__STORE_FAST }; -family(unpack_sequence, INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE) = { - UNPACK_SEQUENCE, UNPACK_SEQUENCE_LIST, - UNPACK_SEQUENCE_TUPLE, UNPACK_SEQUENCE_TWO_TUPLE }; diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index de98b1a4f2ed..ded68d011c6b 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -1125,6 +1125,7 @@ TARGET(UNPACK_SEQUENCE) { PREDICTED(UNPACK_SEQUENCE); + static_assert(INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE == 1, "incorrect cache size"); PyObject *seq = PEEK(1); #if ENABLE_SPECIALIZATION _PyUnpackSequenceCache *cache = (_PyUnpackSequenceCache *)next_instr; @@ -1149,48 +1150,51 @@ TARGET(UNPACK_SEQUENCE_TWO_TUPLE) { PyObject *seq = PEEK(1); - PyObject *v1; - PyObject *v0; + PyObject **values = stack_pointer - (1); DEOPT_IF(!PyTuple_CheckExact(seq), UNPACK_SEQUENCE); DEOPT_IF(PyTuple_GET_SIZE(seq) != 2, 
UNPACK_SEQUENCE); + assert(oparg == 2); STAT_INC(UNPACK_SEQUENCE, hit); - v1 = Py_NewRef(PyTuple_GET_ITEM(seq, 1)); - v0 = Py_NewRef(PyTuple_GET_ITEM(seq, 0)); + values[0] = Py_NewRef(PyTuple_GET_ITEM(seq, 1)); + values[1] = Py_NewRef(PyTuple_GET_ITEM(seq, 0)); Py_DECREF(seq); - STACK_GROW(1); - POKE(1, v0); - POKE(2, v1); + STACK_SHRINK(1); + STACK_GROW(oparg); JUMPBY(1); DISPATCH(); } TARGET(UNPACK_SEQUENCE_TUPLE) { - PyObject *seq = TOP(); + PyObject *seq = PEEK(1); + PyObject **values = stack_pointer - (1); DEOPT_IF(!PyTuple_CheckExact(seq), UNPACK_SEQUENCE); DEOPT_IF(PyTuple_GET_SIZE(seq) != oparg, UNPACK_SEQUENCE); STAT_INC(UNPACK_SEQUENCE, hit); - STACK_SHRINK(1); PyObject **items = _PyTuple_ITEMS(seq); - while (oparg--) { - PUSH(Py_NewRef(items[oparg])); + for (int i = oparg; --i >= 0; ) { + *values++ = Py_NewRef(items[i]); } Py_DECREF(seq); - JUMPBY(INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE); + STACK_SHRINK(1); + STACK_GROW(oparg); + JUMPBY(1); DISPATCH(); } TARGET(UNPACK_SEQUENCE_LIST) { - PyObject *seq = TOP(); + PyObject *seq = PEEK(1); + PyObject **values = stack_pointer - (1); DEOPT_IF(!PyList_CheckExact(seq), UNPACK_SEQUENCE); DEOPT_IF(PyList_GET_SIZE(seq) != oparg, UNPACK_SEQUENCE); STAT_INC(UNPACK_SEQUENCE, hit); - STACK_SHRINK(1); PyObject **items = _PyList_ITEMS(seq); - while (oparg--) { - PUSH(Py_NewRef(items[oparg])); + for (int i = oparg; --i >= 0; ) { + *values++ = Py_NewRef(items[i]); } Py_DECREF(seq); - JUMPBY(INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE); + STACK_SHRINK(1); + STACK_GROW(oparg); + JUMPBY(1); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index bae5492c0496..c1e12a4bbede 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -127,9 +127,9 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case UNPACK_SEQUENCE_TWO_TUPLE: return 1; case UNPACK_SEQUENCE_TUPLE: - return -1; + return 1; case UNPACK_SEQUENCE_LIST: - return -1; + return 1; case UNPACK_EX: return 1; case STORE_ATTR: @@ -473,11 +473,11 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case UNPACK_SEQUENCE: return oparg; case UNPACK_SEQUENCE_TWO_TUPLE: - return 2; + return oparg; case UNPACK_SEQUENCE_TUPLE: - return -1; + return oparg; case UNPACK_SEQUENCE_LIST: - return -1; + return oparg; case UNPACK_EX: return (oparg & 0xFF) + (oparg >> 8) + 1; case STORE_ATTR: @@ -765,9 +765,9 @@ struct opcode_metadata { [STORE_NAME] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [DELETE_NAME] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [UNPACK_SEQUENCE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, - [UNPACK_SEQUENCE_TWO_TUPLE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IXC }, - [UNPACK_SEQUENCE_TUPLE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [UNPACK_SEQUENCE_LIST] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, + [UNPACK_SEQUENCE_TWO_TUPLE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, + [UNPACK_SEQUENCE_TUPLE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, + [UNPACK_SEQUENCE_LIST] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, [UNPACK_EX] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [STORE_ATTR] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, [DELETE_ATTR] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, diff --git a/Tools/cases_generator/generate_cases.py b/Tools/cases_generator/generate_cases.py index 3925583b40e7..4f94b48d114d 100644 --- a/Tools/cases_generator/generate_cases.py +++ 
b/Tools/cases_generator/generate_cases.py @@ -180,11 +180,8 @@ def assign(self, dst: StackEffect, src: StackEffect): stmt = f"if ({src.cond}) {{ {stmt} }}" self.emit(stmt) elif m := re.match(r"^&PEEK\(.*\)$", dst.name): - # NOTE: MOVE_ITEMS() does not actually exist. - # The only supported output array forms are: - # - unused[...] - # - X[...] where X[...] matches an input array exactly - self.emit(f"MOVE_ITEMS({dst.name}, {src.name}, {src.size});") + # The user code is responsible for writing to the output array. + pass elif m := re.match(r"^REG\(oparg(\d+)\)$", dst.name): self.emit(f"Py_XSETREF({dst.name}, {cast}{src.name});") else: @@ -309,10 +306,24 @@ def write(self, out: Formatter) -> None: out.declare(ieffect, src) # Write output stack effect variable declarations + isize = string_effect_size(list_effect_size(self.input_effects)) input_names = {ieffect.name for ieffect in self.input_effects} - for oeffect in self.output_effects: + for i, oeffect in enumerate(self.output_effects): if oeffect.name not in input_names: - out.declare(oeffect, None) + if oeffect.size: + osize = string_effect_size( + list_effect_size([oeff for oeff in self.output_effects[:i]]) + ) + offset = "stack_pointer" + if isize != osize: + if isize != "0": + offset += f" - ({isize})" + if osize != "0": + offset += f" + {osize}" + src = StackEffect(offset, "PyObject **") + out.declare(oeffect, src) + else: + out.declare(oeffect, None) # out.emit(f"JUMPBY(OPSIZE({self.inst.name}) - 1);") diff --git a/Tools/cases_generator/test_generator.py b/Tools/cases_generator/test_generator.py index 9df97d24ab6f..0c3d04b45dd9 100644 --- a/Tools/cases_generator/test_generator.py +++ b/Tools/cases_generator/test_generator.py @@ -424,20 +424,18 @@ def test_array_input(): def test_array_output(): input = """ - inst(OP, (-- below, values[oparg*3], above)) { - spam(); + inst(OP, (unused, unused -- below, values[oparg*3], above)) { + spam(values, oparg); } """ output = """ TARGET(OP) { PyObject *below; - PyObject **values; + PyObject **values = stack_pointer - (2) + 1; PyObject *above; - spam(); - STACK_GROW(2); + spam(values, oparg); STACK_GROW(oparg*3); POKE(1, above); - MOVE_ITEMS(&PEEK(1 + oparg*3), values, oparg*3); POKE(2 + oparg*3, below); DISPATCH(); } @@ -446,18 +444,17 @@ def test_array_output(): def test_array_input_output(): input = """ - inst(OP, (below, values[oparg] -- values[oparg], above)) { - spam(); + inst(OP, (values[oparg] -- values[oparg], above)) { + spam(values, oparg); } """ output = """ TARGET(OP) { PyObject **values = &PEEK(oparg); - PyObject *below = PEEK(1 + oparg); PyObject *above; - spam(); + spam(values, oparg); + STACK_GROW(1); POKE(1, above); - MOVE_ITEMS(&PEEK(1 + oparg), values, oparg); DISPATCH(); } """ From webhook-mailer at python.org Tue Feb 7 20:36:02 2023 From: webhook-mailer at python.org (gvanrossum) Date: Wed, 08 Feb 2023 01:36:02 -0000 Subject: [Python-checkins] gh-98831: Modernize FORMAT_VALUE (#101628) Message-ID: https://github.com/python/cpython/commit/b2b85b5db9cfdb24f966b61757536a898abc3830 commit: b2b85b5db9cfdb24f966b61757536a898abc3830 branch: main author: Guido van Rossum committer: gvanrossum date: 2023-02-07T17:35:55-08:00 summary: gh-98831: Modernize FORMAT_VALUE (#101628) Generator update: support balanced parentheses and brackets in conditions and size expressions. 
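(Illustrative aside, not part of the patch: FORMAT_VALUE's oparg encodes the conversion (!s, !r, !a) and whether a format spec sits on the stack, which is exactly the conditional input expressed in the new stack-effect declaration below. On an interpreter where FORMAT_VALUE is still the f-string formatting opcode, such as 3.11 or an early 3.12 build, the shape is visible with dis; later releases split this opcode up, so treat the output as a sketch.)

    import dis

    # LOAD_NAME value; LOAD_CONST '>10'; FORMAT_VALUE (repr conversion + format spec)
    dis.dis(compile('f"{value!r:>10}"', '<demo>', 'eval'))
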
files: M Python/bytecodes.c M Python/generated_cases.c.h M Python/opcode_metadata.h M Tools/cases_generator/parser.py M Tools/cases_generator/test_generator.py diff --git a/Python/bytecodes.c b/Python/bytecodes.c index c6c00a7ab9b0..d0f0513a36f8 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -3054,18 +3054,10 @@ dummy_func( ERROR_IF(slice == NULL, error); } - // error: FORMAT_VALUE has irregular stack effect - inst(FORMAT_VALUE) { + inst(FORMAT_VALUE, (value, fmt_spec if ((oparg & FVS_MASK) == FVS_HAVE_SPEC) -- result)) { /* Handles f-string value formatting. */ - PyObject *result; - PyObject *fmt_spec; - PyObject *value; PyObject *(*conv_fn)(PyObject *); int which_conversion = oparg & FVC_MASK; - int have_fmt_spec = (oparg & FVS_MASK) == FVS_HAVE_SPEC; - - fmt_spec = have_fmt_spec ? POP() : NULL; - value = POP(); /* See if any conversion is specified. */ switch (which_conversion) { @@ -3088,7 +3080,7 @@ dummy_func( Py_DECREF(value); if (result == NULL) { Py_XDECREF(fmt_spec); - goto error; + ERROR_IF(true, error); } value = result; } @@ -3106,12 +3098,8 @@ dummy_func( result = PyObject_Format(value, fmt_spec); Py_DECREF(value); Py_XDECREF(fmt_spec); - if (result == NULL) { - goto error; - } + ERROR_IF(result == NULL, error); } - - PUSH(result); } inst(COPY, (bottom, unused[oparg-1] -- bottom, unused[oparg-1], top)) { diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index ded68d011c6b..3ef808691e01 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -3703,16 +3703,12 @@ } TARGET(FORMAT_VALUE) { - /* Handles f-string value formatting. */ + PyObject *fmt_spec = ((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? PEEK((((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 1 : 0)) : NULL; + PyObject *value = PEEK(1 + (((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 1 : 0)); PyObject *result; - PyObject *fmt_spec; - PyObject *value; + /* Handles f-string value formatting. */ PyObject *(*conv_fn)(PyObject *); int which_conversion = oparg & FVC_MASK; - int have_fmt_spec = (oparg & FVS_MASK) == FVS_HAVE_SPEC; - - fmt_spec = have_fmt_spec ? POP() : NULL; - value = POP(); /* See if any conversion is specified. */ switch (which_conversion) { @@ -3735,7 +3731,7 @@ Py_DECREF(value); if (result == NULL) { Py_XDECREF(fmt_spec); - goto error; + if (true) { STACK_SHRINK((((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 1 : 0)); goto pop_1_error; } } value = result; } @@ -3753,12 +3749,10 @@ result = PyObject_Format(value, fmt_spec); Py_DECREF(value); Py_XDECREF(fmt_spec); - if (result == NULL) { - goto error; - } + if (result == NULL) { STACK_SHRINK((((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 1 : 0)); goto pop_1_error; } } - - PUSH(result); + STACK_SHRINK((((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 1 : 0)); + POKE(1, result); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index c1e12a4bbede..52bab1c680e3 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -333,7 +333,7 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case BUILD_SLICE: return ((oparg == 3) ? 1 : 0) + 2; case FORMAT_VALUE: - return -1; + return (((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 
1 : 0) + 1; case COPY: return (oparg-1) + 1; case BINARY_OP: @@ -681,7 +681,7 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case BUILD_SLICE: return 1; case FORMAT_VALUE: - return -1; + return 1; case COPY: return (oparg-1) + 2; case BINARY_OP: diff --git a/Tools/cases_generator/parser.py b/Tools/cases_generator/parser.py index ced66faee493..c7c8d8af6b73 100644 --- a/Tools/cases_generator/parser.py +++ b/Tools/cases_generator/parser.py @@ -263,7 +263,14 @@ def stack_effect(self) -> StackEffect | None: @contextual def expression(self) -> Expression | None: tokens: list[lx.Token] = [] - while (tkn := self.peek()) and tkn.kind not in (lx.RBRACKET, lx.RPAREN): + level = 1 + while tkn := self.peek(): + if tkn.kind in (lx.LBRACKET, lx.LPAREN): + level += 1 + elif tkn.kind in (lx.RBRACKET, lx.RPAREN): + level -= 1 + if level == 0: + break tokens.append(tkn) self.next() if not tokens: diff --git a/Tools/cases_generator/test_generator.py b/Tools/cases_generator/test_generator.py index 0c3d04b45dd9..33bba7ee340a 100644 --- a/Tools/cases_generator/test_generator.py +++ b/Tools/cases_generator/test_generator.py @@ -500,20 +500,20 @@ def test_register(): def test_cond_effect(): input = """ - inst(OP, (aa, input if (oparg & 1), cc -- xx, output if (oparg & 2), zz)) { + inst(OP, (aa, input if ((oparg & 1) == 1), cc -- xx, output if (oparg & 2), zz)) { output = spam(oparg, input); } """ output = """ TARGET(OP) { PyObject *cc = PEEK(1); - PyObject *input = (oparg & 1) ? PEEK(1 + ((oparg & 1) ? 1 : 0)) : NULL; - PyObject *aa = PEEK(2 + ((oparg & 1) ? 1 : 0)); + PyObject *input = ((oparg & 1) == 1) ? PEEK(1 + (((oparg & 1) == 1) ? 1 : 0)) : NULL; + PyObject *aa = PEEK(2 + (((oparg & 1) == 1) ? 1 : 0)); PyObject *xx; PyObject *output = NULL; PyObject *zz; output = spam(oparg, input); - STACK_SHRINK(((oparg & 1) ? 1 : 0)); + STACK_SHRINK((((oparg & 1) == 1) ? 1 : 0)); STACK_GROW(((oparg & 2) ? 1 : 0)); POKE(1, zz); if (oparg & 2) { POKE(1 + ((oparg & 2) ? 
1 : 0), output); } From webhook-mailer at python.org Tue Feb 7 21:01:16 2023 From: webhook-mailer at python.org (rhettinger) Date: Wed, 08 Feb 2023 02:01:16 -0000 Subject: [Python-checkins] gh-101446: Change `repr` of `collections.OrderedDict` (#101661) Message-ID: https://github.com/python/cpython/commit/790ff6bc6a56b4bd6e403aa43a984b99f7171dd7 commit: 790ff6bc6a56b4bd6e403aa43a984b99f7171dd7 branch: main author: Nikita Sobolev committer: rhettinger date: 2023-02-07T20:01:10-06:00 summary: gh-101446: Change `repr` of `collections.OrderedDict` (#101661) files: A Misc/NEWS.d/next/Library/2023-02-07-22-21-46.gh-issue-101446.-c0FdK.rst M Lib/collections/__init__.py M Lib/test/test_ordered_dict.py M Objects/odictobject.c diff --git a/Lib/collections/__init__.py b/Lib/collections/__init__.py index b5e4d16e9dbc..a5393aad4249 100644 --- a/Lib/collections/__init__.py +++ b/Lib/collections/__init__.py @@ -267,7 +267,7 @@ def __repr__(self): 'od.__repr__() <==> repr(od)' if not self: return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, list(self.items())) + return '%s(%r)' % (self.__class__.__name__, dict(self.items())) def __reduce__(self): 'Return state information for pickling' diff --git a/Lib/test/test_ordered_dict.py b/Lib/test/test_ordered_dict.py index 37447fd249b8..decbcc2419c9 100644 --- a/Lib/test/test_ordered_dict.py +++ b/Lib/test/test_ordered_dict.py @@ -362,7 +362,7 @@ def test_repr(self): OrderedDict = self.OrderedDict od = OrderedDict([('c', 1), ('b', 2), ('a', 3), ('d', 4), ('e', 5), ('f', 6)]) self.assertEqual(repr(od), - "OrderedDict([('c', 1), ('b', 2), ('a', 3), ('d', 4), ('e', 5), ('f', 6)])") + "OrderedDict({'c': 1, 'b': 2, 'a': 3, 'd': 4, 'e': 5, 'f': 6})") self.assertEqual(eval(repr(od)), od) self.assertEqual(repr(OrderedDict()), "OrderedDict()") @@ -372,7 +372,7 @@ def test_repr_recursive(self): od = OrderedDict.fromkeys('abc') od['x'] = od self.assertEqual(repr(od), - "OrderedDict([('a', None), ('b', None), ('c', None), ('x', ...)])") + "OrderedDict({'a': None, 'b': None, 'c': None, 'x': ...})") def test_repr_recursive_values(self): OrderedDict = self.OrderedDict diff --git a/Misc/NEWS.d/next/Library/2023-02-07-22-21-46.gh-issue-101446.-c0FdK.rst b/Misc/NEWS.d/next/Library/2023-02-07-22-21-46.gh-issue-101446.-c0FdK.rst new file mode 100644 index 000000000000..ddf897b71bb1 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-07-22-21-46.gh-issue-101446.-c0FdK.rst @@ -0,0 +1,2 @@ +Change repr of :class:`collections.OrderedDict` to use regular dictionary +formating instead of pairs of keys and values. diff --git a/Objects/odictobject.c b/Objects/odictobject.c index 4976b70b5dff..ab2bbed35873 100644 --- a/Objects/odictobject.c +++ b/Objects/odictobject.c @@ -1367,7 +1367,7 @@ static PyObject * odict_repr(PyODictObject *self) { int i; - PyObject *pieces = NULL, *result = NULL; + PyObject *result = NULL, *dcopy = NULL; if (PyODict_SIZE(self) == 0) return PyUnicode_FromFormat("%s()", _PyType_Name(Py_TYPE(self))); @@ -1377,57 +1377,16 @@ odict_repr(PyODictObject *self) return i > 0 ? 
PyUnicode_FromString("...") : NULL; } - if (PyODict_CheckExact(self)) { - Py_ssize_t count = 0; - _ODictNode *node; - pieces = PyList_New(PyODict_SIZE(self)); - if (pieces == NULL) - goto Done; - - _odict_FOREACH(self, node) { - PyObject *pair; - PyObject *key = _odictnode_KEY(node); - PyObject *value = _odictnode_VALUE(node, self); - if (value == NULL) { - if (!PyErr_Occurred()) - PyErr_SetObject(PyExc_KeyError, key); - goto Done; - } - pair = PyTuple_Pack(2, key, value); - if (pair == NULL) - goto Done; - - if (count < PyList_GET_SIZE(pieces)) - PyList_SET_ITEM(pieces, count, pair); /* steals reference */ - else { - if (PyList_Append(pieces, pair) < 0) { - Py_DECREF(pair); - goto Done; - } - Py_DECREF(pair); - } - count++; - } - if (count < PyList_GET_SIZE(pieces)) { - Py_SET_SIZE(pieces, count); - } - } - else { - PyObject *items = PyObject_CallMethodNoArgs( - (PyObject *)self, &_Py_ID(items)); - if (items == NULL) - goto Done; - pieces = PySequence_List(items); - Py_DECREF(items); - if (pieces == NULL) - goto Done; + dcopy = PyDict_Copy((PyObject *)self); + if (dcopy == NULL) { + goto Done; } result = PyUnicode_FromFormat("%s(%R)", - _PyType_Name(Py_TYPE(self)), pieces); + _PyType_Name(Py_TYPE(self)), + dcopy); Done: - Py_XDECREF(pieces); Py_ReprLeave((PyObject *)self); return result; } From webhook-mailer at python.org Tue Feb 7 23:03:29 2023 From: webhook-mailer at python.org (gvanrossum) Date: Wed, 08 Feb 2023 04:03:29 -0000 Subject: [Python-checkins] gh-98831: Modernize CALL_FUNCTION_EX (#101627) Message-ID: https://github.com/python/cpython/commit/a9f01448a99c6a2ae34d448806176f2df3a5b323 commit: a9f01448a99c6a2ae34d448806176f2df3a5b323 branch: main author: Guido van Rossum committer: gvanrossum date: 2023-02-07T20:03:22-08:00 summary: gh-98831: Modernize CALL_FUNCTION_EX (#101627) New generator feature: Move CHECK_EVAL_BREAKER() call to just before DISPATCH(). files: M Python/bytecodes.c M Python/generated_cases.c.h M Python/opcode_metadata.h M Tools/cases_generator/generate_cases.py M Tools/cases_generator/test_generator.py diff --git a/Python/bytecodes.c b/Python/bytecodes.c index d0f0513a36f8..9633f34212a6 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -2951,26 +2951,21 @@ dummy_func( CHECK_EVAL_BREAKER(); } - // error: CALL_FUNCTION_EX has irregular stack effect - inst(CALL_FUNCTION_EX) { - PyObject *func, *callargs, *kwargs = NULL, *result; - if (oparg & 0x01) { - kwargs = POP(); + inst(CALL_FUNCTION_EX, (unused, func, callargs, kwargs if (oparg & 1) -- result)) { + if (oparg & 1) { // DICT_MERGE is called before this opcode if there are kwargs. // It converts all dict subtypes in kwargs into regular dicts. 
assert(PyDict_CheckExact(kwargs)); } - callargs = POP(); - func = TOP(); if (!PyTuple_CheckExact(callargs)) { if (check_args_iterable(tstate, func, callargs) < 0) { - Py_DECREF(callargs); goto error; } - Py_SETREF(callargs, PySequence_Tuple(callargs)); - if (callargs == NULL) { + PyObject *tuple = PySequence_Tuple(callargs); + if (tuple == NULL) { goto error; } + Py_SETREF(callargs, tuple); } assert(PyTuple_CheckExact(callargs)); @@ -2979,12 +2974,8 @@ dummy_func( Py_DECREF(callargs); Py_XDECREF(kwargs); - STACK_SHRINK(1); - assert(TOP() == NULL); - SET_TOP(result); - if (result == NULL) { - goto error; - } + assert(PEEK(3 + (oparg & 1)) == NULL); + ERROR_IF(result == NULL, error); CHECK_EVAL_BREAKER(); } diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 3ef808691e01..f38286441be4 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -3587,24 +3587,24 @@ TARGET(CALL_FUNCTION_EX) { PREDICTED(CALL_FUNCTION_EX); - PyObject *func, *callargs, *kwargs = NULL, *result; - if (oparg & 0x01) { - kwargs = POP(); + PyObject *kwargs = (oparg & 1) ? PEEK(((oparg & 1) ? 1 : 0)) : NULL; + PyObject *callargs = PEEK(1 + ((oparg & 1) ? 1 : 0)); + PyObject *func = PEEK(2 + ((oparg & 1) ? 1 : 0)); + PyObject *result; + if (oparg & 1) { // DICT_MERGE is called before this opcode if there are kwargs. // It converts all dict subtypes in kwargs into regular dicts. assert(PyDict_CheckExact(kwargs)); } - callargs = POP(); - func = TOP(); if (!PyTuple_CheckExact(callargs)) { if (check_args_iterable(tstate, func, callargs) < 0) { - Py_DECREF(callargs); goto error; } - Py_SETREF(callargs, PySequence_Tuple(callargs)); - if (callargs == NULL) { + PyObject *tuple = PySequence_Tuple(callargs); + if (tuple == NULL) { goto error; } + Py_SETREF(callargs, tuple); } assert(PyTuple_CheckExact(callargs)); @@ -3613,12 +3613,11 @@ Py_DECREF(callargs); Py_XDECREF(kwargs); - STACK_SHRINK(1); - assert(TOP() == NULL); - SET_TOP(result); - if (result == NULL) { - goto error; - } + assert(PEEK(3 + (oparg & 1)) == NULL); + if (result == NULL) { STACK_SHRINK(((oparg & 1) ? 1 : 0)); goto pop_3_error; } + STACK_SHRINK(((oparg & 1) ? 1 : 0)); + STACK_SHRINK(2); + POKE(1, result); CHECK_EVAL_BREAKER(); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index 52bab1c680e3..054ef6c29982 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -325,7 +325,7 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case CALL_NO_KW_METHOD_DESCRIPTOR_FAST: return -1; case CALL_FUNCTION_EX: - return -1; + return ((oparg & 1) ? 1 : 0) + 3; case MAKE_FUNCTION: return ((oparg & 0x01) ? 1 : 0) + ((oparg & 0x02) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0) + ((oparg & 0x08) ? 
1 : 0) + 1; case RETURN_GENERATOR: @@ -673,7 +673,7 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case CALL_NO_KW_METHOD_DESCRIPTOR_FAST: return -1; case CALL_FUNCTION_EX: - return -1; + return 1; case MAKE_FUNCTION: return 1; case RETURN_GENERATOR: diff --git a/Tools/cases_generator/generate_cases.py b/Tools/cases_generator/generate_cases.py index 4f94b48d114d..9b5aa914cdee 100644 --- a/Tools/cases_generator/generate_cases.py +++ b/Tools/cases_generator/generate_cases.py @@ -227,7 +227,8 @@ def __init__(self, inst: parser.InstDef): self.kind = inst.kind self.name = inst.name self.block = inst.block - self.block_text, self.predictions = extract_block_text(self.block) + self.block_text, self.check_eval_breaker, self.predictions = \ + extract_block_text(self.block) self.always_exits = always_exits(self.block_text) self.cache_effects = [ effect for effect in inst.inputs if isinstance(effect, parser.CacheEffect) @@ -1027,6 +1028,8 @@ def write_instr(self, instr: Instruction) -> None: if not instr.always_exits: for prediction in instr.predictions: self.out.emit(f"PREDICT({prediction});") + if instr.check_eval_breaker: + self.out.emit("CHECK_EVAL_BREAKER();") self.out.emit(f"DISPATCH();") def write_super(self, sup: SuperInstruction) -> None: @@ -1102,7 +1105,7 @@ def wrap_super_or_macro(self, up: SuperOrMacroInstruction): self.out.emit(f"DISPATCH();") -def extract_block_text(block: parser.Block) -> tuple[list[str], list[str]]: +def extract_block_text(block: parser.Block) -> tuple[list[str], bool, list[str]]: # Get lines of text with proper dedent blocklines = block.text.splitlines(True) @@ -1122,6 +1125,12 @@ def extract_block_text(block: parser.Block) -> tuple[list[str], list[str]]: while blocklines and not blocklines[-1].strip(): blocklines.pop() + # Separate CHECK_EVAL_BREAKER() macro from end + check_eval_breaker = \ + blocklines != [] and blocklines[-1].strip() == "CHECK_EVAL_BREAKER();" + if check_eval_breaker: + del blocklines[-1] + # Separate PREDICT(...) 
macros from end predictions: list[str] = [] while blocklines and ( @@ -1130,7 +1139,7 @@ def extract_block_text(block: parser.Block) -> tuple[list[str], list[str]]: predictions.insert(0, m.group(1)) blocklines.pop() - return blocklines, predictions + return blocklines, check_eval_breaker, predictions def always_exits(lines: list[str]) -> bool: diff --git a/Tools/cases_generator/test_generator.py b/Tools/cases_generator/test_generator.py index 33bba7ee340a..036094ac8ef4 100644 --- a/Tools/cases_generator/test_generator.py +++ b/Tools/cases_generator/test_generator.py @@ -177,15 +177,16 @@ def test_overlap(): """ run_cases_test(input, output) -def test_predictions(): +def test_predictions_and_eval_breaker(): input = """ inst(OP1, (--)) { } inst(OP2, (--)) { } - inst(OP3, (--)) { + inst(OP3, (arg -- res)) { DEOPT_IF(xxx, OP1); PREDICT(OP2); + CHECK_EVAL_BREAKER(); } """ output = """ @@ -200,8 +201,12 @@ def test_predictions(): } TARGET(OP3) { + PyObject *arg = PEEK(1); + PyObject *res; DEOPT_IF(xxx, OP1); + POKE(1, res); PREDICT(OP2); + CHECK_EVAL_BREAKER(); DISPATCH(); } """ From webhook-mailer at python.org Wed Feb 8 03:12:53 2023 From: webhook-mailer at python.org (gpshead) Date: Wed, 08 Feb 2023 08:12:53 -0000 Subject: [Python-checkins] gh-47937: Note that Popen attributes are read-only (#93070) Message-ID: https://github.com/python/cpython/commit/027adf42cd85db41fee05b0a40d89ef822876c97 commit: 027adf42cd85db41fee05b0a40d89ef822876c97 branch: main author: Stanley <46876382+slateny at users.noreply.github.com> committer: gpshead date: 2023-02-08T00:12:46-08:00 summary: gh-47937: Note that Popen attributes are read-only (#93070) * Note that Popen attributes aren't meant to be set by users by rewording the text about the attributes. * Also update some universal_newlines references to mention the modern text parameter name while in the area. Co-authored-by: Gregory P. Smith files: M Doc/library/subprocess.rst diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index 785251afdf26..a87369a2461a 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -457,7 +457,7 @@ functions. - :const:`0` means unbuffered (read and write are one system call and can return short) - :const:`1` means line buffered - (only usable if ``universal_newlines=True`` i.e., in a text mode) + (only usable if ``text=True`` or ``universal_newlines=True``) - any other positive value means use a buffer of approximately that size - negative bufsize (the default) means the system default of @@ -847,7 +847,8 @@ Instances of the :class:`Popen` class have the following methods: On Windows :meth:`kill` is an alias for :meth:`terminate`. -The following attributes are also available: +The following attributes are also set by the class for you to access. +Reassigning them to new values is unsupported: .. attribute:: Popen.args @@ -860,9 +861,9 @@ The following attributes are also available: If the *stdin* argument was :data:`PIPE`, this attribute is a writeable stream object as returned by :func:`open`. If the *encoding* or *errors* - arguments were specified or the *universal_newlines* argument was ``True``, - the stream is a text stream, otherwise it is a byte stream. If the *stdin* - argument was not :data:`PIPE`, this attribute is ``None``. + arguments were specified or the *text* or *universal_newlines* argument + was ``True``, the stream is a text stream, otherwise it is a byte stream. + If the *stdin* argument was not :data:`PIPE`, this attribute is ``None``. .. 
attribute:: Popen.stdout @@ -870,9 +871,9 @@ The following attributes are also available: If the *stdout* argument was :data:`PIPE`, this attribute is a readable stream object as returned by :func:`open`. Reading from the stream provides output from the child process. If the *encoding* or *errors* arguments were - specified or the *universal_newlines* argument was ``True``, the stream is a - text stream, otherwise it is a byte stream. If the *stdout* argument was not - :data:`PIPE`, this attribute is ``None``. + specified or the *text* or *universal_newlines* argument was ``True``, the + stream is a text stream, otherwise it is a byte stream. If the *stdout* + argument was not :data:`PIPE`, this attribute is ``None``. .. attribute:: Popen.stderr @@ -880,9 +881,9 @@ The following attributes are also available: If the *stderr* argument was :data:`PIPE`, this attribute is a readable stream object as returned by :func:`open`. Reading from the stream provides error output from the child process. If the *encoding* or *errors* arguments - were specified or the *universal_newlines* argument was ``True``, the stream - is a text stream, otherwise it is a byte stream. If the *stderr* argument was - not :data:`PIPE`, this attribute is ``None``. + were specified or the *text* or *universal_newlines* argument was ``True``, the + stream is a text stream, otherwise it is a byte stream. If the *stderr* argument + was not :data:`PIPE`, this attribute is ``None``. .. warning:: From webhook-mailer at python.org Wed Feb 8 04:31:56 2023 From: webhook-mailer at python.org (markshannon) Date: Wed, 08 Feb 2023 09:31:56 -0000 Subject: [Python-checkins] GH-101578: Normalize the current exception (GH-101607) Message-ID: https://github.com/python/cpython/commit/feec49c40736fc05626a183a8d14c4ebbea5ae28 commit: feec49c40736fc05626a183a8d14c4ebbea5ae28 branch: main author: Mark Shannon committer: markshannon date: 2023-02-08T09:31:12Z summary: GH-101578: Normalize the current exception (GH-101607) * Make sure that the current exception is always normalized. * Remove redundant type and traceback fields for the current exception. * Add new API functions: PyErr_GetRaisedException, PyErr_SetRaisedException * Add new API functions: PyException_GetArgs, PyException_SetArgs files: A Misc/NEWS.d/next/C API/2023-02-06-16-14-30.gh-issue-101578.PW5fA9.rst M Doc/c-api/exceptions.rst M Doc/data/stable_abi.dat M Include/cpython/pyerrors.h M Include/cpython/pystate.h M Include/internal/pycore_pyerrors.h M Include/pyerrors.h M Lib/test/test_capi/test_misc.py M Lib/test/test_exceptions.py M Lib/test/test_stable_abi_ctypes.py M Misc/stable_abi.toml M Modules/_testcapi/heaptype.c M Modules/_testcapimodule.c M Modules/gcmodule.c M Objects/dictobject.c M Objects/exceptions.c M Objects/object.c M PC/python3dll.c M Parser/pegen.c M Python/bytecodes.c M Python/ceval.c M Python/errors.c M Python/generated_cases.c.h M Python/import.c M Python/initconfig.c M Python/pystate.c M Python/pythonrun.c M Python/sysmodule.c M Python/traceback.c diff --git a/Doc/c-api/exceptions.rst b/Doc/c-api/exceptions.rst index 087e0a61d12d..de9b15edd685 100644 --- a/Doc/c-api/exceptions.rst +++ b/Doc/c-api/exceptions.rst @@ -400,8 +400,61 @@ Querying the error indicator recursively in subtuples) are searched for a match. +.. c:function:: PyObject *PyErr_GetRaisedException(void) + + Returns the exception currently being raised, clearing the exception at + the same time. 
Do not confuse this with the exception currently being + handled which can be accessed with :c:func:`PyErr_GetHandledException`. + + .. note:: + + This function is normally only used by code that needs to catch exceptions or + by code that needs to save and restore the error indicator temporarily, e.g.:: + + { + PyObject *exc = PyErr_GetRaisedException(); + + /* ... code that might produce other errors ... */ + + PyErr_SetRaisedException(exc); + } + + .. versionadded:: 3.12 + + +.. c:function:: void PyErr_SetRaisedException(PyObject *exc) + + Sets the exception currently being raised ``exc``. + If the exception is already set, it is cleared first. + + ``exc`` must be a valid exception. + (Violating this rules will cause subtle problems later.) + This call consumes a reference to the ``exc`` object: you must own a + reference to that object before the call and after the call you no longer own + that reference. + (If you don't understand this, don't use this function. I warned you.) + + .. note:: + + This function is normally only used by code that needs to save and restore the + error indicator temporarily. Use :c:func:`PyErr_GetRaisedException` to save + the current exception, e.g.:: + + { + PyObject *exc = PyErr_GetRaisedException(); + + /* ... code that might produce other errors ... */ + + PyErr_SetRaisedException(exc); + } + + .. versionadded:: 3.12 + + .. c:function:: void PyErr_Fetch(PyObject **ptype, PyObject **pvalue, PyObject **ptraceback) + As of 3.12, this function is deprecated. Use :c:func:`PyErr_GetRaisedException` instead. + Retrieve the error indicator into three variables whose addresses are passed. If the error indicator is not set, set all three variables to ``NULL``. If it is set, it will be cleared and you own a reference to each object retrieved. The @@ -421,10 +474,14 @@ Querying the error indicator PyErr_Restore(type, value, traceback); } + .. deprecated:: 3.12 + .. c:function:: void PyErr_Restore(PyObject *type, PyObject *value, PyObject *traceback) - Set the error indicator from the three objects. If the error indicator is + As of 3.12, this function is deprecated. Use :c:func:`PyErr_SetRaisedException` instead. + + Set the error indicator from the three objects. If the error indicator is already set, it is cleared first. If the objects are ``NULL``, the error indicator is cleared. Do not pass a ``NULL`` type and non-``NULL`` value or traceback. The exception type should be a class. Do not pass an invalid @@ -440,9 +497,15 @@ Querying the error indicator error indicator temporarily. Use :c:func:`PyErr_Fetch` to save the current error indicator. + .. deprecated:: 3.12 + .. c:function:: void PyErr_NormalizeException(PyObject **exc, PyObject **val, PyObject **tb) + As of 3.12, this function is deprecated. + Use :c:func:`PyErr_GetRaisedException` instead of :c:func:`PyErr_Fetch` to avoid + any possible de-normalization. + Under certain circumstances, the values returned by :c:func:`PyErr_Fetch` below can be "unnormalized", meaning that ``*exc`` is a class object but ``*val`` is not an instance of the same class. This function can be used to instantiate @@ -459,6 +522,8 @@ Querying the error indicator PyException_SetTraceback(val, tb); } + .. deprecated:: 3.12 + .. c:function:: PyObject* PyErr_GetHandledException(void) @@ -704,6 +769,18 @@ Exception Objects :attr:`__suppress_context__` is implicitly set to ``True`` by this function. +.. 
c:function:: PyObject* PyException_GetArgs(PyObject *ex) + + Return args of the given exception as a new reference, + as accessible from Python through :attr:`args`. + + +.. c:function:: void PyException_SetArgs(PyObject *ex, PyObject *args) + + Set the args of the given exception, + as accessible from Python through :attr:`args`. + + .. _unicodeexceptions: Unicode Exception Objects diff --git a/Doc/data/stable_abi.dat b/Doc/data/stable_abi.dat index 53895bbced84..68ab0b5061f4 100644 --- a/Doc/data/stable_abi.dat +++ b/Doc/data/stable_abi.dat @@ -139,6 +139,7 @@ function,PyErr_Format,3.2,, function,PyErr_FormatV,3.5,, function,PyErr_GetExcInfo,3.7,, function,PyErr_GetHandledException,3.11,, +function,PyErr_GetRaisedException,3.12,, function,PyErr_GivenExceptionMatches,3.2,, function,PyErr_NewException,3.2,, function,PyErr_NewExceptionWithDoc,3.2,, @@ -168,6 +169,7 @@ function,PyErr_SetInterrupt,3.2,, function,PyErr_SetInterruptEx,3.10,, function,PyErr_SetNone,3.2,, function,PyErr_SetObject,3.2,, +function,PyErr_SetRaisedException,3.12,, function,PyErr_SetString,3.2,, function,PyErr_SyntaxLocation,3.2,, function,PyErr_SyntaxLocationEx,3.7,, @@ -266,9 +268,11 @@ var,PyExc_Warning,3.2,, var,PyExc_WindowsError,3.7,on Windows, var,PyExc_ZeroDivisionError,3.2,, function,PyExceptionClass_Name,3.8,, +function,PyException_GetArgs,3.12,, function,PyException_GetCause,3.2,, function,PyException_GetContext,3.2,, function,PyException_GetTraceback,3.2,, +function,PyException_SetArgs,3.12,, function,PyException_SetCause,3.2,, function,PyException_SetContext,3.2,, function,PyException_SetTraceback,3.2,, diff --git a/Include/cpython/pyerrors.h b/Include/cpython/pyerrors.h index 141341667795..0d9cc9922f73 100644 --- a/Include/cpython/pyerrors.h +++ b/Include/cpython/pyerrors.h @@ -99,6 +99,7 @@ PyAPI_FUNC(void) _PyErr_GetExcInfo(PyThreadState *, PyObject **, PyObject **, Py /* Context manipulation (PEP 3134) */ PyAPI_FUNC(void) _PyErr_ChainExceptions(PyObject *, PyObject *, PyObject *); +PyAPI_FUNC(void) _PyErr_ChainExceptions1(PyObject *); /* Like PyErr_Format(), but saves current exception as __context__ and __cause__. diff --git a/Include/cpython/pystate.h b/Include/cpython/pystate.h index f5db52f76e8f..be1fcb61fa22 100644 --- a/Include/cpython/pystate.h +++ b/Include/cpython/pystate.h @@ -166,9 +166,7 @@ struct _ts { PyObject *c_traceobj; /* The exception currently being raised */ - PyObject *curexc_type; - PyObject *curexc_value; - PyObject *curexc_traceback; + PyObject *current_exception; /* Pointer to the top of the exception stack for the exceptions * we may be currently handling. (See _PyErr_StackItem above.) 
diff --git a/Include/internal/pycore_pyerrors.h b/Include/internal/pycore_pyerrors.h index 66f37942ef91..1bb4a9aa1038 100644 --- a/Include/internal/pycore_pyerrors.h +++ b/Include/internal/pycore_pyerrors.h @@ -20,7 +20,10 @@ extern void _PyErr_FiniTypes(PyInterpreterState *); static inline PyObject* _PyErr_Occurred(PyThreadState *tstate) { assert(tstate != NULL); - return tstate->curexc_type; + if (tstate->current_exception == NULL) { + return NULL; + } + return (PyObject *)Py_TYPE(tstate->current_exception); } static inline void _PyErr_ClearExcState(_PyErr_StackItem *exc_state) @@ -37,10 +40,16 @@ PyAPI_FUNC(void) _PyErr_Fetch( PyObject **value, PyObject **traceback); +extern PyObject * +_PyErr_GetRaisedException(PyThreadState *tstate); + PyAPI_FUNC(int) _PyErr_ExceptionMatches( PyThreadState *tstate, PyObject *exc); +void +_PyErr_SetRaisedException(PyThreadState *tstate, PyObject *exc); + PyAPI_FUNC(void) _PyErr_Restore( PyThreadState *tstate, PyObject *type, diff --git a/Include/pyerrors.h b/Include/pyerrors.h index d5ac6af5b32c..d089fa717793 100644 --- a/Include/pyerrors.h +++ b/Include/pyerrors.h @@ -18,6 +18,8 @@ PyAPI_FUNC(PyObject *) PyErr_Occurred(void); PyAPI_FUNC(void) PyErr_Clear(void); PyAPI_FUNC(void) PyErr_Fetch(PyObject **, PyObject **, PyObject **); PyAPI_FUNC(void) PyErr_Restore(PyObject *, PyObject *, PyObject *); +PyAPI_FUNC(PyObject *) PyErr_GetRaisedException(void); +PyAPI_FUNC(void) PyErr_SetRaisedException(PyObject *); #if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 >= 0x030b0000 PyAPI_FUNC(PyObject*) PyErr_GetHandledException(void); PyAPI_FUNC(void) PyErr_SetHandledException(PyObject *); @@ -51,6 +53,10 @@ PyAPI_FUNC(void) PyException_SetCause(PyObject *, PyObject *); PyAPI_FUNC(PyObject *) PyException_GetContext(PyObject *); PyAPI_FUNC(void) PyException_SetContext(PyObject *, PyObject *); + +PyAPI_FUNC(PyObject *) PyException_GetArgs(PyObject *); +PyAPI_FUNC(void) PyException_SetArgs(PyObject *, PyObject *); + /* */ #define PyExceptionClass_Check(x) \ diff --git a/Lib/test/test_capi/test_misc.py b/Lib/test/test_capi/test_misc.py index 03e22d7a2d38..7612cddb1f65 100644 --- a/Lib/test/test_capi/test_misc.py +++ b/Lib/test/test_capi/test_misc.py @@ -1553,5 +1553,44 @@ def func2(x=None): self.do_test(func2) +class Test_ErrSetAndRestore(unittest.TestCase): + + def test_err_set_raised(self): + with self.assertRaises(ValueError): + _testcapi.err_set_raised(ValueError()) + v = ValueError() + try: + _testcapi.err_set_raised(v) + except ValueError as ex: + self.assertIs(v, ex) + + def test_err_restore(self): + with self.assertRaises(ValueError): + _testcapi.err_restore(ValueError) + with self.assertRaises(ValueError): + _testcapi.err_restore(ValueError, 1) + with self.assertRaises(ValueError): + _testcapi.err_restore(ValueError, 1, None) + with self.assertRaises(ValueError): + _testcapi.err_restore(ValueError, ValueError()) + try: + _testcapi.err_restore(KeyError, "hi") + except KeyError as k: + self.assertEqual("hi", k.args[0]) + try: + 1/0 + except Exception as e: + tb = e.__traceback__ + with self.assertRaises(ValueError): + _testcapi.err_restore(ValueError, 1, tb) + with self.assertRaises(TypeError): + _testcapi.err_restore(ValueError, 1, 0) + try: + _testcapi.err_restore(ValueError, 1, tb) + except ValueError as v: + self.assertEqual(1, v.args[0]) + self.assertIs(tb, v.__traceback__.tb_next) + + if __name__ == "__main__": unittest.main() diff --git a/Lib/test/test_exceptions.py b/Lib/test/test_exceptions.py index f629321458d8..4ae71e431c56 100644 --- 
a/Lib/test/test_exceptions.py +++ b/Lib/test/test_exceptions.py @@ -347,6 +347,7 @@ def test_capi2(): _testcapi.raise_exception(BadException, 0) except RuntimeError as err: exc, err, tb = sys.exc_info() + tb = tb.tb_next co = tb.tb_frame.f_code self.assertEqual(co.co_name, "__init__") self.assertTrue(co.co_filename.endswith('test_exceptions.py')) @@ -1415,8 +1416,8 @@ def gen(): @cpython_only def test_recursion_normalizing_infinite_exception(self): # Issue #30697. Test that a RecursionError is raised when - # PyErr_NormalizeException() maximum recursion depth has been - # exceeded. + # maximum recursion depth has been exceeded when creating + # an exception code = """if 1: import _testcapi try: @@ -1426,8 +1427,7 @@ def test_recursion_normalizing_infinite_exception(self): """ rc, out, err = script_helper.assert_python_failure("-c", code) self.assertEqual(rc, 1) - self.assertIn(b'RecursionError: maximum recursion depth exceeded ' - b'while normalizing an exception', err) + self.assertIn(b'RecursionError: maximum recursion depth exceeded', err) self.assertIn(b'Done.', out) diff --git a/Lib/test/test_stable_abi_ctypes.py b/Lib/test/test_stable_abi_ctypes.py index 67c653428a6d..e77c1c840988 100644 --- a/Lib/test/test_stable_abi_ctypes.py +++ b/Lib/test/test_stable_abi_ctypes.py @@ -172,6 +172,7 @@ def test_windows_feature_macros(self): "PyErr_FormatV", "PyErr_GetExcInfo", "PyErr_GetHandledException", + "PyErr_GetRaisedException", "PyErr_GivenExceptionMatches", "PyErr_NewException", "PyErr_NewExceptionWithDoc", @@ -195,6 +196,7 @@ def test_windows_feature_macros(self): "PyErr_SetInterruptEx", "PyErr_SetNone", "PyErr_SetObject", + "PyErr_SetRaisedException", "PyErr_SetString", "PyErr_SyntaxLocation", "PyErr_SyntaxLocationEx", @@ -292,9 +294,11 @@ def test_windows_feature_macros(self): "PyExc_Warning", "PyExc_ZeroDivisionError", "PyExceptionClass_Name", + "PyException_GetArgs", "PyException_GetCause", "PyException_GetContext", "PyException_GetTraceback", + "PyException_SetArgs", "PyException_SetCause", "PyException_SetContext", "PyException_SetTraceback", diff --git a/Misc/NEWS.d/next/C API/2023-02-06-16-14-30.gh-issue-101578.PW5fA9.rst b/Misc/NEWS.d/next/C API/2023-02-06-16-14-30.gh-issue-101578.PW5fA9.rst new file mode 100644 index 000000000000..fc694f6e051b --- /dev/null +++ b/Misc/NEWS.d/next/C API/2023-02-06-16-14-30.gh-issue-101578.PW5fA9.rst @@ -0,0 +1,13 @@ +Add new C-API functions for saving and restoring the current exception: +``PyErr_GetRaisedException`` and ``PyErr_SetRaisedException``. +These functions take and return a single exception rather than +the triple of ``PyErr_Fetch`` and ``PyErr_Restore``. +This is less error prone and a bit more efficient. + +The three arguments forms of saving and restoring the +current exception: ``PyErr_Fetch`` and ``PyErr_Restore`` +are deprecated. + +Also add ``PyException_GetArgs`` and ``PyException_SetArgs`` +as convenience functions to help dealing with +exceptions in the C API. 
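(Illustrative aside, not part of the patch: a minimal sketch of how the new functions compose. The helper name is made up and error handling is trimmed; the reference comments follow the semantics documented in the Doc/c-api/exceptions.rst changes above.)

    /* Save the raised exception, log its args, then re-raise it. */
    static void
    log_and_reraise(void)
    {
        PyObject *exc = PyErr_GetRaisedException();  /* new ref, clears the error */
        if (exc == NULL) {
            return;                                  /* nothing was raised */
        }
        PyObject *args = PyException_GetArgs(exc);   /* new ref to exc.args */
        if (args != NULL) {
            PySys_FormatStderr("re-raising %S with args %R\n", exc, args);
            Py_DECREF(args);
        }
        else {
            PyErr_Clear();
        }
        PyErr_SetRaisedException(exc);               /* steals the reference */
    }
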
diff --git a/Misc/stable_abi.toml b/Misc/stable_abi.toml index c716f403d638..21ff96161334 100644 --- a/Misc/stable_abi.toml +++ b/Misc/stable_abi.toml @@ -2333,6 +2333,15 @@ added = '3.12' [function.PyVectorcall_Call] added = '3.12' +[function.PyErr_GetRaisedException] + added = '3.12' +[function.PyErr_SetRaisedException] + added = '3.12' +[function.PyException_GetArgs] + added = '3.12' +[function.PyException_SetArgs] + added = '3.12' + [typedef.vectorcallfunc] added = '3.12' [function.PyObject_Vectorcall] diff --git a/Modules/_testcapi/heaptype.c b/Modules/_testcapi/heaptype.c index bf80fd64d80b..39639f7ed048 100644 --- a/Modules/_testcapi/heaptype.c +++ b/Modules/_testcapi/heaptype.c @@ -116,10 +116,10 @@ test_from_spec_invalid_metatype_inheritance(PyObject *self, PyObject *Py_UNUSED( PyObject *bases = NULL; PyObject *new = NULL; PyObject *meta_error_string = NULL; - PyObject *exc_type = NULL; - PyObject *exc_value = NULL; - PyObject *exc_traceback = NULL; + PyObject *exc = NULL; PyObject *result = NULL; + PyObject *message = NULL; + PyObject *args = NULL; metaclass_a = PyType_FromSpecWithBases(&MinimalMetaclass_spec, (PyObject*)&PyType_Type); if (metaclass_a == NULL) { @@ -156,13 +156,19 @@ test_from_spec_invalid_metatype_inheritance(PyObject *self, PyObject *Py_UNUSED( // Assert that the correct exception was raised if (PyErr_ExceptionMatches(PyExc_TypeError)) { - PyErr_Fetch(&exc_type, &exc_value, &exc_traceback); - + exc = PyErr_GetRaisedException(); + args = PyException_GetArgs(exc); + if (!PyTuple_Check(args) || PyTuple_Size(args) != 1) { + PyErr_SetString(PyExc_AssertionError, + "TypeError args are not a one-tuple"); + goto finally; + } + message = Py_NewRef(PyTuple_GET_ITEM(args, 0)); meta_error_string = PyUnicode_FromString("metaclass conflict:"); if (meta_error_string == NULL) { goto finally; } - int res = PyUnicode_Contains(exc_value, meta_error_string); + int res = PyUnicode_Contains(message, meta_error_string); if (res < 0) { goto finally; } @@ -179,11 +185,11 @@ test_from_spec_invalid_metatype_inheritance(PyObject *self, PyObject *Py_UNUSED( Py_XDECREF(bases); Py_XDECREF(new); Py_XDECREF(meta_error_string); - Py_XDECREF(exc_type); - Py_XDECREF(exc_value); - Py_XDECREF(exc_traceback); + Py_XDECREF(exc); + Py_XDECREF(message); Py_XDECREF(class_a); Py_XDECREF(class_b); + Py_XDECREF(args); return result; } diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index 8b2ce1a2cfd4..3c411fa0d763 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -3470,6 +3470,41 @@ function_set_kw_defaults(PyObject *self, PyObject *args) Py_RETURN_NONE; } +static PyObject * +err_set_raised(PyObject *self, PyObject *exc) +{ + Py_INCREF(exc); + PyErr_SetRaisedException(exc); + assert(PyErr_Occurred()); + return NULL; +} + +static PyObject * +err_restore(PyObject *self, PyObject *args) { + PyObject *type = NULL, *value = NULL, *traceback = NULL; + switch(PyTuple_Size(args)) { + case 3: + traceback = PyTuple_GetItem(args, 2); + Py_INCREF(traceback); + /* fall through */ + case 2: + value = PyTuple_GetItem(args, 1); + Py_INCREF(value); + /* fall through */ + case 1: + type = PyTuple_GetItem(args, 0); + Py_INCREF(type); + break; + default: + PyErr_SetString(PyExc_TypeError, + "wrong number of arguments"); + return NULL; + } + PyErr_Restore(type, value, traceback); + assert(PyErr_Occurred()); + return NULL; +} + static PyObject *test_buildvalue_issue38913(PyObject *, PyObject *); static PyMethodDef TestMethods[] = { @@ -3622,6 +3657,8 @@ static PyMethodDef TestMethods[] 
= { {"function_set_defaults", function_set_defaults, METH_VARARGS, NULL}, {"function_get_kw_defaults", function_get_kw_defaults, METH_O, NULL}, {"function_set_kw_defaults", function_set_kw_defaults, METH_VARARGS, NULL}, + {"err_set_raised", err_set_raised, METH_O, NULL}, + {"err_restore", err_restore, METH_VARARGS, NULL}, {NULL, NULL} /* sentinel */ }; diff --git a/Modules/gcmodule.c b/Modules/gcmodule.c index 6630faa6f447..5879c5e11fe1 100644 --- a/Modules/gcmodule.c +++ b/Modules/gcmodule.c @@ -2082,11 +2082,10 @@ PyGC_Collect(void) n = 0; } else { - PyObject *exc, *value, *tb; gcstate->collecting = 1; - _PyErr_Fetch(tstate, &exc, &value, &tb); + PyObject *exc = _PyErr_GetRaisedException(tstate); n = gc_collect_with_callback(tstate, NUM_GENERATIONS - 1); - _PyErr_Restore(tstate, exc, value, tb); + _PyErr_SetRaisedException(tstate, exc); gcstate->collecting = 0; } diff --git a/Objects/dictobject.c b/Objects/dictobject.c index b9067213820b..fc658ca2f4b7 100644 --- a/Objects/dictobject.c +++ b/Objects/dictobject.c @@ -1663,15 +1663,15 @@ PyDict_GetItem(PyObject *op, PyObject *key) #endif /* Preserve the existing exception */ - PyObject *exc_type, *exc_value, *exc_tb; PyObject *value; Py_ssize_t ix; (void)ix; - _PyErr_Fetch(tstate, &exc_type, &exc_value, &exc_tb); + + PyObject *exc = _PyErr_GetRaisedException(tstate); ix = _Py_dict_lookup(mp, key, hash, &value); /* Ignore any exception raised by the lookup */ - _PyErr_Restore(tstate, exc_type, exc_value, exc_tb); + _PyErr_SetRaisedException(tstate, exc); assert(ix >= 0 || value == NULL); diff --git a/Objects/exceptions.c b/Objects/exceptions.c index db6f7d52804d..976f84dbf63c 100644 --- a/Objects/exceptions.c +++ b/Objects/exceptions.c @@ -8,6 +8,7 @@ #include #include #include "pycore_ceval.h" // _Py_EnterRecursiveCall +#include "pycore_pyerrors.h" // struct _PyErr_SetRaisedException #include "pycore_exceptions.h" // struct _Py_exc_state #include "pycore_initconfig.h" #include "pycore_object.h" @@ -288,13 +289,17 @@ BaseException_set_tb(PyBaseExceptionObject *self, PyObject *tb, void *Py_UNUSED( PyErr_SetString(PyExc_TypeError, "__traceback__ may not be deleted"); return -1; } - else if (!(tb == Py_None || PyTraceBack_Check(tb))) { + if (PyTraceBack_Check(tb)) { + Py_XSETREF(self->traceback, Py_NewRef(tb)); + } + else if (tb == Py_None) { + Py_CLEAR(self->traceback); + } + else { PyErr_SetString(PyExc_TypeError, "__traceback__ must be a traceback or None"); return -1; } - - Py_XSETREF(self->traceback, Py_NewRef(tb)); return 0; } @@ -413,6 +418,20 @@ PyException_SetContext(PyObject *self, PyObject *context) Py_XSETREF(_PyBaseExceptionObject_cast(self)->context, context); } +PyObject * +PyException_GetArgs(PyObject *self) +{ + PyObject *args = _PyBaseExceptionObject_cast(self)->args; + return Py_NewRef(args); +} + +void +PyException_SetArgs(PyObject *self, PyObject *args) +{ + Py_INCREF(args); + Py_XSETREF(_PyBaseExceptionObject_cast(self)->args, args); +} + const char * PyExceptionClass_Name(PyObject *ob) { @@ -3188,20 +3207,19 @@ SimpleExtendsException(PyExc_Exception, ReferenceError, #define MEMERRORS_SAVE 16 +static PyBaseExceptionObject last_resort_memory_error; + static PyObject * -MemoryError_new(PyTypeObject *type, PyObject *args, PyObject *kwds) +get_memory_error(int allow_allocation, PyObject *args, PyObject *kwds) { PyBaseExceptionObject *self; - - /* If this is a subclass of MemoryError, don't use the freelist - * and just return a fresh object */ - if (type != (PyTypeObject *) PyExc_MemoryError) { - return BaseException_new(type, 
args, kwds); - } - struct _Py_exc_state *state = get_exc_state(); if (state->memerrors_freelist == NULL) { - return BaseException_new(type, args, kwds); + if (!allow_allocation) { + return Py_NewRef(&last_resort_memory_error); + } + PyObject *result = BaseException_new((PyTypeObject *)PyExc_MemoryError, args, kwds); + return result; } /* Fetch object from freelist and revive it */ @@ -3221,6 +3239,35 @@ MemoryError_new(PyTypeObject *type, PyObject *args, PyObject *kwds) return (PyObject *)self; } +static PyBaseExceptionObject last_resort_memory_error; + +static PyObject * +MemoryError_new(PyTypeObject *type, PyObject *args, PyObject *kwds) +{ + /* If this is a subclass of MemoryError, don't use the freelist + * and just return a fresh object */ + if (type != (PyTypeObject *) PyExc_MemoryError) { + return BaseException_new(type, args, kwds); + } + return get_memory_error(1, args, kwds); +} + +PyObject * +_PyErr_NoMemory(PyThreadState *tstate) +{ + if (Py_IS_TYPE(PyExc_MemoryError, NULL)) { + /* PyErr_NoMemory() has been called before PyExc_MemoryError has been + initialized by _PyExc_Init() */ + Py_FatalError("Out of memory and PyExc_MemoryError is not " + "initialized yet"); + } + PyObject *err = get_memory_error(0, NULL, NULL); + if (err != NULL) { + _PyErr_SetRaisedException(tstate, err); + } + return NULL; +} + static void MemoryError_dealloc(PyBaseExceptionObject *self) { @@ -3252,6 +3299,7 @@ preallocate_memerrors(void) /* We create enough MemoryErrors and then decref them, which will fill up the freelist. */ int i; + PyObject *errors[MEMERRORS_SAVE]; for (i = 0; i < MEMERRORS_SAVE; i++) { errors[i] = MemoryError_new((PyTypeObject *) PyExc_MemoryError, @@ -3291,6 +3339,9 @@ static PyTypeObject _PyExc_MemoryError = { }; PyObject *PyExc_MemoryError = (PyObject *) &_PyExc_MemoryError; +static PyBaseExceptionObject last_resort_memory_error = { + _PyObject_IMMORTAL_INIT(&_PyExc_MemoryError) +}; /* * BufferError extends Exception diff --git a/Objects/object.c b/Objects/object.c index e940222c657e..7817c04ef5f5 100644 --- a/Objects/object.c +++ b/Objects/object.c @@ -2416,10 +2416,10 @@ _Py_Dealloc(PyObject *op) destructor dealloc = type->tp_dealloc; #ifdef Py_DEBUG PyThreadState *tstate = _PyThreadState_GET(); - PyObject *old_exc_type = tstate != NULL ? tstate->curexc_type : NULL; + PyObject *old_exc = tstate != NULL ? tstate->current_exception : NULL; // Keep the old exception type alive to prevent undefined behavior // on (tstate->curexc_type != old_exc_type) below - Py_XINCREF(old_exc_type); + Py_XINCREF(old_exc); // Make sure that type->tp_name remains valid Py_INCREF(type); #endif @@ -2432,12 +2432,12 @@ _Py_Dealloc(PyObject *op) #ifdef Py_DEBUG // gh-89373: The tp_dealloc function must leave the current exception // unchanged. 
- if (tstate != NULL && tstate->curexc_type != old_exc_type) { + if (tstate != NULL && tstate->current_exception != old_exc) { const char *err; - if (old_exc_type == NULL) { + if (old_exc == NULL) { err = "Deallocator of type '%s' raised an exception"; } - else if (tstate->curexc_type == NULL) { + else if (tstate->current_exception == NULL) { err = "Deallocator of type '%s' cleared the current exception"; } else { @@ -2448,7 +2448,7 @@ _Py_Dealloc(PyObject *op) } _Py_FatalErrorFormat(__func__, err, type->tp_name); } - Py_XDECREF(old_exc_type); + Py_XDECREF(old_exc); Py_DECREF(type); #endif } diff --git a/PC/python3dll.c b/PC/python3dll.c index 931f316bb998..e30081936575 100755 --- a/PC/python3dll.c +++ b/PC/python3dll.c @@ -198,6 +198,7 @@ EXPORT_FUNC(PyErr_Format) EXPORT_FUNC(PyErr_FormatV) EXPORT_FUNC(PyErr_GetExcInfo) EXPORT_FUNC(PyErr_GetHandledException) +EXPORT_FUNC(PyErr_GetRaisedException) EXPORT_FUNC(PyErr_GivenExceptionMatches) EXPORT_FUNC(PyErr_NewException) EXPORT_FUNC(PyErr_NewExceptionWithDoc) @@ -227,6 +228,7 @@ EXPORT_FUNC(PyErr_SetInterrupt) EXPORT_FUNC(PyErr_SetInterruptEx) EXPORT_FUNC(PyErr_SetNone) EXPORT_FUNC(PyErr_SetObject) +EXPORT_FUNC(PyErr_SetRaisedException) EXPORT_FUNC(PyErr_SetString) EXPORT_FUNC(PyErr_SyntaxLocation) EXPORT_FUNC(PyErr_SyntaxLocationEx) @@ -255,9 +257,11 @@ EXPORT_FUNC(PyEval_ReleaseThread) EXPORT_FUNC(PyEval_RestoreThread) EXPORT_FUNC(PyEval_SaveThread) EXPORT_FUNC(PyEval_ThreadsInitialized) +EXPORT_FUNC(PyException_GetArgs) EXPORT_FUNC(PyException_GetCause) EXPORT_FUNC(PyException_GetContext) EXPORT_FUNC(PyException_GetTraceback) +EXPORT_FUNC(PyException_SetArgs) EXPORT_FUNC(PyException_SetCause) EXPORT_FUNC(PyException_SetContext) EXPORT_FUNC(PyException_SetTraceback) diff --git a/Parser/pegen.c b/Parser/pegen.c index d84e06861ede..94dd9de8a431 100644 --- a/Parser/pegen.c +++ b/Parser/pegen.c @@ -643,13 +643,10 @@ _PyPegen_number_token(Parser *p) PyThreadState *tstate = _PyThreadState_GET(); // The only way a ValueError should happen in _this_ code is via // PyLong_FromString hitting a length limit. - if (tstate->curexc_type == PyExc_ValueError && - tstate->curexc_value != NULL) { - PyObject *type, *value, *tb; - // This acts as PyErr_Clear() as we're replacing curexc. - PyErr_Fetch(&type, &value, &tb); - Py_XDECREF(tb); - Py_DECREF(type); + if (tstate->current_exception != NULL && + Py_TYPE(tstate->current_exception) == (PyTypeObject *)PyExc_ValueError + ) { + PyObject *exc = PyErr_GetRaisedException(); /* Intentionally omitting columns to avoid a wall of 1000s of '^'s * on the error message. Nobody is going to overlook their huge * numeric literal once given the line. 
*/ @@ -659,8 +656,8 @@ _PyPegen_number_token(Parser *p) t->end_lineno, -1 /* end_col_offset */, "%S - Consider hexadecimal for huge integer literals " "to avoid decimal conversion limits.", - value); - Py_DECREF(value); + exc); + Py_DECREF(exc); } return NULL; } diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 9633f34212a6..1169d8d172dd 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -804,9 +804,7 @@ dummy_func( DECREF_INPUTS(); } else { - PyObject *exc_type = Py_NewRef(Py_TYPE(exc_value)); - PyObject *exc_traceback = PyException_GetTraceback(exc_value); - _PyErr_Restore(tstate, exc_type, Py_NewRef(exc_value), exc_traceback); + _PyErr_SetRaisedException(tstate, Py_NewRef(exc_value)); goto exception_unwind; } } diff --git a/Python/ceval.c b/Python/ceval.c index ecb5bf965555..a91f5baca885 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -2902,13 +2902,13 @@ format_kwargs_error(PyThreadState *tstate, PyObject *func, PyObject *kwargs) } } else if (_PyErr_ExceptionMatches(tstate, PyExc_KeyError)) { - PyObject *exc, *val, *tb; - _PyErr_Fetch(tstate, &exc, &val, &tb); - if (val && PyTuple_Check(val) && PyTuple_GET_SIZE(val) == 1) { + PyObject *exc = _PyErr_GetRaisedException(tstate); + PyObject *args = ((PyBaseExceptionObject *)exc)->args; + if (exc && PyTuple_Check(args) && PyTuple_GET_SIZE(args) == 1) { _PyErr_Clear(tstate); PyObject *funcstr = _PyObject_FunctionStr(func); if (funcstr != NULL) { - PyObject *key = PyTuple_GET_ITEM(val, 0); + PyObject *key = PyTuple_GET_ITEM(args, 0); _PyErr_Format( tstate, PyExc_TypeError, "%U got multiple values for keyword argument '%S'", @@ -2916,11 +2916,9 @@ format_kwargs_error(PyThreadState *tstate, PyObject *func, PyObject *kwargs) Py_DECREF(funcstr); } Py_XDECREF(exc); - Py_XDECREF(val); - Py_XDECREF(tb); } else { - _PyErr_Restore(tstate, exc, val, tb); + _PyErr_SetRaisedException(tstate, exc); } } } diff --git a/Python/errors.c b/Python/errors.c index 05ef62246ec0..f573bed3d63e 100644 --- a/Python/errors.c +++ b/Python/errors.c @@ -27,32 +27,84 @@ static PyObject * _PyErr_FormatV(PyThreadState *tstate, PyObject *exception, const char *format, va_list vargs); - void -_PyErr_Restore(PyThreadState *tstate, PyObject *type, PyObject *value, - PyObject *traceback) +_PyErr_SetRaisedException(PyThreadState *tstate, PyObject *exc) { - PyObject *oldtype, *oldvalue, *oldtraceback; + PyObject *old_exc = tstate->current_exception; + tstate->current_exception = exc; + Py_XDECREF(old_exc); +} - if (traceback != NULL && !PyTraceBack_Check(traceback)) { - /* XXX Should never happen -- fatal error instead? */ - /* Well, it could be None. 
*/ - Py_SETREF(traceback, NULL); +static PyObject* +_PyErr_CreateException(PyObject *exception_type, PyObject *value) +{ + PyObject *exc; + + if (value == NULL || value == Py_None) { + exc = _PyObject_CallNoArgs(exception_type); + } + else if (PyTuple_Check(value)) { + exc = PyObject_Call(exception_type, value, NULL); + } + else { + exc = PyObject_CallOneArg(exception_type, value); } - /* Save these in locals to safeguard against recursive - invocation through Py_XDECREF */ - oldtype = tstate->curexc_type; - oldvalue = tstate->curexc_value; - oldtraceback = tstate->curexc_traceback; + if (exc != NULL && !PyExceptionInstance_Check(exc)) { + PyErr_Format(PyExc_TypeError, + "calling %R should have returned an instance of " + "BaseException, not %s", + exception_type, Py_TYPE(exc)->tp_name); + Py_CLEAR(exc); + } - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = traceback; + return exc; +} - Py_XDECREF(oldtype); - Py_XDECREF(oldvalue); - Py_XDECREF(oldtraceback); +void +_PyErr_Restore(PyThreadState *tstate, PyObject *type, PyObject *value, + PyObject *traceback) +{ + if (type == NULL) { + assert(value == NULL); + assert(traceback == NULL); + _PyErr_SetRaisedException(tstate, NULL); + return; + } + assert(PyExceptionClass_Check(type)); + if (value != NULL && type == (PyObject *)Py_TYPE(value)) { + /* Already normalized */ + assert(((PyBaseExceptionObject *)value)->traceback != Py_None); + } + else { + PyObject *exc = _PyErr_CreateException(type, value); + Py_XDECREF(value); + if (exc == NULL) { + Py_DECREF(type); + Py_XDECREF(traceback); + return; + } + value = exc; + } + assert(PyExceptionInstance_Check(value)); + if (traceback != NULL && !PyTraceBack_Check(traceback)) { + if (traceback == Py_None) { + Py_DECREF(Py_None); + traceback = NULL; + } + else { + PyErr_SetString(PyExc_TypeError, "traceback must be a Traceback or None"); + Py_XDECREF(value); + Py_DECREF(type); + Py_XDECREF(traceback); + return; + } + } + PyObject *old_traceback = ((PyBaseExceptionObject *)value)->traceback; + ((PyBaseExceptionObject *)value)->traceback = traceback; + Py_XDECREF(old_traceback); + _PyErr_SetRaisedException(tstate, value); + Py_DECREF(type); } void @@ -62,6 +114,12 @@ PyErr_Restore(PyObject *type, PyObject *value, PyObject *traceback) _PyErr_Restore(tstate, type, value, traceback); } +void +PyErr_SetRaisedException(PyObject *exc) +{ + PyThreadState *tstate = _PyThreadState_GET(); + _PyErr_SetRaisedException(tstate, exc); +} _PyErr_StackItem * _PyErr_GetTopmostException(PyThreadState *tstate) @@ -77,32 +135,6 @@ _PyErr_GetTopmostException(PyThreadState *tstate) return exc_info; } -static PyObject* -_PyErr_CreateException(PyObject *exception_type, PyObject *value) -{ - PyObject *exc; - - if (value == NULL || value == Py_None) { - exc = _PyObject_CallNoArgs(exception_type); - } - else if (PyTuple_Check(value)) { - exc = PyObject_Call(exception_type, value, NULL); - } - else { - exc = PyObject_CallOneArg(exception_type, value); - } - - if (exc != NULL && !PyExceptionInstance_Check(exc)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %s", - exception_type, Py_TYPE(exc)->tp_name); - Py_CLEAR(exc); - } - - return exc; -} - void _PyErr_SetObject(PyThreadState *tstate, PyObject *exception, PyObject *value) { @@ -117,30 +149,29 @@ _PyErr_SetObject(PyThreadState *tstate, PyObject *exception, PyObject *value) exception); return; } - Py_XINCREF(value); - exc_value = _PyErr_GetTopmostException(tstate)->exc_value; - if 
(exc_value != NULL && exc_value != Py_None) { - /* Implicit exception chaining */ - Py_INCREF(exc_value); - if (value == NULL || !PyExceptionInstance_Check(value)) { - /* We must normalize the value right now */ - PyObject *fixed_value; - - /* Issue #23571: functions must not be called with an - exception set */ - _PyErr_Clear(tstate); + /* Normalize the exception */ + if (value == NULL || (PyObject *)Py_TYPE(value) != exception) { + /* We must normalize the value right now */ + PyObject *fixed_value; - fixed_value = _PyErr_CreateException(exception, value); - Py_XDECREF(value); - if (fixed_value == NULL) { - Py_DECREF(exc_value); - return; - } + /* Issue #23571: functions must not be called with an + exception set */ + _PyErr_Clear(tstate); - value = fixed_value; + fixed_value = _PyErr_CreateException(exception, value); + Py_XDECREF(value); + if (fixed_value == NULL) { + return; } + value = fixed_value; + } + + exc_value = _PyErr_GetTopmostException(tstate)->exc_value; + if (exc_value != NULL && exc_value != Py_None) { + /* Implicit exception chaining */ + Py_INCREF(exc_value); /* Avoid creating new reference cycles through the context chain, while taking care not to hang on pre-existing ones. @@ -414,17 +445,34 @@ PyErr_NormalizeException(PyObject **exc, PyObject **val, PyObject **tb) } +PyObject * +_PyErr_GetRaisedException(PyThreadState *tstate) { + PyObject *exc = tstate->current_exception; + tstate->current_exception = NULL; + return exc; +} + +PyObject * +PyErr_GetRaisedException(void) +{ + PyThreadState *tstate = _PyThreadState_GET(); + return _PyErr_GetRaisedException(tstate); +} + void _PyErr_Fetch(PyThreadState *tstate, PyObject **p_type, PyObject **p_value, PyObject **p_traceback) { - *p_type = tstate->curexc_type; - *p_value = tstate->curexc_value; - *p_traceback = tstate->curexc_traceback; - - tstate->curexc_type = NULL; - tstate->curexc_value = NULL; - tstate->curexc_traceback = NULL; + PyObject *exc = _PyErr_GetRaisedException(tstate); + *p_value = exc; + if (exc == NULL) { + *p_type = NULL; + *p_traceback = NULL; + } + else { + *p_type = Py_NewRef(Py_TYPE(exc)); + *p_traceback = Py_XNewRef(((PyBaseExceptionObject *)exc)->traceback); + } } @@ -597,6 +645,28 @@ _PyErr_ChainExceptions(PyObject *typ, PyObject *val, PyObject *tb) } } +/* Like PyErr_SetRaisedException(), but if an exception is already set, + set the context associated with it. + + The caller is responsible for ensuring that this call won't create + any cycles in the exception context chain. */ +void +_PyErr_ChainExceptions1(PyObject *exc) +{ + if (exc == NULL) { + return; + } + PyThreadState *tstate = _PyThreadState_GET(); + if (_PyErr_Occurred(tstate)) { + PyObject *exc2 = _PyErr_GetRaisedException(tstate); + PyException_SetContext(exc2, exc); + _PyErr_SetRaisedException(tstate, exc2); + } + else { + _PyErr_SetRaisedException(tstate, exc); + } +} + /* Set the currently set exception's context to the given exception. 
If the provided exc_info is NULL, then the current Python thread state's @@ -706,19 +776,6 @@ PyErr_BadArgument(void) return 0; } -PyObject * -_PyErr_NoMemory(PyThreadState *tstate) -{ - if (Py_IS_TYPE(PyExc_MemoryError, NULL)) { - /* PyErr_NoMemory() has been called before PyExc_MemoryError has been - initialized by _PyExc_Init() */ - Py_FatalError("Out of memory and PyExc_MemoryError is not " - "initialized yet"); - } - _PyErr_SetNone(tstate, PyExc_MemoryError); - return NULL; -} - PyObject * PyErr_NoMemory(void) { diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index f38286441be4..09eb6893ebf6 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -1036,9 +1036,7 @@ Py_DECREF(exc_value); } else { - PyObject *exc_type = Py_NewRef(Py_TYPE(exc_value)); - PyObject *exc_traceback = PyException_GetTraceback(exc_value); - _PyErr_Restore(tstate, exc_type, Py_NewRef(exc_value), exc_traceback); + _PyErr_SetRaisedException(tstate, Py_NewRef(exc_value)); goto exception_unwind; } STACK_SHRINK(2); diff --git a/Python/import.c b/Python/import.c index da6c15c5fd41..1318c09d9b32 100644 --- a/Python/import.c +++ b/Python/import.c @@ -1592,6 +1592,13 @@ remove_importlib_frames(PyThreadState *tstate) Py_DECREF(code); tb = next; } + assert(PyExceptionInstance_Check(value)); + assert((PyObject *)Py_TYPE(value) == exception); + if (base_tb == NULL) { + base_tb = Py_None; + Py_INCREF(Py_None); + } + PyException_SetTraceback(value, base_tb); done: _PyErr_Restore(tstate, exception, value, base_tb); } diff --git a/Python/initconfig.c b/Python/initconfig.c index d7b2dc4a2974..deec805a6b1c 100644 --- a/Python/initconfig.c +++ b/Python/initconfig.c @@ -3143,8 +3143,7 @@ init_dump_ascii_wstr(const wchar_t *str) void _Py_DumpPathConfig(PyThreadState *tstate) { - PyObject *exc_type, *exc_value, *exc_tb; - _PyErr_Fetch(tstate, &exc_type, &exc_value, &exc_tb); + PyObject *exc = _PyErr_GetRaisedException(tstate); PySys_WriteStderr("Python path configuration:\n"); @@ -3202,5 +3201,5 @@ _Py_DumpPathConfig(PyThreadState *tstate) PySys_WriteStderr(" ]\n"); } - _PyErr_Restore(tstate, exc_type, exc_value, exc_tb); + _PyErr_SetRaisedException(tstate, exc); } diff --git a/Python/pystate.c b/Python/pystate.c index ed8c2e212a55..1261092d1435 100644 --- a/Python/pystate.c +++ b/Python/pystate.c @@ -1375,9 +1375,7 @@ PyThreadState_Clear(PyThreadState *tstate) Py_CLEAR(tstate->dict); Py_CLEAR(tstate->async_exc); - Py_CLEAR(tstate->curexc_type); - Py_CLEAR(tstate->curexc_value); - Py_CLEAR(tstate->curexc_traceback); + Py_CLEAR(tstate->current_exception); Py_CLEAR(tstate->exc_state.exc_value); diff --git a/Python/pythonrun.c b/Python/pythonrun.c index 35292b6478a8..6a4d59376869 100644 --- a/Python/pythonrun.c +++ b/Python/pythonrun.c @@ -748,13 +748,10 @@ _Py_HandleSystemExit(int *exitcode_p) } done: - /* Restore and clear the exception info, in order to properly decref - * the exception, value, and traceback. If we just exit instead, - * these leak, which confuses PYTHONDUMPREFS output, and may prevent - * some finalizers from running. 
- */ - PyErr_Restore(exception, value, tb); - PyErr_Clear(); + /* Cleanup the exception */ + Py_CLEAR(exception); + Py_CLEAR(value); + Py_CLEAR(tb); *exitcode_p = exitcode; return 1; } diff --git a/Python/sysmodule.c b/Python/sysmodule.c index f9f766a94d14..6e81ef92b67f 100644 --- a/Python/sysmodule.c +++ b/Python/sysmodule.c @@ -66,12 +66,11 @@ _PySys_GetAttr(PyThreadState *tstate, PyObject *name) if (sd == NULL) { return NULL; } - PyObject *exc_type, *exc_value, *exc_tb; - _PyErr_Fetch(tstate, &exc_type, &exc_value, &exc_tb); + PyObject *exc = _PyErr_GetRaisedException(tstate); /* XXX Suppress a new exception if it was raised and restore * the old one. */ PyObject *value = _PyDict_GetItemWithError(sd, name); - _PyErr_Restore(tstate, exc_type, exc_value, exc_tb); + _PyErr_SetRaisedException(tstate, exc); return value; } @@ -3704,11 +3703,10 @@ static void sys_format(PyObject *key, FILE *fp, const char *format, va_list va) { PyObject *file, *message; - PyObject *error_type, *error_value, *error_traceback; const char *utf8; PyThreadState *tstate = _PyThreadState_GET(); - _PyErr_Fetch(tstate, &error_type, &error_value, &error_traceback); + PyObject *error = _PyErr_GetRaisedException(tstate); file = _PySys_GetAttr(tstate, key); message = PyUnicode_FromFormatV(format, va); if (message != NULL) { @@ -3720,7 +3718,7 @@ sys_format(PyObject *key, FILE *fp, const char *format, va_list va) } Py_DECREF(message); } - _PyErr_Restore(tstate, error_type, error_value, error_traceback); + _PyErr_SetRaisedException(tstate, error); } void diff --git a/Python/traceback.c b/Python/traceback.c index da26c9b260a3..31b85e77575e 100644 --- a/Python/traceback.c +++ b/Python/traceback.c @@ -249,6 +249,8 @@ PyTraceBack_Here(PyFrameObject *frame) _PyErr_ChainExceptions(exc, val, tb); return -1; } + assert(PyExceptionInstance_Check(val)); + PyException_SetTraceback(val, newtb); PyErr_Restore(exc, val, newtb); Py_XDECREF(tb); return 0; @@ -260,13 +262,12 @@ void _PyTraceback_Add(const char *funcname, const char *filename, int lineno) PyObject *globals; PyCodeObject *code; PyFrameObject *frame; - PyObject *exc, *val, *tb; PyThreadState *tstate = _PyThreadState_GET(); /* Save and clear the current exception. Python functions must not be called with an exception set. Calling Python functions happens when the codec of the filesystem encoding is implemented in pure Python. 
*/ - _PyErr_Fetch(tstate, &exc, &val, &tb); + PyObject *exc = _PyErr_GetRaisedException(tstate); globals = PyDict_New(); if (!globals) @@ -283,13 +284,13 @@ void _PyTraceback_Add(const char *funcname, const char *filename, int lineno) goto error; frame->f_lineno = lineno; - _PyErr_Restore(tstate, exc, val, tb); + _PyErr_SetRaisedException(tstate, exc); PyTraceBack_Here(frame); Py_DECREF(frame); return; error: - _PyErr_ChainExceptions(exc, val, tb); + _PyErr_ChainExceptions1(exc); } static PyObject * From webhook-mailer at python.org Wed Feb 8 04:56:04 2023 From: webhook-mailer at python.org (pablogsal) Date: Wed, 08 Feb 2023 09:56:04 -0000 Subject: [Python-checkins] Python 3.11.2 Message-ID: https://github.com/python/cpython/commit/878ead1ac1651965126322c1b3d124faf5484dc6 commit: 878ead1ac1651965126322c1b3d124faf5484dc6 branch: 3.11 author: Pablo Galindo committer: pablogsal date: 2023-02-07T13:37:51Z summary: Python 3.11.2 files: A Misc/NEWS.d/3.11.2.rst D Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst D Misc/NEWS.d/next/C API/2022-11-30-16-39-22.gh-issue-99240.67nAX-.rst D Misc/NEWS.d/next/Core and Builtins/2018-02-06-23-21-13.bpo-32782.EJVSfR.rst D Misc/NEWS.d/next/Core and Builtins/2022-12-06-22-24-01.gh-issue-100050.lcrPqQ.rst D Misc/NEWS.d/next/Core and Builtins/2022-12-12-01-05-16.gh-issue-99110.1JqtIg.rst D Misc/NEWS.d/next/Core and Builtins/2022-12-20-16-14-19.gh-issue-100374.YRrVHT.rst D Misc/NEWS.d/next/Core and Builtins/2022-12-31-23-32-09.gh-issue-100649.C0fY4S.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-01-15-59-48.gh-issue-100637.M2n6Kg.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-06-02-02-11.gh-issue-100776.pP8xux.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-14-17-03-08.gh-issue-101037.9ATNuf.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst D Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst D Misc/NEWS.d/next/Documentation/2022-12-02-17-08-08.gh-issue-99931.wC46hE.rst D Misc/NEWS.d/next/Documentation/2022-12-23-21-42-26.gh-issue-100472.NNixfO.rst D Misc/NEWS.d/next/Documentation/2022-12-30-00-42-23.gh-issue-100616.eu80ij.rst D Misc/NEWS.d/next/Library/2019-05-13-11-37-30.bpo-36880.ZgBgH0.rst D Misc/NEWS.d/next/Library/2020-05-03-12-55-55.bpo-40447.oKR0Lj.rst D Misc/NEWS.d/next/Library/2021-08-03-05-31-00.bpo-44817.wOW_Qn.rst D Misc/NEWS.d/next/Library/2022-08-11-10-02-19.gh-issue-95882.FsUr72.rst D Misc/NEWS.d/next/Library/2022-10-28-07-24-34.gh-issue-85267.xUy_Wm.rst D Misc/NEWS.d/next/Library/2022-11-08-11-18-51.gh-issue-64490.VcBgrN.rst D Misc/NEWS.d/next/Library/2022-11-08-15-54-43.gh-issue-99240.MhYwcz.rst D Misc/NEWS.d/next/Library/2022-11-13-15-32-19.gh-issue-99433.Ys6y0A.rst D Misc/NEWS.d/next/Library/2022-11-20-11-59-54.gh-issue-99576.ZD7jU6.rst D Misc/NEWS.d/next/Library/2022-11-21-16-24-01.gh-issue-83035.qZIujU.rst D Misc/NEWS.d/next/Library/2022-12-03-20-06-16.gh-issue-98778.t5U9uc.rst D Misc/NEWS.d/next/Library/2022-12-08-06-18-06.gh-issue-100098.uBvPlp.rst D Misc/NEWS.d/next/Library/2022-12-10-08-36-07.gh-issue-100133.g-zQlp.rst D Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst D 
Misc/NEWS.d/next/Library/2022-12-19-20-54-04.gh-issue-78878.JrkYqJ.rst D Misc/NEWS.d/next/Library/2022-12-19-23-19-26.gh-issue-96290.qFjsi6.rst D Misc/NEWS.d/next/Library/2022-12-21-18-29-24.gh-issue-100160.isBmL5.rst D Misc/NEWS.d/next/Library/2022-12-23-21-02-43.gh-issue-100474.gppA4U.rst D Misc/NEWS.d/next/Library/2022-12-24-08-42-05.gh-issue-100287.n0oEuG.rst D Misc/NEWS.d/next/Library/2022-12-30-07-49-08.gh-issue-86508.nGZDzC.rst D Misc/NEWS.d/next/Library/2023-01-04-09-53-38.gh-issue-100740.-j5UjI.rst D Misc/NEWS.d/next/Library/2023-01-04-12-58-59.gh-issue-100689.Ce0ITG.rst D Misc/NEWS.d/next/Library/2023-01-04-14-42-59.gh-issue-100750.iFJs5Y.rst D Misc/NEWS.d/next/Library/2023-01-04-22-10-31.gh-issue-90104.yZk5EX.rst D Misc/NEWS.d/next/Library/2023-01-07-15-13-47.gh-issue-100805.05rBz9.rst D Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst D Misc/NEWS.d/next/Library/2023-01-14-12-58-21.gh-issue-101015.stWFid.rst D Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst D Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst D Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst D Misc/NEWS.d/next/Library/2023-01-25-18-07-20.gh-issue-101326.KL4SFv.rst D Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst D Misc/NEWS.d/next/Tests/2022-08-22-15-49-14.gh-issue-96002.4UE9UE.rst D Misc/NEWS.d/next/Tests/2022-12-23-13-29-55.gh-issue-100454.3no0cW.rst D Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst D Misc/NEWS.d/next/Tools-Demos/2022-08-11-09-58-15.gh-issue-64490.PjwhM4.rst D Misc/NEWS.d/next/Tools-Demos/2022-12-19-10-08-53.gh-issue-100342.qDFlQG.rst D Misc/NEWS.d/next/Tools-Demos/2022-12-29-19-22-11.bpo-45256.a0ee_H.rst D Misc/NEWS.d/next/Windows/2021-05-02-15-29-33.bpo-43984.U92jiv.rst D Misc/NEWS.d/next/Windows/2023-01-09-23-03-57.gh-issue-100180.b5phrg.rst D Misc/NEWS.d/next/Windows/2023-01-11-14-42-11.gh-issue-100247.YfEmSz.rst D Misc/NEWS.d/next/Windows/2023-01-11-16-28-09.gh-issue-100320.2DU2it.rst D Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst D Misc/NEWS.d/next/Windows/2023-01-18-18-25-18.gh-issue-101135.HF9VlG.rst D Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst D Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst D Misc/NEWS.d/next/macOS/2023-01-09-22-04-21.gh-issue-100180.WVhCny.rst M Include/patchlevel.h M Lib/pydoc_data/topics.py M README.rst diff --git a/Include/patchlevel.h b/Include/patchlevel.h index 2fa3d2e3e533..806e0441b2fa 100644 --- a/Include/patchlevel.h +++ b/Include/patchlevel.h @@ -18,12 +18,12 @@ /*--start constants--*/ #define PY_MAJOR_VERSION 3 #define PY_MINOR_VERSION 11 -#define PY_MICRO_VERSION 1 +#define PY_MICRO_VERSION 2 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "3.11.1+" +#define PY_VERSION "3.11.2" /*--end constants--*/ /* Version as a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2. 
diff --git a/Lib/pydoc_data/topics.py b/Lib/pydoc_data/topics.py index 11a1c73bf70f..7e3f4f9830d5 100644 --- a/Lib/pydoc_data/topics.py +++ b/Lib/pydoc_data/topics.py @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -# Autogenerated by Sphinx on Tue Dec 6 19:05:00 2022 +# Autogenerated by Sphinx on Tue Feb 7 13:37:35 2023 topics = {'assert': 'The "assert" statement\n' '**********************\n' '\n' @@ -2382,12 +2382,10 @@ 'finished,\n' 'but if the sequence is empty, they will not have been assigned ' 'to at\n' - 'all by the loop. Hint: the built-in function "range()" returns ' - 'an\n' - 'iterator of integers suitable to emulate the effect of Pascal?s ' - '"for i\n' - ':= a to b do"; e.g., "list(range(3))" returns the list "[0, 1, ' - '2]".\n' + 'all by the loop. Hint: the built-in type "range()" represents\n' + 'immutable arithmetic sequences of integers. For instance, ' + 'iterating\n' + '"range(3)" successively yields 0, 1, and then 2.\n' '\n' 'Changed in version 3.11: Starred elements are now allowed in ' 'the\n' @@ -2726,7 +2724,7 @@ 'the\n' ' target list, it will be treated the same as an error ' 'occurring\n' - ' within the suite would be. See step 6 below.\n' + ' within the suite would be. See step 7 below.\n' '\n' '6. The suite is executed.\n' '\n' @@ -4649,6 +4647,18 @@ 'the source. The extension interface uses the modules "bdb" and ' '"cmd".\n' '\n' + 'See also:\n' + '\n' + ' Module "faulthandler"\n' + ' Used to dump Python tracebacks explicitly, on a fault, ' + 'after a\n' + ' timeout, or on a user signal.\n' + '\n' + ' Module "traceback"\n' + ' Standard interface to extract, format and print stack ' + 'traces of\n' + ' Python programs.\n' + '\n' 'The debugger?s prompt is "(Pdb)". Typical usage to run a program ' 'under\n' 'control of the debugger is:\n' @@ -5668,7 +5678,8 @@ 'be\n' 'determined by scanning the entire text of the block for name ' 'binding\n' - 'operations.\n' + 'operations. See the FAQ entry on UnboundLocalError for ' + 'examples.\n' '\n' 'If the "global" statement occurs within a block, all uses of ' 'the names\n' @@ -5970,10 +5981,9 @@ '\n' 'Names in the target list are not deleted when the loop is finished,\n' 'but if the sequence is empty, they will not have been assigned to at\n' - 'all by the loop. Hint: the built-in function "range()" returns an\n' - 'iterator of integers suitable to emulate the effect of Pascal?s "for ' - 'i\n' - ':= a to b do"; e.g., "list(range(3))" returns the list "[0, 1, 2]".\n' + 'all by the loop. Hint: the built-in type "range()" represents\n' + 'immutable arithmetic sequences of integers. For instance, iterating\n' + '"range(3)" successively yields 0, 1, and then 2.\n' '\n' 'Changed in version 3.11: Starred elements are now allowed in the\n' 'expression list.\n', @@ -7781,7 +7791,7 @@ 'within a code block. The local variables of a code block can be\n' 'determined by scanning the entire text of the block for name ' 'binding\n' - 'operations.\n' + 'operations. See the FAQ entry on UnboundLocalError for examples.\n' '\n' 'If the "global" statement occurs within a block, all uses of the ' 'names\n' @@ -11322,35 +11332,35 @@ '\n' "str.encode(encoding='utf-8', errors='strict')\n" '\n' - ' Return an encoded version of the string as a bytes ' - 'object. Default\n' - ' encoding is "\'utf-8\'". *errors* may be given to set a ' - 'different\n' - ' error handling scheme. The default for *errors* is ' - '"\'strict\'",\n' - ' meaning that encoding errors raise a "UnicodeError". 
' + ' Return the string encoded to "bytes".\n' + '\n' + ' *encoding* defaults to "\'utf-8\'"; see Standard ' + 'Encodings for\n' + ' possible values.\n' + '\n' + ' *errors* controls how encoding errors are handled. If ' + '"\'strict\'"\n' + ' (the default), a "UnicodeError" exception is raised. ' 'Other possible\n' ' values are "\'ignore\'", "\'replace\'", ' '"\'xmlcharrefreplace\'",\n' ' "\'backslashreplace\'" and any other name registered ' 'via\n' - ' "codecs.register_error()", see section Error Handlers. ' - 'For a list\n' - ' of possible encodings, see section Standard Encodings.\n' + ' "codecs.register_error()". See Error Handlers for ' + 'details.\n' '\n' - ' By default, the *errors* argument is not checked for ' - 'best\n' - ' performances, but only used at the first encoding ' - 'error. Enable the\n' - ' Python Development Mode, or use a debug build to check ' - '*errors*.\n' + ' For performance reasons, the value of *errors* is not ' + 'checked for\n' + ' validity unless an encoding error actually occurs, ' + 'Python\n' + ' Development Mode is enabled or a debug build is used.\n' '\n' - ' Changed in version 3.1: Support for keyword arguments ' - 'added.\n' + ' Changed in version 3.1: Added support for keyword ' + 'arguments.\n' '\n' - ' Changed in version 3.9: The *errors* is now checked in ' - 'development\n' - ' mode and in debug mode.\n' + ' Changed in version 3.9: The value of the *errors* ' + 'argument is now\n' + ' checked in Python Development Mode and in debug mode.\n' '\n' 'str.endswith(suffix[, start[, end]])\n' '\n' @@ -14437,8 +14447,7 @@ ' >>> # get back a read-only proxy for the original ' 'dictionary\n' ' >>> values.mapping\n' - " mappingproxy({'eggs': 2, 'sausage': 1, 'bacon': 1, " - "'spam': 500})\n" + " mappingproxy({'bacon': 1, 'spam': 500})\n" " >>> values.mapping['spam']\n" ' 500\n', 'typesmethods': 'Methods\n' @@ -15484,7 +15493,7 @@ ' returns without an error, then "__exit__()" will always be\n' ' called. Thus, if an error occurs during the assignment to the\n' ' target list, it will be treated the same as an error occurring\n' - ' within the suite would be. See step 6 below.\n' + ' within the suite would be. See step 7 below.\n' '\n' '6. The suite is executed.\n' '\n' diff --git a/Misc/NEWS.d/3.11.2.rst b/Misc/NEWS.d/3.11.2.rst new file mode 100644 index 000000000000..3f4ff595501c --- /dev/null +++ b/Misc/NEWS.d/3.11.2.rst @@ -0,0 +1,676 @@ +.. date: 2023-02-06-20-13-36 +.. gh-issue: 92173 +.. nonce: RQE0mk +.. release date: 2023-02-07 +.. section: Core and Builtins + +Fix the ``defs`` and ``kwdefs`` arguments to :c:func:`PyEval_EvalCodeEx` and +a reference leak in that function. + +.. + +.. date: 2023-01-30-08-59-47 +.. gh-issue: 101400 +.. nonce: Di_ZFm +.. section: Core and Builtins + +Fix wrong lineno in exception message on :keyword:`continue` or +:keyword:`break` which are not in a loop. Patch by Dong-hee Na. + +.. + +.. date: 2023-01-28-20-31-42 +.. gh-issue: 101372 +.. nonce: 8BcpCC +.. section: Core and Builtins + +Fix :func:`~unicodedata.is_normalized` to properly handle the UCD 3.2.0 +cases. Patch by Dong-hee Na. + +.. + +.. date: 2023-01-15-03-26-04 +.. gh-issue: 101046 +.. nonce: g2CM4S +.. section: Core and Builtins + +Fix a possible memory leak in the parser when raising :exc:`MemoryError`. +Patch by Pablo Galindo + +.. + +.. date: 2023-01-14-17-03-08 +.. gh-issue: 101037 +.. nonce: 9ATNuf +.. section: Core and Builtins + +Fix potential memory underallocation issue for instances of :class:`int` +subclasses with value zero. + +.. + +.. 
date: 2023-01-11-22-52-19 +.. gh-issue: 100942 +.. nonce: ontOy_ +.. section: Core and Builtins + +Fixed segfault in property.getter/setter/deleter that occurred when a +property subclass overrode the ``__new__`` method to return a non-property +instance. + +.. + +.. date: 2023-01-10-14-11-17 +.. gh-issue: 100892 +.. nonce: qfBVYI +.. section: Core and Builtins + +Fix race while iterating over thread states in clearing +:class:`threading.local`. Patch by Kumar Aditya. + +.. + +.. date: 2023-01-06-02-02-11 +.. gh-issue: 100776 +.. nonce: pP8xux +.. section: Core and Builtins + +Fix misleading default value in :func:`input`'s ``__text_signature__``. + +.. + +.. date: 2023-01-01-15-59-48 +.. gh-issue: 100637 +.. nonce: M2n6Kg +.. section: Core and Builtins + +Fix :func:`int.__sizeof__` calculation to include the 1 element ob_digit +array for 0 and False. + +.. + +.. date: 2022-12-31-23-32-09 +.. gh-issue: 100649 +.. nonce: C0fY4S +.. section: Core and Builtins + +Update the native_thread_id field of PyThreadState after fork. + +.. + +.. date: 2022-12-20-16-14-19 +.. gh-issue: 100374 +.. nonce: YRrVHT +.. section: Core and Builtins + +Fix incorrect result and delay in :func:`socket.getfqdn`. Patch by Dominic +Socular. + +.. + +.. date: 2022-12-12-01-05-16 +.. gh-issue: 99110 +.. nonce: 1JqtIg +.. section: Core and Builtins + +Initialize frame->previous in frameobject.c to fix a segmentation fault when +accessing frames created by :c:func:`PyFrame_New`. + +.. + +.. date: 2022-12-06-22-24-01 +.. gh-issue: 100050 +.. nonce: lcrPqQ +.. section: Core and Builtins + +Honor existing errors obtained when searching for mismatching parentheses in +the tokenizer. Patch by Pablo Galindo + +.. + +.. bpo: 32782 +.. date: 2018-02-06-23-21-13 +.. nonce: EJVSfR +.. section: Core and Builtins + +``ctypes`` arrays of length 0 now report a correct itemsize when a +``memoryview`` is constructed from them, rather than always giving a value +of 0. + +.. + +.. date: 2023-02-05-14-39-49 +.. gh-issue: 101541 +.. nonce: Mo3ppp +.. section: Library + +[Enum] - fix psuedo-flag creation + +.. + +.. date: 2023-01-25-18-07-20 +.. gh-issue: 101326 +.. nonce: KL4SFv +.. section: Library + +Fix regression when passing ``None`` as second or third argument to +``FutureIter.throw``. + +.. + +.. date: 2023-01-21-16-50-22 +.. gh-issue: 100795 +.. nonce: NPMZf7 +.. section: Library + +Avoid potential unexpected ``freeaddrinfo`` call (double free) in +:mod:`socket` when when a libc ``getaddrinfo()`` implementation leaves +garbage in an output pointer when returning an error. Original patch by +Sergey G. Brester. + +.. + +.. date: 2023-01-20-10-46-59 +.. gh-issue: 101143 +.. nonce: hJo8hu +.. section: Library + +Remove unused references to :class:`~asyncio.TimerHandle` in +``asyncio.base_events.BaseEventLoop._add_callback``. + +.. + +.. date: 2023-01-18-17-58-50 +.. gh-issue: 101144 +.. nonce: FHd8Un +.. section: Library + +Make :func:`zipfile.Path.open` and :func:`zipfile.Path.read_text` also +accept ``encoding`` as a positional argument. This was the behavior in +Python 3.9 and earlier. 3.10 introduced a regression where supplying it as +a positional argument would lead to a :exc:`TypeError`. + +.. + +.. date: 2023-01-14-12-58-21 +.. gh-issue: 101015 +.. nonce: stWFid +.. section: Library + +Fix :func:`typing.get_type_hints` on ``'*tuple[...]'`` and ``*tuple[...]``. +It must not drop the ``Unpack`` part. + +.. + +.. date: 2023-01-12-01-18-13 +.. gh-issue: 100573 +.. nonce: KDskqo +.. 
section: Library + +Fix a Windows :mod:`asyncio` bug with named pipes where a client doing +``os.stat()`` on the pipe would cause an error in the server that disabled +serving future requests. + +.. + +.. date: 2023-01-07-15-13-47 +.. gh-issue: 100805 +.. nonce: 05rBz9 +.. section: Library + +Modify :func:`random.choice` implementation to once again work with NumPy +arrays. + +.. + +.. date: 2023-01-04-22-10-31 +.. gh-issue: 90104 +.. nonce: yZk5EX +.. section: Library + +Avoid RecursionError on ``repr`` if a dataclass field definition has a +cyclic reference. + +.. + +.. date: 2023-01-04-14-42-59 +.. gh-issue: 100750 +.. nonce: iFJs5Y +.. section: Library + +pass encoding kwarg to subprocess in platform + +.. + +.. date: 2023-01-04-12-58-59 +.. gh-issue: 100689 +.. nonce: Ce0ITG +.. section: Library + +Fix crash in :mod:`pyexpat` by statically allocating ``PyExpat_CAPI`` +capsule. + +.. + +.. date: 2023-01-04-09-53-38 +.. gh-issue: 100740 +.. nonce: -j5UjI +.. section: Library + +Fix ``unittest.mock.Mock`` not respecting the spec for attribute names +prefixed with ``assert``. + +.. + +.. date: 2022-12-30-07-49-08 +.. gh-issue: 86508 +.. nonce: nGZDzC +.. section: Library + +Fix :func:`asyncio.open_connection` to skip binding to local addresses of +different family. Patch by Kumar Aditya. + +.. + +.. date: 2022-12-24-08-42-05 +.. gh-issue: 100287 +.. nonce: n0oEuG +.. section: Library + +Fix the interaction of :func:`unittest.mock.seal` with +:class:`unittest.mock.AsyncMock`. + +.. + +.. date: 2022-12-23-21-02-43 +.. gh-issue: 100474 +.. nonce: gppA4U +.. section: Library + +:mod:`http.server` now checks that an index page is actually a regular file +before trying to serve it. This avoids issues with directories named +``index.html``. + +.. + +.. date: 2022-12-21-18-29-24 +.. gh-issue: 100160 +.. nonce: isBmL5 +.. section: Library + +Remove any deprecation warnings in :func:`asyncio.get_event_loop`. They are +deferred to Python 3.12. + +.. + +.. date: 2022-12-19-23-19-26 +.. gh-issue: 96290 +.. nonce: qFjsi6 +.. section: Library + +Fix handling of partial and invalid UNC drives in ``ntpath.splitdrive()``, +and in ``ntpath.normpath()`` on non-Windows systems. Paths such as +'\\server' and '\\' are now considered by ``splitdrive()`` to contain only a +drive, and consequently are not modified by ``normpath()`` on non-Windows +systems. The behaviour of ``normpath()`` on Windows systems is unaffected, +as native OS APIs are used. Patch by Eryk Sun, with contributions by Barney +Gale. + +.. + +.. date: 2022-12-19-20-54-04 +.. gh-issue: 78878 +.. nonce: JrkYqJ +.. section: Library + +Fix crash when creating an instance of :class:`!_ctypes.CField`. + +.. + +.. date: 2022-12-11-14-38-59 +.. gh-issue: 99952 +.. nonce: IYGLzr +.. section: Library + +Fix a reference undercounting issue in :class:`ctypes.Structure` with +``from_param()`` results larger than a C pointer. + +.. + +.. date: 2022-12-10-08-36-07 +.. gh-issue: 100133 +.. nonce: g-zQlp +.. section: Library + +Fix regression in :mod:`asyncio` where a subprocess would sometimes lose +data received from pipe. + +.. + +.. date: 2022-12-08-06-18-06 +.. gh-issue: 100098 +.. nonce: uBvPlp +.. section: Library + +Fix ``tuple`` subclasses being cast to ``tuple`` when used as enum values. + +.. + +.. date: 2022-12-03-20-06-16 +.. gh-issue: 98778 +.. nonce: t5U9uc +.. section: Library + +Update :exc:`~urllib.error.HTTPError` to be initialized properly, even if +the ``fp`` is ``None``. Patch by Dong-hee Na. + +.. + +.. date: 2022-11-21-16-24-01 +.. 
gh-issue: 83035 +.. nonce: qZIujU +.. section: Library + +Fix :func:`inspect.getsource` handling of decorator calls with nested +parentheses. + +.. + +.. date: 2022-11-20-11-59-54 +.. gh-issue: 99576 +.. nonce: ZD7jU6 +.. section: Library + +Fix ``.save()`` method for ``LWPCookieJar`` and ``MozillaCookieJar``: saved +file was not truncated on repeated save. + +.. + +.. date: 2022-11-13-15-32-19 +.. gh-issue: 99433 +.. nonce: Ys6y0A +.. section: Library + +Fix :mod:`doctest` failure on :class:`types.MethodWrapperType` in modules. + +.. + +.. date: 2022-11-08-15-54-43 +.. gh-issue: 99240 +.. nonce: MhYwcz +.. section: Library + +Fix double-free bug in Argument Clinic ``str_converter`` by extracting +memory clean up to a new ``post_parsing`` section. + +.. + +.. date: 2022-11-08-11-18-51 +.. gh-issue: 64490 +.. nonce: VcBgrN +.. section: Library + +Fix refcount error when arguments are packed to tuple in Argument Clinic. + +.. + +.. date: 2022-10-28-07-24-34 +.. gh-issue: 85267 +.. nonce: xUy_Wm +.. section: Library + +Several improvements to :func:`inspect.signature`'s handling of +``__text_signature``. - Fixes a case where :func:`inspect.signature` dropped +parameters - Fixes a case where :func:`inspect.signature` raised +:exc:`tokenize.TokenError` - Allows :func:`inspect.signature` to understand +defaults involving binary operations of constants - +:func:`inspect.signature` is documented as only raising :exc:`TypeError` or +:exc:`ValueError`, but sometimes raised :exc:`RuntimeError`. These cases now +raise :exc:`ValueError` - Removed a dead code path + +.. + +.. date: 2022-08-11-10-02-19 +.. gh-issue: 95882 +.. nonce: FsUr72 +.. section: Library + +Fix a 3.11 regression in :func:`~contextlib.asynccontextmanager`, which +caused it to propagate exceptions with incorrect tracebacks and fix a 3.11 +regression in :func:`~contextlib.contextmanager`, which caused it to +propagate exceptions with incorrect tracebacks for :exc:`StopIteration`. + +.. + +.. bpo: 44817 +.. date: 2021-08-03-05-31-00 +.. nonce: wOW_Qn +.. section: Library + +Ignore WinError 53 (ERROR_BAD_NETPATH), 65 (ERROR_NETWORK_ACCESS_DENIED) and +161 (ERROR_BAD_PATHNAME) when using ntpath.realpath(). + +.. + +.. bpo: 40447 +.. date: 2020-05-03-12-55-55 +.. nonce: oKR0Lj +.. section: Library + +Accept :class:`os.PathLike` (such as :class:`pathlib.Path`) in the +``stripdir`` arguments of :meth:`compileall.compile_file` and +:meth:`compileall.compile_dir`. + +.. + +.. bpo: 36880 +.. date: 2019-05-13-11-37-30 +.. nonce: ZgBgH0 +.. section: Library + +Fix a reference counting issue when a :mod:`ctypes` callback with return +type :class:`~ctypes.py_object` returns ``None``, which could cause crashes. + +.. + +.. date: 2022-12-30-00-42-23 +.. gh-issue: 100616 +.. nonce: eu80ij +.. section: Documentation + +Document existing ``attr`` parameter to :func:`curses.window.vline` function +in :mod:`curses`. + +.. + +.. date: 2022-12-23-21-42-26 +.. gh-issue: 100472 +.. nonce: NNixfO +.. section: Documentation + +Remove claim in documentation that the ``stripdir``, ``prependdir`` and +``limit_sl_dest`` parameters of :func:`compileall.compile_dir` and +:func:`compileall.compile_file` could be :class:`bytes`. + +.. + +.. date: 2022-12-02-17-08-08 +.. gh-issue: 99931 +.. nonce: wC46hE +.. section: Documentation + +Use `sphinxext-opengraph `__ to +generate `OpenGraph metadata `__. + +.. + +.. date: 2023-02-04-17-24-33 +.. gh-issue: 101334 +.. nonce: _yOqwg +.. section: Tests + +``test_tarfile`` has been updated to pass when run as a high UID. + +.. 
+ +.. date: 2022-12-23-13-29-55 +.. gh-issue: 100454 +.. nonce: 3no0cW +.. section: Tests + +Start running SSL tests with OpenSSL 3.1.0-beta1. + +.. + +.. date: 2022-08-22-15-49-14 +.. gh-issue: 96002 +.. nonce: 4UE9UE +.. section: Tests + +Add functional test for Argument Clinic. + +.. + +.. date: 2023-02-02-23-43-46 +.. gh-issue: 101522 +.. nonce: lnUDta +.. section: Build + +Allow overriding Windows dependencies versions and paths using MSBuild +properties. + +.. + +.. date: 2023-02-03-17-53-06 +.. gh-issue: 101543 +.. nonce: cORAT4 +.. section: Windows + +Ensure the install path in the registry is only used when the standard +library hasn't been located in any other way. + +.. + +.. date: 2023-01-31-16-50-07 +.. gh-issue: 101467 +.. nonce: ye9t-L +.. section: Windows + +The ``py.exe`` launcher now correctly filters when only a single runtime is +installed. It also correctly handles prefix matches on tags so that ``-3.1`` +does not match ``3.11``, but would still match ``3.1-32``. + +.. + +.. date: 2023-01-18-18-25-18 +.. gh-issue: 101135 +.. nonce: HF9VlG +.. section: Windows + +Restore ability to launch older 32-bit versions from the :file:`py.exe` +launcher when both 32-bit and 64-bit installs of the same version are +available. + +.. + +.. date: 2023-01-17-18-17-58 +.. gh-issue: 82052 +.. nonce: mWyysT +.. section: Windows + +Fixed an issue where writing more than 32K of Unicode output to the console +screen in one go can result in mojibake. + +.. + +.. date: 2023-01-11-16-28-09 +.. gh-issue: 100320 +.. nonce: 2DU2it +.. section: Windows + +Ensures the ``PythonPath`` registry key from an install is used when +launching from a different copy of Python that relies on an existing install +to provide a copy of its modules and standard library. + +.. + +.. date: 2023-01-11-14-42-11 +.. gh-issue: 100247 +.. nonce: YfEmSz +.. section: Windows + +Restores support for the :file:`py.exe` launcher finding shebang commands in +its configuration file using the full command name. + +.. + +.. date: 2023-01-09-23-03-57 +.. gh-issue: 100180 +.. nonce: b5phrg +.. section: Windows + +Update Windows installer to OpenSSL 1.1.1s + +.. + +.. bpo: 43984 +.. date: 2021-05-02-15-29-33 +.. nonce: U92jiv +.. section: Windows + +:meth:`winreg.SetValueEx` now leaves the target value untouched in the case +of conversion errors. Previously, ``-1`` would be written in case of such +errors. + +.. + +.. date: 2023-01-09-22-04-21 +.. gh-issue: 100180 +.. nonce: WVhCny +.. section: macOS + +Update macOS installer to OpenSSL 1.1.1s + +.. + +.. bpo: 45256 +.. date: 2022-12-29-19-22-11 +.. nonce: a0ee_H +.. section: Tools/Demos + +Fix a bug that caused an :exc:`AttributeError` to be raised in +``python-gdb.py`` when ``py-locals`` is used without a frame. + +.. + +.. date: 2022-12-19-10-08-53 +.. gh-issue: 100342 +.. nonce: qDFlQG +.. section: Tools/Demos + +Add missing ``NULL`` check for possible allocation failure in ``*args`` +parsing in Argument Clinic. + +.. + +.. date: 2022-08-11-09-58-15 +.. gh-issue: 64490 +.. nonce: PjwhM4 +.. section: Tools/Demos + +Argument Clinic varargs bugfixes + +* Fix out-of-bounds error in :c:func:`!_PyArg_UnpackKeywordsWithVararg`. +* Fix incorrect check which allowed more than one varargs in clinic.py. +* Fix miscalculation of ``noptargs`` in generated code. +* Do not generate ``noptargs`` when there is a vararg argument and no optional argument. + +.. + +.. date: 2022-11-30-16-39-22 +.. gh-issue: 99240 +.. nonce: 67nAX- +.. 
section: C API + +In argument parsing, after deallocating newly allocated memory, reset its +pointer to NULL. diff --git a/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst b/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst deleted file mode 100644 index 2e7f9029e9ee..000000000000 --- a/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst +++ /dev/null @@ -1,2 +0,0 @@ -Allow overriding Windows dependencies versions and paths using MSBuild -properties. diff --git a/Misc/NEWS.d/next/C API/2022-11-30-16-39-22.gh-issue-99240.67nAX-.rst b/Misc/NEWS.d/next/C API/2022-11-30-16-39-22.gh-issue-99240.67nAX-.rst deleted file mode 100644 index 9a1bb3cf5a73..000000000000 --- a/Misc/NEWS.d/next/C API/2022-11-30-16-39-22.gh-issue-99240.67nAX-.rst +++ /dev/null @@ -1,2 +0,0 @@ -In argument parsing, after deallocating newly allocated memory, reset its -pointer to NULL. diff --git a/Misc/NEWS.d/next/Core and Builtins/2018-02-06-23-21-13.bpo-32782.EJVSfR.rst b/Misc/NEWS.d/next/Core and Builtins/2018-02-06-23-21-13.bpo-32782.EJVSfR.rst deleted file mode 100644 index 841740130bd1..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2018-02-06-23-21-13.bpo-32782.EJVSfR.rst +++ /dev/null @@ -1,3 +0,0 @@ -``ctypes`` arrays of length 0 now report a correct itemsize when a -``memoryview`` is constructed from them, rather than always giving a value -of 0. diff --git a/Misc/NEWS.d/next/Core and Builtins/2022-12-06-22-24-01.gh-issue-100050.lcrPqQ.rst b/Misc/NEWS.d/next/Core and Builtins/2022-12-06-22-24-01.gh-issue-100050.lcrPqQ.rst deleted file mode 100644 index 8e7c72d804f8..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2022-12-06-22-24-01.gh-issue-100050.lcrPqQ.rst +++ /dev/null @@ -1,2 +0,0 @@ -Honor existing errors obtained when searching for mismatching parentheses in -the tokenizer. Patch by Pablo Galindo diff --git a/Misc/NEWS.d/next/Core and Builtins/2022-12-12-01-05-16.gh-issue-99110.1JqtIg.rst b/Misc/NEWS.d/next/Core and Builtins/2022-12-12-01-05-16.gh-issue-99110.1JqtIg.rst deleted file mode 100644 index 175740dfca07..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2022-12-12-01-05-16.gh-issue-99110.1JqtIg.rst +++ /dev/null @@ -1,2 +0,0 @@ -Initialize frame->previous in frameobject.c to fix a segmentation fault when -accessing frames created by :c:func:`PyFrame_New`. diff --git a/Misc/NEWS.d/next/Core and Builtins/2022-12-20-16-14-19.gh-issue-100374.YRrVHT.rst b/Misc/NEWS.d/next/Core and Builtins/2022-12-20-16-14-19.gh-issue-100374.YRrVHT.rst deleted file mode 100644 index e78352fb188e..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2022-12-20-16-14-19.gh-issue-100374.YRrVHT.rst +++ /dev/null @@ -1 +0,0 @@ -Fix incorrect result and delay in :func:`socket.getfqdn`. Patch by Dominic Socular. diff --git a/Misc/NEWS.d/next/Core and Builtins/2022-12-31-23-32-09.gh-issue-100649.C0fY4S.rst b/Misc/NEWS.d/next/Core and Builtins/2022-12-31-23-32-09.gh-issue-100649.C0fY4S.rst deleted file mode 100644 index 7ee929ad09fb..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2022-12-31-23-32-09.gh-issue-100649.C0fY4S.rst +++ /dev/null @@ -1 +0,0 @@ -Update the native_thread_id field of PyThreadState after fork. 
diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-01-15-59-48.gh-issue-100637.M2n6Kg.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-01-15-59-48.gh-issue-100637.M2n6Kg.rst deleted file mode 100644 index 164f6f5f2f44..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-01-15-59-48.gh-issue-100637.M2n6Kg.rst +++ /dev/null @@ -1 +0,0 @@ -Fix :func:`int.__sizeof__` calculation to include the 1 element ob_digit array for 0 and False. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-06-02-02-11.gh-issue-100776.pP8xux.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-06-02-02-11.gh-issue-100776.pP8xux.rst deleted file mode 100644 index b94121ea5f29..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-06-02-02-11.gh-issue-100776.pP8xux.rst +++ /dev/null @@ -1 +0,0 @@ -Fix misleading default value in :func:`input`'s ``__text_signature__``. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst deleted file mode 100644 index f2576becc2fc..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst +++ /dev/null @@ -1 +0,0 @@ -Fix race while iterating over thread states in clearing :class:`threading.local`. Patch by Kumar Aditya. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst deleted file mode 100644 index daccea255b16..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fixed segfault in property.getter/setter/deleter that occurred when a property -subclass overrode the ``__new__`` method to return a non-property instance. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-14-17-03-08.gh-issue-101037.9ATNuf.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-14-17-03-08.gh-issue-101037.9ATNuf.rst deleted file mode 100644 index a48756657a29..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-14-17-03-08.gh-issue-101037.9ATNuf.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix potential memory underallocation issue for instances of :class:`int` -subclasses with value zero. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst deleted file mode 100644 index f600473620f1..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix a possible memory leak in the parser when raising :exc:`MemoryError`. -Patch by Pablo Galindo diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst deleted file mode 100644 index 65a207e3f7e4..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix :func:`~unicodedata.is_normalized` to properly handle the UCD 3.2.0 -cases. Patch by Dong-hee Na. 
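A minimal illustrative sketch of the two ``is_normalized()`` code paths referred to in the gh-issue 101372 entry above; the sample string is hypothetical and only meant to show the call shapes::

    import unicodedata

    s = "\u1e9b\u0323"  # hypothetical sample text
    print(unicodedata.is_normalized("NFC", s))            # current Unicode data
    print(unicodedata.ucd_3_2_0.is_normalized("NFC", s))  # UCD 3.2.0 snapshot, the path covered by the fix
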
diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst deleted file mode 100644 index f3dd783c01e7..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix wrong lineno in exception message on :keyword:`continue` or -:keyword:`break` which are not in a loop. Patch by Dong-hee Na. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst deleted file mode 100644 index 6b98aac2a465..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-02-06-20-13-36.gh-issue-92173.RQE0mk.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix the ``defs`` and ``kwdefs`` arguments to :c:func:`PyEval_EvalCodeEx` -and a reference leak in that function. diff --git a/Misc/NEWS.d/next/Documentation/2022-12-02-17-08-08.gh-issue-99931.wC46hE.rst b/Misc/NEWS.d/next/Documentation/2022-12-02-17-08-08.gh-issue-99931.wC46hE.rst deleted file mode 100644 index 0c01a2cb2cfa..000000000000 --- a/Misc/NEWS.d/next/Documentation/2022-12-02-17-08-08.gh-issue-99931.wC46hE.rst +++ /dev/null @@ -1,2 +0,0 @@ -Use `sphinxext-opengraph `__ -to generate `OpenGraph metadata `__. diff --git a/Misc/NEWS.d/next/Documentation/2022-12-23-21-42-26.gh-issue-100472.NNixfO.rst b/Misc/NEWS.d/next/Documentation/2022-12-23-21-42-26.gh-issue-100472.NNixfO.rst deleted file mode 100644 index 4f4162150750..000000000000 --- a/Misc/NEWS.d/next/Documentation/2022-12-23-21-42-26.gh-issue-100472.NNixfO.rst +++ /dev/null @@ -1 +0,0 @@ -Remove claim in documentation that the ``stripdir``, ``prependdir`` and ``limit_sl_dest`` parameters of :func:`compileall.compile_dir` and :func:`compileall.compile_file` could be :class:`bytes`. diff --git a/Misc/NEWS.d/next/Documentation/2022-12-30-00-42-23.gh-issue-100616.eu80ij.rst b/Misc/NEWS.d/next/Documentation/2022-12-30-00-42-23.gh-issue-100616.eu80ij.rst deleted file mode 100644 index 97a7022c94b4..000000000000 --- a/Misc/NEWS.d/next/Documentation/2022-12-30-00-42-23.gh-issue-100616.eu80ij.rst +++ /dev/null @@ -1,2 +0,0 @@ -Document existing ``attr`` parameter to :func:`curses.window.vline` function -in :mod:`curses`. diff --git a/Misc/NEWS.d/next/Library/2019-05-13-11-37-30.bpo-36880.ZgBgH0.rst b/Misc/NEWS.d/next/Library/2019-05-13-11-37-30.bpo-36880.ZgBgH0.rst deleted file mode 100644 index f65323813d05..000000000000 --- a/Misc/NEWS.d/next/Library/2019-05-13-11-37-30.bpo-36880.ZgBgH0.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix a reference counting issue when a :mod:`ctypes` callback with return -type :class:`~ctypes.py_object` returns ``None``, which could cause crashes. diff --git a/Misc/NEWS.d/next/Library/2020-05-03-12-55-55.bpo-40447.oKR0Lj.rst b/Misc/NEWS.d/next/Library/2020-05-03-12-55-55.bpo-40447.oKR0Lj.rst deleted file mode 100644 index 941038d095b3..000000000000 --- a/Misc/NEWS.d/next/Library/2020-05-03-12-55-55.bpo-40447.oKR0Lj.rst +++ /dev/null @@ -1,2 +0,0 @@ -Accept :class:`os.PathLike` (such as :class:`pathlib.Path`) in the ``stripdir`` arguments of -:meth:`compileall.compile_file` and :meth:`compileall.compile_dir`. 
diff --git a/Misc/NEWS.d/next/Library/2021-08-03-05-31-00.bpo-44817.wOW_Qn.rst b/Misc/NEWS.d/next/Library/2021-08-03-05-31-00.bpo-44817.wOW_Qn.rst deleted file mode 100644 index 79f8c506b54f..000000000000 --- a/Misc/NEWS.d/next/Library/2021-08-03-05-31-00.bpo-44817.wOW_Qn.rst +++ /dev/null @@ -1,2 +0,0 @@ -Ignore WinError 53 (ERROR_BAD_NETPATH), 65 (ERROR_NETWORK_ACCESS_DENIED) -and 161 (ERROR_BAD_PATHNAME) when using ntpath.realpath(). diff --git a/Misc/NEWS.d/next/Library/2022-08-11-10-02-19.gh-issue-95882.FsUr72.rst b/Misc/NEWS.d/next/Library/2022-08-11-10-02-19.gh-issue-95882.FsUr72.rst deleted file mode 100644 index 9cdb237d5c87..000000000000 --- a/Misc/NEWS.d/next/Library/2022-08-11-10-02-19.gh-issue-95882.FsUr72.rst +++ /dev/null @@ -1 +0,0 @@ -Fix a 3.11 regression in :func:`~contextlib.asynccontextmanager`, which caused it to propagate exceptions with incorrect tracebacks and fix a 3.11 regression in :func:`~contextlib.contextmanager`, which caused it to propagate exceptions with incorrect tracebacks for :exc:`StopIteration`. diff --git a/Misc/NEWS.d/next/Library/2022-10-28-07-24-34.gh-issue-85267.xUy_Wm.rst b/Misc/NEWS.d/next/Library/2022-10-28-07-24-34.gh-issue-85267.xUy_Wm.rst deleted file mode 100644 index e69fd1ca1c2f..000000000000 --- a/Misc/NEWS.d/next/Library/2022-10-28-07-24-34.gh-issue-85267.xUy_Wm.rst +++ /dev/null @@ -1,6 +0,0 @@ -Several improvements to :func:`inspect.signature`'s handling of ``__text_signature``. -- Fixes a case where :func:`inspect.signature` dropped parameters -- Fixes a case where :func:`inspect.signature` raised :exc:`tokenize.TokenError` -- Allows :func:`inspect.signature` to understand defaults involving binary operations of constants -- :func:`inspect.signature` is documented as only raising :exc:`TypeError` or :exc:`ValueError`, but sometimes raised :exc:`RuntimeError`. These cases now raise :exc:`ValueError` -- Removed a dead code path diff --git a/Misc/NEWS.d/next/Library/2022-11-08-11-18-51.gh-issue-64490.VcBgrN.rst b/Misc/NEWS.d/next/Library/2022-11-08-11-18-51.gh-issue-64490.VcBgrN.rst deleted file mode 100644 index f98c181cc9c5..000000000000 --- a/Misc/NEWS.d/next/Library/2022-11-08-11-18-51.gh-issue-64490.VcBgrN.rst +++ /dev/null @@ -1 +0,0 @@ -Fix refcount error when arguments are packed to tuple in Argument Clinic. diff --git a/Misc/NEWS.d/next/Library/2022-11-08-15-54-43.gh-issue-99240.MhYwcz.rst b/Misc/NEWS.d/next/Library/2022-11-08-15-54-43.gh-issue-99240.MhYwcz.rst deleted file mode 100644 index 0a4db052755f..000000000000 --- a/Misc/NEWS.d/next/Library/2022-11-08-15-54-43.gh-issue-99240.MhYwcz.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix double-free bug in Argument Clinic ``str_converter`` by -extracting memory clean up to a new ``post_parsing`` section. diff --git a/Misc/NEWS.d/next/Library/2022-11-13-15-32-19.gh-issue-99433.Ys6y0A.rst b/Misc/NEWS.d/next/Library/2022-11-13-15-32-19.gh-issue-99433.Ys6y0A.rst deleted file mode 100644 index 8e13a9a66a7e..000000000000 --- a/Misc/NEWS.d/next/Library/2022-11-13-15-32-19.gh-issue-99433.Ys6y0A.rst +++ /dev/null @@ -1 +0,0 @@ -Fix :mod:`doctest` failure on :class:`types.MethodWrapperType` in modules. 
diff --git a/Misc/NEWS.d/next/Library/2022-11-20-11-59-54.gh-issue-99576.ZD7jU6.rst b/Misc/NEWS.d/next/Library/2022-11-20-11-59-54.gh-issue-99576.ZD7jU6.rst deleted file mode 100644 index 9cbeb64b5625..000000000000 --- a/Misc/NEWS.d/next/Library/2022-11-20-11-59-54.gh-issue-99576.ZD7jU6.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix ``.save()`` method for ``LWPCookieJar`` and ``MozillaCookieJar``: saved -file was not truncated on repeated save. diff --git a/Misc/NEWS.d/next/Library/2022-11-21-16-24-01.gh-issue-83035.qZIujU.rst b/Misc/NEWS.d/next/Library/2022-11-21-16-24-01.gh-issue-83035.qZIujU.rst deleted file mode 100644 index 629d9aefb2d8..000000000000 --- a/Misc/NEWS.d/next/Library/2022-11-21-16-24-01.gh-issue-83035.qZIujU.rst +++ /dev/null @@ -1 +0,0 @@ -Fix :func:`inspect.getsource` handling of decorator calls with nested parentheses. diff --git a/Misc/NEWS.d/next/Library/2022-12-03-20-06-16.gh-issue-98778.t5U9uc.rst b/Misc/NEWS.d/next/Library/2022-12-03-20-06-16.gh-issue-98778.t5U9uc.rst deleted file mode 100644 index b1c170dff3ea..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-03-20-06-16.gh-issue-98778.t5U9uc.rst +++ /dev/null @@ -1,2 +0,0 @@ -Update :exc:`~urllib.error.HTTPError` to be initialized properly, even if -the ``fp`` is ``None``. Patch by Dong-hee Na. diff --git a/Misc/NEWS.d/next/Library/2022-12-08-06-18-06.gh-issue-100098.uBvPlp.rst b/Misc/NEWS.d/next/Library/2022-12-08-06-18-06.gh-issue-100098.uBvPlp.rst deleted file mode 100644 index 256f2bcd39f8..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-08-06-18-06.gh-issue-100098.uBvPlp.rst +++ /dev/null @@ -1 +0,0 @@ -Fix ``tuple`` subclasses being cast to ``tuple`` when used as enum values. diff --git a/Misc/NEWS.d/next/Library/2022-12-10-08-36-07.gh-issue-100133.g-zQlp.rst b/Misc/NEWS.d/next/Library/2022-12-10-08-36-07.gh-issue-100133.g-zQlp.rst deleted file mode 100644 index 881e6ed80fed..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-10-08-36-07.gh-issue-100133.g-zQlp.rst +++ /dev/null @@ -1 +0,0 @@ -Fix regression in :mod:`asyncio` where a subprocess would sometimes lose data received from pipe. diff --git a/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst b/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst deleted file mode 100644 index 09ec96124953..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix a reference undercounting issue in :class:`ctypes.Structure` with ``from_param()`` -results larger than a C pointer. diff --git a/Misc/NEWS.d/next/Library/2022-12-19-20-54-04.gh-issue-78878.JrkYqJ.rst b/Misc/NEWS.d/next/Library/2022-12-19-20-54-04.gh-issue-78878.JrkYqJ.rst deleted file mode 100644 index 8b455fd2ef7f..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-19-20-54-04.gh-issue-78878.JrkYqJ.rst +++ /dev/null @@ -1 +0,0 @@ -Fix crash when creating an instance of :class:`!_ctypes.CField`. diff --git a/Misc/NEWS.d/next/Library/2022-12-19-23-19-26.gh-issue-96290.qFjsi6.rst b/Misc/NEWS.d/next/Library/2022-12-19-23-19-26.gh-issue-96290.qFjsi6.rst deleted file mode 100644 index 33f98602bd1b..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-19-23-19-26.gh-issue-96290.qFjsi6.rst +++ /dev/null @@ -1,5 +0,0 @@ -Fix handling of partial and invalid UNC drives in ``ntpath.splitdrive()``, and in -``ntpath.normpath()`` on non-Windows systems. 
Paths such as '\\server' and '\\' are now considered -by ``splitdrive()`` to contain only a drive, and consequently are not modified by ``normpath()`` on -non-Windows systems. The behaviour of ``normpath()`` on Windows systems is unaffected, as native -OS APIs are used. Patch by Eryk Sun, with contributions by Barney Gale. diff --git a/Misc/NEWS.d/next/Library/2022-12-21-18-29-24.gh-issue-100160.isBmL5.rst b/Misc/NEWS.d/next/Library/2022-12-21-18-29-24.gh-issue-100160.isBmL5.rst deleted file mode 100644 index c3b518ca85d7..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-21-18-29-24.gh-issue-100160.isBmL5.rst +++ /dev/null @@ -1,2 +0,0 @@ -Remove any deprecation warnings in :func:`asyncio.get_event_loop`. They are -deferred to Python 3.12. diff --git a/Misc/NEWS.d/next/Library/2022-12-23-21-02-43.gh-issue-100474.gppA4U.rst b/Misc/NEWS.d/next/Library/2022-12-23-21-02-43.gh-issue-100474.gppA4U.rst deleted file mode 100644 index 31abfb8b87fb..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-23-21-02-43.gh-issue-100474.gppA4U.rst +++ /dev/null @@ -1,2 +0,0 @@ -:mod:`http.server` now checks that an index page is actually a regular file before trying -to serve it. This avoids issues with directories named ``index.html``. diff --git a/Misc/NEWS.d/next/Library/2022-12-24-08-42-05.gh-issue-100287.n0oEuG.rst b/Misc/NEWS.d/next/Library/2022-12-24-08-42-05.gh-issue-100287.n0oEuG.rst deleted file mode 100644 index b353f0810c6a..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-24-08-42-05.gh-issue-100287.n0oEuG.rst +++ /dev/null @@ -1 +0,0 @@ -Fix the interaction of :func:`unittest.mock.seal` with :class:`unittest.mock.AsyncMock`. diff --git a/Misc/NEWS.d/next/Library/2022-12-30-07-49-08.gh-issue-86508.nGZDzC.rst b/Misc/NEWS.d/next/Library/2022-12-30-07-49-08.gh-issue-86508.nGZDzC.rst deleted file mode 100644 index aedad0f252c2..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-30-07-49-08.gh-issue-86508.nGZDzC.rst +++ /dev/null @@ -1 +0,0 @@ -Fix :func:`asyncio.open_connection` to skip binding to local addresses of different family. Patch by Kumar Aditya. diff --git a/Misc/NEWS.d/next/Library/2023-01-04-09-53-38.gh-issue-100740.-j5UjI.rst b/Misc/NEWS.d/next/Library/2023-01-04-09-53-38.gh-issue-100740.-j5UjI.rst deleted file mode 100644 index 4753e7b40825..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-04-09-53-38.gh-issue-100740.-j5UjI.rst +++ /dev/null @@ -1 +0,0 @@ -Fix ``unittest.mock.Mock`` not respecting the spec for attribute names prefixed with ``assert``. diff --git a/Misc/NEWS.d/next/Library/2023-01-04-12-58-59.gh-issue-100689.Ce0ITG.rst b/Misc/NEWS.d/next/Library/2023-01-04-12-58-59.gh-issue-100689.Ce0ITG.rst deleted file mode 100644 index 09aeab7bceaa..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-04-12-58-59.gh-issue-100689.Ce0ITG.rst +++ /dev/null @@ -1 +0,0 @@ -Fix crash in :mod:`pyexpat` by statically allocating ``PyExpat_CAPI`` capsule. 
diff --git a/Misc/NEWS.d/next/Library/2023-01-04-14-42-59.gh-issue-100750.iFJs5Y.rst b/Misc/NEWS.d/next/Library/2023-01-04-14-42-59.gh-issue-100750.iFJs5Y.rst deleted file mode 100644 index be351532822c..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-04-14-42-59.gh-issue-100750.iFJs5Y.rst +++ /dev/null @@ -1 +0,0 @@ -pass encoding kwarg to subprocess in platform diff --git a/Misc/NEWS.d/next/Library/2023-01-04-22-10-31.gh-issue-90104.yZk5EX.rst b/Misc/NEWS.d/next/Library/2023-01-04-22-10-31.gh-issue-90104.yZk5EX.rst deleted file mode 100644 index 6695c920ee24..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-04-22-10-31.gh-issue-90104.yZk5EX.rst +++ /dev/null @@ -1 +0,0 @@ -Avoid RecursionError on ``repr`` if a dataclass field definition has a cyclic reference. diff --git a/Misc/NEWS.d/next/Library/2023-01-07-15-13-47.gh-issue-100805.05rBz9.rst b/Misc/NEWS.d/next/Library/2023-01-07-15-13-47.gh-issue-100805.05rBz9.rst deleted file mode 100644 index 4424d7cff21b..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-07-15-13-47.gh-issue-100805.05rBz9.rst +++ /dev/null @@ -1,2 +0,0 @@ -Modify :func:`random.choice` implementation to once again work with NumPy -arrays. diff --git a/Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst b/Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst deleted file mode 100644 index 97b95d18d1e4..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst +++ /dev/null @@ -1 +0,0 @@ -Fix a Windows :mod:`asyncio` bug with named pipes where a client doing ``os.stat()`` on the pipe would cause an error in the server that disabled serving future requests. diff --git a/Misc/NEWS.d/next/Library/2023-01-14-12-58-21.gh-issue-101015.stWFid.rst b/Misc/NEWS.d/next/Library/2023-01-14-12-58-21.gh-issue-101015.stWFid.rst deleted file mode 100644 index b9d73ff98552..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-14-12-58-21.gh-issue-101015.stWFid.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix :func:`typing.get_type_hints` on ``'*tuple[...]'`` and ``*tuple[...]``. -It must not drop the ``Unpack`` part. diff --git a/Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst b/Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst deleted file mode 100644 index 297652259949..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst +++ /dev/null @@ -1,4 +0,0 @@ -Make :func:`zipfile.Path.open` and :func:`zipfile.Path.read_text` also accept -``encoding`` as a positional argument. This was the behavior in Python 3.9 and -earlier. 3.10 introduced a regression where supplying it as a positional -argument would lead to a :exc:`TypeError`. diff --git a/Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst b/Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst deleted file mode 100644 index d14b9e25a691..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst +++ /dev/null @@ -1,2 +0,0 @@ -Remove unused references to :class:`~asyncio.TimerHandle` in -``asyncio.base_events.BaseEventLoop._add_callback``. 
diff --git a/Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst b/Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst deleted file mode 100644 index 4cb56ea0f0e5..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst +++ /dev/null @@ -1,3 +0,0 @@ -Avoid potential unexpected ``freeaddrinfo`` call (double free) in :mod:`socket` -when when a libc ``getaddrinfo()`` implementation leaves garbage in an output -pointer when returning an error. Original patch by Sergey G. Brester. diff --git a/Misc/NEWS.d/next/Library/2023-01-25-18-07-20.gh-issue-101326.KL4SFv.rst b/Misc/NEWS.d/next/Library/2023-01-25-18-07-20.gh-issue-101326.KL4SFv.rst deleted file mode 100644 index 54b69b943091..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-25-18-07-20.gh-issue-101326.KL4SFv.rst +++ /dev/null @@ -1 +0,0 @@ -Fix regression when passing ``None`` as second or third argument to ``FutureIter.throw``. diff --git a/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst b/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst deleted file mode 100644 index 0f149e80dc54..000000000000 --- a/Misc/NEWS.d/next/Library/2023-02-05-14-39-49.gh-issue-101541.Mo3ppp.rst +++ /dev/null @@ -1 +0,0 @@ -[Enum] - fix psuedo-flag creation diff --git a/Misc/NEWS.d/next/Tests/2022-08-22-15-49-14.gh-issue-96002.4UE9UE.rst b/Misc/NEWS.d/next/Tests/2022-08-22-15-49-14.gh-issue-96002.4UE9UE.rst deleted file mode 100644 index dc86e1d70f12..000000000000 --- a/Misc/NEWS.d/next/Tests/2022-08-22-15-49-14.gh-issue-96002.4UE9UE.rst +++ /dev/null @@ -1 +0,0 @@ -Add functional test for Argument Clinic. diff --git a/Misc/NEWS.d/next/Tests/2022-12-23-13-29-55.gh-issue-100454.3no0cW.rst b/Misc/NEWS.d/next/Tests/2022-12-23-13-29-55.gh-issue-100454.3no0cW.rst deleted file mode 100644 index 8b08ca0dcef7..000000000000 --- a/Misc/NEWS.d/next/Tests/2022-12-23-13-29-55.gh-issue-100454.3no0cW.rst +++ /dev/null @@ -1 +0,0 @@ -Start running SSL tests with OpenSSL 3.1.0-beta1. diff --git a/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst b/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst deleted file mode 100644 index 2a95fd9ae53c..000000000000 --- a/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst +++ /dev/null @@ -1 +0,0 @@ -``test_tarfile`` has been updated to pass when run as a high UID. diff --git a/Misc/NEWS.d/next/Tools-Demos/2022-08-11-09-58-15.gh-issue-64490.PjwhM4.rst b/Misc/NEWS.d/next/Tools-Demos/2022-08-11-09-58-15.gh-issue-64490.PjwhM4.rst deleted file mode 100644 index 4a308a930605..000000000000 --- a/Misc/NEWS.d/next/Tools-Demos/2022-08-11-09-58-15.gh-issue-64490.PjwhM4.rst +++ /dev/null @@ -1,7 +0,0 @@ -Argument Clinic varargs bugfixes - -* Fix out-of-bounds error in :c:func:`!_PyArg_UnpackKeywordsWithVararg`. -* Fix incorrect check which allowed more than one varargs in clinic.py. -* Fix miscalculation of ``noptargs`` in generated code. -* Do not generate ``noptargs`` when there is a vararg argument and no optional argument. - diff --git a/Misc/NEWS.d/next/Tools-Demos/2022-12-19-10-08-53.gh-issue-100342.qDFlQG.rst b/Misc/NEWS.d/next/Tools-Demos/2022-12-19-10-08-53.gh-issue-100342.qDFlQG.rst deleted file mode 100644 index 28f736337526..000000000000 --- a/Misc/NEWS.d/next/Tools-Demos/2022-12-19-10-08-53.gh-issue-100342.qDFlQG.rst +++ /dev/null @@ -1 +0,0 @@ -Add missing ``NULL`` check for possible allocation failure in ``*args`` parsing in Argument Clinic. 
diff --git a/Misc/NEWS.d/next/Tools-Demos/2022-12-29-19-22-11.bpo-45256.a0ee_H.rst b/Misc/NEWS.d/next/Tools-Demos/2022-12-29-19-22-11.bpo-45256.a0ee_H.rst deleted file mode 100644 index 9c1aa5762583..000000000000 --- a/Misc/NEWS.d/next/Tools-Demos/2022-12-29-19-22-11.bpo-45256.a0ee_H.rst +++ /dev/null @@ -1 +0,0 @@ -Fix a bug that caused an :exc:`AttributeError` to be raised in ``python-gdb.py`` when ``py-locals`` is used without a frame. diff --git a/Misc/NEWS.d/next/Windows/2021-05-02-15-29-33.bpo-43984.U92jiv.rst b/Misc/NEWS.d/next/Windows/2021-05-02-15-29-33.bpo-43984.U92jiv.rst deleted file mode 100644 index a5975b2d00c7..000000000000 --- a/Misc/NEWS.d/next/Windows/2021-05-02-15-29-33.bpo-43984.U92jiv.rst +++ /dev/null @@ -1,3 +0,0 @@ -:meth:`winreg.SetValueEx` now leaves the target value untouched in the case of conversion errors. -Previously, ``-1`` would be written in case of such errors. - diff --git a/Misc/NEWS.d/next/Windows/2023-01-09-23-03-57.gh-issue-100180.b5phrg.rst b/Misc/NEWS.d/next/Windows/2023-01-09-23-03-57.gh-issue-100180.b5phrg.rst deleted file mode 100644 index 5b0f42568d92..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-09-23-03-57.gh-issue-100180.b5phrg.rst +++ /dev/null @@ -1 +0,0 @@ -Update Windows installer to OpenSSL 1.1.1s diff --git a/Misc/NEWS.d/next/Windows/2023-01-11-14-42-11.gh-issue-100247.YfEmSz.rst b/Misc/NEWS.d/next/Windows/2023-01-11-14-42-11.gh-issue-100247.YfEmSz.rst deleted file mode 100644 index 7bfcbd7ddecf..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-11-14-42-11.gh-issue-100247.YfEmSz.rst +++ /dev/null @@ -1,2 +0,0 @@ -Restores support for the :file:`py.exe` launcher finding shebang commands in -its configuration file using the full command name. diff --git a/Misc/NEWS.d/next/Windows/2023-01-11-16-28-09.gh-issue-100320.2DU2it.rst b/Misc/NEWS.d/next/Windows/2023-01-11-16-28-09.gh-issue-100320.2DU2it.rst deleted file mode 100644 index c206fc8520a5..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-11-16-28-09.gh-issue-100320.2DU2it.rst +++ /dev/null @@ -1,3 +0,0 @@ -Ensures the ``PythonPath`` registry key from an install is used when -launching from a different copy of Python that relies on an existing install -to provide a copy of its modules and standard library. diff --git a/Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst b/Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst deleted file mode 100644 index 4f7ab200b85c..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst +++ /dev/null @@ -1 +0,0 @@ -Fixed an issue where writing more than 32K of Unicode output to the console screen in one go can result in mojibake. diff --git a/Misc/NEWS.d/next/Windows/2023-01-18-18-25-18.gh-issue-101135.HF9VlG.rst b/Misc/NEWS.d/next/Windows/2023-01-18-18-25-18.gh-issue-101135.HF9VlG.rst deleted file mode 100644 index 2e6d6371340d..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-18-18-25-18.gh-issue-101135.HF9VlG.rst +++ /dev/null @@ -1,3 +0,0 @@ -Restore ability to launch older 32-bit versions from the :file:`py.exe` -launcher when both 32-bit and 64-bit installs of the same version are -available. 
diff --git a/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst b/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst deleted file mode 100644 index 4d4da05afa2d..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-31-16-50-07.gh-issue-101467.ye9t-L.rst +++ /dev/null @@ -1,3 +0,0 @@ -The ``py.exe`` launcher now correctly filters when only a single runtime is -installed. It also correctly handles prefix matches on tags so that ``-3.1`` -does not match ``3.11``, but would still match ``3.1-32``. diff --git a/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst b/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst deleted file mode 100644 index d4e2c6f23013..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-02-03-17-53-06.gh-issue-101543.cORAT4.rst +++ /dev/null @@ -1,2 +0,0 @@ -Ensure the install path in the registry is only used when the standard -library hasn't been located in any other way. diff --git a/Misc/NEWS.d/next/macOS/2023-01-09-22-04-21.gh-issue-100180.WVhCny.rst b/Misc/NEWS.d/next/macOS/2023-01-09-22-04-21.gh-issue-100180.WVhCny.rst deleted file mode 100644 index b7d94a0a94a0..000000000000 --- a/Misc/NEWS.d/next/macOS/2023-01-09-22-04-21.gh-issue-100180.WVhCny.rst +++ /dev/null @@ -1 +0,0 @@ -Update macOS installer to OpenSSL 1.1.1s diff --git a/README.rst b/README.rst index 193fc9c3de8e..2cd94716f756 100644 --- a/README.rst +++ b/README.rst @@ -1,4 +1,4 @@ -This is Python version 3.11.1 +This is Python version 3.11.2 ============================= .. image:: https://github.com/python/cpython/workflows/Tests/badge.svg From webhook-mailer at python.org Wed Feb 8 04:59:49 2023 From: webhook-mailer at python.org (pablogsal) Date: Wed, 08 Feb 2023 09:59:49 -0000 Subject: [Python-checkins] Python 3.10.10 Message-ID: https://github.com/python/cpython/commit/aad5f6a89125ad4529ab6ed5add693064351df1a commit: aad5f6a89125ad4529ab6ed5add693064351df1a branch: 3.10 author: Pablo Galindo committer: pablogsal date: 2023-02-07T12:05:45Z summary: Python 3.10.10 files: A Misc/NEWS.d/3.10.10.rst D Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst D Misc/NEWS.d/next/C API/2022-11-30-16-39-22.gh-issue-99240.67nAX-.rst D Misc/NEWS.d/next/Core and Builtins/2018-02-06-23-21-13.bpo-32782.EJVSfR.rst D Misc/NEWS.d/next/Core and Builtins/2022-12-06-22-24-01.gh-issue-100050.lcrPqQ.rst D Misc/NEWS.d/next/Core and Builtins/2022-12-20-16-14-19.gh-issue-100374.YRrVHT.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-06-02-02-11.gh-issue-100776.pP8xux.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst D Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst D Misc/NEWS.d/next/Documentation/2022-12-23-21-42-26.gh-issue-100472.NNixfO.rst D Misc/NEWS.d/next/Documentation/2022-12-30-00-42-23.gh-issue-100616.eu80ij.rst D Misc/NEWS.d/next/Library/2019-05-13-11-37-30.bpo-36880.ZgBgH0.rst D Misc/NEWS.d/next/Library/2020-05-03-12-55-55.bpo-40447.oKR0Lj.rst D Misc/NEWS.d/next/Library/2021-08-03-05-31-00.bpo-44817.wOW_Qn.rst D Misc/NEWS.d/next/Library/2022-08-23-03-13-18.gh-issue-96192.TJywOF.rst D Misc/NEWS.d/next/Library/2022-10-28-07-24-34.gh-issue-85267.xUy_Wm.rst D 
Misc/NEWS.d/next/Library/2022-11-08-15-54-43.gh-issue-99240.MhYwcz.rst D Misc/NEWS.d/next/Library/2022-11-21-16-24-01.gh-issue-83035.qZIujU.rst D Misc/NEWS.d/next/Library/2022-12-03-20-06-16.gh-issue-98778.t5U9uc.rst D Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst D Misc/NEWS.d/next/Library/2022-12-21-18-29-24.gh-issue-100160.isBmL5.rst D Misc/NEWS.d/next/Library/2022-12-23-21-02-43.gh-issue-100474.gppA4U.rst D Misc/NEWS.d/next/Library/2022-12-24-08-42-05.gh-issue-100287.n0oEuG.rst D Misc/NEWS.d/next/Library/2022-12-30-07-49-08.gh-issue-86508.nGZDzC.rst D Misc/NEWS.d/next/Library/2023-01-04-09-53-38.gh-issue-100740.-j5UjI.rst D Misc/NEWS.d/next/Library/2023-01-04-12-58-59.gh-issue-100689.Ce0ITG.rst D Misc/NEWS.d/next/Library/2023-01-04-22-10-31.gh-issue-90104.yZk5EX.rst D Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst D Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst D Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst D Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst D Misc/NEWS.d/next/Tests/2022-08-22-15-49-14.gh-issue-96002.4UE9UE.rst D Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst D Misc/NEWS.d/next/Windows/2021-05-02-15-29-33.bpo-43984.U92jiv.rst D Misc/NEWS.d/next/Windows/2023-01-09-23-03-57.gh-issue-100180.b5phrg.rst D Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst D Misc/NEWS.d/next/macOS/2023-01-09-22-04-21.gh-issue-100180.WVhCny.rst M Include/patchlevel.h M Lib/pydoc_data/topics.py M README.rst diff --git a/Include/patchlevel.h b/Include/patchlevel.h index 26cefeab5401..db881ff68fa0 100644 --- a/Include/patchlevel.h +++ b/Include/patchlevel.h @@ -18,12 +18,12 @@ /*--start constants--*/ #define PY_MAJOR_VERSION 3 #define PY_MINOR_VERSION 10 -#define PY_MICRO_VERSION 9 +#define PY_MICRO_VERSION 10 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "3.10.9+" +#define PY_VERSION "3.10.10" /*--end constants--*/ /* Version as a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2. diff --git a/Lib/pydoc_data/topics.py b/Lib/pydoc_data/topics.py index e8bedaae60be..1a888e4fb241 100644 --- a/Lib/pydoc_data/topics.py +++ b/Lib/pydoc_data/topics.py @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- -# Autogenerated by Sphinx on Tue Dec 6 18:31:02 2022 +# Autogenerated by Sphinx on Tue Feb 7 12:04:02 2023 topics = {'assert': 'The "assert" statement\n' '**********************\n' '\n' @@ -2391,12 +2391,10 @@ 'finished,\n' 'but if the sequence is empty, they will not have been assigned ' 'to at\n' - 'all by the loop. Hint: the built-in function "range()" returns ' - 'an\n' - 'iterator of integers suitable to emulate the effect of Pascal?s ' - '"for i\n' - ':= a to b do"; e.g., "list(range(3))" returns the list "[0, 1, ' - '2]".\n' + 'all by the loop. Hint: the built-in type "range()" represents\n' + 'immutable arithmetic sequences of integers. For instance, ' + 'iterating\n' + '"range(3)" successively yields 0, 1, and then 2.\n' '\n' '\n' 'The "try" statement\n' @@ -2644,7 +2642,7 @@ 'the\n' ' target list, it will be treated the same as an error ' 'occurring\n' - ' within the suite would be. See step 6 below.\n' + ' within the suite would be. See step 7 below.\n' '\n' '6. The suite is executed.\n' '\n' @@ -4567,6 +4565,18 @@ 'the source. 
The extension interface uses the modules "bdb" and ' '"cmd".\n' '\n' + 'See also:\n' + '\n' + ' Module "faulthandler"\n' + ' Used to dump Python tracebacks explicitly, on a fault, ' + 'after a\n' + ' timeout, or on a user signal.\n' + '\n' + ' Module "traceback"\n' + ' Standard interface to extract, format and print stack ' + 'traces of\n' + ' Python programs.\n' + '\n' 'The debugger?s prompt is "(Pdb)". Typical usage to run a program ' 'under\n' 'control of the debugger is:\n' @@ -5580,7 +5590,8 @@ 'be\n' 'determined by scanning the entire text of the block for name ' 'binding\n' - 'operations.\n' + 'operations. See the FAQ entry on UnboundLocalError for ' + 'examples.\n' '\n' 'If the "global" statement occurs within a block, all uses of ' 'the names\n' @@ -5888,10 +5899,9 @@ '\n' 'Names in the target list are not deleted when the loop is finished,\n' 'but if the sequence is empty, they will not have been assigned to at\n' - 'all by the loop. Hint: the built-in function "range()" returns an\n' - 'iterator of integers suitable to emulate the effect of Pascal?s "for ' - 'i\n' - ':= a to b do"; e.g., "list(range(3))" returns the list "[0, 1, 2]".\n', + 'all by the loop. Hint: the built-in type "range()" represents\n' + 'immutable arithmetic sequences of integers. For instance, iterating\n' + '"range(3)" successively yields 0, 1, and then 2.\n', 'formatstrings': 'Format String Syntax\n' '********************\n' '\n' @@ -7687,7 +7697,7 @@ 'within a code block. The local variables of a code block can be\n' 'determined by scanning the entire text of the block for name ' 'binding\n' - 'operations.\n' + 'operations. See the FAQ entry on UnboundLocalError for examples.\n' '\n' 'If the "global" statement occurs within a block, all uses of the ' 'names\n' @@ -11213,35 +11223,35 @@ '\n' "str.encode(encoding='utf-8', errors='strict')\n" '\n' - ' Return an encoded version of the string as a bytes ' - 'object. Default\n' - ' encoding is "\'utf-8\'". *errors* may be given to set a ' - 'different\n' - ' error handling scheme. The default for *errors* is ' - '"\'strict\'",\n' - ' meaning that encoding errors raise a "UnicodeError". ' + ' Return the string encoded to "bytes".\n' + '\n' + ' *encoding* defaults to "\'utf-8\'"; see Standard ' + 'Encodings for\n' + ' possible values.\n' + '\n' + ' *errors* controls how encoding errors are handled. If ' + '"\'strict\'"\n' + ' (the default), a "UnicodeError" exception is raised. ' 'Other possible\n' ' values are "\'ignore\'", "\'replace\'", ' '"\'xmlcharrefreplace\'",\n' ' "\'backslashreplace\'" and any other name registered ' 'via\n' - ' "codecs.register_error()", see section Error Handlers. ' - 'For a list\n' - ' of possible encodings, see section Standard Encodings.\n' + ' "codecs.register_error()". See Error Handlers for ' + 'details.\n' '\n' - ' By default, the *errors* argument is not checked for ' - 'best\n' - ' performances, but only used at the first encoding ' - 'error. 
Enable the\n' - ' Python Development Mode, or use a debug build to check ' - '*errors*.\n' + ' For performance reasons, the value of *errors* is not ' + 'checked for\n' + ' validity unless an encoding error actually occurs, ' + 'Python\n' + ' Development Mode is enabled or a debug build is used.\n' '\n' - ' Changed in version 3.1: Support for keyword arguments ' - 'added.\n' + ' Changed in version 3.1: Added support for keyword ' + 'arguments.\n' '\n' - ' Changed in version 3.9: The *errors* is now checked in ' - 'development\n' - ' mode and in debug mode.\n' + ' Changed in version 3.9: The value of the *errors* ' + 'argument is now\n' + ' checked in Python Development Mode and in debug mode.\n' '\n' 'str.endswith(suffix[, start[, end]])\n' '\n' @@ -14199,8 +14209,7 @@ ' >>> # get back a read-only proxy for the original ' 'dictionary\n' ' >>> values.mapping\n' - " mappingproxy({'eggs': 2, 'sausage': 1, 'bacon': 1, " - "'spam': 500})\n" + " mappingproxy({'bacon': 1, 'spam': 500})\n" " >>> values.mapping['spam']\n" ' 500\n', 'typesmethods': 'Methods\n' @@ -15246,7 +15255,7 @@ ' returns without an error, then "__exit__()" will always be\n' ' called. Thus, if an error occurs during the assignment to the\n' ' target list, it will be treated the same as an error occurring\n' - ' within the suite would be. See step 6 below.\n' + ' within the suite would be. See step 7 below.\n' '\n' '6. The suite is executed.\n' '\n' diff --git a/Misc/NEWS.d/3.10.10.rst b/Misc/NEWS.d/3.10.10.rst new file mode 100644 index 000000000000..9ae55fa09613 --- /dev/null +++ b/Misc/NEWS.d/3.10.10.rst @@ -0,0 +1,400 @@ +.. date: 2023-01-30-08-59-47 +.. gh-issue: 101400 +.. nonce: Di_ZFm +.. release date: 2023-02-07 +.. section: Core and Builtins + +Fix wrong lineno in exception message on :keyword:`continue` or +:keyword:`break` which are not in a loop. Patch by Dong-hee Na. + +.. + +.. date: 2023-01-28-20-31-42 +.. gh-issue: 101372 +.. nonce: 8BcpCC +.. section: Core and Builtins + +Fix :func:`~unicodedata.is_normalized` to properly handle the UCD 3.2.0 +cases. Patch by Dong-hee Na. + +.. + +.. date: 2023-01-15-03-26-04 +.. gh-issue: 101046 +.. nonce: g2CM4S +.. section: Core and Builtins + +Fix a possible memory leak in the parser when raising :exc:`MemoryError`. +Patch by Pablo Galindo + +.. + +.. date: 2023-01-11-22-52-19 +.. gh-issue: 100942 +.. nonce: ontOy_ +.. section: Core and Builtins + +Fixed segfault in property.getter/setter/deleter that occurred when a +property subclass overrode the ``__new__`` method to return a non-property +instance. + +.. + +.. date: 2023-01-10-14-11-17 +.. gh-issue: 100892 +.. nonce: qfBVYI +.. section: Core and Builtins + +Fix race while iterating over thread states in clearing +:class:`threading.local`. Patch by Kumar Aditya. + +.. + +.. date: 2023-01-06-02-02-11 +.. gh-issue: 100776 +.. nonce: pP8xux +.. section: Core and Builtins + +Fix misleading default value in :func:`input`'s ``__text_signature__``. + +.. + +.. date: 2022-12-20-16-14-19 +.. gh-issue: 100374 +.. nonce: YRrVHT +.. section: Core and Builtins + +Fix incorrect result and delay in :func:`socket.getfqdn`. Patch by Dominic +Socular. + +.. + +.. date: 2022-12-06-22-24-01 +.. gh-issue: 100050 +.. nonce: lcrPqQ +.. section: Core and Builtins + +Honor existing errors obtained when searching for mismatching parentheses in +the tokenizer. Patch by Pablo Galindo + +.. + +.. bpo: 32782 +.. date: 2018-02-06-23-21-13 +.. nonce: EJVSfR +.. 
section: Core and Builtins + +``ctypes`` arrays of length 0 now report a correct itemsize when a +``memoryview`` is constructed from them, rather than always giving a value +of 0. + +.. + +.. date: 2023-01-21-16-50-22 +.. gh-issue: 100795 +.. nonce: NPMZf7 +.. section: Library + +Avoid potential unexpected ``freeaddrinfo`` call (double free) in +:mod:`socket` when when a libc ``getaddrinfo()`` implementation leaves +garbage in an output pointer when returning an error. Original patch by +Sergey G. Brester. + +.. + +.. date: 2023-01-20-10-46-59 +.. gh-issue: 101143 +.. nonce: hJo8hu +.. section: Library + +Remove unused references to :class:`~asyncio.TimerHandle` in +``asyncio.base_events.BaseEventLoop._add_callback``. + +.. + +.. date: 2023-01-18-17-58-50 +.. gh-issue: 101144 +.. nonce: FHd8Un +.. section: Library + +Make :func:`zipfile.Path.open` and :func:`zipfile.Path.read_text` also +accept ``encoding`` as a positional argument. This was the behavior in +Python 3.9 and earlier. Earlier 3.10 versions had a regression where +supplying it as a positional argument would lead to a :exc:`TypeError`. + +.. + +.. date: 2023-01-12-01-18-13 +.. gh-issue: 100573 +.. nonce: KDskqo +.. section: Library + +Fix a Windows :mod:`asyncio` bug with named pipes where a client doing +``os.stat()`` on the pipe would cause an error in the server that disabled +serving future requests. + +.. + +.. date: 2023-01-04-22-10-31 +.. gh-issue: 90104 +.. nonce: yZk5EX +.. section: Library + +Avoid RecursionError on ``repr`` if a dataclass field definition has a +cyclic reference. + +.. + +.. date: 2023-01-04-12-58-59 +.. gh-issue: 100689 +.. nonce: Ce0ITG +.. section: Library + +Fix crash in :mod:`pyexpat` by statically allocating ``PyExpat_CAPI`` +capsule. + +.. + +.. date: 2023-01-04-09-53-38 +.. gh-issue: 100740 +.. nonce: -j5UjI +.. section: Library + +Fix ``unittest.mock.Mock`` not respecting the spec for attribute names +prefixed with ``assert``. + +.. + +.. date: 2022-12-30-07-49-08 +.. gh-issue: 86508 +.. nonce: nGZDzC +.. section: Library + +Fix :func:`asyncio.open_connection` to skip binding to local addresses of +different family. Patch by Kumar Aditya. + +.. + +.. date: 2022-12-24-08-42-05 +.. gh-issue: 100287 +.. nonce: n0oEuG +.. section: Library + +Fix the interaction of :func:`unittest.mock.seal` with +:class:`unittest.mock.AsyncMock`. + +.. + +.. date: 2022-12-23-21-02-43 +.. gh-issue: 100474 +.. nonce: gppA4U +.. section: Library + +:mod:`http.server` now checks that an index page is actually a regular file +before trying to serve it. This avoids issues with directories named +``index.html``. + +.. + +.. date: 2022-12-21-18-29-24 +.. gh-issue: 100160 +.. nonce: isBmL5 +.. section: Library + +Remove any deprecation warnings in :func:`asyncio.get_event_loop`. They are +deferred to Python 3.12. + +.. + +.. date: 2022-12-11-14-38-59 +.. gh-issue: 99952 +.. nonce: IYGLzr +.. section: Library + +Fix a reference undercounting issue in :class:`ctypes.Structure` with +``from_param()`` results larger than a C pointer. + +.. + +.. date: 2022-12-03-20-06-16 +.. gh-issue: 98778 +.. nonce: t5U9uc +.. section: Library + +Update :exc:`~urllib.error.HTTPError` to be initialized properly, even if +the ``fp`` is ``None``. Patch by Dong-hee Na. + +.. + +.. date: 2022-11-21-16-24-01 +.. gh-issue: 83035 +.. nonce: qZIujU +.. section: Library + +Fix :func:`inspect.getsource` handling of decorator calls with nested +parentheses. + +.. + +.. date: 2022-11-08-15-54-43 +.. gh-issue: 99240 +.. nonce: MhYwcz +.. 
section: Library + +Fix double-free bug in Argument Clinic ``str_converter`` by extracting +memory clean up to a new ``post_parsing`` section. + +.. + +.. date: 2022-10-28-07-24-34 +.. gh-issue: 85267 +.. nonce: xUy_Wm +.. section: Library + +Several improvements to :func:`inspect.signature`'s handling of +``__text_signature``. - Fixes a case where :func:`inspect.signature` dropped +parameters - Fixes a case where :func:`inspect.signature` raised +:exc:`tokenize.TokenError` - Allows :func:`inspect.signature` to understand +defaults involving binary operations of constants - +:func:`inspect.signature` is documented as only raising :exc:`TypeError` or +:exc:`ValueError`, but sometimes raised :exc:`RuntimeError`. These cases now +raise :exc:`ValueError` - Removed a dead code path + +.. + +.. date: 2022-08-23-03-13-18 +.. gh-issue: 96192 +.. nonce: TJywOF +.. section: Library + +Fix handling of ``bytes`` :term:`path-like objects ` in +:func:`os.ismount()`. + +.. + +.. bpo: 44817 +.. date: 2021-08-03-05-31-00 +.. nonce: wOW_Qn +.. section: Library + +Ignore WinError 53 (ERROR_BAD_NETPATH), 65 (ERROR_NETWORK_ACCESS_DENIED) and +161 (ERROR_BAD_PATHNAME) when using ntpath.realpath(). + +.. + +.. bpo: 40447 +.. date: 2020-05-03-12-55-55 +.. nonce: oKR0Lj +.. section: Library + +Accept :class:`os.PathLike` (such as :class:`pathlib.Path`) in the +``stripdir`` arguments of :meth:`compileall.compile_file` and +:meth:`compileall.compile_dir`. + +.. + +.. bpo: 36880 +.. date: 2019-05-13-11-37-30 +.. nonce: ZgBgH0 +.. section: Library + +Fix a reference counting issue when a :mod:`ctypes` callback with return +type :class:`~ctypes.py_object` returns ``None``, which could cause crashes. + +.. + +.. date: 2022-12-30-00-42-23 +.. gh-issue: 100616 +.. nonce: eu80ij +.. section: Documentation + +Document existing ``attr`` parameter to :func:`curses.window.vline` function +in :mod:`curses`. + +.. + +.. date: 2022-12-23-21-42-26 +.. gh-issue: 100472 +.. nonce: NNixfO +.. section: Documentation + +Remove claim in documentation that the ``stripdir``, ``prependdir`` and +``limit_sl_dest`` parameters of :func:`compileall.compile_dir` and +:func:`compileall.compile_file` could be :class:`bytes`. + +.. + +.. date: 2023-02-04-17-24-33 +.. gh-issue: 101334 +.. nonce: _yOqwg +.. section: Tests + +``test_tarfile`` has been updated to pass when run as a high UID. + +.. + +.. date: 2022-08-22-15-49-14 +.. gh-issue: 96002 +.. nonce: 4UE9UE +.. section: Tests + +Add functional test for Argument Clinic. + +.. + +.. date: 2023-02-02-23-43-46 +.. gh-issue: 101522 +.. nonce: lnUDta +.. section: Build + +Allow overriding Windows dependencies versions and paths using MSBuild +properties. + +.. + +.. date: 2023-01-17-18-17-58 +.. gh-issue: 82052 +.. nonce: mWyysT +.. section: Windows + +Fixed an issue where writing more than 32K of Unicode output to the console +screen in one go can result in mojibake. + +.. + +.. date: 2023-01-09-23-03-57 +.. gh-issue: 100180 +.. nonce: b5phrg +.. section: Windows + +Update Windows installer to OpenSSL 1.1.1s + +.. + +.. bpo: 43984 +.. date: 2021-05-02-15-29-33 +.. nonce: U92jiv +.. section: Windows + +:meth:`winreg.SetValueEx` now leaves the target value untouched in the case +of conversion errors. Previously, ``-1`` would be written in case of such +errors. + +.. + +.. date: 2023-01-09-22-04-21 +.. gh-issue: 100180 +.. nonce: WVhCny +.. section: macOS + +Update macOS installer to OpenSSL 1.1.1s + +.. + +.. date: 2022-11-30-16-39-22 +.. gh-issue: 99240 +.. nonce: 67nAX- +.. 
section: C API + +In argument parsing, after deallocating newly allocated memory, reset its +pointer to NULL. diff --git a/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst b/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst deleted file mode 100644 index 2e7f9029e9ee..000000000000 --- a/Misc/NEWS.d/next/Build/2023-02-02-23-43-46.gh-issue-101522.lnUDta.rst +++ /dev/null @@ -1,2 +0,0 @@ -Allow overriding Windows dependencies versions and paths using MSBuild -properties. diff --git a/Misc/NEWS.d/next/C API/2022-11-30-16-39-22.gh-issue-99240.67nAX-.rst b/Misc/NEWS.d/next/C API/2022-11-30-16-39-22.gh-issue-99240.67nAX-.rst deleted file mode 100644 index 9a1bb3cf5a73..000000000000 --- a/Misc/NEWS.d/next/C API/2022-11-30-16-39-22.gh-issue-99240.67nAX-.rst +++ /dev/null @@ -1,2 +0,0 @@ -In argument parsing, after deallocating newly allocated memory, reset its -pointer to NULL. diff --git a/Misc/NEWS.d/next/Core and Builtins/2018-02-06-23-21-13.bpo-32782.EJVSfR.rst b/Misc/NEWS.d/next/Core and Builtins/2018-02-06-23-21-13.bpo-32782.EJVSfR.rst deleted file mode 100644 index 841740130bd1..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2018-02-06-23-21-13.bpo-32782.EJVSfR.rst +++ /dev/null @@ -1,3 +0,0 @@ -``ctypes`` arrays of length 0 now report a correct itemsize when a -``memoryview`` is constructed from them, rather than always giving a value -of 0. diff --git a/Misc/NEWS.d/next/Core and Builtins/2022-12-06-22-24-01.gh-issue-100050.lcrPqQ.rst b/Misc/NEWS.d/next/Core and Builtins/2022-12-06-22-24-01.gh-issue-100050.lcrPqQ.rst deleted file mode 100644 index 8e7c72d804f8..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2022-12-06-22-24-01.gh-issue-100050.lcrPqQ.rst +++ /dev/null @@ -1,2 +0,0 @@ -Honor existing errors obtained when searching for mismatching parentheses in -the tokenizer. Patch by Pablo Galindo diff --git a/Misc/NEWS.d/next/Core and Builtins/2022-12-20-16-14-19.gh-issue-100374.YRrVHT.rst b/Misc/NEWS.d/next/Core and Builtins/2022-12-20-16-14-19.gh-issue-100374.YRrVHT.rst deleted file mode 100644 index e78352fb188e..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2022-12-20-16-14-19.gh-issue-100374.YRrVHT.rst +++ /dev/null @@ -1 +0,0 @@ -Fix incorrect result and delay in :func:`socket.getfqdn`. Patch by Dominic Socular. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-06-02-02-11.gh-issue-100776.pP8xux.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-06-02-02-11.gh-issue-100776.pP8xux.rst deleted file mode 100644 index b94121ea5f29..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-06-02-02-11.gh-issue-100776.pP8xux.rst +++ /dev/null @@ -1 +0,0 @@ -Fix misleading default value in :func:`input`'s ``__text_signature__``. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst deleted file mode 100644 index f2576becc2fc..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-10-14-11-17.gh-issue-100892.qfBVYI.rst +++ /dev/null @@ -1 +0,0 @@ -Fix race while iterating over thread states in clearing :class:`threading.local`. Patch by Kumar Aditya. 
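A minimal illustrative sketch of the zero-length ``ctypes`` array behaviour described in the bpo-32782 entry above; the element type is just an example::

    import ctypes

    arr = (ctypes.c_int * 0)()  # zero-length array, example element type
    mv = memoryview(arr)
    print(mv.itemsize == ctypes.sizeof(ctypes.c_int))  # now True; itemsize is no longer reported as 0
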
diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst deleted file mode 100644 index daccea255b16..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-11-22-52-19.gh-issue-100942.ontOy_.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fixed segfault in property.getter/setter/deleter that occurred when a property -subclass overrode the ``__new__`` method to return a non-property instance. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst deleted file mode 100644 index f600473620f1..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-15-03-26-04.gh-issue-101046.g2CM4S.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix a possible memory leak in the parser when raising :exc:`MemoryError`. -Patch by Pablo Galindo diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst deleted file mode 100644 index 65a207e3f7e4..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-28-20-31-42.gh-issue-101372.8BcpCC.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix :func:`~unicodedata.is_normalized` to properly handle the UCD 3.2.0 -cases. Patch by Dong-hee Na. diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst deleted file mode 100644 index f3dd783c01e7..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2023-01-30-08-59-47.gh-issue-101400.Di_ZFm.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix wrong lineno in exception message on :keyword:`continue` or -:keyword:`break` which are not in a loop. Patch by Dong-hee Na. diff --git a/Misc/NEWS.d/next/Documentation/2022-12-23-21-42-26.gh-issue-100472.NNixfO.rst b/Misc/NEWS.d/next/Documentation/2022-12-23-21-42-26.gh-issue-100472.NNixfO.rst deleted file mode 100644 index 4f4162150750..000000000000 --- a/Misc/NEWS.d/next/Documentation/2022-12-23-21-42-26.gh-issue-100472.NNixfO.rst +++ /dev/null @@ -1 +0,0 @@ -Remove claim in documentation that the ``stripdir``, ``prependdir`` and ``limit_sl_dest`` parameters of :func:`compileall.compile_dir` and :func:`compileall.compile_file` could be :class:`bytes`. diff --git a/Misc/NEWS.d/next/Documentation/2022-12-30-00-42-23.gh-issue-100616.eu80ij.rst b/Misc/NEWS.d/next/Documentation/2022-12-30-00-42-23.gh-issue-100616.eu80ij.rst deleted file mode 100644 index 97a7022c94b4..000000000000 --- a/Misc/NEWS.d/next/Documentation/2022-12-30-00-42-23.gh-issue-100616.eu80ij.rst +++ /dev/null @@ -1,2 +0,0 @@ -Document existing ``attr`` parameter to :func:`curses.window.vline` function -in :mod:`curses`. diff --git a/Misc/NEWS.d/next/Library/2019-05-13-11-37-30.bpo-36880.ZgBgH0.rst b/Misc/NEWS.d/next/Library/2019-05-13-11-37-30.bpo-36880.ZgBgH0.rst deleted file mode 100644 index f65323813d05..000000000000 --- a/Misc/NEWS.d/next/Library/2019-05-13-11-37-30.bpo-36880.ZgBgH0.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix a reference counting issue when a :mod:`ctypes` callback with return -type :class:`~ctypes.py_object` returns ``None``, which could cause crashes. 
diff --git a/Misc/NEWS.d/next/Library/2020-05-03-12-55-55.bpo-40447.oKR0Lj.rst b/Misc/NEWS.d/next/Library/2020-05-03-12-55-55.bpo-40447.oKR0Lj.rst deleted file mode 100644 index 941038d095b3..000000000000 --- a/Misc/NEWS.d/next/Library/2020-05-03-12-55-55.bpo-40447.oKR0Lj.rst +++ /dev/null @@ -1,2 +0,0 @@ -Accept :class:`os.PathLike` (such as :class:`pathlib.Path`) in the ``stripdir`` arguments of -:meth:`compileall.compile_file` and :meth:`compileall.compile_dir`. diff --git a/Misc/NEWS.d/next/Library/2021-08-03-05-31-00.bpo-44817.wOW_Qn.rst b/Misc/NEWS.d/next/Library/2021-08-03-05-31-00.bpo-44817.wOW_Qn.rst deleted file mode 100644 index 79f8c506b54f..000000000000 --- a/Misc/NEWS.d/next/Library/2021-08-03-05-31-00.bpo-44817.wOW_Qn.rst +++ /dev/null @@ -1,2 +0,0 @@ -Ignore WinError 53 (ERROR_BAD_NETPATH), 65 (ERROR_NETWORK_ACCESS_DENIED) -and 161 (ERROR_BAD_PATHNAME) when using ntpath.realpath(). diff --git a/Misc/NEWS.d/next/Library/2022-08-23-03-13-18.gh-issue-96192.TJywOF.rst b/Misc/NEWS.d/next/Library/2022-08-23-03-13-18.gh-issue-96192.TJywOF.rst deleted file mode 100644 index 58e51da6ba39..000000000000 --- a/Misc/NEWS.d/next/Library/2022-08-23-03-13-18.gh-issue-96192.TJywOF.rst +++ /dev/null @@ -1 +0,0 @@ -Fix handling of ``bytes`` :term:`path-like objects ` in :func:`os.ismount()`. diff --git a/Misc/NEWS.d/next/Library/2022-10-28-07-24-34.gh-issue-85267.xUy_Wm.rst b/Misc/NEWS.d/next/Library/2022-10-28-07-24-34.gh-issue-85267.xUy_Wm.rst deleted file mode 100644 index e69fd1ca1c2f..000000000000 --- a/Misc/NEWS.d/next/Library/2022-10-28-07-24-34.gh-issue-85267.xUy_Wm.rst +++ /dev/null @@ -1,6 +0,0 @@ -Several improvements to :func:`inspect.signature`'s handling of ``__text_signature``. -- Fixes a case where :func:`inspect.signature` dropped parameters -- Fixes a case where :func:`inspect.signature` raised :exc:`tokenize.TokenError` -- Allows :func:`inspect.signature` to understand defaults involving binary operations of constants -- :func:`inspect.signature` is documented as only raising :exc:`TypeError` or :exc:`ValueError`, but sometimes raised :exc:`RuntimeError`. These cases now raise :exc:`ValueError` -- Removed a dead code path diff --git a/Misc/NEWS.d/next/Library/2022-11-08-15-54-43.gh-issue-99240.MhYwcz.rst b/Misc/NEWS.d/next/Library/2022-11-08-15-54-43.gh-issue-99240.MhYwcz.rst deleted file mode 100644 index 0a4db052755f..000000000000 --- a/Misc/NEWS.d/next/Library/2022-11-08-15-54-43.gh-issue-99240.MhYwcz.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix double-free bug in Argument Clinic ``str_converter`` by -extracting memory clean up to a new ``post_parsing`` section. diff --git a/Misc/NEWS.d/next/Library/2022-11-21-16-24-01.gh-issue-83035.qZIujU.rst b/Misc/NEWS.d/next/Library/2022-11-21-16-24-01.gh-issue-83035.qZIujU.rst deleted file mode 100644 index 629d9aefb2d8..000000000000 --- a/Misc/NEWS.d/next/Library/2022-11-21-16-24-01.gh-issue-83035.qZIujU.rst +++ /dev/null @@ -1 +0,0 @@ -Fix :func:`inspect.getsource` handling of decorator calls with nested parentheses. diff --git a/Misc/NEWS.d/next/Library/2022-12-03-20-06-16.gh-issue-98778.t5U9uc.rst b/Misc/NEWS.d/next/Library/2022-12-03-20-06-16.gh-issue-98778.t5U9uc.rst deleted file mode 100644 index b1c170dff3ea..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-03-20-06-16.gh-issue-98778.t5U9uc.rst +++ /dev/null @@ -1,2 +0,0 @@ -Update :exc:`~urllib.error.HTTPError` to be initialized properly, even if -the ``fp`` is ``None``. Patch by Dong-hee Na. 
diff --git a/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst b/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst deleted file mode 100644 index 09ec96124953..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-11-14-38-59.gh-issue-99952.IYGLzr.rst +++ /dev/null @@ -1,2 +0,0 @@ -Fix a reference undercounting issue in :class:`ctypes.Structure` with ``from_param()`` -results larger than a C pointer. diff --git a/Misc/NEWS.d/next/Library/2022-12-21-18-29-24.gh-issue-100160.isBmL5.rst b/Misc/NEWS.d/next/Library/2022-12-21-18-29-24.gh-issue-100160.isBmL5.rst deleted file mode 100644 index c3b518ca85d7..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-21-18-29-24.gh-issue-100160.isBmL5.rst +++ /dev/null @@ -1,2 +0,0 @@ -Remove any deprecation warnings in :func:`asyncio.get_event_loop`. They are -deferred to Python 3.12. diff --git a/Misc/NEWS.d/next/Library/2022-12-23-21-02-43.gh-issue-100474.gppA4U.rst b/Misc/NEWS.d/next/Library/2022-12-23-21-02-43.gh-issue-100474.gppA4U.rst deleted file mode 100644 index 31abfb8b87fb..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-23-21-02-43.gh-issue-100474.gppA4U.rst +++ /dev/null @@ -1,2 +0,0 @@ -:mod:`http.server` now checks that an index page is actually a regular file before trying -to serve it. This avoids issues with directories named ``index.html``. diff --git a/Misc/NEWS.d/next/Library/2022-12-24-08-42-05.gh-issue-100287.n0oEuG.rst b/Misc/NEWS.d/next/Library/2022-12-24-08-42-05.gh-issue-100287.n0oEuG.rst deleted file mode 100644 index b353f0810c6a..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-24-08-42-05.gh-issue-100287.n0oEuG.rst +++ /dev/null @@ -1 +0,0 @@ -Fix the interaction of :func:`unittest.mock.seal` with :class:`unittest.mock.AsyncMock`. diff --git a/Misc/NEWS.d/next/Library/2022-12-30-07-49-08.gh-issue-86508.nGZDzC.rst b/Misc/NEWS.d/next/Library/2022-12-30-07-49-08.gh-issue-86508.nGZDzC.rst deleted file mode 100644 index aedad0f252c2..000000000000 --- a/Misc/NEWS.d/next/Library/2022-12-30-07-49-08.gh-issue-86508.nGZDzC.rst +++ /dev/null @@ -1 +0,0 @@ -Fix :func:`asyncio.open_connection` to skip binding to local addresses of different family. Patch by Kumar Aditya. diff --git a/Misc/NEWS.d/next/Library/2023-01-04-09-53-38.gh-issue-100740.-j5UjI.rst b/Misc/NEWS.d/next/Library/2023-01-04-09-53-38.gh-issue-100740.-j5UjI.rst deleted file mode 100644 index 4753e7b40825..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-04-09-53-38.gh-issue-100740.-j5UjI.rst +++ /dev/null @@ -1 +0,0 @@ -Fix ``unittest.mock.Mock`` not respecting the spec for attribute names prefixed with ``assert``. diff --git a/Misc/NEWS.d/next/Library/2023-01-04-12-58-59.gh-issue-100689.Ce0ITG.rst b/Misc/NEWS.d/next/Library/2023-01-04-12-58-59.gh-issue-100689.Ce0ITG.rst deleted file mode 100644 index 09aeab7bceaa..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-04-12-58-59.gh-issue-100689.Ce0ITG.rst +++ /dev/null @@ -1 +0,0 @@ -Fix crash in :mod:`pyexpat` by statically allocating ``PyExpat_CAPI`` capsule. diff --git a/Misc/NEWS.d/next/Library/2023-01-04-22-10-31.gh-issue-90104.yZk5EX.rst b/Misc/NEWS.d/next/Library/2023-01-04-22-10-31.gh-issue-90104.yZk5EX.rst deleted file mode 100644 index 6695c920ee24..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-04-22-10-31.gh-issue-90104.yZk5EX.rst +++ /dev/null @@ -1 +0,0 @@ -Avoid RecursionError on ``repr`` if a dataclass field definition has a cyclic reference. 
diff --git a/Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst b/Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst deleted file mode 100644 index 97b95d18d1e4..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-12-01-18-13.gh-issue-100573.KDskqo.rst +++ /dev/null @@ -1 +0,0 @@ -Fix a Windows :mod:`asyncio` bug with named pipes where a client doing ``os.stat()`` on the pipe would cause an error in the server that disabled serving future requests. diff --git a/Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst b/Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst deleted file mode 100644 index 2454ea83ef00..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-18-17-58-50.gh-issue-101144.FHd8Un.rst +++ /dev/null @@ -1,4 +0,0 @@ -Make :func:`zipfile.Path.open` and :func:`zipfile.Path.read_text` also accept -``encoding`` as a positional argument. This was the behavior in Python 3.9 and -earlier. Earlier 3.10 versions had a regression where supplying it as a positional -argument would lead to a :exc:`TypeError`. diff --git a/Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst b/Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst deleted file mode 100644 index d14b9e25a691..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-20-10-46-59.gh-issue-101143.hJo8hu.rst +++ /dev/null @@ -1,2 +0,0 @@ -Remove unused references to :class:`~asyncio.TimerHandle` in -``asyncio.base_events.BaseEventLoop._add_callback``. diff --git a/Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst b/Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst deleted file mode 100644 index 4cb56ea0f0e5..000000000000 --- a/Misc/NEWS.d/next/Library/2023-01-21-16-50-22.gh-issue-100795.NPMZf7.rst +++ /dev/null @@ -1,3 +0,0 @@ -Avoid potential unexpected ``freeaddrinfo`` call (double free) in :mod:`socket` -when when a libc ``getaddrinfo()`` implementation leaves garbage in an output -pointer when returning an error. Original patch by Sergey G. Brester. diff --git a/Misc/NEWS.d/next/Tests/2022-08-22-15-49-14.gh-issue-96002.4UE9UE.rst b/Misc/NEWS.d/next/Tests/2022-08-22-15-49-14.gh-issue-96002.4UE9UE.rst deleted file mode 100644 index dc86e1d70f12..000000000000 --- a/Misc/NEWS.d/next/Tests/2022-08-22-15-49-14.gh-issue-96002.4UE9UE.rst +++ /dev/null @@ -1 +0,0 @@ -Add functional test for Argument Clinic. diff --git a/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst b/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst deleted file mode 100644 index 2a95fd9ae53c..000000000000 --- a/Misc/NEWS.d/next/Tests/2023-02-04-17-24-33.gh-issue-101334._yOqwg.rst +++ /dev/null @@ -1 +0,0 @@ -``test_tarfile`` has been updated to pass when run as a high UID. diff --git a/Misc/NEWS.d/next/Windows/2021-05-02-15-29-33.bpo-43984.U92jiv.rst b/Misc/NEWS.d/next/Windows/2021-05-02-15-29-33.bpo-43984.U92jiv.rst deleted file mode 100644 index a5975b2d00c7..000000000000 --- a/Misc/NEWS.d/next/Windows/2021-05-02-15-29-33.bpo-43984.U92jiv.rst +++ /dev/null @@ -1,3 +0,0 @@ -:meth:`winreg.SetValueEx` now leaves the target value untouched in the case of conversion errors. -Previously, ``-1`` would be written in case of such errors. 
- diff --git a/Misc/NEWS.d/next/Windows/2023-01-09-23-03-57.gh-issue-100180.b5phrg.rst b/Misc/NEWS.d/next/Windows/2023-01-09-23-03-57.gh-issue-100180.b5phrg.rst deleted file mode 100644 index 5b0f42568d92..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-09-23-03-57.gh-issue-100180.b5phrg.rst +++ /dev/null @@ -1 +0,0 @@ -Update Windows installer to OpenSSL 1.1.1s diff --git a/Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst b/Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst deleted file mode 100644 index 4f7ab200b85c..000000000000 --- a/Misc/NEWS.d/next/Windows/2023-01-17-18-17-58.gh-issue-82052.mWyysT.rst +++ /dev/null @@ -1 +0,0 @@ -Fixed an issue where writing more than 32K of Unicode output to the console screen in one go can result in mojibake. diff --git a/Misc/NEWS.d/next/macOS/2023-01-09-22-04-21.gh-issue-100180.WVhCny.rst b/Misc/NEWS.d/next/macOS/2023-01-09-22-04-21.gh-issue-100180.WVhCny.rst deleted file mode 100644 index b7d94a0a94a0..000000000000 --- a/Misc/NEWS.d/next/macOS/2023-01-09-22-04-21.gh-issue-100180.WVhCny.rst +++ /dev/null @@ -1 +0,0 @@ -Update macOS installer to OpenSSL 1.1.1s diff --git a/README.rst b/README.rst index f8b9c88e9282..42dc2aeface1 100644 --- a/README.rst +++ b/README.rst @@ -1,4 +1,4 @@ -This is Python version 3.10.9 +This is Python version 3.10.10 ============================= .. image:: https://travis-ci.com/python/cpython.svg?branch=master From webhook-mailer at python.org Wed Feb 8 05:02:16 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 10:02:16 -0000 Subject: [Python-checkins] [3.11] gh-101656: Fix "conversion from Py_ssize_t to int" warning in `_testcapimodule` (#101680) Message-ID: https://github.com/python/cpython/commit/d0ff3d996fec197ea967039d0a8fafcd4c26fc7a commit: d0ff3d996fec197ea967039d0a8fafcd4c26fc7a branch: 3.11 author: Nikita Sobolev committer: ambv date: 2023-02-08T11:02:07+01:00 summary: [3.11] gh-101656: Fix "conversion from Py_ssize_t to int" warning in `_testcapimodule` (#101680) (cherry picked from commit acc2f3b19d28d4bf3f8fb32357f581cba5ba24c7) files: M Modules/_testcapimodule.c diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index daf2df4ffdde..71501b97a8c5 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -6179,11 +6179,11 @@ eval_eval_code_ex(PyObject *mod, PyObject *pos_args) globals, locals, c_args, - c_args_len, + (int)c_args_len, c_kwargs, - c_kwargs_len, + (int)c_kwargs_len, c_defaults, - c_defaults_len, + (int)c_defaults_len, kw_defaults, closure ); From webhook-mailer at python.org Wed Feb 8 05:02:44 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 10:02:44 -0000 Subject: [Python-checkins] [3.11] gh-97725: Fix documentation for the default file of `asyncio.Task.print_stack` (#101652) (#101653) Message-ID: https://github.com/python/cpython/commit/fa906714808cb8ce1428037a3a8c92b931969478 commit: fa906714808cb8ce1428037a3a8c92b931969478 branch: 3.11 author: Oleg Iarygin committer: ambv date: 2023-02-08T11:02:37+01:00 summary: [3.11] gh-97725: Fix documentation for the default file of `asyncio.Task.print_stack` (#101652) (#101653) (cherry picked from commit f87f6e23964d7a4c38b655089cda65538a24ec36) files: A Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst M Doc/library/asyncio-task.rst diff --git a/Doc/library/asyncio-task.rst b/Doc/library/asyncio-task.rst index 1fd7afad99a8..ade5293e7f07 100644 --- a/Doc/library/asyncio-task.rst +++ 
b/Doc/library/asyncio-task.rst @@ -1097,7 +1097,7 @@ Task Object The *limit* argument is passed to :meth:`get_stack` directly. The *file* argument is an I/O stream to which the output - is written; by default output is written to :data:`sys.stderr`. + is written; by default output is written to :data:`sys.stdout`. .. method:: get_coro() diff --git a/Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst b/Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst new file mode 100644 index 000000000000..fd9ea049c239 --- /dev/null +++ b/Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst @@ -0,0 +1,2 @@ +Fix :meth:`asyncio.Task.print_stack` description for ``file=None``. +Patch by Oleg Iarygin. From webhook-mailer at python.org Wed Feb 8 05:03:01 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 10:03:01 -0000 Subject: [Python-checkins] [3.10] gh-97725: Fix documentation for the default file of `asyncio.Task.print_stack` (#101652) (#101654) Message-ID: https://github.com/python/cpython/commit/7d727518bed1aa2b46c269a6cf4f0856ad2fe848 commit: 7d727518bed1aa2b46c269a6cf4f0856ad2fe848 branch: 3.10 author: Oleg Iarygin committer: ambv date: 2023-02-08T11:02:54+01:00 summary: [3.10] gh-97725: Fix documentation for the default file of `asyncio.Task.print_stack` (#101652) (#101654) (cherry picked from commit f87f6e23964d7a4c38b655089cda65538a24ec36) files: A Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst M Doc/library/asyncio-task.rst diff --git a/Doc/library/asyncio-task.rst b/Doc/library/asyncio-task.rst index 5608022c9afd..d7e13ea3c6fe 100644 --- a/Doc/library/asyncio-task.rst +++ b/Doc/library/asyncio-task.rst @@ -1002,7 +1002,7 @@ Task Object The *limit* argument is passed to :meth:`get_stack` directly. The *file* argument is an I/O stream to which the output - is written; by default output is written to :data:`sys.stderr`. + is written; by default output is written to :data:`sys.stdout`. .. method:: get_coro() diff --git a/Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst b/Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst new file mode 100644 index 000000000000..fd9ea049c239 --- /dev/null +++ b/Misc/NEWS.d/next/Documentation/2023-02-07-21-43-24.gh-issue-97725.cuY7Cd.rst @@ -0,0 +1,2 @@ +Fix :meth:`asyncio.Task.print_stack` description for ``file=None``. +Patch by Oleg Iarygin. From webhook-mailer at python.org Wed Feb 8 05:04:03 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 10:04:03 -0000 Subject: [Python-checkins] [3.11] gh-47937: Note that Popen attributes are read-only (GH-93070) (#101684) Message-ID: https://github.com/python/cpython/commit/7240ba7f94daaf3935def47f2a02e6b6b6053043 commit: 7240ba7f94daaf3935def47f2a02e6b6b6053043 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: ambv date: 2023-02-08T11:03:56+01:00 summary: [3.11] gh-47937: Note that Popen attributes are read-only (GH-93070) (#101684) * Note that Popen attributes aren't meant to be set by users by rewording the text about the attributes. * Also update some universal_newlines references to mention the modern text parameter name while in the area. (cherry picked from commit 027adf42cd85db41fee05b0a40d89ef822876c97) Co-authored-by: Stanley <46876382+slateny at users.noreply.github.com> Co-authored-by: Gregory P. 
Smith files: M Doc/library/subprocess.rst diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index e4e38e933681..7ae8a9493608 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -458,7 +458,7 @@ functions. - :const:`0` means unbuffered (read and write are one system call and can return short) - :const:`1` means line buffered - (only usable if ``universal_newlines=True`` i.e., in a text mode) + (only usable if ``text=True`` or ``universal_newlines=True``) - any other positive value means use a buffer of approximately that size - negative bufsize (the default) means the system default of @@ -849,7 +849,8 @@ Instances of the :class:`Popen` class have the following methods: On Windows :meth:`kill` is an alias for :meth:`terminate`. -The following attributes are also available: +The following attributes are also set by the class for you to access. +Reassigning them to new values is unsupported: .. attribute:: Popen.args @@ -862,9 +863,9 @@ The following attributes are also available: If the *stdin* argument was :data:`PIPE`, this attribute is a writeable stream object as returned by :func:`open`. If the *encoding* or *errors* - arguments were specified or the *universal_newlines* argument was ``True``, - the stream is a text stream, otherwise it is a byte stream. If the *stdin* - argument was not :data:`PIPE`, this attribute is ``None``. + arguments were specified or the *text* or *universal_newlines* argument + was ``True``, the stream is a text stream, otherwise it is a byte stream. + If the *stdin* argument was not :data:`PIPE`, this attribute is ``None``. .. attribute:: Popen.stdout @@ -872,9 +873,9 @@ The following attributes are also available: If the *stdout* argument was :data:`PIPE`, this attribute is a readable stream object as returned by :func:`open`. Reading from the stream provides output from the child process. If the *encoding* or *errors* arguments were - specified or the *universal_newlines* argument was ``True``, the stream is a - text stream, otherwise it is a byte stream. If the *stdout* argument was not - :data:`PIPE`, this attribute is ``None``. + specified or the *text* or *universal_newlines* argument was ``True``, the + stream is a text stream, otherwise it is a byte stream. If the *stdout* + argument was not :data:`PIPE`, this attribute is ``None``. .. attribute:: Popen.stderr @@ -882,9 +883,9 @@ The following attributes are also available: If the *stderr* argument was :data:`PIPE`, this attribute is a readable stream object as returned by :func:`open`. Reading from the stream provides error output from the child process. If the *encoding* or *errors* arguments - were specified or the *universal_newlines* argument was ``True``, the stream - is a text stream, otherwise it is a byte stream. If the *stderr* argument was - not :data:`PIPE`, this attribute is ``None``. + were specified or the *text* or *universal_newlines* argument was ``True``, the + stream is a text stream, otherwise it is a byte stream. If the *stderr* argument + was not :data:`PIPE`, this attribute is ``None``. .. 
warning:: From webhook-mailer at python.org Wed Feb 8 05:04:22 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 10:04:22 -0000 Subject: [Python-checkins] [3.10] gh-47937: Note that Popen attributes are read-only (GH-93070) (#101683) Message-ID: https://github.com/python/cpython/commit/8bc41112a0b4ebab0291e21cb694597cb407f0a3 commit: 8bc41112a0b4ebab0291e21cb694597cb407f0a3 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: ambv date: 2023-02-08T11:04:15+01:00 summary: [3.10] gh-47937: Note that Popen attributes are read-only (GH-93070) (#101683) * Note that Popen attributes aren't meant to be set by users by rewording the text about the attributes. * Also update some universal_newlines references to mention the modern text parameter name while in the area. (cherry picked from commit 027adf42cd85db41fee05b0a40d89ef822876c97) Co-authored-by: Stanley <46876382+slateny at users.noreply.github.com> Co-authored-by: Gregory P. Smith files: M Doc/library/subprocess.rst diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index ae1cb8faa3be..d9576cc4b066 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -456,7 +456,7 @@ functions. - :const:`0` means unbuffered (read and write are one system call and can return short) - :const:`1` means line buffered - (only usable if ``universal_newlines=True`` i.e., in a text mode) + (only usable if ``text=True`` or ``universal_newlines=True``) - any other positive value means use a buffer of approximately that size - negative bufsize (the default) means the system default of @@ -841,7 +841,8 @@ Instances of the :class:`Popen` class have the following methods: On Windows :meth:`kill` is an alias for :meth:`terminate`. -The following attributes are also available: +The following attributes are also set by the class for you to access. +Reassigning them to new values is unsupported: .. attribute:: Popen.args @@ -854,9 +855,9 @@ The following attributes are also available: If the *stdin* argument was :data:`PIPE`, this attribute is a writeable stream object as returned by :func:`open`. If the *encoding* or *errors* - arguments were specified or the *universal_newlines* argument was ``True``, - the stream is a text stream, otherwise it is a byte stream. If the *stdin* - argument was not :data:`PIPE`, this attribute is ``None``. + arguments were specified or the *text* or *universal_newlines* argument + was ``True``, the stream is a text stream, otherwise it is a byte stream. + If the *stdin* argument was not :data:`PIPE`, this attribute is ``None``. .. attribute:: Popen.stdout @@ -864,9 +865,9 @@ The following attributes are also available: If the *stdout* argument was :data:`PIPE`, this attribute is a readable stream object as returned by :func:`open`. Reading from the stream provides output from the child process. If the *encoding* or *errors* arguments were - specified or the *universal_newlines* argument was ``True``, the stream is a - text stream, otherwise it is a byte stream. If the *stdout* argument was not - :data:`PIPE`, this attribute is ``None``. + specified or the *text* or *universal_newlines* argument was ``True``, the + stream is a text stream, otherwise it is a byte stream. If the *stdout* + argument was not :data:`PIPE`, this attribute is ``None``. .. 
attribute:: Popen.stderr @@ -874,9 +875,9 @@ The following attributes are also available: If the *stderr* argument was :data:`PIPE`, this attribute is a readable stream object as returned by :func:`open`. Reading from the stream provides error output from the child process. If the *encoding* or *errors* arguments - were specified or the *universal_newlines* argument was ``True``, the stream - is a text stream, otherwise it is a byte stream. If the *stderr* argument was - not :data:`PIPE`, this attribute is ``None``. + were specified or the *text* or *universal_newlines* argument was ``True``, the + stream is a text stream, otherwise it is a byte stream. If the *stderr* argument + was not :data:`PIPE`, this attribute is ``None``. .. warning:: From webhook-mailer at python.org Wed Feb 8 05:05:08 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 10:05:08 -0000 Subject: [Python-checkins] [3.10] Fix MSI build PlatformToolset detection (#101651) Message-ID: https://github.com/python/cpython/commit/a0b7c3fd2a957174550eca430590f3345b595cd8 commit: a0b7c3fd2a957174550eca430590f3345b595cd8 branch: 3.10 author: Steve Dower committer: ambv date: 2023-02-08T11:05:01+01:00 summary: [3.10] Fix MSI build PlatformToolset detection (#101651) Fix MSI build PlatformToolset detection files: M Tools/msi/bundle/bootstrap/pythonba.vcxproj diff --git a/Tools/msi/bundle/bootstrap/pythonba.vcxproj b/Tools/msi/bundle/bootstrap/pythonba.vcxproj index d90b5e3ff000..e40fb444716e 100644 --- a/Tools/msi/bundle/bootstrap/pythonba.vcxproj +++ b/Tools/msi/bundle/bootstrap/pythonba.vcxproj @@ -21,11 +21,10 @@ Release Win32 - v142 + v143 + v142 v141 - v140 - v140 - v120 + v140 {7A09B132-B3EE-499B-A700-A4B2157FEA3D} PythonBA From webhook-mailer at python.org Wed Feb 8 05:05:43 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 10:05:43 -0000 Subject: [Python-checkins] [3.11] Make use of TESTFN_ASCII in test_fileio (GH-101645) (#101650) Message-ID: https://github.com/python/cpython/commit/c38b4e75b1a0532b523351ed974722c0a3b96694 commit: c38b4e75b1a0532b523351ed974722c0a3b96694 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: ambv date: 2023-02-08T11:05:36+01:00 summary: [3.11] Make use of TESTFN_ASCII in test_fileio (GH-101645) (#101650) testBytesOpen requires an ASCII filename, but TESTFN usually isn't ASCII. 
(cherry picked from commit 6fd5eb640af19b535f4f2ba27b1b61b8d17f02e9) Co-authored-by: Zachary Ware files: M Lib/test/test_fileio.py diff --git a/Lib/test/test_fileio.py b/Lib/test/test_fileio.py index c26cdc028cc8..5cc3d4bf9f5d 100644 --- a/Lib/test/test_fileio.py +++ b/Lib/test/test_fileio.py @@ -12,7 +12,9 @@ from test.support import ( cpython_only, swap_attr, gc_collect, is_emscripten, is_wasi ) -from test.support.os_helper import (TESTFN, TESTFN_UNICODE, make_bad_fd) +from test.support.os_helper import ( + TESTFN, TESTFN_ASCII, TESTFN_UNICODE, make_bad_fd, + ) from test.support.warnings_helper import check_warnings from collections import UserList @@ -431,18 +433,15 @@ def testUnicodeOpen(self): def testBytesOpen(self): # Opening a bytes filename - try: - fn = TESTFN.encode("ascii") - except UnicodeEncodeError: - self.skipTest('could not encode %r to ascii' % TESTFN) + fn = TESTFN_ASCII.encode("ascii") f = self.FileIO(fn, "w") try: f.write(b"abc") f.close() - with open(TESTFN, "rb") as f: + with open(TESTFN_ASCII, "rb") as f: self.assertEqual(f.read(), b"abc") finally: - os.unlink(TESTFN) + os.unlink(TESTFN_ASCII) @unittest.skipIf(sys.getfilesystemencoding() != 'utf-8', "test only works for utf-8 filesystems") From webhook-mailer at python.org Wed Feb 8 05:06:04 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 10:06:04 -0000 Subject: [Python-checkins] [3.11] gh-96127: Fix `inspect.signature` call on mocks (#96335) (#101646) Message-ID: https://github.com/python/cpython/commit/efcab386990576338f318f1adc839bbc694fa87e commit: efcab386990576338f318f1adc839bbc694fa87e branch: 3.11 author: Oleg Iarygin committer: ambv date: 2023-02-08T11:05:57+01:00 summary: [3.11] gh-96127: Fix `inspect.signature` call on mocks (#96335) (#101646) (cherry picked from commit 9e7d7266ecdcccc02385fe4ccb094f3444102e26) Co-authored-by: Nikita Sobolev files: A Misc/NEWS.d/next/Library/2022-08-27-10-35-50.gh-issue-96127.8RdLre.rst M Lib/test/test_inspect.py M Lib/unittest/mock.py diff --git a/Lib/test/test_inspect.py b/Lib/test/test_inspect.py index 9ea49854cbc63..c50486003a4da 100644 --- a/Lib/test/test_inspect.py +++ b/Lib/test/test_inspect.py @@ -3238,6 +3238,25 @@ def test_signature_on_lambdas(self): ((('a', 10, ..., "positional_or_keyword"),), ...)) + def test_signature_on_mocks(self): + # https://github.com/python/cpython/issues/96127 + for mock in ( + unittest.mock.Mock(), + unittest.mock.AsyncMock(), + unittest.mock.MagicMock(), + ): + with self.subTest(mock=mock): + self.assertEqual(str(inspect.signature(mock)), '(*args, **kwargs)') + + def test_signature_on_noncallable_mocks(self): + for mock in ( + unittest.mock.NonCallableMock(), + unittest.mock.NonCallableMagicMock(), + ): + with self.subTest(mock=mock): + with self.assertRaises(TypeError): + inspect.signature(mock) + def test_signature_equality(self): def foo(a, *, b:int) -> float: pass self.assertFalse(inspect.signature(foo) == 42) diff --git a/Lib/unittest/mock.py b/Lib/unittest/mock.py index fa0bd9131a21e..54bd3ecdd76f1 100644 --- a/Lib/unittest/mock.py +++ b/Lib/unittest/mock.py @@ -2201,7 +2201,15 @@ def __init__(self, /, *args, **kwargs): self.__dict__['_mock_await_args'] = None self.__dict__['_mock_await_args_list'] = _CallList() code_mock = NonCallableMock(spec_set=CodeType) - code_mock.co_flags = inspect.CO_COROUTINE + code_mock.co_flags = ( + inspect.CO_COROUTINE + + inspect.CO_VARARGS + + inspect.CO_VARKEYWORDS + ) + code_mock.co_argcount = 0 + code_mock.co_varnames = ('args', 'kwargs') + code_mock.co_posonlyargcount = 
0 + code_mock.co_kwonlyargcount = 0 self.__dict__['__code__'] = code_mock self.__dict__['__name__'] = 'AsyncMock' self.__dict__['__defaults__'] = tuple() diff --git a/Misc/NEWS.d/next/Library/2022-08-27-10-35-50.gh-issue-96127.8RdLre.rst b/Misc/NEWS.d/next/Library/2022-08-27-10-35-50.gh-issue-96127.8RdLre.rst new file mode 100644 index 0000000000000..79edd8fd5d8f4 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2022-08-27-10-35-50.gh-issue-96127.8RdLre.rst @@ -0,0 +1,2 @@ +``inspect.signature`` was raising ``TypeError`` on call with mock objects. +Now it correctly returns ``(*args, **kwargs)`` as infered signature. From webhook-mailer at python.org Wed Feb 8 05:06:33 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 10:06:33 -0000 Subject: [Python-checkins] [3.8] gh-95778: add doc missing in some places (GH-100627) (#101630) Message-ID: https://github.com/python/cpython/commit/41d301ae2c564afff786b7dcd5919212106e7c4f commit: 41d301ae2c564afff786b7dcd5919212106e7c4f branch: 3.8 author: Éric committer: ambv date: 2023-02-08T11:06:21+01:00 summary: [3.8] gh-95778: add doc missing in some places (GH-100627) (#101630) (cherry picked from commit 46521826cb1883e29e4640f94089dd92c57efc5b) files: M Misc/python.man diff --git a/Misc/python.man b/Misc/python.man index fa5d79996e2d..770a7ed403ef 100644 --- a/Misc/python.man +++ b/Misc/python.man @@ -312,6 +312,11 @@ Set implementation specific option. The following options are available: -X pycache_prefix=PATH: enable writing .pyc files to a parallel tree rooted at the given directory instead of to the code tree. + + -X int_max_str_digits=number: limit the size of int<->str conversions. + This helps avoid denial of service attacks when parsing untrusted data. + The default is sys.int_info.default_max_str_digits. 0 disables. + .TP .B \-x Skip the first line of the source. This is intended for a DOS @@ -479,6 +484,11 @@ values. The integer must be a decimal number in the range [0,4294967295]. Specifying the value 0 will disable hash randomization. +.IP PYTHONINTMAXSTRDIGITS +Limit the maximum digit characters in an int value +when converting from a string and when converting an int back to a str. +A value of 0 disables the limit. Conversions to or from bases 2, 4, 8, +16, and 32 are never limited. .IP PYTHONMALLOC Set the Python memory allocators and/or install debug hooks. The available memory allocators are From webhook-mailer at python.org Wed Feb 8 05:14:00 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 10:14:00 -0000 Subject: [Python-checkins] gh-100933: Improve `check_element` helper in `test_xml_etree` (#100934) Message-ID: https://github.com/python/cpython/commit/eb49d32b9af0b3b01a5588626179187f11d145c9 commit: eb49d32b9af0b3b01a5588626179187f11d145c9 branch: main author: Nikita Sobolev committer: ambv date: 2023-02-08T11:13:43+01:00 summary: gh-100933: Improve `check_element` helper in `test_xml_etree` (#100934) Items checked by this test are always `str` and `dict` instances. files: M Lib/test/test_xml_etree.py diff --git a/Lib/test/test_xml_etree.py b/Lib/test/test_xml_etree.py index ca5bb562996b..11efee00582e 100644 --- a/Lib/test/test_xml_etree.py +++ b/Lib/test/test_xml_etree.py @@ -203,25 +203,6 @@ def serialize_check(self, elem, expected): def test_interface(self): # Test element tree interface.
- def check_string(string): - len(string) - for char in string: - self.assertEqual(len(char), 1, - msg="expected one-character string, got %r" % char) - new_string = string + "" - new_string = string + " " - string[:0] - - def check_mapping(mapping): - len(mapping) - keys = mapping.keys() - items = mapping.items() - for key in keys: - item = mapping[key] - mapping["key"] = "value" - self.assertEqual(mapping["key"], "value", - msg="expected value string, got %r" % mapping["key"]) - def check_element(element): self.assertTrue(ET.iselement(element), msg="not an element") direlem = dir(element) @@ -231,12 +212,12 @@ def check_element(element): self.assertIn(attr, direlem, msg='no %s visible by dir' % attr) - check_string(element.tag) - check_mapping(element.attrib) + self.assertIsInstance(element.tag, str) + self.assertIsInstance(element.attrib, dict) if element.text is not None: - check_string(element.text) + self.assertIsInstance(element.text, str) if element.tail is not None: - check_string(element.tail) + self.assertIsInstance(element.tail, str) for elem in element: check_element(elem) From webhook-mailer at python.org Wed Feb 8 06:12:41 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 11:12:41 -0000 Subject: [Python-checkins] [3.11] gh-100933: Improve `check_element` helper in `test_xml_etree` (GH-100934) (#101686) Message-ID: https://github.com/python/cpython/commit/5f0b81905084eb5da7131d06a6a4f15beeab460f commit: 5f0b81905084eb5da7131d06a6a4f15beeab460f branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: ambv date: 2023-02-08T12:11:54+01:00 summary: [3.11] gh-100933: Improve `check_element` helper in `test_xml_etree` (GH-100934) (#101686) Items checked by this test are always `str` and `dict` instances. (cherry picked from commit eb49d32b9af0b3b01a5588626179187f11d145c9) Co-authored-by: Nikita Sobolev files: M Lib/test/test_xml_etree.py diff --git a/Lib/test/test_xml_etree.py b/Lib/test/test_xml_etree.py index 63b6725af897..abf3c62156a9 100644 --- a/Lib/test/test_xml_etree.py +++ b/Lib/test/test_xml_etree.py @@ -203,25 +203,6 @@ def serialize_check(self, elem, expected): def test_interface(self): # Test element tree interface. 
- def check_string(string): - len(string) - for char in string: - self.assertEqual(len(char), 1, - msg="expected one-character string, got %r" % char) - new_string = string + "" - new_string = string + " " - string[:0] - - def check_mapping(mapping): - len(mapping) - keys = mapping.keys() - items = mapping.items() - for key in keys: - item = mapping[key] - mapping["key"] = "value" - self.assertEqual(mapping["key"], "value", - msg="expected value string, got %r" % mapping["key"]) - def check_element(element): self.assertTrue(ET.iselement(element), msg="not an element") direlem = dir(element) @@ -231,12 +212,12 @@ def check_element(element): self.assertIn(attr, direlem, msg='no %s visible by dir' % attr) - check_string(element.tag) - check_mapping(element.attrib) + self.assertIsInstance(element.tag, str) + self.assertIsInstance(element.attrib, dict) if element.text is not None: - check_string(element.text) + self.assertIsInstance(element.text, str) if element.tail is not None: - check_string(element.tail) + self.assertIsInstance(element.tail, str) for elem in element: check_element(elem) From webhook-mailer at python.org Wed Feb 8 06:40:46 2023 From: webhook-mailer at python.org (ambv) Date: Wed, 08 Feb 2023 11:40:46 -0000 Subject: [Python-checkins] [3.10] gh-100933: Improve `check_element` helper in `test_xml_etree` (GH-100934) (#101687) Message-ID: https://github.com/python/cpython/commit/c51cd54b6560879c35ec6d36f8ec72194c4f27a9 commit: c51cd54b6560879c35ec6d36f8ec72194c4f27a9 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: ambv date: 2023-02-08T12:40:40+01:00 summary: [3.10] gh-100933: Improve `check_element` helper in `test_xml_etree` (GH-100934) (#101687) Items checked by this test are always `str` and `dict` instances. (cherry picked from commit eb49d32b9af0b3b01a5588626179187f11d145c9) Co-authored-by: Nikita Sobolev files: M Lib/test/test_xml_etree.py diff --git a/Lib/test/test_xml_etree.py b/Lib/test/test_xml_etree.py index d7b5e5208c53..940e02630e90 100644 --- a/Lib/test/test_xml_etree.py +++ b/Lib/test/test_xml_etree.py @@ -203,25 +203,6 @@ def serialize_check(self, elem, expected): def test_interface(self): # Test element tree interface. 
- def check_string(string): - len(string) - for char in string: - self.assertEqual(len(char), 1, - msg="expected one-character string, got %r" % char) - new_string = string + "" - new_string = string + " " - string[:0] - - def check_mapping(mapping): - len(mapping) - keys = mapping.keys() - items = mapping.items() - for key in keys: - item = mapping[key] - mapping["key"] = "value" - self.assertEqual(mapping["key"], "value", - msg="expected value string, got %r" % mapping["key"]) - def check_element(element): self.assertTrue(ET.iselement(element), msg="not an element") direlem = dir(element) @@ -231,12 +212,12 @@ def check_element(element): self.assertIn(attr, direlem, msg='no %s visible by dir' % attr) - check_string(element.tag) - check_mapping(element.attrib) + self.assertIsInstance(element.tag, str) + self.assertIsInstance(element.attrib, dict) if element.text is not None: - check_string(element.text) + self.assertIsInstance(element.text, str) if element.tail is not None: - check_string(element.tail) + self.assertIsInstance(element.tail, str) for elem in element: check_element(elem) From webhook-mailer at python.org Wed Feb 8 09:24:05 2023 From: webhook-mailer at python.org (zooba) Date: Wed, 08 Feb 2023 14:24:05 -0000 Subject: [Python-checkins] gh-101614: Don't treat python3_d.dll as a Python DLL when checking extension modules for incompatibility (GH-101615) Message-ID: https://github.com/python/cpython/commit/3a88de7a0af00872d9d57e1d98bc2f035cb15a1c commit: 3a88de7a0af00872d9d57e1d98bc2f035cb15a1c branch: main author: David Hewitt <1939362+davidhewitt at users.noreply.github.com> committer: zooba date: 2023-02-08T14:23:57Z summary: gh-101614: Don't treat python3_d.dll as a Python DLL when checking extension modules for incompatibility (GH-101615) files: A Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst M Python/dynload_win.c diff --git a/Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst b/Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst new file mode 100644 index 000000000000..8ed0995d7892 --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst @@ -0,0 +1 @@ +Correctly handle extensions built against debug binaries that reference ``python3_d.dll``. diff --git a/Python/dynload_win.c b/Python/dynload_win.c index c03bc5602bff..7bd04d573df4 100644 --- a/Python/dynload_win.c +++ b/Python/dynload_win.c @@ -125,14 +125,15 @@ static char *GetPythonImport (HINSTANCE hModule) !strncmp(import_name,"python",6)) { char *pch; -#ifndef _DEBUG - /* In a release version, don't claim that python3.dll is - a Python DLL. */ + /* Don't claim that python3.dll is a Python DLL. 
*/ +#ifdef _DEBUG + if (strcmp(import_name, "python3_d.dll") == 0) { +#else if (strcmp(import_name, "python3.dll") == 0) { +#endif import_data += 20; continue; } -#endif /* Ensure python prefix is followed only by numbers to the end of the basename */ From webhook-mailer at python.org Wed Feb 8 09:34:34 2023 From: webhook-mailer at python.org (zooba) Date: Wed, 08 Feb 2023 14:34:34 -0000 Subject: [Python-checkins] gh-101196: Make isdir/isfile/exists faster on Windows (GH-101324) Message-ID: https://github.com/python/cpython/commit/86ebd5c3fa9ac0fba3b651f1d4abfca79614af5f commit: 86ebd5c3fa9ac0fba3b651f1d4abfca79614af5f branch: main author: Michael Droettboom committer: zooba date: 2023-02-08T14:34:24Z summary: gh-101196: Make isdir/isfile/exists faster on Windows (GH-101324) Co-authored-by: Eryk Sun files: A Misc/NEWS.d/next/Windows/2023-01-25-11-33-54.gh-issue-101196.wAX_2g.rst M Include/pyport.h M Lib/genericpath.py M Lib/ntpath.py M Lib/posixpath.py M Lib/test/test_ntpath.py M Lib/test/test_os.py M Modules/clinic/posixmodule.c.h M Modules/posixmodule.c diff --git a/Include/pyport.h b/Include/pyport.h index 22085049a304..40092c2f81ad 100644 --- a/Include/pyport.h +++ b/Include/pyport.h @@ -247,6 +247,10 @@ typedef Py_ssize_t Py_ssize_clean_t; #define S_ISCHR(x) (((x) & S_IFMT) == S_IFCHR) #endif +#ifndef S_ISLNK +#define S_ISLNK(x) (((x) & S_IFMT) == S_IFLNK) +#endif + #ifdef __cplusplus /* Move this down here since some C++ #include's don't like to be included inside an extern "C" */ diff --git a/Lib/genericpath.py b/Lib/genericpath.py index ce36451a3af0..1bd5b3897c3a 100644 --- a/Lib/genericpath.py +++ b/Lib/genericpath.py @@ -7,7 +7,7 @@ import stat __all__ = ['commonprefix', 'exists', 'getatime', 'getctime', 'getmtime', - 'getsize', 'isdir', 'isfile', 'samefile', 'sameopenfile', + 'getsize', 'isdir', 'isfile', 'islink', 'samefile', 'sameopenfile', 'samestat'] @@ -45,6 +45,18 @@ def isdir(s): return stat.S_ISDIR(st.st_mode) +# Is a path a symbolic link? +# This will always return false on systems where os.lstat doesn't exist. + +def islink(path): + """Test whether a path is a symbolic link""" + try: + st = os.lstat(path) + except (OSError, ValueError, AttributeError): + return False + return stat.S_ISLNK(st.st_mode) + + def getsize(filename): """Return the size of a file, reported by os.stat().""" return os.stat(filename).st_size diff --git a/Lib/ntpath.py b/Lib/ntpath.py index f9ee8e02a576..e93a5e696009 100644 --- a/Lib/ntpath.py +++ b/Lib/ntpath.py @@ -276,19 +276,6 @@ def dirname(p): """Returns the directory component of a pathname""" return split(p)[0] -# Is a path a symbolic link? -# This will always return false on systems where os.lstat doesn't exist. - -def islink(path): - """Test whether a path is a symbolic link. - This will always return false for Windows prior to 6.0. - """ - try: - st = os.lstat(path) - except (OSError, ValueError, AttributeError): - return False - return stat.S_ISLNK(st.st_mode) - # Is a path a junction? @@ -870,11 +857,13 @@ def commonpath(paths): try: - # The genericpath.isdir implementation uses os.stat and checks the mode - # attribute to tell whether or not the path is a directory. - # This is overkill on Windows - just pass the path to GetFileAttributes - # and check the attribute from there. - from nt import _isdir as isdir + # The isdir(), isfile(), islink() and exists() implementations in + # genericpath use os.stat(). This is overkill on Windows. Use simpler + # builtin functions if they are available. 
+ from nt import _path_isdir as isdir + from nt import _path_isfile as isfile + from nt import _path_islink as islink + from nt import _path_exists as exists except ImportError: - # Use genericpath.isdir as imported above. + # Use genericpath.* as imported above pass diff --git a/Lib/posixpath.py b/Lib/posixpath.py index 32b5d6e105dd..e4f155e41a32 100644 --- a/Lib/posixpath.py +++ b/Lib/posixpath.py @@ -187,18 +187,6 @@ def dirname(p): return head -# Is a path a symbolic link? -# This will always return false on systems where os.lstat doesn't exist. - -def islink(path): - """Test whether a path is a symbolic link""" - try: - st = os.lstat(path) - except (OSError, ValueError, AttributeError): - return False - return stat.S_ISLNK(st.st_mode) - - # Is a path a junction? def isjunction(path): diff --git a/Lib/test/test_ntpath.py b/Lib/test/test_ntpath.py index bce38a534a6a..b32900697874 100644 --- a/Lib/test/test_ntpath.py +++ b/Lib/test/test_ntpath.py @@ -1,9 +1,10 @@ +import inspect import ntpath import os import sys import unittest import warnings -from test.support import os_helper +from test.support import cpython_only, os_helper from test.support import TestFailed, is_emscripten from test.support.os_helper import FakePath from test import test_genericpath @@ -938,6 +939,35 @@ def test_isjunction(self): self.assertFalse(ntpath.isjunction('tmpdir')) self.assertPathEqual(ntpath.realpath('testjunc'), ntpath.realpath('tmpdir')) + @unittest.skipIf(sys.platform != 'win32', "drive letters are a windows concept") + def test_isfile_driveletter(self): + drive = os.environ.get('SystemDrive') + if drive is None or len(drive) != 2 or drive[1] != ':': + raise unittest.SkipTest('SystemDrive is not defined or malformed') + self.assertFalse(os.path.isfile('\\\\.\\' + drive)) + + @unittest.skipIf(sys.platform != 'win32', "windows only") + def test_con_device(self): + self.assertFalse(os.path.isfile(r"\\.\CON")) + self.assertFalse(os.path.isdir(r"\\.\CON")) + self.assertFalse(os.path.islink(r"\\.\CON")) + self.assertTrue(os.path.exists(r"\\.\CON")) + + @unittest.skipIf(sys.platform != 'win32', "Fast paths are only for win32") + @cpython_only + def test_fast_paths_in_use(self): + # There are fast paths of these functions implemented in posixmodule.c. + # Confirm that they are being used, and not the Python fallbacks in + # genericpath.py. 
+ self.assertTrue(os.path.isdir is nt._path_isdir) + self.assertFalse(inspect.isfunction(os.path.isdir)) + self.assertTrue(os.path.isfile is nt._path_isfile) + self.assertFalse(inspect.isfunction(os.path.isfile)) + self.assertTrue(os.path.islink is nt._path_islink) + self.assertFalse(inspect.isfunction(os.path.islink)) + self.assertTrue(os.path.exists is nt._path_exists) + self.assertFalse(inspect.isfunction(os.path.exists)) + class NtCommonTest(test_genericpath.CommonTest, unittest.TestCase): pathmodule = ntpath diff --git a/Lib/test/test_os.py b/Lib/test/test_os.py index 58e04dd1348f..387d2581c06f 100644 --- a/Lib/test/test_os.py +++ b/Lib/test/test_os.py @@ -742,6 +742,7 @@ def test_access_denied(self): ) result = os.stat(fname) self.assertNotEqual(result.st_size, 0) + self.assertTrue(os.path.isfile(fname)) @unittest.skipUnless(sys.platform == "win32", "Win32 specific tests") def test_stat_block_device(self): @@ -2860,6 +2861,7 @@ def test_appexeclink(self): self.assertEqual(st, os.stat(alias)) self.assertFalse(stat.S_ISLNK(st.st_mode)) self.assertEqual(st.st_reparse_tag, stat.IO_REPARSE_TAG_APPEXECLINK) + self.assertTrue(os.path.isfile(alias)) # testing the first one we see is sufficient break else: diff --git a/Misc/NEWS.d/next/Windows/2023-01-25-11-33-54.gh-issue-101196.wAX_2g.rst b/Misc/NEWS.d/next/Windows/2023-01-25-11-33-54.gh-issue-101196.wAX_2g.rst new file mode 100644 index 000000000000..c61e9b90fb53 --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-01-25-11-33-54.gh-issue-101196.wAX_2g.rst @@ -0,0 +1,3 @@ +The functions ``os.path.isdir``, ``os.path.isfile``, ``os.path.islink`` and +``os.path.exists`` are now 13% to 28% faster on Windows, by making fewer Win32 +API calls. diff --git a/Modules/clinic/posixmodule.c.h b/Modules/clinic/posixmodule.c.h index d4722cc533cb..5e04507ddd69 100644 --- a/Modules/clinic/posixmodule.c.h +++ b/Modules/clinic/posixmodule.c.h @@ -1794,6 +1794,242 @@ os__path_splitroot(PyObject *module, PyObject *const *args, Py_ssize_t nargs, Py #endif /* defined(MS_WINDOWS) */ +#if defined(MS_WINDOWS) + +PyDoc_STRVAR(os__path_isdir__doc__, +"_path_isdir($module, /, path)\n" +"--\n" +"\n" +"Return true if the pathname refers to an existing directory."); + +#define OS__PATH_ISDIR_METHODDEF \ + {"_path_isdir", _PyCFunction_CAST(os__path_isdir), METH_FASTCALL|METH_KEYWORDS, os__path_isdir__doc__}, + +static PyObject * +os__path_isdir_impl(PyObject *module, PyObject *path); + +static PyObject * +os__path_isdir(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + + #define NUM_KEYWORDS 1 + static struct { + PyGC_Head _this_is_not_used; + PyObject_VAR_HEAD + PyObject *ob_item[NUM_KEYWORDS]; + } _kwtuple = { + .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) + .ob_item = { &_Py_ID(path), }, + }; + #undef NUM_KEYWORDS + #define KWTUPLE (&_kwtuple.ob_base.ob_base) + + #else // !Py_BUILD_CORE + # define KWTUPLE NULL + #endif // !Py_BUILD_CORE + + static const char * const _keywords[] = {"path", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "_path_isdir", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[1]; + PyObject *path; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 1, 1, 0, argsbuf); + if (!args) { + goto exit; + } + path = args[0]; + return_value = os__path_isdir_impl(module, path); + +exit: + return return_value; +} + +#endif /* defined(MS_WINDOWS) */ + +#if 
defined(MS_WINDOWS) + +PyDoc_STRVAR(os__path_isfile__doc__, +"_path_isfile($module, /, path)\n" +"--\n" +"\n" +"Test whether a path is a regular file"); + +#define OS__PATH_ISFILE_METHODDEF \ + {"_path_isfile", _PyCFunction_CAST(os__path_isfile), METH_FASTCALL|METH_KEYWORDS, os__path_isfile__doc__}, + +static PyObject * +os__path_isfile_impl(PyObject *module, PyObject *path); + +static PyObject * +os__path_isfile(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + + #define NUM_KEYWORDS 1 + static struct { + PyGC_Head _this_is_not_used; + PyObject_VAR_HEAD + PyObject *ob_item[NUM_KEYWORDS]; + } _kwtuple = { + .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) + .ob_item = { &_Py_ID(path), }, + }; + #undef NUM_KEYWORDS + #define KWTUPLE (&_kwtuple.ob_base.ob_base) + + #else // !Py_BUILD_CORE + # define KWTUPLE NULL + #endif // !Py_BUILD_CORE + + static const char * const _keywords[] = {"path", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "_path_isfile", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[1]; + PyObject *path; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 1, 1, 0, argsbuf); + if (!args) { + goto exit; + } + path = args[0]; + return_value = os__path_isfile_impl(module, path); + +exit: + return return_value; +} + +#endif /* defined(MS_WINDOWS) */ + +#if defined(MS_WINDOWS) + +PyDoc_STRVAR(os__path_exists__doc__, +"_path_exists($module, /, path)\n" +"--\n" +"\n" +"Test whether a path exists. Returns False for broken symbolic links"); + +#define OS__PATH_EXISTS_METHODDEF \ + {"_path_exists", _PyCFunction_CAST(os__path_exists), METH_FASTCALL|METH_KEYWORDS, os__path_exists__doc__}, + +static PyObject * +os__path_exists_impl(PyObject *module, PyObject *path); + +static PyObject * +os__path_exists(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + + #define NUM_KEYWORDS 1 + static struct { + PyGC_Head _this_is_not_used; + PyObject_VAR_HEAD + PyObject *ob_item[NUM_KEYWORDS]; + } _kwtuple = { + .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) + .ob_item = { &_Py_ID(path), }, + }; + #undef NUM_KEYWORDS + #define KWTUPLE (&_kwtuple.ob_base.ob_base) + + #else // !Py_BUILD_CORE + # define KWTUPLE NULL + #endif // !Py_BUILD_CORE + + static const char * const _keywords[] = {"path", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "_path_exists", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[1]; + PyObject *path; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 1, 1, 0, argsbuf); + if (!args) { + goto exit; + } + path = args[0]; + return_value = os__path_exists_impl(module, path); + +exit: + return return_value; +} + +#endif /* defined(MS_WINDOWS) */ + +#if defined(MS_WINDOWS) + +PyDoc_STRVAR(os__path_islink__doc__, +"_path_islink($module, /, path)\n" +"--\n" +"\n" +"Test whether a path is a symbolic link"); + +#define OS__PATH_ISLINK_METHODDEF \ + {"_path_islink", _PyCFunction_CAST(os__path_islink), METH_FASTCALL|METH_KEYWORDS, os__path_islink__doc__}, + +static PyObject * +os__path_islink_impl(PyObject *module, PyObject *path); + +static PyObject * +os__path_islink(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + 
#if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + + #define NUM_KEYWORDS 1 + static struct { + PyGC_Head _this_is_not_used; + PyObject_VAR_HEAD + PyObject *ob_item[NUM_KEYWORDS]; + } _kwtuple = { + .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) + .ob_item = { &_Py_ID(path), }, + }; + #undef NUM_KEYWORDS + #define KWTUPLE (&_kwtuple.ob_base.ob_base) + + #else // !Py_BUILD_CORE + # define KWTUPLE NULL + #endif // !Py_BUILD_CORE + + static const char * const _keywords[] = {"path", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "_path_islink", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[1]; + PyObject *path; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 1, 1, 0, argsbuf); + if (!args) { + goto exit; + } + path = args[0]; + return_value = os__path_islink_impl(module, path); + +exit: + return return_value; +} + +#endif /* defined(MS_WINDOWS) */ + PyDoc_STRVAR(os__path_normpath__doc__, "_path_normpath($module, /, path)\n" "--\n" @@ -11041,6 +11277,22 @@ os_waitstatus_to_exitcode(PyObject *module, PyObject *const *args, Py_ssize_t na #define OS__PATH_SPLITROOT_METHODDEF #endif /* !defined(OS__PATH_SPLITROOT_METHODDEF) */ +#ifndef OS__PATH_ISDIR_METHODDEF + #define OS__PATH_ISDIR_METHODDEF +#endif /* !defined(OS__PATH_ISDIR_METHODDEF) */ + +#ifndef OS__PATH_ISFILE_METHODDEF + #define OS__PATH_ISFILE_METHODDEF +#endif /* !defined(OS__PATH_ISFILE_METHODDEF) */ + +#ifndef OS__PATH_EXISTS_METHODDEF + #define OS__PATH_EXISTS_METHODDEF +#endif /* !defined(OS__PATH_EXISTS_METHODDEF) */ + +#ifndef OS__PATH_ISLINK_METHODDEF + #define OS__PATH_ISLINK_METHODDEF +#endif /* !defined(OS__PATH_ISLINK_METHODDEF) */ + #ifndef OS_NICE_METHODDEF #define OS_NICE_METHODDEF #endif /* !defined(OS_NICE_METHODDEF) */ @@ -11560,4 +11812,4 @@ os_waitstatus_to_exitcode(PyObject *module, PyObject *const *args, Py_ssize_t na #ifndef OS_WAITSTATUS_TO_EXITCODE_METHODDEF #define OS_WAITSTATUS_TO_EXITCODE_METHODDEF #endif /* !defined(OS_WAITSTATUS_TO_EXITCODE_METHODDEF) */ -/*[clinic end generated code: output=41eab6c3523792a9 input=a9049054013a1b77]*/ +/*[clinic end generated code: output=a3f76228b549e8ec input=a9049054013a1b77]*/ diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c index b84fb0d280f4..cba6cea48b77 100644 --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -4490,6 +4490,311 @@ os__path_splitroot_impl(PyObject *module, path_t *path) } +/*[clinic input] +os._path_isdir + + path: 'O' + +Return true if the pathname refers to an existing directory. 
+ +[clinic start generated code]*/ + +static PyObject * +os__path_isdir_impl(PyObject *module, PyObject *path) +/*[clinic end generated code: output=00faea0af309669d input=b1d2571cf7291aaf]*/ +{ + HANDLE hfile; + BOOL close_file = TRUE; + FILE_BASIC_INFO info; + path_t _path = PATH_T_INITIALIZE("isdir", "path", 0, 1); + int result; + + if (!path_converter(path, &_path)) { + path_cleanup(&_path); + if (PyErr_ExceptionMatches(PyExc_ValueError)) { + PyErr_Clear(); + Py_RETURN_FALSE; + } + return NULL; + } + + Py_BEGIN_ALLOW_THREADS + if (_path.fd != -1) { + hfile = _Py_get_osfhandle_noraise(_path.fd); + close_file = FALSE; + } + else { + hfile = CreateFileW(_path.wide, FILE_READ_ATTRIBUTES, 0, NULL, + OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL); + } + if (hfile != INVALID_HANDLE_VALUE) { + if (GetFileInformationByHandleEx(hfile, FileBasicInfo, &info, + sizeof(info))) + { + result = info.FileAttributes & FILE_ATTRIBUTE_DIRECTORY; + } + else { + result = 0; + } + if (close_file) { + CloseHandle(hfile); + } + } + else { + STRUCT_STAT st; + switch (GetLastError()) { + case ERROR_ACCESS_DENIED: + case ERROR_SHARING_VIOLATION: + case ERROR_CANT_ACCESS_FILE: + case ERROR_INVALID_PARAMETER: + if (STAT(_path.wide, &st)) { + result = 0; + } + else { + result = S_ISDIR(st.st_mode); + } + break; + default: + result = 0; + } + } + Py_END_ALLOW_THREADS + + path_cleanup(&_path); + if (result) { + Py_RETURN_TRUE; + } + Py_RETURN_FALSE; +} + + +/*[clinic input] +os._path_isfile + + path: 'O' + +Test whether a path is a regular file + +[clinic start generated code]*/ + +static PyObject * +os__path_isfile_impl(PyObject *module, PyObject *path) +/*[clinic end generated code: output=2394ed7c4b5cfd85 input=de22d74960ade365]*/ +{ + HANDLE hfile; + BOOL close_file = TRUE; + FILE_BASIC_INFO info; + path_t _path = PATH_T_INITIALIZE("isfile", "path", 0, 1); + int result; + + if (!path_converter(path, &_path)) { + path_cleanup(&_path); + if (PyErr_ExceptionMatches(PyExc_ValueError)) { + PyErr_Clear(); + Py_RETURN_FALSE; + } + return NULL; + } + + Py_BEGIN_ALLOW_THREADS + if (_path.fd != -1) { + hfile = _Py_get_osfhandle_noraise(_path.fd); + close_file = FALSE; + } + else { + hfile = CreateFileW(_path.wide, FILE_READ_ATTRIBUTES, 0, NULL, + OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL); + } + if (hfile != INVALID_HANDLE_VALUE) { + if (GetFileInformationByHandleEx(hfile, FileBasicInfo, &info, + sizeof(info))) + { + result = !(info.FileAttributes & FILE_ATTRIBUTE_DIRECTORY); + } + else { + result = 0; + } + if (close_file) { + CloseHandle(hfile); + } + } + else { + STRUCT_STAT st; + switch (GetLastError()) { + case ERROR_ACCESS_DENIED: + case ERROR_SHARING_VIOLATION: + case ERROR_CANT_ACCESS_FILE: + case ERROR_INVALID_PARAMETER: + if (STAT(_path.wide, &st)) { + result = 0; + } + else { + result = S_ISREG(st.st_mode); + } + break; + default: + result = 0; + } + } + Py_END_ALLOW_THREADS + + path_cleanup(&_path); + if (result) { + Py_RETURN_TRUE; + } + Py_RETURN_FALSE; +} + + +/*[clinic input] +os._path_exists + + path: 'O' + +Test whether a path exists. 
Returns False for broken symbolic links + +[clinic start generated code]*/ + +static PyObject * +os__path_exists_impl(PyObject *module, PyObject *path) +/*[clinic end generated code: output=f508c3b35e13a249 input=380f77cdfa0f7ae8]*/ +{ + HANDLE hfile; + BOOL close_file = TRUE; + path_t _path = PATH_T_INITIALIZE("exists", "path", 0, 1); + int result; + + if (!path_converter(path, &_path)) { + path_cleanup(&_path); + if (PyErr_ExceptionMatches(PyExc_ValueError)) { + PyErr_Clear(); + Py_RETURN_FALSE; + } + return NULL; + } + + Py_BEGIN_ALLOW_THREADS + if (_path.fd != -1) { + hfile = _Py_get_osfhandle_noraise(_path.fd); + close_file = FALSE; + } + else { + hfile = CreateFileW(_path.wide, FILE_READ_ATTRIBUTES, 0, NULL, + OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL); + } + if (hfile != INVALID_HANDLE_VALUE) { + result = 1; + if (close_file) { + CloseHandle(hfile); + } + } + else { + STRUCT_STAT st; + switch (GetLastError()) { + case ERROR_ACCESS_DENIED: + case ERROR_SHARING_VIOLATION: + case ERROR_CANT_ACCESS_FILE: + case ERROR_INVALID_PARAMETER: + if (STAT(_path.wide, &st)) { + result = 0; + } + else { + result = 1; + } + break; + default: + result = 0; + } + } + Py_END_ALLOW_THREADS + + path_cleanup(&_path); + if (result) { + Py_RETURN_TRUE; + } + Py_RETURN_FALSE; +} + + +/*[clinic input] +os._path_islink + + path: 'O' + +Test whether a path is a symbolic link + +[clinic start generated code]*/ + +static PyObject * +os__path_islink_impl(PyObject *module, PyObject *path) +/*[clinic end generated code: output=6d8640b1a390c054 input=38a3cb937ccf59bf]*/ +{ + HANDLE hfile; + BOOL close_file = TRUE; + FILE_ATTRIBUTE_TAG_INFO info; + path_t _path = PATH_T_INITIALIZE("islink", "path", 0, 1); + int result; + + if (!path_converter(path, &_path)) { + path_cleanup(&_path); + if (PyErr_ExceptionMatches(PyExc_ValueError)) { + PyErr_Clear(); + Py_RETURN_FALSE; + } + return NULL; + } + + Py_BEGIN_ALLOW_THREADS + if (_path.fd != -1) { + hfile = _Py_get_osfhandle_noraise(_path.fd); + close_file = FALSE; + } + else { + hfile = CreateFileW(_path.wide, FILE_READ_ATTRIBUTES, 0, NULL, + OPEN_EXISTING, + FILE_FLAG_OPEN_REPARSE_POINT | FILE_FLAG_BACKUP_SEMANTICS, + NULL); + } + if (hfile != INVALID_HANDLE_VALUE) { + if (GetFileInformationByHandleEx(hfile, FileAttributeTagInfo, &info, + sizeof(info))) + { + result = (info.ReparseTag == IO_REPARSE_TAG_SYMLINK); + } + else { + result = 0; + } + if (close_file) { + CloseHandle(hfile); + } + } + else { + STRUCT_STAT st; + switch (GetLastError()) { + case ERROR_ACCESS_DENIED: + case ERROR_SHARING_VIOLATION: + case ERROR_CANT_ACCESS_FILE: + case ERROR_INVALID_PARAMETER: + if (LSTAT(_path.wide, &st)) { + result = 0; + } + else { + result = S_ISLNK(st.st_mode); + } + break; + default: + result = 0; + } + } + Py_END_ALLOW_THREADS + + path_cleanup(&_path); + if (result) { + Py_RETURN_TRUE; + } + Py_RETURN_FALSE; +} + #endif /* MS_WINDOWS */ @@ -15150,6 +15455,11 @@ static PyMethodDef posix_methods[] = { OS_WAITSTATUS_TO_EXITCODE_METHODDEF OS_SETNS_METHODDEF OS_UNSHARE_METHODDEF + + OS__PATH_ISDIR_METHODDEF + OS__PATH_ISFILE_METHODDEF + OS__PATH_ISLINK_METHODDEF + OS__PATH_EXISTS_METHODDEF {NULL, NULL} /* Sentinel */ }; From webhook-mailer at python.org Wed Feb 8 09:49:00 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 08 Feb 2023 14:49:00 -0000 Subject: [Python-checkins] gh-101614: Don't treat python3_d.dll as a Python DLL when checking extension modules for incompatibility (GH-101615) Message-ID: 
https://github.com/python/cpython/commit/c63d7c95bafd0beabc36ea1461966f1ef8fe9c7f commit: c63d7c95bafd0beabc36ea1461966f1ef8fe9c7f branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-08T06:48:42-08:00 summary: gh-101614: Don't treat python3_d.dll as a Python DLL when checking extension modules for incompatibility (GH-101615) (cherry picked from commit 3a88de7a0af00872d9d57e1d98bc2f035cb15a1c) Co-authored-by: David Hewitt <1939362+davidhewitt at users.noreply.github.com> files: A Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst M Python/dynload_win.c diff --git a/Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst b/Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst new file mode 100644 index 000000000000..8ed0995d7892 --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst @@ -0,0 +1 @@ +Correctly handle extensions built against debug binaries that reference ``python3_d.dll``. diff --git a/Python/dynload_win.c b/Python/dynload_win.c index 5702ab2cd71b..96faa4b46297 100644 --- a/Python/dynload_win.c +++ b/Python/dynload_win.c @@ -123,14 +123,15 @@ static char *GetPythonImport (HINSTANCE hModule) !strncmp(import_name,"python",6)) { char *pch; -#ifndef _DEBUG - /* In a release version, don't claim that python3.dll is - a Python DLL. */ + /* Don't claim that python3.dll is a Python DLL. */ +#ifdef _DEBUG + if (strcmp(import_name, "python3_d.dll") == 0) { +#else if (strcmp(import_name, "python3.dll") == 0) { +#endif import_data += 20; continue; } -#endif /* Ensure python prefix is followed only by numbers to the end of the basename */ From webhook-mailer at python.org Wed Feb 8 09:50:51 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 08 Feb 2023 14:50:51 -0000 Subject: [Python-checkins] gh-101614: Don't treat python3_d.dll as a Python DLL when checking extension modules for incompatibility (GH-101615) Message-ID: https://github.com/python/cpython/commit/e8ce85de594e62c73db8d523408c9e1b720f0d0b commit: e8ce85de594e62c73db8d523408c9e1b720f0d0b branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-08T06:50:43-08:00 summary: gh-101614: Don't treat python3_d.dll as a Python DLL when checking extension modules for incompatibility (GH-101615) (cherry picked from commit 3a88de7a0af00872d9d57e1d98bc2f035cb15a1c) Co-authored-by: David Hewitt <1939362+davidhewitt at users.noreply.github.com> files: A Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst M Python/dynload_win.c diff --git a/Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst b/Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst new file mode 100644 index 000000000000..8ed0995d7892 --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-07-18-22-54.gh-issue-101614.NjVP0n.rst @@ -0,0 +1 @@ +Correctly handle extensions built against debug binaries that reference ``python3_d.dll``. 
diff --git a/Python/dynload_win.c b/Python/dynload_win.c index b43e9fc26f61..5dc4095237a6 100644 --- a/Python/dynload_win.c +++ b/Python/dynload_win.c @@ -125,14 +125,15 @@ static char *GetPythonImport (HINSTANCE hModule) !strncmp(import_name,"python",6)) { char *pch; -#ifndef _DEBUG - /* In a release version, don't claim that python3.dll is - a Python DLL. */ + /* Don't claim that python3.dll is a Python DLL. */ +#ifdef _DEBUG + if (strcmp(import_name, "python3_d.dll") == 0) { +#else if (strcmp(import_name, "python3.dll") == 0) { +#endif import_data += 20; continue; } -#endif /* Ensure python prefix is followed only by numbers to the end of the basename */ From webhook-mailer at python.org Wed Feb 8 10:49:13 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 08 Feb 2023 15:49:13 -0000 Subject: [Python-checkins] gh-101670: typo fix in PyImport_AppendInittab() (GH-101672) Message-ID: https://github.com/python/cpython/commit/35dd55005ee9aea2843eff7f514ee689a0995df8 commit: 35dd55005ee9aea2843eff7f514ee689a0995df8 branch: main author: Sergey B Kirpichev committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-08T07:49:04-08:00 summary: gh-101670: typo fix in PyImport_AppendInittab() (GH-101672) files: M Python/import.c diff --git a/Python/import.c b/Python/import.c index 1318c09d9b32..795d36896648 100644 --- a/Python/import.c +++ b/Python/import.c @@ -2699,7 +2699,7 @@ PyImport_AppendInittab(const char *name, PyObject* (*initfunc)(void)) struct _inittab newtab[2]; if (_PyRuntime.imports.inittab != NULL) { - Py_FatalError("PyImport_AppendInittab() may be be called after Py_Initialize()"); + Py_FatalError("PyImport_AppendInittab() may not be called after Py_Initialize()"); } memset(newtab, '\0', sizeof newtab); From webhook-mailer at python.org Wed Feb 8 11:51:08 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 08 Feb 2023 16:51:08 -0000 Subject: [Python-checkins] gh-100221: Fix creating dirs in `make sharedinstall` (GH-100329) Message-ID: https://github.com/python/cpython/commit/2a8bf2580441147f1a15e61229d669abc0ab86ee commit: 2a8bf2580441147f1a15e61229d669abc0ab86ee branch: main author: Michał Górny committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-08T08:50:43-08:00 summary: gh-100221: Fix creating dirs in `make sharedinstall` (GH-100329) Fix creating install directories in `make sharedinstall` if they exist already outside `DESTDIR`. The previous make rules assumed that the directories would be created via a dependency on a rule for `$(DESTSHARED)` that did not fire if the directory did exist outside `$(DESTDIR)`. While technically `$(DESTDIR)` could be prepended to the rule name, moving the rules for creating directories straight into the `sharedinstall` rule seems to fit the common practices better. Since the rule explicitly checks whether the individual directories exist anyway, there seems to be no reason to rely on make determining that implicitly as well. files: A Misc/NEWS.d/next/Build/2022-12-18-08-33-28.gh-issue-100221.K94Ct3.rst M Makefile.pre.in diff --git a/Makefile.pre.in b/Makefile.pre.in index 3641c4eeebee..2559df8e7495 100644 --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -1816,7 +1816,15 @@ commoninstall: check-clean-src @FRAMEWORKALTINSTALLFIRST@ \ # Install shared libraries enabled by Setup DESTDIRS= $(exec_prefix) $(LIBDIR) $(BINLIBDEST) $(DESTSHARED) -sharedinstall: $(DESTSHARED) all +sharedinstall: all + @for i in $(DESTDIRS); \ + do \ + if test !
-d $(DESTDIR)$$i; then \ + echo "Creating directory $$i"; \ + $(INSTALL) -d -m $(DIRMODE) $(DESTDIR)$$i; \ + else true; \ + fi; \ + done @for i in X $(SHAREDMODS); do \ if test $$i != X; then \ echo $(INSTALL_SHARED) $$i $(DESTSHARED)/`basename $$i`; \ @@ -1828,17 +1836,6 @@ sharedinstall: $(DESTSHARED) all fi; \ done - -$(DESTSHARED): - @for i in $(DESTDIRS); \ - do \ - if test ! -d $(DESTDIR)$$i; then \ - echo "Creating directory $$i"; \ - $(INSTALL) -d -m $(DIRMODE) $(DESTDIR)$$i; \ - else true; \ - fi; \ - done - # Install the interpreter with $(VERSION) affixed # This goes into $(exec_prefix) altbininstall: $(BUILDPYTHON) @FRAMEWORKPYTHONW@ diff --git a/Misc/NEWS.d/next/Build/2022-12-18-08-33-28.gh-issue-100221.K94Ct3.rst b/Misc/NEWS.d/next/Build/2022-12-18-08-33-28.gh-issue-100221.K94Ct3.rst new file mode 100644 index 000000000000..27c948330cfc --- /dev/null +++ b/Misc/NEWS.d/next/Build/2022-12-18-08-33-28.gh-issue-100221.K94Ct3.rst @@ -0,0 +1,2 @@ +Fix creating install directories in ``make sharedinstall`` if they exist +outside ``DESTDIR`` already. From webhook-mailer at python.org Wed Feb 8 13:02:24 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Wed, 08 Feb 2023 18:02:24 -0000 Subject: [Python-checkins] GH-101696: invalidate type version tag in `_PyStaticType_Dealloc` (#101697) Message-ID: https://github.com/python/cpython/commit/d9de0792482d2ded364b0c7d2867b97a5da41b12 commit: d9de0792482d2ded364b0c7d2867b97a5da41b12 branch: main author: Kumar Aditya <59607654+kumaraditya303 at users.noreply.github.com> committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-08T23:32:15+05:30 summary: GH-101696: invalidate type version tag in `_PyStaticType_Dealloc` (#101697) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-08-17-13-31.gh-issue-101696.seJhTt.rst M Objects/typeobject.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-08-17-13-31.gh-issue-101696.seJhTt.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-08-17-13-31.gh-issue-101696.seJhTt.rst new file mode 100644 index 000000000000..ff2bbb4b5642 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-08-17-13-31.gh-issue-101696.seJhTt.rst @@ -0,0 +1 @@ +Invalidate type version tag in ``_PyStaticType_Dealloc`` for static types, avoiding bug where a false cache hit could crash the interpreter. Patch by Kumar Aditya. diff --git a/Objects/typeobject.c b/Objects/typeobject.c index 59e0bf2995ba..bf6ccdb77a90 100644 --- a/Objects/typeobject.c +++ b/Objects/typeobject.c @@ -4469,6 +4469,8 @@ _PyStaticType_Dealloc(PyTypeObject *type) } type->tp_flags &= ~Py_TPFLAGS_READY; + type->tp_flags &= ~Py_TPFLAGS_VALID_VERSION_TAG; + type->tp_version_tag = 0; if (type->tp_flags & _Py_TPFLAGS_STATIC_BUILTIN) { _PyStaticType_ClearWeakRefs(type); From webhook-mailer at python.org Wed Feb 8 14:40:17 2023 From: webhook-mailer at python.org (gvanrossum) Date: Wed, 08 Feb 2023 19:40:17 -0000 Subject: [Python-checkins] gh-98831: Modernize CALL and family (#101508) Message-ID: https://github.com/python/cpython/commit/616aec1ff148ba4570aa2d4b8ea420c715c206e4 commit: 616aec1ff148ba4570aa2d4b8ea420c715c206e4 branch: main author: Guido van Rossum committer: gvanrossum date: 2023-02-08T11:40:10-08:00 summary: gh-98831: Modernize CALL and family (#101508) Includes a slight improvement to `DECREF_INPUTS()`. 
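The one-line summary above is terse; the convention it introduces is spelled out in the comments added to Python/bytecodes.c in the diff that follows: every member of the CALL family now starts from a stack shaped either as [NULL, callable, arg1, arg2, ...] or as [method, self, arg1, arg2, ...], and normalizes both shapes into a single (callable, args, total_args) view before dispatching. The standalone C sketch below models only that normalization step, using plain strings in place of PyObject* values; normalize_call_args() and the sample stacks are invented for illustration and are not part of the patch or of CPython.

/* Illustrative model only -- not CPython code.  The function name
   normalize_call_args() and the char* "objects" stand in for the
   PyObject* values the interpreter keeps on its evaluation stack. */
#include <stdio.h>
#include <stddef.h>

static void
normalize_call_args(const char *method, const char *callable,
                    const char **args, int oparg)
{
    /* Mirrors the pattern repeated in the diff below: when the slot
       under the callable holds a method (non-NULL), shift the args
       window down by one so that 'self' becomes args[0]. */
    int is_meth = (method != NULL);
    int total_args = oparg;
    if (is_meth) {
        callable = method;
        args--;            /* args[0] is now the old callable slot (self) */
        total_args++;
    }
    printf("callable=%s total_args=%d first_arg=%s\n",
           callable, total_args, total_args ? args[0] : "(none)");
}

int
main(void)
{
    /* Shape 1: [NULL, callable, arg1, arg2] -- a plain function call. */
    const char *plain[] = {NULL, "func", "arg1", "arg2"};
    normalize_call_args(plain[0], plain[1], &plain[2], 2);

    /* Shape 2: [method, self, arg1, arg2] -- a bound-method call. */
    const char *bound[] = {"meth", "self", "arg1", "arg2"};
    normalize_call_args(bound[0], bound[1], &bound[2], 2);
    return 0;
}

Keeping both shapes on the stack and leaving the shift to each instruction is what lets a specialization such as CALL_BOUND_METHOD_EXACT_ARGS rewrite the two slots in place and fall through to CALL_PY_EXACT_ARGS, as the hunks below show.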
files: M Python/bytecodes.c M Python/ceval.c M Python/ceval_macros.h M Python/generated_cases.c.h M Python/opcode_metadata.h M Tools/cases_generator/generate_cases.py diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 1169d8d172dd..2b9f12fefa14 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -2360,70 +2360,87 @@ dummy_func( assert(oparg & 1); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_BOUND_METHOD_EXACT_ARGS) { - DEOPT_IF(is_method(stack_pointer, oparg), CALL); - PyObject *function = PEEK(oparg + 1); - DEOPT_IF(Py_TYPE(function) != &PyMethod_Type, CALL); - STAT_INC(CALL, hit); - PyObject *self = ((PyMethodObject *)function)->im_self; - PEEK(oparg + 1) = Py_NewRef(self); - PyObject *meth = ((PyMethodObject *)function)->im_func; - PEEK(oparg + 2) = Py_NewRef(meth); - Py_DECREF(function); - GO_TO_INSTRUCTION(CALL_PY_EXACT_ARGS); - } - inst(KW_NAMES, (--)) { assert(kwnames == NULL); assert(oparg < PyTuple_GET_SIZE(consts)); kwnames = GETITEM(consts, oparg); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL) { + // Cache layout: counter/1, func_version/2, min_args/1 + // Neither CALL_INTRINSIC_1 nor CALL_FUNCTION_EX are members! + family(call, INLINE_CACHE_ENTRIES_CALL) = { + CALL, + CALL_BOUND_METHOD_EXACT_ARGS, + CALL_PY_EXACT_ARGS, + CALL_PY_WITH_DEFAULTS, + CALL_NO_KW_TYPE_1, + CALL_NO_KW_STR_1, + CALL_NO_KW_TUPLE_1, + CALL_BUILTIN_CLASS, + CALL_NO_KW_BUILTIN_O, + CALL_NO_KW_BUILTIN_FAST, + CALL_BUILTIN_FAST_WITH_KEYWORDS, + CALL_NO_KW_LEN, + CALL_NO_KW_ISINSTANCE, + CALL_NO_KW_LIST_APPEND, + CALL_NO_KW_METHOD_DESCRIPTOR_O, + CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS, + CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS, + CALL_NO_KW_METHOD_DESCRIPTOR_FAST, + }; + + // On entry, the stack is either + // [NULL, callable, arg1, arg2, ...] + // or + // [method, self, arg1, arg2, ...] + // (Some args may be keywords, see KW_NAMES, which sets 'kwnames'.) + // On exit, the stack is [result]. + // When calling Python, inline the call using DISPATCH_INLINED(). 
+ inst(CALL, (unused/1, unused/2, unused/1, method, callable, args[oparg] -- res)) { + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } #if ENABLE_SPECIALIZATION _PyCallCache *cache = (_PyCallCache *)next_instr; if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { assert(cframe.use_tracing == 0); - int is_meth = is_method(stack_pointer, oparg); - int nargs = oparg + is_meth; - PyObject *callable = PEEK(nargs + 1); next_instr--; - _Py_Specialize_Call(callable, next_instr, nargs, kwnames); + _Py_Specialize_Call(callable, next_instr, total_args, kwnames); DISPATCH_SAME_OPARG(); } STAT_INC(CALL, deferred); DECREMENT_ADAPTIVE_COUNTER(cache->counter); #endif /* ENABLE_SPECIALIZATION */ - int total_args, is_meth; - is_meth = is_method(stack_pointer, oparg); - PyObject *function = PEEK(oparg + 1); - if (!is_meth && Py_TYPE(function) == &PyMethod_Type) { - PyObject *self = ((PyMethodObject *)function)->im_self; - PEEK(oparg+1) = Py_NewRef(self); - PyObject *meth = ((PyMethodObject *)function)->im_func; - PEEK(oparg+2) = Py_NewRef(meth); - Py_DECREF(function); - is_meth = 1; - } - total_args = oparg + is_meth; - function = PEEK(total_args + 1); + if (!is_meth && Py_TYPE(callable) == &PyMethod_Type) { + is_meth = 1; // For consistenct; it's dead, though + args--; + total_args++; + PyObject *self = ((PyMethodObject *)callable)->im_self; + args[0] = Py_NewRef(self); + method = ((PyMethodObject *)callable)->im_func; + args[-1] = Py_NewRef(method); + Py_DECREF(callable); + callable = method; + } int positional_args = total_args - KWNAMES_LEN(); // Check if the call can be inlined or not - if (Py_TYPE(function) == &PyFunction_Type && + if (Py_TYPE(callable) == &PyFunction_Type && tstate->interp->eval_frame == NULL && - ((PyFunctionObject *)function)->vectorcall == _PyFunction_Vectorcall) + ((PyFunctionObject *)callable)->vectorcall == _PyFunction_Vectorcall) { - int code_flags = ((PyCodeObject*)PyFunction_GET_CODE(function))->co_flags; - PyObject *locals = code_flags & CO_OPTIMIZED ? NULL : Py_NewRef(PyFunction_GET_GLOBALS(function)); - STACK_SHRINK(total_args); + int code_flags = ((PyCodeObject*)PyFunction_GET_CODE(callable))->co_flags; + PyObject *locals = code_flags & CO_OPTIMIZED ? NULL : Py_NewRef(PyFunction_GET_GLOBALS(callable)); _PyInterpreterFrame *new_frame = _PyEvalFramePushAndInit( - tstate, (PyFunctionObject *)function, locals, - stack_pointer, positional_args, kwnames + tstate, (PyFunctionObject *)callable, locals, + args, positional_args, kwnames ); kwnames = NULL; - STACK_SHRINK(2-is_meth); + // Manipulate stack directly since we leave using DISPATCH_INLINED(). + STACK_SHRINK(oparg + 2); // The frame has stolen all the arguments from the stack, // so there is no need to clean them up. 
if (new_frame == NULL) { @@ -2433,190 +2450,180 @@ dummy_func( DISPATCH_INLINED(new_frame); } /* Callable is not a normal Python function */ - PyObject *res; if (cframe.use_tracing) { res = trace_call_function( - tstate, function, stack_pointer-total_args, + tstate, callable, args, positional_args, kwnames); } else { res = PyObject_Vectorcall( - function, stack_pointer-total_args, + callable, args, positional_args | PY_VECTORCALL_ARGUMENTS_OFFSET, kwnames); } kwnames = NULL; assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); - Py_DECREF(function); - /* Clear the stack */ - STACK_SHRINK(total_args); + Py_DECREF(callable); for (int i = 0; i < total_args; i++) { - Py_DECREF(stack_pointer[i]); + Py_DECREF(args[i]); } - STACK_SHRINK(2-is_meth); - PUSH(res); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + ERROR_IF(res == NULL, error); CHECK_EVAL_BREAKER(); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_PY_EXACT_ARGS) { + // Start out with [NULL, bound_method, arg1, arg2, ...] + // Transform to [callable, self, arg1, arg2, ...] + // Then fall through to CALL_PY_EXACT_ARGS + inst(CALL_BOUND_METHOD_EXACT_ARGS, (unused/1, unused/2, unused/1, method, callable, unused[oparg] -- unused)) { + DEOPT_IF(method != NULL, CALL); + DEOPT_IF(Py_TYPE(callable) != &PyMethod_Type, CALL); + STAT_INC(CALL, hit); + PyObject *self = ((PyMethodObject *)callable)->im_self; + PEEK(oparg + 1) = Py_NewRef(self); // callable + PyObject *meth = ((PyMethodObject *)callable)->im_func; + PEEK(oparg + 2) = Py_NewRef(meth); // method + Py_DECREF(callable); + GO_TO_INSTRUCTION(CALL_PY_EXACT_ARGS); + } + + inst(CALL_PY_EXACT_ARGS, (unused/1, func_version/2, unused/1, method, callable, args[oparg] -- unused)) { assert(kwnames == NULL); DEOPT_IF(tstate->interp->eval_frame, CALL); - _PyCallCache *cache = (_PyCallCache *)next_instr; - int is_meth = is_method(stack_pointer, oparg); - int argcount = oparg + is_meth; - PyObject *callable = PEEK(argcount + 1); + int is_meth = method != NULL; + int argcount = oparg; + if (is_meth) { + callable = method; + args--; + argcount++; + } DEOPT_IF(!PyFunction_Check(callable), CALL); PyFunctionObject *func = (PyFunctionObject *)callable; - DEOPT_IF(func->func_version != read_u32(cache->func_version), CALL); + DEOPT_IF(func->func_version != func_version, CALL); PyCodeObject *code = (PyCodeObject *)func->func_code; DEOPT_IF(code->co_argcount != argcount, CALL); DEOPT_IF(!_PyThreadState_HasStackSpace(tstate, code->co_framesize), CALL); STAT_INC(CALL, hit); _PyInterpreterFrame *new_frame = _PyFrame_PushUnchecked(tstate, func, argcount); - STACK_SHRINK(argcount); for (int i = 0; i < argcount; i++) { - new_frame->localsplus[i] = stack_pointer[i]; + new_frame->localsplus[i] = args[i]; } - STACK_SHRINK(2-is_meth); + // Manipulate stack directly since we leave using DISPATCH_INLINED(). 
+ STACK_SHRINK(oparg + 2); JUMPBY(INLINE_CACHE_ENTRIES_CALL); DISPATCH_INLINED(new_frame); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_PY_WITH_DEFAULTS) { + inst(CALL_PY_WITH_DEFAULTS, (unused/1, func_version/2, min_args/1, method, callable, args[oparg] -- unused)) { assert(kwnames == NULL); DEOPT_IF(tstate->interp->eval_frame, CALL); - _PyCallCache *cache = (_PyCallCache *)next_instr; - int is_meth = is_method(stack_pointer, oparg); - int argcount = oparg + is_meth; - PyObject *callable = PEEK(argcount + 1); + int is_meth = method != NULL; + int argcount = oparg; + if (is_meth) { + callable = method; + args--; + argcount++; + } DEOPT_IF(!PyFunction_Check(callable), CALL); PyFunctionObject *func = (PyFunctionObject *)callable; - DEOPT_IF(func->func_version != read_u32(cache->func_version), CALL); + DEOPT_IF(func->func_version != func_version, CALL); PyCodeObject *code = (PyCodeObject *)func->func_code; DEOPT_IF(argcount > code->co_argcount, CALL); - int minargs = cache->min_args; - DEOPT_IF(argcount < minargs, CALL); + DEOPT_IF(argcount < min_args, CALL); DEOPT_IF(!_PyThreadState_HasStackSpace(tstate, code->co_framesize), CALL); STAT_INC(CALL, hit); _PyInterpreterFrame *new_frame = _PyFrame_PushUnchecked(tstate, func, code->co_argcount); - STACK_SHRINK(argcount); for (int i = 0; i < argcount; i++) { - new_frame->localsplus[i] = stack_pointer[i]; + new_frame->localsplus[i] = args[i]; } for (int i = argcount; i < code->co_argcount; i++) { - PyObject *def = PyTuple_GET_ITEM(func->func_defaults, - i - minargs); + PyObject *def = PyTuple_GET_ITEM(func->func_defaults, i - min_args); new_frame->localsplus[i] = Py_NewRef(def); } - STACK_SHRINK(2-is_meth); + // Manipulate stack and cache directly since we leave using DISPATCH_INLINED(). + STACK_SHRINK(oparg + 2); JUMPBY(INLINE_CACHE_ENTRIES_CALL); DISPATCH_INLINED(new_frame); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_NO_KW_TYPE_1) { + inst(CALL_NO_KW_TYPE_1, (unused/1, unused/2, unused/1, null, callable, args[oparg] -- res)) { assert(kwnames == NULL); assert(cframe.use_tracing == 0); assert(oparg == 1); - DEOPT_IF(is_method(stack_pointer, 1), CALL); - PyObject *obj = TOP(); - PyObject *callable = SECOND(); + DEOPT_IF(null != NULL, CALL); + PyObject *obj = args[0]; DEOPT_IF(callable != (PyObject *)&PyType_Type, CALL); STAT_INC(CALL, hit); - JUMPBY(INLINE_CACHE_ENTRIES_CALL); - PyObject *res = Py_NewRef(Py_TYPE(obj)); - Py_DECREF(callable); + res = Py_NewRef(Py_TYPE(obj)); Py_DECREF(obj); - STACK_SHRINK(2); - SET_TOP(res); + Py_DECREF(&PyType_Type); // I.e., callable } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_NO_KW_STR_1) { + inst(CALL_NO_KW_STR_1, (unused/1, unused/2, unused/1, null, callable, args[oparg] -- res)) { assert(kwnames == NULL); assert(cframe.use_tracing == 0); assert(oparg == 1); - DEOPT_IF(is_method(stack_pointer, 1), CALL); - PyObject *callable = PEEK(2); + DEOPT_IF(null != NULL, CALL); DEOPT_IF(callable != (PyObject *)&PyUnicode_Type, CALL); STAT_INC(CALL, hit); - PyObject *arg = TOP(); - PyObject *res = PyObject_Str(arg); + PyObject *arg = args[0]; + res = PyObject_Str(arg); Py_DECREF(arg); - Py_DECREF(&PyUnicode_Type); - STACK_SHRINK(2); - SET_TOP(res); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + Py_DECREF(&PyUnicode_Type); // I.e., callable + ERROR_IF(res == NULL, error); CHECK_EVAL_BREAKER(); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_NO_KW_TUPLE_1) { + inst(CALL_NO_KW_TUPLE_1, (unused/1, unused/2, unused/1, null, callable, 
args[oparg] -- res)) { assert(kwnames == NULL); assert(oparg == 1); - DEOPT_IF(is_method(stack_pointer, 1), CALL); - PyObject *callable = PEEK(2); + DEOPT_IF(null != NULL, CALL); DEOPT_IF(callable != (PyObject *)&PyTuple_Type, CALL); STAT_INC(CALL, hit); - PyObject *arg = TOP(); - PyObject *res = PySequence_Tuple(arg); + PyObject *arg = args[0]; + res = PySequence_Tuple(arg); Py_DECREF(arg); - Py_DECREF(&PyTuple_Type); - STACK_SHRINK(2); - SET_TOP(res); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + Py_DECREF(&PyTuple_Type); // I.e., tuple + ERROR_IF(res == NULL, error); CHECK_EVAL_BREAKER(); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_BUILTIN_CLASS) { - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + inst(CALL_BUILTIN_CLASS, (unused/1, unused/2, unused/1, method, callable, args[oparg] -- res)) { + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } int kwnames_len = KWNAMES_LEN(); - PyObject *callable = PEEK(total_args + 1); DEOPT_IF(!PyType_Check(callable), CALL); PyTypeObject *tp = (PyTypeObject *)callable; DEOPT_IF(tp->tp_vectorcall == NULL, CALL); STAT_INC(CALL, hit); - STACK_SHRINK(total_args); - PyObject *res = tp->tp_vectorcall((PyObject *)tp, stack_pointer, - total_args-kwnames_len, kwnames); + res = tp->tp_vectorcall((PyObject *)tp, args, + total_args - kwnames_len, kwnames); kwnames = NULL; /* Free the arguments. */ for (int i = 0; i < total_args; i++) { - Py_DECREF(stack_pointer[i]); + Py_DECREF(args[i]); } Py_DECREF(tp); - STACK_SHRINK(1-is_meth); - SET_TOP(res); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + ERROR_IF(res == NULL, error); CHECK_EVAL_BREAKER(); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_NO_KW_BUILTIN_O) { + inst(CALL_NO_KW_BUILTIN_O, (unused/1, unused/2, unused/1, method, callable, args[oparg] -- res)) { assert(cframe.use_tracing == 0); /* Builtin METH_O functions */ assert(kwnames == NULL); - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } DEOPT_IF(total_args != 1, CALL); - PyObject *callable = PEEK(total_args + 1); DEOPT_IF(!PyCFunction_CheckExact(callable), CALL); DEOPT_IF(PyCFunction_GET_FLAGS(callable) != METH_O, CALL); STAT_INC(CALL, hit); @@ -2626,81 +2633,74 @@ dummy_func( if (_Py_EnterRecursiveCallTstate(tstate, " while calling a Python object")) { goto error; } - PyObject *arg = TOP(); - PyObject *res = _PyCFunction_TrampolineCall(cfunc, PyCFunction_GET_SELF(callable), arg); + PyObject *arg = args[0]; + res = _PyCFunction_TrampolineCall(cfunc, PyCFunction_GET_SELF(callable), arg); _Py_LeaveRecursiveCallTstate(tstate); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); Py_DECREF(arg); Py_DECREF(callable); - STACK_SHRINK(2-is_meth); - SET_TOP(res); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + ERROR_IF(res == NULL, error); CHECK_EVAL_BREAKER(); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_NO_KW_BUILTIN_FAST) { + inst(CALL_NO_KW_BUILTIN_FAST, (unused/1, unused/2, unused/1, method, callable, args[oparg] -- res)) { assert(cframe.use_tracing == 0); /* Builtin METH_FASTCALL functions, without keywords */ assert(kwnames == NULL); - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; - PyObject *callable = 
PEEK(total_args + 1); + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } DEOPT_IF(!PyCFunction_CheckExact(callable), CALL); - DEOPT_IF(PyCFunction_GET_FLAGS(callable) != METH_FASTCALL, - CALL); + DEOPT_IF(PyCFunction_GET_FLAGS(callable) != METH_FASTCALL, CALL); STAT_INC(CALL, hit); PyCFunction cfunc = PyCFunction_GET_FUNCTION(callable); - STACK_SHRINK(total_args); /* res = func(self, args, nargs) */ - PyObject *res = ((_PyCFunctionFast)(void(*)(void))cfunc)( + res = ((_PyCFunctionFast)(void(*)(void))cfunc)( PyCFunction_GET_SELF(callable), - stack_pointer, + args, total_args); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); /* Free the arguments. */ for (int i = 0; i < total_args; i++) { - Py_DECREF(stack_pointer[i]); + Py_DECREF(args[i]); } - STACK_SHRINK(2-is_meth); - PUSH(res); Py_DECREF(callable); - if (res == NULL) { + ERROR_IF(res == NULL, error); /* Not deopting because this doesn't mean our optimization was wrong. `res` can be NULL for valid reasons. Eg. getattr(x, 'invalid'). In those cases an exception is set, so we must handle it. */ - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); CHECK_EVAL_BREAKER(); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_BUILTIN_FAST_WITH_KEYWORDS) { + inst(CALL_BUILTIN_FAST_WITH_KEYWORDS, (unused/1, unused/2, unused/1, method, callable, args[oparg] -- res)) { assert(cframe.use_tracing == 0); /* Builtin METH_FASTCALL | METH_KEYWORDS functions */ - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; - PyObject *callable = PEEK(total_args + 1); + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } DEOPT_IF(!PyCFunction_CheckExact(callable), CALL); DEOPT_IF(PyCFunction_GET_FLAGS(callable) != (METH_FASTCALL | METH_KEYWORDS), CALL); STAT_INC(CALL, hit); - STACK_SHRINK(total_args); /* res = func(self, args, nargs, kwnames) */ _PyCFunctionFastWithKeywords cfunc = (_PyCFunctionFastWithKeywords)(void(*)(void)) PyCFunction_GET_FUNCTION(callable); - PyObject *res = cfunc( + res = cfunc( PyCFunction_GET_SELF(callable), - stack_pointer, + args, total_args - KWNAMES_LEN(), kwnames ); @@ -2709,117 +2709,109 @@ dummy_func( /* Free the arguments. 
*/ for (int i = 0; i < total_args; i++) { - Py_DECREF(stack_pointer[i]); + Py_DECREF(args[i]); } - STACK_SHRINK(2-is_meth); - PUSH(res); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + ERROR_IF(res == NULL, error); CHECK_EVAL_BREAKER(); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_NO_KW_LEN) { + inst(CALL_NO_KW_LEN, (unused/1, unused/2, unused/1, method, callable, args[oparg] -- res)) { assert(cframe.use_tracing == 0); assert(kwnames == NULL); /* len(o) */ - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } DEOPT_IF(total_args != 1, CALL); - PyObject *callable = PEEK(total_args + 1); PyInterpreterState *interp = _PyInterpreterState_GET(); DEOPT_IF(callable != interp->callable_cache.len, CALL); STAT_INC(CALL, hit); - PyObject *arg = TOP(); + PyObject *arg = args[0]; Py_ssize_t len_i = PyObject_Length(arg); if (len_i < 0) { goto error; } - PyObject *res = PyLong_FromSsize_t(len_i); + res = PyLong_FromSsize_t(len_i); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); - STACK_SHRINK(2-is_meth); - SET_TOP(res); Py_DECREF(callable); Py_DECREF(arg); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + ERROR_IF(res == NULL, error); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_NO_KW_ISINSTANCE) { + inst(CALL_NO_KW_ISINSTANCE, (unused/1, unused/2, unused/1, method, callable, args[oparg] -- res)) { assert(cframe.use_tracing == 0); assert(kwnames == NULL); /* isinstance(o, o2) */ - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; - PyObject *callable = PEEK(total_args + 1); + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } DEOPT_IF(total_args != 2, CALL); PyInterpreterState *interp = _PyInterpreterState_GET(); DEOPT_IF(callable != interp->callable_cache.isinstance, CALL); STAT_INC(CALL, hit); - PyObject *cls = POP(); - PyObject *inst = TOP(); + PyObject *cls = args[1]; + PyObject *inst = args[0]; int retval = PyObject_IsInstance(inst, cls); if (retval < 0) { - Py_DECREF(cls); goto error; } - PyObject *res = PyBool_FromLong(retval); + res = PyBool_FromLong(retval); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); - STACK_SHRINK(2-is_meth); - SET_TOP(res); Py_DECREF(inst); Py_DECREF(cls); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + ERROR_IF(res == NULL, error); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_NO_KW_LIST_APPEND) { + // This is secretly a super-instruction + inst(CALL_NO_KW_LIST_APPEND, (unused/1, unused/2, unused/1, method, self, args[oparg] -- unused)) { assert(cframe.use_tracing == 0); assert(kwnames == NULL); assert(oparg == 1); - PyObject *callable = PEEK(3); + assert(method != NULL); PyInterpreterState *interp = _PyInterpreterState_GET(); - DEOPT_IF(callable != interp->callable_cache.list_append, CALL); - PyObject *list = SECOND(); - DEOPT_IF(!PyList_Check(list), CALL); + DEOPT_IF(method != interp->callable_cache.list_append, CALL); + DEOPT_IF(!PyList_Check(self), CALL); STAT_INC(CALL, hit); - PyObject *arg = POP(); - if (_PyList_AppendTakeRef((PyListObject *)list, arg) < 0) { - goto error; + if (_PyList_AppendTakeRef((PyListObject *)self, args[0]) < 0) { + goto pop_1_error; // Since arg is DECREF'ed already } - STACK_SHRINK(2); - 
Py_DECREF(list); - Py_DECREF(callable); + Py_DECREF(self); + Py_DECREF(method); + STACK_SHRINK(3); // CALL + POP_TOP JUMPBY(INLINE_CACHE_ENTRIES_CALL + 1); assert(_Py_OPCODE(next_instr[-1]) == POP_TOP); + DISPATCH(); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_NO_KW_METHOD_DESCRIPTOR_O) { + inst(CALL_NO_KW_METHOD_DESCRIPTOR_O, (unused/1, unused/2, unused/1, method, unused, args[oparg] -- res)) { assert(kwnames == NULL); - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + args--; + total_args++; + } PyMethodDescrObject *callable = (PyMethodDescrObject *)PEEK(total_args + 1); DEOPT_IF(total_args != 2, CALL); DEOPT_IF(!Py_IS_TYPE(callable, &PyMethodDescr_Type), CALL); PyMethodDef *meth = callable->d_method; DEOPT_IF(meth->ml_flags != METH_O, CALL); - PyObject *arg = TOP(); - PyObject *self = SECOND(); + PyObject *arg = args[1]; + PyObject *self = args[0]; DEOPT_IF(!Py_IS_TYPE(self, callable->d_common.d_type), CALL); STAT_INC(CALL, hit); PyCFunction cfunc = meth->ml_meth; @@ -2828,69 +2820,62 @@ dummy_func( if (_Py_EnterRecursiveCallTstate(tstate, " while calling a Python object")) { goto error; } - PyObject *res = _PyCFunction_TrampolineCall(cfunc, self, arg); + res = _PyCFunction_TrampolineCall(cfunc, self, arg); _Py_LeaveRecursiveCallTstate(tstate); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); Py_DECREF(self); Py_DECREF(arg); - STACK_SHRINK(oparg + 1); - SET_TOP(res); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + ERROR_IF(res == NULL, error); CHECK_EVAL_BREAKER(); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS) { - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + inst(CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS, (unused/1, unused/2, unused/1, method, unused, args[oparg] -- res)) { + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + args--; + total_args++; + } PyMethodDescrObject *callable = (PyMethodDescrObject *)PEEK(total_args + 1); DEOPT_IF(!Py_IS_TYPE(callable, &PyMethodDescr_Type), CALL); PyMethodDef *meth = callable->d_method; DEOPT_IF(meth->ml_flags != (METH_FASTCALL|METH_KEYWORDS), CALL); PyTypeObject *d_type = callable->d_common.d_type; - PyObject *self = PEEK(total_args); + PyObject *self = args[0]; DEOPT_IF(!Py_IS_TYPE(self, d_type), CALL); STAT_INC(CALL, hit); - int nargs = total_args-1; - STACK_SHRINK(nargs); + int nargs = total_args - 1; _PyCFunctionFastWithKeywords cfunc = (_PyCFunctionFastWithKeywords)(void(*)(void))meth->ml_meth; - PyObject *res = cfunc(self, stack_pointer, nargs - KWNAMES_LEN(), - kwnames); + res = cfunc(self, args + 1, nargs - KWNAMES_LEN(), kwnames); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); kwnames = NULL; /* Free the arguments. 
*/ - for (int i = 0; i < nargs; i++) { - Py_DECREF(stack_pointer[i]); + for (int i = 0; i < total_args; i++) { + Py_DECREF(args[i]); } - Py_DECREF(self); - STACK_SHRINK(2-is_meth); - SET_TOP(res); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + ERROR_IF(res == NULL, error); CHECK_EVAL_BREAKER(); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS) { + inst(CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS, (unused/1, unused/2, unused/1, method, unused, args[oparg] -- res)) { assert(kwnames == NULL); assert(oparg == 0 || oparg == 1); - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + args--; + total_args++; + } DEOPT_IF(total_args != 1, CALL); PyMethodDescrObject *callable = (PyMethodDescrObject *)SECOND(); DEOPT_IF(!Py_IS_TYPE(callable, &PyMethodDescr_Type), CALL); PyMethodDef *meth = callable->d_method; - PyObject *self = TOP(); + PyObject *self = args[0]; DEOPT_IF(!Py_IS_TYPE(self, callable->d_common.d_type), CALL); DEOPT_IF(meth->ml_flags != METH_NOARGS, CALL); STAT_INC(CALL, hit); @@ -2900,52 +2885,43 @@ dummy_func( if (_Py_EnterRecursiveCallTstate(tstate, " while calling a Python object")) { goto error; } - PyObject *res = _PyCFunction_TrampolineCall(cfunc, self, NULL); + res = _PyCFunction_TrampolineCall(cfunc, self, NULL); _Py_LeaveRecursiveCallTstate(tstate); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); Py_DECREF(self); - STACK_SHRINK(oparg + 1); - SET_TOP(res); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + ERROR_IF(res == NULL, error); CHECK_EVAL_BREAKER(); } - // stack effect: (__0, __array[oparg] -- ) - inst(CALL_NO_KW_METHOD_DESCRIPTOR_FAST) { + inst(CALL_NO_KW_METHOD_DESCRIPTOR_FAST, (unused/1, unused/2, unused/1, method, unused, args[oparg] -- res)) { assert(kwnames == NULL); - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + args--; + total_args++; + } PyMethodDescrObject *callable = (PyMethodDescrObject *)PEEK(total_args + 1); /* Builtin METH_FASTCALL methods, without keywords */ DEOPT_IF(!Py_IS_TYPE(callable, &PyMethodDescr_Type), CALL); PyMethodDef *meth = callable->d_method; DEOPT_IF(meth->ml_flags != METH_FASTCALL, CALL); - PyObject *self = PEEK(total_args); + PyObject *self = args[0]; DEOPT_IF(!Py_IS_TYPE(self, callable->d_common.d_type), CALL); STAT_INC(CALL, hit); _PyCFunctionFast cfunc = (_PyCFunctionFast)(void(*)(void))meth->ml_meth; - int nargs = total_args-1; - STACK_SHRINK(nargs); - PyObject *res = cfunc(self, stack_pointer, nargs); + int nargs = total_args - 1; + res = cfunc(self, args + 1, nargs); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); /* Clear the stack of the arguments. 
*/ - for (int i = 0; i < nargs; i++) { - Py_DECREF(stack_pointer[i]); + for (int i = 0; i < total_args; i++) { + Py_DECREF(args[i]); } - Py_DECREF(self); - STACK_SHRINK(2-is_meth); - SET_TOP(res); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + ERROR_IF(res == NULL, error); CHECK_EVAL_BREAKER(); } @@ -3153,12 +3129,4 @@ dummy_func( // Future families go below this point // -family(call, INLINE_CACHE_ENTRIES_CALL) = { - CALL, CALL_PY_EXACT_ARGS, - CALL_PY_WITH_DEFAULTS, CALL_BOUND_METHOD_EXACT_ARGS, CALL_BUILTIN_CLASS, - CALL_BUILTIN_FAST_WITH_KEYWORDS, CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS, CALL_NO_KW_BUILTIN_FAST, - CALL_NO_KW_BUILTIN_O, CALL_NO_KW_ISINSTANCE, CALL_NO_KW_LEN, - CALL_NO_KW_LIST_APPEND, CALL_NO_KW_METHOD_DESCRIPTOR_FAST, CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS, - CALL_NO_KW_METHOD_DESCRIPTOR_O, CALL_NO_KW_STR_1, CALL_NO_KW_TUPLE_1, - CALL_NO_KW_TYPE_1 }; family(store_fast) = { STORE_FAST, STORE_FAST__LOAD_FAST, STORE_FAST__STORE_FAST }; diff --git a/Python/ceval.c b/Python/ceval.c index a91f5baca885..611d62b0eba9 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -688,12 +688,6 @@ static inline void _Py_LeaveRecursiveCallPy(PyThreadState *tstate) { } -// GH-89279: Must be a macro to be sure it's inlined by MSVC. -#define is_method(stack_pointer, args) (PEEK((args)+2) != NULL) - -#define KWNAMES_LEN() \ - (kwnames == NULL ? 0 : ((int)PyTuple_GET_SIZE(kwnames))) - /* Disable unused label warnings. They are handy for debugging, even if computed gotos aren't used. */ diff --git a/Python/ceval_macros.h b/Python/ceval_macros.h index d7a8f0beeec8..691bf8e1caae 100644 --- a/Python/ceval_macros.h +++ b/Python/ceval_macros.h @@ -347,3 +347,6 @@ GETITEM(PyObject *v, Py_ssize_t i) { } while (0); #define NAME_ERROR_MSG "name '%.200s' is not defined" + +#define KWNAMES_LEN() \ + (kwnames == NULL ? 
0 : ((int)PyTuple_GET_SIZE(kwnames))) diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 09eb6893ebf6..a224d4eb8927 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -2994,19 +2994,6 @@ DISPATCH(); } - TARGET(CALL_BOUND_METHOD_EXACT_ARGS) { - DEOPT_IF(is_method(stack_pointer, oparg), CALL); - PyObject *function = PEEK(oparg + 1); - DEOPT_IF(Py_TYPE(function) != &PyMethod_Type, CALL); - STAT_INC(CALL, hit); - PyObject *self = ((PyMethodObject *)function)->im_self; - PEEK(oparg + 1) = Py_NewRef(self); - PyObject *meth = ((PyMethodObject *)function)->im_func; - PEEK(oparg + 2) = Py_NewRef(meth); - Py_DECREF(function); - GO_TO_INSTRUCTION(CALL_PY_EXACT_ARGS); - } - TARGET(KW_NAMES) { assert(kwnames == NULL); assert(oparg < PyTuple_GET_SIZE(consts)); @@ -3016,48 +3003,55 @@ TARGET(CALL) { PREDICTED(CALL); + static_assert(INLINE_CACHE_ENTRIES_CALL == 4, "incorrect cache size"); + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *method = PEEK(2 + oparg); + PyObject *res; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } #if ENABLE_SPECIALIZATION _PyCallCache *cache = (_PyCallCache *)next_instr; if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { assert(cframe.use_tracing == 0); - int is_meth = is_method(stack_pointer, oparg); - int nargs = oparg + is_meth; - PyObject *callable = PEEK(nargs + 1); next_instr--; - _Py_Specialize_Call(callable, next_instr, nargs, kwnames); + _Py_Specialize_Call(callable, next_instr, total_args, kwnames); DISPATCH_SAME_OPARG(); } STAT_INC(CALL, deferred); DECREMENT_ADAPTIVE_COUNTER(cache->counter); #endif /* ENABLE_SPECIALIZATION */ - int total_args, is_meth; - is_meth = is_method(stack_pointer, oparg); - PyObject *function = PEEK(oparg + 1); - if (!is_meth && Py_TYPE(function) == &PyMethod_Type) { - PyObject *self = ((PyMethodObject *)function)->im_self; - PEEK(oparg+1) = Py_NewRef(self); - PyObject *meth = ((PyMethodObject *)function)->im_func; - PEEK(oparg+2) = Py_NewRef(meth); - Py_DECREF(function); - is_meth = 1; - } - total_args = oparg + is_meth; - function = PEEK(total_args + 1); + if (!is_meth && Py_TYPE(callable) == &PyMethod_Type) { + is_meth = 1; // For consistenct; it's dead, though + args--; + total_args++; + PyObject *self = ((PyMethodObject *)callable)->im_self; + args[0] = Py_NewRef(self); + method = ((PyMethodObject *)callable)->im_func; + args[-1] = Py_NewRef(method); + Py_DECREF(callable); + callable = method; + } int positional_args = total_args - KWNAMES_LEN(); // Check if the call can be inlined or not - if (Py_TYPE(function) == &PyFunction_Type && + if (Py_TYPE(callable) == &PyFunction_Type && tstate->interp->eval_frame == NULL && - ((PyFunctionObject *)function)->vectorcall == _PyFunction_Vectorcall) + ((PyFunctionObject *)callable)->vectorcall == _PyFunction_Vectorcall) { - int code_flags = ((PyCodeObject*)PyFunction_GET_CODE(function))->co_flags; - PyObject *locals = code_flags & CO_OPTIMIZED ? NULL : Py_NewRef(PyFunction_GET_GLOBALS(function)); - STACK_SHRINK(total_args); + int code_flags = ((PyCodeObject*)PyFunction_GET_CODE(callable))->co_flags; + PyObject *locals = code_flags & CO_OPTIMIZED ? 
NULL : Py_NewRef(PyFunction_GET_GLOBALS(callable)); _PyInterpreterFrame *new_frame = _PyEvalFramePushAndInit( - tstate, (PyFunctionObject *)function, locals, - stack_pointer, positional_args, kwnames + tstate, (PyFunctionObject *)callable, locals, + args, positional_args, kwnames ); kwnames = NULL; - STACK_SHRINK(2-is_meth); + // Manipulate stack directly since we leave using DISPATCH_INLINED(). + STACK_SHRINK(oparg + 2); // The frame has stolen all the arguments from the stack, // so there is no need to clean them up. if (new_frame == NULL) { @@ -3067,189 +3061,234 @@ DISPATCH_INLINED(new_frame); } /* Callable is not a normal Python function */ - PyObject *res; if (cframe.use_tracing) { res = trace_call_function( - tstate, function, stack_pointer-total_args, + tstate, callable, args, positional_args, kwnames); } else { res = PyObject_Vectorcall( - function, stack_pointer-total_args, + callable, args, positional_args | PY_VECTORCALL_ARGUMENTS_OFFSET, kwnames); } kwnames = NULL; assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); - Py_DECREF(function); - /* Clear the stack */ - STACK_SHRINK(total_args); + Py_DECREF(callable); for (int i = 0; i < total_args; i++) { - Py_DECREF(stack_pointer[i]); + Py_DECREF(args[i]); } - STACK_SHRINK(2-is_meth); - PUSH(res); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); CHECK_EVAL_BREAKER(); DISPATCH(); } + TARGET(CALL_BOUND_METHOD_EXACT_ARGS) { + PyObject *callable = PEEK(1 + oparg); + PyObject *method = PEEK(2 + oparg); + DEOPT_IF(method != NULL, CALL); + DEOPT_IF(Py_TYPE(callable) != &PyMethod_Type, CALL); + STAT_INC(CALL, hit); + PyObject *self = ((PyMethodObject *)callable)->im_self; + PEEK(oparg + 1) = Py_NewRef(self); // callable + PyObject *meth = ((PyMethodObject *)callable)->im_func; + PEEK(oparg + 2) = Py_NewRef(meth); // method + Py_DECREF(callable); + GO_TO_INSTRUCTION(CALL_PY_EXACT_ARGS); + } + TARGET(CALL_PY_EXACT_ARGS) { PREDICTED(CALL_PY_EXACT_ARGS); + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *method = PEEK(2 + oparg); + uint32_t func_version = read_u32(&next_instr[1].cache); assert(kwnames == NULL); DEOPT_IF(tstate->interp->eval_frame, CALL); - _PyCallCache *cache = (_PyCallCache *)next_instr; - int is_meth = is_method(stack_pointer, oparg); - int argcount = oparg + is_meth; - PyObject *callable = PEEK(argcount + 1); + int is_meth = method != NULL; + int argcount = oparg; + if (is_meth) { + callable = method; + args--; + argcount++; + } DEOPT_IF(!PyFunction_Check(callable), CALL); PyFunctionObject *func = (PyFunctionObject *)callable; - DEOPT_IF(func->func_version != read_u32(cache->func_version), CALL); + DEOPT_IF(func->func_version != func_version, CALL); PyCodeObject *code = (PyCodeObject *)func->func_code; DEOPT_IF(code->co_argcount != argcount, CALL); DEOPT_IF(!_PyThreadState_HasStackSpace(tstate, code->co_framesize), CALL); STAT_INC(CALL, hit); _PyInterpreterFrame *new_frame = _PyFrame_PushUnchecked(tstate, func, argcount); - STACK_SHRINK(argcount); for (int i = 0; i < argcount; i++) { - new_frame->localsplus[i] = stack_pointer[i]; + new_frame->localsplus[i] = args[i]; } - STACK_SHRINK(2-is_meth); + // Manipulate stack directly since we leave using DISPATCH_INLINED(). 
+ STACK_SHRINK(oparg + 2); JUMPBY(INLINE_CACHE_ENTRIES_CALL); DISPATCH_INLINED(new_frame); } TARGET(CALL_PY_WITH_DEFAULTS) { + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *method = PEEK(2 + oparg); + uint32_t func_version = read_u32(&next_instr[1].cache); + uint16_t min_args = read_u16(&next_instr[3].cache); assert(kwnames == NULL); DEOPT_IF(tstate->interp->eval_frame, CALL); - _PyCallCache *cache = (_PyCallCache *)next_instr; - int is_meth = is_method(stack_pointer, oparg); - int argcount = oparg + is_meth; - PyObject *callable = PEEK(argcount + 1); + int is_meth = method != NULL; + int argcount = oparg; + if (is_meth) { + callable = method; + args--; + argcount++; + } DEOPT_IF(!PyFunction_Check(callable), CALL); PyFunctionObject *func = (PyFunctionObject *)callable; - DEOPT_IF(func->func_version != read_u32(cache->func_version), CALL); + DEOPT_IF(func->func_version != func_version, CALL); PyCodeObject *code = (PyCodeObject *)func->func_code; DEOPT_IF(argcount > code->co_argcount, CALL); - int minargs = cache->min_args; - DEOPT_IF(argcount < minargs, CALL); + DEOPT_IF(argcount < min_args, CALL); DEOPT_IF(!_PyThreadState_HasStackSpace(tstate, code->co_framesize), CALL); STAT_INC(CALL, hit); _PyInterpreterFrame *new_frame = _PyFrame_PushUnchecked(tstate, func, code->co_argcount); - STACK_SHRINK(argcount); for (int i = 0; i < argcount; i++) { - new_frame->localsplus[i] = stack_pointer[i]; + new_frame->localsplus[i] = args[i]; } for (int i = argcount; i < code->co_argcount; i++) { - PyObject *def = PyTuple_GET_ITEM(func->func_defaults, - i - minargs); + PyObject *def = PyTuple_GET_ITEM(func->func_defaults, i - min_args); new_frame->localsplus[i] = Py_NewRef(def); } - STACK_SHRINK(2-is_meth); + // Manipulate stack and cache directly since we leave using DISPATCH_INLINED(). 
+ STACK_SHRINK(oparg + 2); JUMPBY(INLINE_CACHE_ENTRIES_CALL); DISPATCH_INLINED(new_frame); } TARGET(CALL_NO_KW_TYPE_1) { + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *null = PEEK(2 + oparg); + PyObject *res; assert(kwnames == NULL); assert(cframe.use_tracing == 0); assert(oparg == 1); - DEOPT_IF(is_method(stack_pointer, 1), CALL); - PyObject *obj = TOP(); - PyObject *callable = SECOND(); + DEOPT_IF(null != NULL, CALL); + PyObject *obj = args[0]; DEOPT_IF(callable != (PyObject *)&PyType_Type, CALL); STAT_INC(CALL, hit); - JUMPBY(INLINE_CACHE_ENTRIES_CALL); - PyObject *res = Py_NewRef(Py_TYPE(obj)); - Py_DECREF(callable); + res = Py_NewRef(Py_TYPE(obj)); Py_DECREF(obj); - STACK_SHRINK(2); - SET_TOP(res); + Py_DECREF(&PyType_Type); // I.e., callable + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); DISPATCH(); } TARGET(CALL_NO_KW_STR_1) { + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *null = PEEK(2 + oparg); + PyObject *res; assert(kwnames == NULL); assert(cframe.use_tracing == 0); assert(oparg == 1); - DEOPT_IF(is_method(stack_pointer, 1), CALL); - PyObject *callable = PEEK(2); + DEOPT_IF(null != NULL, CALL); DEOPT_IF(callable != (PyObject *)&PyUnicode_Type, CALL); STAT_INC(CALL, hit); - PyObject *arg = TOP(); - PyObject *res = PyObject_Str(arg); + PyObject *arg = args[0]; + res = PyObject_Str(arg); Py_DECREF(arg); - Py_DECREF(&PyUnicode_Type); - STACK_SHRINK(2); - SET_TOP(res); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + Py_DECREF(&PyUnicode_Type); // I.e., callable + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_TUPLE_1) { + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *null = PEEK(2 + oparg); + PyObject *res; assert(kwnames == NULL); assert(oparg == 1); - DEOPT_IF(is_method(stack_pointer, 1), CALL); - PyObject *callable = PEEK(2); + DEOPT_IF(null != NULL, CALL); DEOPT_IF(callable != (PyObject *)&PyTuple_Type, CALL); STAT_INC(CALL, hit); - PyObject *arg = TOP(); - PyObject *res = PySequence_Tuple(arg); + PyObject *arg = args[0]; + res = PySequence_Tuple(arg); Py_DECREF(arg); - Py_DECREF(&PyTuple_Type); - STACK_SHRINK(2); - SET_TOP(res); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + Py_DECREF(&PyTuple_Type); // I.e., tuple + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_BUILTIN_CLASS) { - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *method = PEEK(2 + oparg); + PyObject *res; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } int kwnames_len = KWNAMES_LEN(); - PyObject *callable = PEEK(total_args + 1); DEOPT_IF(!PyType_Check(callable), CALL); PyTypeObject *tp = (PyTypeObject *)callable; DEOPT_IF(tp->tp_vectorcall == NULL, CALL); STAT_INC(CALL, hit); - STACK_SHRINK(total_args); - PyObject *res = tp->tp_vectorcall((PyObject *)tp, stack_pointer, - total_args-kwnames_len, kwnames); + res = tp->tp_vectorcall((PyObject *)tp, args, + total_args - kwnames_len, kwnames); kwnames = NULL; /* Free the arguments. 
*/ for (int i = 0; i < total_args; i++) { - Py_DECREF(stack_pointer[i]); + Py_DECREF(args[i]); } Py_DECREF(tp); - STACK_SHRINK(1-is_meth); - SET_TOP(res); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_BUILTIN_O) { + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *method = PEEK(2 + oparg); + PyObject *res; assert(cframe.use_tracing == 0); /* Builtin METH_O functions */ assert(kwnames == NULL); - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } DEOPT_IF(total_args != 1, CALL); - PyObject *callable = PEEK(total_args + 1); DEOPT_IF(!PyCFunction_CheckExact(callable), CALL); DEOPT_IF(PyCFunction_GET_FLAGS(callable) != METH_O, CALL); STAT_INC(CALL, hit); @@ -3259,81 +3298,92 @@ if (_Py_EnterRecursiveCallTstate(tstate, " while calling a Python object")) { goto error; } - PyObject *arg = TOP(); - PyObject *res = _PyCFunction_TrampolineCall(cfunc, PyCFunction_GET_SELF(callable), arg); + PyObject *arg = args[0]; + res = _PyCFunction_TrampolineCall(cfunc, PyCFunction_GET_SELF(callable), arg); _Py_LeaveRecursiveCallTstate(tstate); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); Py_DECREF(arg); Py_DECREF(callable); - STACK_SHRINK(2-is_meth); - SET_TOP(res); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_BUILTIN_FAST) { + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *method = PEEK(2 + oparg); + PyObject *res; assert(cframe.use_tracing == 0); /* Builtin METH_FASTCALL functions, without keywords */ assert(kwnames == NULL); - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; - PyObject *callable = PEEK(total_args + 1); + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } DEOPT_IF(!PyCFunction_CheckExact(callable), CALL); - DEOPT_IF(PyCFunction_GET_FLAGS(callable) != METH_FASTCALL, - CALL); + DEOPT_IF(PyCFunction_GET_FLAGS(callable) != METH_FASTCALL, CALL); STAT_INC(CALL, hit); PyCFunction cfunc = PyCFunction_GET_FUNCTION(callable); - STACK_SHRINK(total_args); /* res = func(self, args, nargs) */ - PyObject *res = ((_PyCFunctionFast)(void(*)(void))cfunc)( + res = ((_PyCFunctionFast)(void(*)(void))cfunc)( PyCFunction_GET_SELF(callable), - stack_pointer, + args, total_args); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); /* Free the arguments. */ for (int i = 0; i < total_args; i++) { - Py_DECREF(stack_pointer[i]); + Py_DECREF(args[i]); } - STACK_SHRINK(2-is_meth); - PUSH(res); Py_DECREF(callable); - if (res == NULL) { + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } /* Not deopting because this doesn't mean our optimization was wrong. `res` can be NULL for valid reasons. Eg. getattr(x, 'invalid'). In those cases an exception is set, so we must handle it. 
*/ - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_BUILTIN_FAST_WITH_KEYWORDS) { + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *method = PEEK(2 + oparg); + PyObject *res; assert(cframe.use_tracing == 0); /* Builtin METH_FASTCALL | METH_KEYWORDS functions */ - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; - PyObject *callable = PEEK(total_args + 1); + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } DEOPT_IF(!PyCFunction_CheckExact(callable), CALL); DEOPT_IF(PyCFunction_GET_FLAGS(callable) != (METH_FASTCALL | METH_KEYWORDS), CALL); STAT_INC(CALL, hit); - STACK_SHRINK(total_args); /* res = func(self, args, nargs, kwnames) */ _PyCFunctionFastWithKeywords cfunc = (_PyCFunctionFastWithKeywords)(void(*)(void)) PyCFunction_GET_FUNCTION(callable); - PyObject *res = cfunc( + res = cfunc( PyCFunction_GET_SELF(callable), - stack_pointer, + args, total_args - KWNAMES_LEN(), kwnames ); @@ -3342,99 +3392,112 @@ /* Free the arguments. */ for (int i = 0; i < total_args; i++) { - Py_DECREF(stack_pointer[i]); + Py_DECREF(args[i]); } - STACK_SHRINK(2-is_meth); - PUSH(res); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_LEN) { + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *method = PEEK(2 + oparg); + PyObject *res; assert(cframe.use_tracing == 0); assert(kwnames == NULL); /* len(o) */ - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } DEOPT_IF(total_args != 1, CALL); - PyObject *callable = PEEK(total_args + 1); PyInterpreterState *interp = _PyInterpreterState_GET(); DEOPT_IF(callable != interp->callable_cache.len, CALL); STAT_INC(CALL, hit); - PyObject *arg = TOP(); + PyObject *arg = args[0]; Py_ssize_t len_i = PyObject_Length(arg); if (len_i < 0) { goto error; } - PyObject *res = PyLong_FromSsize_t(len_i); + res = PyLong_FromSsize_t(len_i); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); - STACK_SHRINK(2-is_meth); - SET_TOP(res); Py_DECREF(callable); Py_DECREF(arg); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); DISPATCH(); } TARGET(CALL_NO_KW_ISINSTANCE) { + PyObject **args = &PEEK(oparg); + PyObject *callable = PEEK(1 + oparg); + PyObject *method = PEEK(2 + oparg); + PyObject *res; assert(cframe.use_tracing == 0); assert(kwnames == NULL); /* isinstance(o, o2) */ - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; - PyObject *callable = PEEK(total_args + 1); + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + callable = method; + args--; + total_args++; + } DEOPT_IF(total_args != 2, CALL); PyInterpreterState *interp = _PyInterpreterState_GET(); DEOPT_IF(callable != interp->callable_cache.isinstance, CALL); STAT_INC(CALL, hit); - PyObject *cls = POP(); - PyObject *inst = TOP(); + 
PyObject *cls = args[1]; + PyObject *inst = args[0]; int retval = PyObject_IsInstance(inst, cls); if (retval < 0) { - Py_DECREF(cls); goto error; } - PyObject *res = PyBool_FromLong(retval); + res = PyBool_FromLong(retval); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); - STACK_SHRINK(2-is_meth); - SET_TOP(res); Py_DECREF(inst); Py_DECREF(cls); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); DISPATCH(); } TARGET(CALL_NO_KW_LIST_APPEND) { + PyObject **args = &PEEK(oparg); + PyObject *self = PEEK(1 + oparg); + PyObject *method = PEEK(2 + oparg); assert(cframe.use_tracing == 0); assert(kwnames == NULL); assert(oparg == 1); - PyObject *callable = PEEK(3); + assert(method != NULL); PyInterpreterState *interp = _PyInterpreterState_GET(); - DEOPT_IF(callable != interp->callable_cache.list_append, CALL); - PyObject *list = SECOND(); - DEOPT_IF(!PyList_Check(list), CALL); + DEOPT_IF(method != interp->callable_cache.list_append, CALL); + DEOPT_IF(!PyList_Check(self), CALL); STAT_INC(CALL, hit); - PyObject *arg = POP(); - if (_PyList_AppendTakeRef((PyListObject *)list, arg) < 0) { - goto error; + if (_PyList_AppendTakeRef((PyListObject *)self, args[0]) < 0) { + goto pop_1_error; // Since arg is DECREF'ed already } - STACK_SHRINK(2); - Py_DECREF(list); - Py_DECREF(callable); + Py_DECREF(self); + Py_DECREF(method); + STACK_SHRINK(3); // CALL + POP_TOP JUMPBY(INLINE_CACHE_ENTRIES_CALL + 1); assert(_Py_OPCODE(next_instr[-1]) == POP_TOP); @@ -3442,17 +3505,24 @@ } TARGET(CALL_NO_KW_METHOD_DESCRIPTOR_O) { + PyObject **args = &PEEK(oparg); + PyObject *method = PEEK(2 + oparg); + PyObject *res; assert(kwnames == NULL); - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + args--; + total_args++; + } PyMethodDescrObject *callable = (PyMethodDescrObject *)PEEK(total_args + 1); DEOPT_IF(total_args != 2, CALL); DEOPT_IF(!Py_IS_TYPE(callable, &PyMethodDescr_Type), CALL); PyMethodDef *meth = callable->d_method; DEOPT_IF(meth->ml_flags != METH_O, CALL); - PyObject *arg = TOP(); - PyObject *self = SECOND(); + PyObject *arg = args[1]; + PyObject *self = args[0]; DEOPT_IF(!Py_IS_TYPE(self, callable->d_common.d_type), CALL); STAT_INC(CALL, hit); PyCFunction cfunc = meth->ml_meth; @@ -3461,69 +3531,78 @@ if (_Py_EnterRecursiveCallTstate(tstate, " while calling a Python object")) { goto error; } - PyObject *res = _PyCFunction_TrampolineCall(cfunc, self, arg); + res = _PyCFunction_TrampolineCall(cfunc, self, arg); _Py_LeaveRecursiveCallTstate(tstate); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); Py_DECREF(self); Py_DECREF(arg); - STACK_SHRINK(oparg + 1); - SET_TOP(res); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS) { - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + PyObject **args = &PEEK(oparg); + PyObject *method = PEEK(2 + oparg); + PyObject *res; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + args--; + total_args++; + } PyMethodDescrObject *callable = (PyMethodDescrObject 
*)PEEK(total_args + 1); DEOPT_IF(!Py_IS_TYPE(callable, &PyMethodDescr_Type), CALL); PyMethodDef *meth = callable->d_method; DEOPT_IF(meth->ml_flags != (METH_FASTCALL|METH_KEYWORDS), CALL); PyTypeObject *d_type = callable->d_common.d_type; - PyObject *self = PEEK(total_args); + PyObject *self = args[0]; DEOPT_IF(!Py_IS_TYPE(self, d_type), CALL); STAT_INC(CALL, hit); - int nargs = total_args-1; - STACK_SHRINK(nargs); + int nargs = total_args - 1; _PyCFunctionFastWithKeywords cfunc = (_PyCFunctionFastWithKeywords)(void(*)(void))meth->ml_meth; - PyObject *res = cfunc(self, stack_pointer, nargs - KWNAMES_LEN(), - kwnames); + res = cfunc(self, args + 1, nargs - KWNAMES_LEN(), kwnames); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); kwnames = NULL; /* Free the arguments. */ - for (int i = 0; i < nargs; i++) { - Py_DECREF(stack_pointer[i]); + for (int i = 0; i < total_args; i++) { + Py_DECREF(args[i]); } - Py_DECREF(self); - STACK_SHRINK(2-is_meth); - SET_TOP(res); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS) { + PyObject **args = &PEEK(oparg); + PyObject *method = PEEK(2 + oparg); + PyObject *res; assert(kwnames == NULL); assert(oparg == 0 || oparg == 1); - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + args--; + total_args++; + } DEOPT_IF(total_args != 1, CALL); PyMethodDescrObject *callable = (PyMethodDescrObject *)SECOND(); DEOPT_IF(!Py_IS_TYPE(callable, &PyMethodDescr_Type), CALL); PyMethodDef *meth = callable->d_method; - PyObject *self = TOP(); + PyObject *self = args[0]; DEOPT_IF(!Py_IS_TYPE(self, callable->d_common.d_type), CALL); DEOPT_IF(meth->ml_flags != METH_NOARGS, CALL); STAT_INC(CALL, hit); @@ -3533,52 +3612,55 @@ if (_Py_EnterRecursiveCallTstate(tstate, " while calling a Python object")) { goto error; } - PyObject *res = _PyCFunction_TrampolineCall(cfunc, self, NULL); + res = _PyCFunction_TrampolineCall(cfunc, self, NULL); _Py_LeaveRecursiveCallTstate(tstate); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); Py_DECREF(self); - STACK_SHRINK(oparg + 1); - SET_TOP(res); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_METHOD_DESCRIPTOR_FAST) { + PyObject **args = &PEEK(oparg); + PyObject *method = PEEK(2 + oparg); + PyObject *res; assert(kwnames == NULL); - int is_meth = is_method(stack_pointer, oparg); - int total_args = oparg + is_meth; + int is_meth = method != NULL; + int total_args = oparg; + if (is_meth) { + args--; + total_args++; + } PyMethodDescrObject *callable = (PyMethodDescrObject *)PEEK(total_args + 1); /* Builtin METH_FASTCALL methods, without keywords */ DEOPT_IF(!Py_IS_TYPE(callable, &PyMethodDescr_Type), CALL); PyMethodDef *meth = callable->d_method; DEOPT_IF(meth->ml_flags != METH_FASTCALL, CALL); - PyObject *self = PEEK(total_args); + PyObject *self = args[0]; DEOPT_IF(!Py_IS_TYPE(self, callable->d_common.d_type), CALL); STAT_INC(CALL, hit); _PyCFunctionFast cfunc = (_PyCFunctionFast)(void(*)(void))meth->ml_meth; - int nargs = total_args-1; 
- STACK_SHRINK(nargs); - PyObject *res = cfunc(self, stack_pointer, nargs); + int nargs = total_args - 1; + res = cfunc(self, args + 1, nargs); assert((res != NULL) ^ (_PyErr_Occurred(tstate) != NULL)); /* Clear the stack of the arguments. */ - for (int i = 0; i < nargs; i++) { - Py_DECREF(stack_pointer[i]); + for (int i = 0; i < total_args; i++) { + Py_DECREF(args[i]); } - Py_DECREF(self); - STACK_SHRINK(2-is_meth); - SET_TOP(res); Py_DECREF(callable); - if (res == NULL) { - goto error; - } - JUMPBY(INLINE_CACHE_ENTRIES_CALL); + if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } + STACK_SHRINK(oparg); + STACK_SHRINK(1); + POKE(1, res); + JUMPBY(4); CHECK_EVAL_BREAKER(); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index 054ef6c29982..98791043f552 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -286,44 +286,44 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { return 1; case LOAD_ATTR_METHOD_LAZY_DICT: return 1; - case CALL_BOUND_METHOD_EXACT_ARGS: - return -1; case KW_NAMES: return 0; case CALL: - return -1; + return oparg + 2; + case CALL_BOUND_METHOD_EXACT_ARGS: + return oparg + 2; case CALL_PY_EXACT_ARGS: - return -1; + return oparg + 2; case CALL_PY_WITH_DEFAULTS: - return -1; + return oparg + 2; case CALL_NO_KW_TYPE_1: - return -1; + return oparg + 2; case CALL_NO_KW_STR_1: - return -1; + return oparg + 2; case CALL_NO_KW_TUPLE_1: - return -1; + return oparg + 2; case CALL_BUILTIN_CLASS: - return -1; + return oparg + 2; case CALL_NO_KW_BUILTIN_O: - return -1; + return oparg + 2; case CALL_NO_KW_BUILTIN_FAST: - return -1; + return oparg + 2; case CALL_BUILTIN_FAST_WITH_KEYWORDS: - return -1; + return oparg + 2; case CALL_NO_KW_LEN: - return -1; + return oparg + 2; case CALL_NO_KW_ISINSTANCE: - return -1; + return oparg + 2; case CALL_NO_KW_LIST_APPEND: - return -1; + return oparg + 2; case CALL_NO_KW_METHOD_DESCRIPTOR_O: - return -1; + return oparg + 2; case CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS: - return -1; + return oparg + 2; case CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS: - return -1; + return oparg + 2; case CALL_NO_KW_METHOD_DESCRIPTOR_FAST: - return -1; + return oparg + 2; case CALL_FUNCTION_EX: return ((oparg & 1) ? 1 : 0) + 3; case MAKE_FUNCTION: @@ -634,44 +634,44 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { return ((oparg & 1) ? 1 : 0) + 1; case LOAD_ATTR_METHOD_LAZY_DICT: return ((oparg & 1) ? 
1 : 0) + 1; - case CALL_BOUND_METHOD_EXACT_ARGS: - return -1; case KW_NAMES: return 0; case CALL: - return -1; + return 1; + case CALL_BOUND_METHOD_EXACT_ARGS: + return 1; case CALL_PY_EXACT_ARGS: - return -1; + return 1; case CALL_PY_WITH_DEFAULTS: - return -1; + return 1; case CALL_NO_KW_TYPE_1: - return -1; + return 1; case CALL_NO_KW_STR_1: - return -1; + return 1; case CALL_NO_KW_TUPLE_1: - return -1; + return 1; case CALL_BUILTIN_CLASS: - return -1; + return 1; case CALL_NO_KW_BUILTIN_O: - return -1; + return 1; case CALL_NO_KW_BUILTIN_FAST: - return -1; + return 1; case CALL_BUILTIN_FAST_WITH_KEYWORDS: - return -1; + return 1; case CALL_NO_KW_LEN: - return -1; + return 1; case CALL_NO_KW_ISINSTANCE: - return -1; + return 1; case CALL_NO_KW_LIST_APPEND: - return -1; + return 1; case CALL_NO_KW_METHOD_DESCRIPTOR_O: - return -1; + return 1; case CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS: - return -1; + return 1; case CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS: - return -1; + return 1; case CALL_NO_KW_METHOD_DESCRIPTOR_FAST: - return -1; + return 1; case CALL_FUNCTION_EX: return 1; case MAKE_FUNCTION: @@ -846,25 +846,25 @@ struct opcode_metadata { [LOAD_ATTR_METHOD_WITH_VALUES] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, [LOAD_ATTR_METHOD_NO_DICT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, [LOAD_ATTR_METHOD_LAZY_DICT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC00000000 }, - [CALL_BOUND_METHOD_EXACT_ARGS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [KW_NAMES] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_PY_EXACT_ARGS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_PY_WITH_DEFAULTS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_NO_KW_TYPE_1] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_NO_KW_STR_1] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_NO_KW_TUPLE_1] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_BUILTIN_CLASS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_NO_KW_BUILTIN_O] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_NO_KW_BUILTIN_FAST] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_BUILTIN_FAST_WITH_KEYWORDS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_NO_KW_LEN] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_NO_KW_ISINSTANCE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_NO_KW_LIST_APPEND] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_NO_KW_METHOD_DESCRIPTOR_O] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [CALL_NO_KW_METHOD_DESCRIPTOR_FAST] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, + [CALL] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_BOUND_METHOD_EXACT_ARGS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_PY_EXACT_ARGS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_PY_WITH_DEFAULTS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_NO_KW_TYPE_1] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_NO_KW_STR_1] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_NO_KW_TUPLE_1] = { DIR_NONE, 
DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_BUILTIN_CLASS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_NO_KW_BUILTIN_O] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_NO_KW_BUILTIN_FAST] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_BUILTIN_FAST_WITH_KEYWORDS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_NO_KW_LEN] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_NO_KW_ISINSTANCE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_NO_KW_LIST_APPEND] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_NO_KW_METHOD_DESCRIPTOR_O] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, + [CALL_NO_KW_METHOD_DESCRIPTOR_FAST] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC000 }, [CALL_FUNCTION_EX] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [MAKE_FUNCTION] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [RETURN_GENERATOR] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, diff --git a/Tools/cases_generator/generate_cases.py b/Tools/cases_generator/generate_cases.py index 9b5aa914cdee..1fcfbb677090 100644 --- a/Tools/cases_generator/generate_cases.py +++ b/Tools/cases_generator/generate_cases.py @@ -391,9 +391,11 @@ def write_body(self, out: Formatter, dedent: int, cache_adjust: int = 0) -> None # Write the body, substituting a goto for ERROR_IF() and other stuff assert dedent <= 0 extra = " " * -dedent + names_to_skip = self.unmoved_names | frozenset({UNUSED, "null"}) for line in self.block_text: if m := re.match(r"(\s*)ERROR_IF\((.+), (\w+)\);\s*(?://.*)?$", line): space, cond, label = m.groups() + space = extra + space # ERROR_IF() must pop the inputs from the stack. # The code block is responsible for DECREF()ing them. # NOTE: If the label doesn't exist, just add it to ceval.c. 
@@ -412,16 +414,25 @@ def write_body(self, out: Formatter, dedent: int, cache_adjust: int = 0) -> None symbolic = "" if symbolic: out.write_raw( - f"{extra}{space}if ({cond}) {{ STACK_SHRINK({symbolic}); goto {label}; }}\n" + f"{space}if ({cond}) {{ STACK_SHRINK({symbolic}); goto {label}; }}\n" ) else: - out.write_raw(f"{extra}{space}if ({cond}) goto {label};\n") + out.write_raw(f"{space}if ({cond}) goto {label};\n") elif m := re.match(r"(\s*)DECREF_INPUTS\(\);\s*(?://.*)?$", line): if not self.register: - space = m.group(1) + space = extra + m.group(1) for ieff in self.input_effects: - if ieff.name not in self.unmoved_names: - out.write_raw(f"{extra}{space}Py_DECREF({ieff.name});\n") + if ieff.name in names_to_skip: + continue + if ieff.size: + out.write_raw( + f"{space}for (int _i = {ieff.size}; --_i >= 0;) {{\n" + ) + out.write_raw(f"{space} Py_DECREF({ieff.name}[_i]);\n") + out.write_raw(f"{space}}}\n") + else: + decref = "XDECREF" if ieff.cond else "DECREF" + out.write_raw(f"{space}Py_{decref}({ieff.name});\n") else: out.write_raw(extra + line) From webhook-mailer at python.org Wed Feb 8 15:25:51 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Wed, 08 Feb 2023 20:25:51 -0000 Subject: [Python-checkins] gh-101277: Port more itertools static types to heap types (#101304) Message-ID: https://github.com/python/cpython/commit/de3669ebcb33ca8e3373fbbaed646c5f287979b8 commit: de3669ebcb33ca8e3373fbbaed646c5f287979b8 branch: main author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-08T21:25:42+01:00 summary: gh-101277: Port more itertools static types to heap types (#101304) Add accumulate, compress, count, filterfalse, pairwise, product, and zip_longest types to module state. files: M Modules/clinic/itertoolsmodule.c.h M Modules/itertoolsmodule.c diff --git a/Modules/clinic/itertoolsmodule.c.h b/Modules/clinic/itertoolsmodule.c.h index be44246cc970..d15d5f0890ca 100644 --- a/Modules/clinic/itertoolsmodule.c.h +++ b/Modules/clinic/itertoolsmodule.c.h @@ -102,7 +102,7 @@ static PyObject * pairwise_new(PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *return_value = NULL; - PyTypeObject *base_tp = &pairwise_type; + PyTypeObject *base_tp = clinic_state()->pairwise_type; PyObject *iterable; if ((type == base_tp || type->tp_init == base_tp->tp_init) && @@ -821,7 +821,7 @@ static PyObject * itertools_filterfalse(PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *return_value = NULL; - PyTypeObject *base_tp = &filterfalse_type; + PyTypeObject *base_tp = clinic_state()->filterfalse_type; PyObject *func; PyObject *seq; @@ -913,4 +913,4 @@ itertools_count(PyTypeObject *type, PyObject *args, PyObject *kwargs) exit: return return_value; } -/*[clinic end generated code: output=b86fcd99bd32145e input=a9049054013a1b77]*/ +/*[clinic end generated code: output=a08b58d7dac825da input=a9049054013a1b77]*/ diff --git a/Modules/itertoolsmodule.c b/Modules/itertoolsmodule.c index c9baa47e2c0e..ce8720d0fd92 100644 --- a/Modules/itertoolsmodule.c +++ b/Modules/itertoolsmodule.c @@ -12,15 +12,22 @@ */ typedef struct { + PyTypeObject *accumulate_type; PyTypeObject *combinations_type; + PyTypeObject *compress_type; + PyTypeObject *count_type; PyTypeObject *cwr_type; PyTypeObject *cycle_type; PyTypeObject *dropwhile_type; + PyTypeObject *filterfalse_type; PyTypeObject *groupby_type; PyTypeObject *_grouper_type; + PyTypeObject *pairwise_type; PyTypeObject *permutations_type; + PyTypeObject *product_type; PyTypeObject *starmap_type; PyTypeObject *takewhile_type; + 
PyTypeObject *ziplongest_type; } itertools_state; static inline itertools_state * @@ -48,7 +55,6 @@ find_state_by_type(PyTypeObject *tp) assert(mod != NULL); return get_module_state(mod); } -#define clinic_state() (find_state_by_type(type)) /*[clinic input] module itertools @@ -65,23 +71,19 @@ class itertools.chain "chainobject *" "&chain_type" class itertools.combinations "combinationsobject *" "clinic_state()->combinations_type" class itertools.combinations_with_replacement "cwr_object *" "clinic_state()->cwr_type" class itertools.permutations "permutationsobject *" "clinic_state()->permutations_type" -class itertools.accumulate "accumulateobject *" "&accumulate_type" -class itertools.compress "compressobject *" "&compress_type" -class itertools.filterfalse "filterfalseobject *" "&filterfalse_type" -class itertools.count "countobject *" "&count_type" -class itertools.pairwise "pairwiseobject *" "&pairwise_type" +class itertools.accumulate "accumulateobject *" "clinic_state()->accumulate_type" +class itertools.compress "compressobject *" "clinic_state()->compress_type" +class itertools.filterfalse "filterfalseobject *" "clinic_state()->filterfalse_type" +class itertools.count "countobject *" "clinic_state()->count_type" +class itertools.pairwise "pairwiseobject *" "clinic_state()->pairwise_type" [clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=1790ac655869a651]*/ +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=28ffff5c0c93eed7]*/ static PyTypeObject teedataobject_type; static PyTypeObject tee_type; static PyTypeObject batched_type; -static PyTypeObject accumulate_type; -static PyTypeObject compress_type; -static PyTypeObject filterfalse_type; -static PyTypeObject count_type; -static PyTypeObject pairwise_type; +#define clinic_state() (find_state_by_type(type)) #define clinic_state_by_cls() (get_module_state_by_cls(base_tp)) #include "clinic/itertoolsmodule.c.h" #undef clinic_state_by_cls @@ -308,15 +310,18 @@ pairwise_new_impl(PyTypeObject *type, PyObject *iterable) static void pairwise_dealloc(pairwiseobject *po) { + PyTypeObject *tp = Py_TYPE(po); PyObject_GC_UnTrack(po); Py_XDECREF(po->it); Py_XDECREF(po->old); - Py_TYPE(po)->tp_free(po); + tp->tp_free(po); + Py_DECREF(tp); } static int pairwise_traverse(pairwiseobject *po, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(po)); Py_VISIT(po->it); Py_VISIT(po->old); return 0; @@ -351,48 +356,25 @@ pairwise_next(pairwiseobject *po) return result; } -static PyTypeObject pairwise_type = { - PyVarObject_HEAD_INIT(&PyType_Type, 0) - "itertools.pairwise", /* tp_name */ - sizeof(pairwiseobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)pairwise_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - pairwise_new__doc__, /* tp_doc */ - (traverseproc)pairwise_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)pairwise_next, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, 
/* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - PyType_GenericAlloc, /* tp_alloc */ - pairwise_new, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot pairwise_slots[] = { + {Py_tp_dealloc, pairwise_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)pairwise_new__doc__}, + {Py_tp_traverse, pairwise_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, pairwise_next}, + {Py_tp_alloc, PyType_GenericAlloc}, + {Py_tp_new, pairwise_new}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec pairwise_spec = { + .name = "itertools.pairwise", + .basicsize = sizeof(pairwiseobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = pairwise_slots, }; @@ -2300,8 +2282,6 @@ typedef struct { int stopped; /* set to 1 when the iterator is exhausted */ } productobject; -static PyTypeObject product_type; - static PyObject * product_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { @@ -2388,12 +2368,14 @@ product_new(PyTypeObject *type, PyObject *args, PyObject *kwds) static void product_dealloc(productobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->pools); Py_XDECREF(lz->result); if (lz->indices != NULL) PyMem_Free(lz->indices); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static PyObject * @@ -2409,6 +2391,7 @@ PyDoc_STRVAR(sizeof_doc, "Returns size in memory, in bytes."); static int product_traverse(productobject *lz, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->pools); Py_VISIT(lz->result); return 0; @@ -2600,48 +2583,25 @@ product(A, repeat=4) means the same as product(A, A, A, A).\n\n\ product('ab', range(3)) --> ('a',0) ('a',1) ('a',2) ('b',0) ('b',1) ('b',2)\n\ product((0,1), (0,1), (0,1)) --> (0,0,0) (0,0,1) (0,1,0) (0,1,1) (1,0,0) ..."); -static PyTypeObject product_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.product", /* tp_name */ - sizeof(productobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)product_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - product_doc, /* tp_doc */ - (traverseproc)product_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)product_next, /* tp_iternext */ - product_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - product_new, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot product_slots[] = { + {Py_tp_dealloc, product_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)product_doc}, + {Py_tp_traverse, product_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, product_next}, + {Py_tp_methods, product_methods}, + {Py_tp_new, product_new}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec product_spec = { + .name = "itertools.product", + .basicsize 
= sizeof(productobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = product_slots, }; @@ -3658,17 +3618,20 @@ itertools_accumulate_impl(PyTypeObject *type, PyObject *iterable, static void accumulate_dealloc(accumulateobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->binop); Py_XDECREF(lz->total); Py_XDECREF(lz->it); Py_XDECREF(lz->initial); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static int accumulate_traverse(accumulateobject *lz, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->binop); Py_VISIT(lz->it); Py_VISIT(lz->total); @@ -3762,48 +3725,25 @@ static PyMethodDef accumulate_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject accumulate_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.accumulate", /* tp_name */ - sizeof(accumulateobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)accumulate_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_accumulate__doc__, /* tp_doc */ - (traverseproc)accumulate_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)accumulate_next, /* tp_iternext */ - accumulate_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_accumulate, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot accumulate_slots[] = { + {Py_tp_dealloc, accumulate_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_accumulate__doc__}, + {Py_tp_traverse, accumulate_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, accumulate_next}, + {Py_tp_methods, accumulate_methods}, + {Py_tp_new, itertools_accumulate}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec accumulate_spec = { + .name = "itertools.accumulate", + .basicsize = sizeof(accumulateobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = accumulate_slots, }; @@ -3864,15 +3804,18 @@ itertools_compress_impl(PyTypeObject *type, PyObject *seq1, PyObject *seq2) static void compress_dealloc(compressobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->data); Py_XDECREF(lz->selectors); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static int compress_traverse(compressobject *lz, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->data); Py_VISIT(lz->selectors); return 0; @@ -3927,48 +3870,25 @@ static PyMethodDef compress_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject compress_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.compress", /* tp_name */ - sizeof(compressobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)compress_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 
0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_compress__doc__, /* tp_doc */ - (traverseproc)compress_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)compress_next, /* tp_iternext */ - compress_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_compress, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot compress_slots[] = { + {Py_tp_dealloc, compress_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_compress__doc__}, + {Py_tp_traverse, compress_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, compress_next}, + {Py_tp_methods, compress_methods}, + {Py_tp_new, itertools_compress}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec compress_spec = { + .name = "itertools.compress", + .basicsize = sizeof(compressobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = compress_slots, }; @@ -4018,15 +3938,18 @@ itertools_filterfalse_impl(PyTypeObject *type, PyObject *func, PyObject *seq) static void filterfalse_dealloc(filterfalseobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->func); Py_XDECREF(lz->it); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static int filterfalse_traverse(filterfalseobject *lz, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->it); Py_VISIT(lz->func); return 0; @@ -4078,48 +4001,25 @@ static PyMethodDef filterfalse_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject filterfalse_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.filterfalse", /* tp_name */ - sizeof(filterfalseobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)filterfalse_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_filterfalse__doc__, /* tp_doc */ - (traverseproc)filterfalse_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)filterfalse_next, /* tp_iternext */ - filterfalse_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_filterfalse, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot filterfalse_slots[] = { + {Py_tp_dealloc, filterfalse_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + 
{Py_tp_doc, (void *)itertools_filterfalse__doc__}, + {Py_tp_traverse, filterfalse_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, filterfalse_next}, + {Py_tp_methods, filterfalse_methods}, + {Py_tp_new, itertools_filterfalse}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec filterfalse_spec = { + .name = "itertools.filterfalse", + .basicsize = sizeof(filterfalseobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = filterfalse_slots, }; @@ -4245,15 +4145,18 @@ itertools_count_impl(PyTypeObject *type, PyObject *long_cnt, static void count_dealloc(countobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->long_cnt); Py_XDECREF(lz->long_step); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static int count_traverse(countobject *lz, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->long_cnt); Py_VISIT(lz->long_step); return 0; @@ -4327,48 +4230,26 @@ static PyMethodDef count_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject count_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.count", /* tp_name */ - sizeof(countobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)count_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - (reprfunc)count_repr, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - itertools_count__doc__, /* tp_doc */ - (traverseproc)count_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)count_next, /* tp_iternext */ - count_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_count, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot count_slots[] = { + {Py_tp_dealloc, count_dealloc}, + {Py_tp_repr, count_repr}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_count__doc__}, + {Py_tp_traverse, count_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, count_next}, + {Py_tp_methods, count_methods}, + {Py_tp_new, itertools_count}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec count_spec = { + .name = "itertools.count", + .basicsize = sizeof(countobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = count_slots, }; @@ -4536,8 +4417,6 @@ typedef struct { PyObject *fillvalue; } ziplongestobject; -static PyTypeObject ziplongest_type; - static PyObject * zip_longest_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { @@ -4609,16 +4488,19 @@ zip_longest_new(PyTypeObject *type, PyObject *args, PyObject *kwds) static void zip_longest_dealloc(ziplongestobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->ittuple); Py_XDECREF(lz->result); Py_XDECREF(lz->fillvalue); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static int 
zip_longest_traverse(ziplongestobject *lz, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->ittuple); Py_VISIT(lz->result); Py_VISIT(lz->fillvalue); @@ -4752,48 +4634,25 @@ are exhausted, the fillvalue is substituted in their place. The fillvalue\n\ defaults to None or can be specified by a keyword argument.\n\ "); -static PyTypeObject ziplongest_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.zip_longest", /* tp_name */ - sizeof(ziplongestobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)zip_longest_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - zip_longest_doc, /* tp_doc */ - (traverseproc)zip_longest_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)zip_longest_next, /* tp_iternext */ - zip_longest_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - zip_longest_new, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot ziplongest_slots[] = { + {Py_tp_dealloc, zip_longest_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)zip_longest_doc}, + {Py_tp_traverse, zip_longest_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, zip_longest_next}, + {Py_tp_methods, zip_longest_methods}, + {Py_tp_new, zip_longest_new}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec ziplongest_spec = { + .name = "itertools.zip_longest", + .basicsize = sizeof(ziplongestobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = ziplongest_slots, }; @@ -4835,15 +4694,22 @@ static int itertoolsmodule_traverse(PyObject *mod, visitproc visit, void *arg) { itertools_state *state = get_module_state(mod); + Py_VISIT(state->accumulate_type); Py_VISIT(state->combinations_type); + Py_VISIT(state->compress_type); + Py_VISIT(state->count_type); Py_VISIT(state->cwr_type); Py_VISIT(state->cycle_type); Py_VISIT(state->dropwhile_type); + Py_VISIT(state->filterfalse_type); Py_VISIT(state->groupby_type); Py_VISIT(state->_grouper_type); + Py_VISIT(state->pairwise_type); Py_VISIT(state->permutations_type); + Py_VISIT(state->product_type); Py_VISIT(state->starmap_type); Py_VISIT(state->takewhile_type); + Py_VISIT(state->ziplongest_type); return 0; } @@ -4851,15 +4717,22 @@ static int itertoolsmodule_clear(PyObject *mod) { itertools_state *state = get_module_state(mod); + Py_CLEAR(state->accumulate_type); Py_CLEAR(state->combinations_type); + Py_CLEAR(state->compress_type); + Py_CLEAR(state->count_type); Py_CLEAR(state->cwr_type); Py_CLEAR(state->cycle_type); Py_CLEAR(state->dropwhile_type); + Py_CLEAR(state->filterfalse_type); Py_CLEAR(state->groupby_type); Py_CLEAR(state->_grouper_type); + Py_CLEAR(state->pairwise_type); Py_CLEAR(state->permutations_type); + Py_CLEAR(state->product_type); Py_CLEAR(state->starmap_type); Py_CLEAR(state->takewhile_type); + 
Py_CLEAR(state->ziplongest_type); return 0; } @@ -4884,27 +4757,27 @@ static int itertoolsmodule_exec(PyObject *mod) { itertools_state *state = get_module_state(mod); + ADD_TYPE(mod, state->accumulate_type, &accumulate_spec); ADD_TYPE(mod, state->combinations_type, &combinations_spec); + ADD_TYPE(mod, state->compress_type, &compress_spec); + ADD_TYPE(mod, state->count_type, &count_spec); ADD_TYPE(mod, state->cwr_type, &cwr_spec); ADD_TYPE(mod, state->cycle_type, &cycle_spec); ADD_TYPE(mod, state->dropwhile_type, &dropwhile_spec); + ADD_TYPE(mod, state->filterfalse_type, &filterfalse_spec); ADD_TYPE(mod, state->groupby_type, &groupby_spec); ADD_TYPE(mod, state->_grouper_type, &_grouper_spec); + ADD_TYPE(mod, state->pairwise_type, &pairwise_spec); ADD_TYPE(mod, state->permutations_type, &permutations_spec); + ADD_TYPE(mod, state->product_type, &product_spec); ADD_TYPE(mod, state->starmap_type, &starmap_spec); ADD_TYPE(mod, state->takewhile_type, &takewhile_spec); + ADD_TYPE(mod, state->ziplongest_type, &ziplongest_spec); PyTypeObject *typelist[] = { - &accumulate_type, &batched_type, &islice_type, &chain_type, - &compress_type, - &filterfalse_type, - &count_type, - &ziplongest_type, - &pairwise_type, - &product_type, &repeat_type, &tee_type, &teedataobject_type From webhook-mailer at python.org Wed Feb 8 17:12:27 2023 From: webhook-mailer at python.org (zooba) Date: Wed, 08 Feb 2023 22:12:27 -0000 Subject: [Python-checkins] gh-101283: Improved fallback logic for subprocess with shell=True on Windows (GH-101286) Message-ID: https://github.com/python/cpython/commit/23751ed826ee63fb486e874ec25934ea87dd8519 commit: 23751ed826ee63fb486e874ec25934ea87dd8519 branch: main author: Oleg Iarygin committer: zooba date: 2023-02-08T22:12:19Z summary: gh-101283: Improved fallback logic for subprocess with shell=True on Windows (GH-101286) files: A Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst M Doc/library/subprocess.rst M Lib/subprocess.py diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index a87369a2461a..c93319e7011c 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -111,6 +111,14 @@ underlying :class:`Popen` interface can be used directly. Added the *text* parameter, as a more understandable alias of *universal_newlines*. Added the *capture_output* parameter. + .. versionchanged:: 3.11.2 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. class:: CompletedProcess The return value from :func:`run`, representing a process that has finished. @@ -487,6 +495,14 @@ functions. *executable* parameter accepts a bytes and :term:`path-like object` on Windows. + .. versionchanged:: 3.11.2 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + *stdin*, *stdout* and *stderr* specify the executed program's standard input, standard output and standard error file handles, respectively. Valid values are ``None``, :data:`PIPE`, :data:`DEVNULL`, an existing file descriptor (a @@ -1158,6 +1174,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. 
versionchanged:: 3.11.2 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. function:: check_call(args, *, stdin=None, stdout=None, stderr=None, \ shell=False, cwd=None, timeout=None, \ **other_popen_kwargs) @@ -1190,6 +1214,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. versionchanged:: 3.11.2 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. function:: check_output(args, *, stdin=None, stderr=None, shell=False, \ cwd=None, encoding=None, errors=None, \ @@ -1245,6 +1277,14 @@ calls these functions. .. versionadded:: 3.7 *text* was added as a more readable alias for *universal_newlines*. + .. versionchanged:: 3.11.2 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. _subprocess-replacements: diff --git a/Lib/subprocess.py b/Lib/subprocess.py index 9cadd1bf8e62..fa527d50ebb4 100644 --- a/Lib/subprocess.py +++ b/Lib/subprocess.py @@ -1480,7 +1480,21 @@ def _execute_child(self, args, executable, preexec_fn, close_fds, if shell: startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW startupinfo.wShowWindow = _winapi.SW_HIDE - comspec = os.environ.get("COMSPEC", "cmd.exe") + if not executable: + # gh-101283: without a fully-qualified path, before Windows + # checks the system directories, it first looks in the + # application directory, and also the current directory if + # NeedCurrentDirectoryForExePathW(ExeName) is true, so try + # to avoid executing unqualified "cmd.exe". + comspec = os.environ.get('ComSpec') + if not comspec: + system_root = os.environ.get('SystemRoot', '') + comspec = os.path.join(system_root, 'System32', 'cmd.exe') + if not os.path.isabs(comspec): + raise FileNotFoundError('shell not found: neither %ComSpec% nor %SystemRoot% is set') + if os.path.isabs(comspec): + executable = comspec + args = '{} /c "{}"'.format (comspec, args) if cwd is not None: diff --git a/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst new file mode 100644 index 000000000000..0efdfa102341 --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst @@ -0,0 +1,3 @@ +:class:`subprocess.Popen` now uses a safer approach to find +``cmd.exe`` when launching with ``shell=True``. Patch by Eryk Sun, +based on a patch by Oleg Iarygin. 
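
For readers skimming the patch, the new shell lookup is easiest to follow as a small, self-contained sketch. This is an illustrative restatement of the committed logic, not the exact library code; the helper name _resolve_windows_shell is invented for this example.

    import os

    def _resolve_windows_shell():
        # Illustrative helper (name invented for this sketch); mirrors the
        # lookup order subprocess now uses for shell=True on Windows:
        # %ComSpec% first, then an absolute %SystemRoot%\System32\cmd.exe,
        # never an unqualified "cmd.exe" that a file in the current
        # directory could shadow.
        comspec = os.environ.get('ComSpec')
        if not comspec:
            system_root = os.environ.get('SystemRoot', '')
            comspec = os.path.join(system_root, 'System32', 'cmd.exe')
            if not os.path.isabs(comspec):
                raise FileNotFoundError(
                    'shell not found: neither %ComSpec% nor %SystemRoot% is set')
        return comspec

    if __name__ == '__main__':
        print(_resolve_windows_shell())

In the committed change, the resolved absolute path is additionally assigned to the Popen executable, so CreateProcess is handed the shell directly instead of searching for it.
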
From webhook-mailer at python.org Wed Feb 8 18:39:27 2023 From: webhook-mailer at python.org (zooba) Date: Wed, 08 Feb 2023 23:39:27 -0000 Subject: [Python-checkins] gh-101283: Fix use of unbound variable (GH-101712) Message-ID: https://github.com/python/cpython/commit/20cf32e761fb9eaccc142415b389998896869263 commit: 20cf32e761fb9eaccc142415b389998896869263 branch: main author: Steve Dower committer: zooba date: 2023-02-08T23:38:56Z summary: gh-101283: Fix use of unbound variable (GH-101712) files: M Lib/subprocess.py diff --git a/Lib/subprocess.py b/Lib/subprocess.py index fa527d50ebb4..1f203bd00d35 100644 --- a/Lib/subprocess.py +++ b/Lib/subprocess.py @@ -1494,6 +1494,8 @@ def _execute_child(self, args, executable, preexec_fn, close_fds, raise FileNotFoundError('shell not found: neither %ComSpec% nor %SystemRoot% is set') if os.path.isabs(comspec): executable = comspec + else: + comspec = executable args = '{} /c "{}"'.format (comspec, args) From webhook-mailer at python.org Wed Feb 8 18:44:13 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 08 Feb 2023 23:44:13 -0000 Subject: [Python-checkins] Update Lib/subprocess.py Message-ID: https://github.com/python/cpython/commit/51b079a2d6c9a7a852c04823ef4180c36eed682b commit: 51b079a2d6c9a7a852c04823ef4180c36eed682b branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-08T15:44:06-08:00 summary: Update Lib/subprocess.py files: A Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst M Doc/library/subprocess.rst M Lib/subprocess.py diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index 7ae8a9493608..3a315f0cb81d 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -111,6 +111,14 @@ underlying :class:`Popen` interface can be used directly. Added the *text* parameter, as a more understandable alias of *universal_newlines*. Added the *capture_output* parameter. + .. versionchanged:: 3.11.2 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. class:: CompletedProcess The return value from :func:`run`, representing a process that has finished. @@ -488,6 +496,14 @@ functions. *executable* parameter accepts a bytes and :term:`path-like object` on Windows. + .. versionchanged:: 3.11.2 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + *stdin*, *stdout* and *stderr* specify the executed program's standard input, standard output and standard error file handles, respectively. Valid values are :data:`PIPE`, :data:`DEVNULL`, an existing file descriptor (a positive @@ -1160,6 +1176,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. versionchanged:: 3.11.2 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. 
function:: check_call(args, *, stdin=None, stdout=None, stderr=None, \ shell=False, cwd=None, timeout=None, \ **other_popen_kwargs) @@ -1192,6 +1216,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. versionchanged:: 3.11.2 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. function:: check_output(args, *, stdin=None, stderr=None, shell=False, \ cwd=None, encoding=None, errors=None, \ @@ -1247,6 +1279,14 @@ calls these functions. .. versionadded:: 3.7 *text* was added as a more readable alias for *universal_newlines*. + .. versionchanged:: 3.11.2 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. _subprocess-replacements: diff --git a/Lib/subprocess.py b/Lib/subprocess.py index 9cadd1bf8e62..1f203bd00d35 100644 --- a/Lib/subprocess.py +++ b/Lib/subprocess.py @@ -1480,7 +1480,23 @@ def _execute_child(self, args, executable, preexec_fn, close_fds, if shell: startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW startupinfo.wShowWindow = _winapi.SW_HIDE - comspec = os.environ.get("COMSPEC", "cmd.exe") + if not executable: + # gh-101283: without a fully-qualified path, before Windows + # checks the system directories, it first looks in the + # application directory, and also the current directory if + # NeedCurrentDirectoryForExePathW(ExeName) is true, so try + # to avoid executing unqualified "cmd.exe". + comspec = os.environ.get('ComSpec') + if not comspec: + system_root = os.environ.get('SystemRoot', '') + comspec = os.path.join(system_root, 'System32', 'cmd.exe') + if not os.path.isabs(comspec): + raise FileNotFoundError('shell not found: neither %ComSpec% nor %SystemRoot% is set') + if os.path.isabs(comspec): + executable = comspec + else: + comspec = executable + args = '{} /c "{}"'.format (comspec, args) if cwd is not None: diff --git a/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst new file mode 100644 index 000000000000..0efdfa102341 --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst @@ -0,0 +1,3 @@ +:class:`subprocess.Popen` now uses a safer approach to find +``cmd.exe`` when launching with ``shell=True``. Patch by Eryk Sun, +based on a patch by Oleg Iarygin. 
From webhook-mailer at python.org Wed Feb 8 18:45:24 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 08 Feb 2023 23:45:24 -0000 Subject: [Python-checkins] Apply suggestions from code review Message-ID: https://github.com/python/cpython/commit/9889de3fa734abf4064c9b188bb00b722a4e229c commit: 9889de3fa734abf4064c9b188bb00b722a4e229c branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-08T15:45:17-08:00 summary: Apply suggestions from code review files: A Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst M Doc/library/subprocess.rst M Lib/subprocess.py diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index d9576cc4b066..fc66663546b9 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -110,6 +110,14 @@ underlying :class:`Popen` interface can be used directly. Added the *text* parameter, as a more understandable alias of *universal_newlines*. Added the *capture_output* parameter. + .. versionchanged:: 3.10.11 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. class:: CompletedProcess The return value from :func:`run`, representing a process that has finished. @@ -486,6 +494,14 @@ functions. *executable* parameter accepts a bytes and :term:`path-like object` on Windows. + .. versionchanged:: 3.10.11 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + *stdin*, *stdout* and *stderr* specify the executed program's standard input, standard output and standard error file handles, respectively. Valid values are :data:`PIPE`, :data:`DEVNULL`, an existing file descriptor (a positive @@ -1152,6 +1168,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. versionchanged:: 3.10.11 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. function:: check_call(args, *, stdin=None, stdout=None, stderr=None, \ shell=False, cwd=None, timeout=None, \ **other_popen_kwargs) @@ -1184,6 +1208,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. versionchanged:: 3.10.11 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. function:: check_output(args, *, stdin=None, stderr=None, shell=False, \ cwd=None, encoding=None, errors=None, \ @@ -1239,6 +1271,14 @@ calls these functions. .. versionadded:: 3.7 *text* was added as a more readable alias for *universal_newlines*. + .. versionchanged:: 3.10.11 + + Changed Windows shell search order for ``shell=True``. 
The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. _subprocess-replacements: diff --git a/Lib/subprocess.py b/Lib/subprocess.py index ffe9170b68b2..f1e3d64dfe04 100644 --- a/Lib/subprocess.py +++ b/Lib/subprocess.py @@ -1427,7 +1427,23 @@ def _execute_child(self, args, executable, preexec_fn, close_fds, if shell: startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW startupinfo.wShowWindow = _winapi.SW_HIDE - comspec = os.environ.get("COMSPEC", "cmd.exe") + if not executable: + # gh-101283: without a fully-qualified path, before Windows + # checks the system directories, it first looks in the + # application directory, and also the current directory if + # NeedCurrentDirectoryForExePathW(ExeName) is true, so try + # to avoid executing unqualified "cmd.exe". + comspec = os.environ.get('ComSpec') + if not comspec: + system_root = os.environ.get('SystemRoot', '') + comspec = os.path.join(system_root, 'System32', 'cmd.exe') + if not os.path.isabs(comspec): + raise FileNotFoundError('shell not found: neither %ComSpec% nor %SystemRoot% is set') + if os.path.isabs(comspec): + executable = comspec + else: + comspec = executable + args = '{} /c "{}"'.format (comspec, args) if cwd is not None: diff --git a/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst new file mode 100644 index 000000000000..0efdfa102341 --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst @@ -0,0 +1,3 @@ +:class:`subprocess.Popen` now uses a safer approach to find +``cmd.exe`` when launching with ``shell=True``. Patch by Eryk Sun, +based on a patch by Oleg Iarygin. From webhook-mailer at python.org Wed Feb 8 18:52:11 2023 From: webhook-mailer at python.org (zooba) Date: Wed, 08 Feb 2023 23:52:11 -0000 Subject: [Python-checkins] gh-101283: Version was just released, so should be changed in 3.11.3 (GH-101719) Message-ID: https://github.com/python/cpython/commit/0e0c5d8baaa6aa91f4221c5aa57d5586e58e8652 commit: 0e0c5d8baaa6aa91f4221c5aa57d5586e58e8652 branch: main author: Steve Dower committer: zooba date: 2023-02-08T23:52:03Z summary: gh-101283: Version was just released, so should be changed in 3.11.3 (GH-101719) files: M Doc/library/subprocess.rst diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index c93319e7011c..d792a43eeb27 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -111,7 +111,7 @@ underlying :class:`Popen` interface can be used directly. Added the *text* parameter, as a more understandable alias of *universal_newlines*. Added the *capture_output* parameter. - .. versionchanged:: 3.11.2 + .. versionchanged:: 3.11.3 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -495,7 +495,7 @@ functions. *executable* parameter accepts a bytes and :term:`path-like object` on Windows. - .. versionchanged:: 3.11.2 + .. versionchanged:: 3.11.3 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -1174,7 +1174,7 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. - .. versionchanged:: 3.11.2 + .. versionchanged:: 3.11.3 Changed Windows shell search order for ``shell=True``. 
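
To make the documented effect concrete, here is a hypothetical, Windows-oriented demonstration; the decoy file and the echoed string are invented for this example, and on POSIX systems the decoy is simply ignored because shell=True runs /bin/sh there.

    import os
    import subprocess
    import tempfile

    # A decoy "cmd.exe" dropped into the working directory used to be a way
    # to hijack shell=True on Windows.  With the change above, the real
    # system shell (%ComSpec% or %SystemRoot%\System32\cmd.exe) runs instead.
    with tempfile.TemporaryDirectory() as workdir:
        open(os.path.join(workdir, 'cmd.exe'), 'wb').close()  # empty decoy file
        result = subprocess.run('echo safe', shell=True, cwd=workdir,
                                capture_output=True, text=True)
        print(result.stdout.strip())  # "safe", produced by the real shell
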
The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -1214,7 +1214,7 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. - .. versionchanged:: 3.11.2 + .. versionchanged:: 3.11.3 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -1277,7 +1277,7 @@ calls these functions. .. versionadded:: 3.7 *text* was added as a more readable alias for *universal_newlines*. - .. versionchanged:: 3.11.2 + .. versionchanged:: 3.11.3 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and From webhook-mailer at python.org Wed Feb 8 19:11:44 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 09 Feb 2023 00:11:44 -0000 Subject: [Python-checkins] gh-101283: Version was just released, so should be changed in 3.11.3 (GH-101719) Message-ID: https://github.com/python/cpython/commit/4a9dff0e5adc91cbb1ed68c495dac64ccfe608bd commit: 4a9dff0e5adc91cbb1ed68c495dac64ccfe608bd branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-08T16:11:36-08:00 summary: gh-101283: Version was just released, so should be changed in 3.11.3 (GH-101719) (cherry picked from commit 0e0c5d8baaa6aa91f4221c5aa57d5586e58e8652) Co-authored-by: Steve Dower files: M Doc/library/subprocess.rst diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index 3a315f0cb81d..9b153a2def1a 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -111,7 +111,7 @@ underlying :class:`Popen` interface can be used directly. Added the *text* parameter, as a more understandable alias of *universal_newlines*. Added the *capture_output* parameter. - .. versionchanged:: 3.11.2 + .. versionchanged:: 3.11.3 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -496,7 +496,7 @@ functions. *executable* parameter accepts a bytes and :term:`path-like object` on Windows. - .. versionchanged:: 3.11.2 + .. versionchanged:: 3.11.3 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -1176,7 +1176,7 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. - .. versionchanged:: 3.11.2 + .. versionchanged:: 3.11.3 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -1216,7 +1216,7 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. - .. versionchanged:: 3.11.2 + .. versionchanged:: 3.11.3 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -1279,7 +1279,7 @@ calls these functions. .. versionadded:: 3.7 *text* was added as a more readable alias for *universal_newlines*. - .. versionchanged:: 3.11.2 + .. versionchanged:: 3.11.3 Changed Windows shell search order for ``shell=True``. 
The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and From webhook-mailer at python.org Wed Feb 8 19:23:27 2023 From: webhook-mailer at python.org (gvanrossum) Date: Thu, 09 Feb 2023 00:23:27 -0000 Subject: [Python-checkins] gh-98831: Use opcode metadata for stack_effect() (#101704) Message-ID: https://github.com/python/cpython/commit/65b7b6bd23ea789357777f3a0a6f25a79bb04177 commit: 65b7b6bd23ea789357777f3a0a6f25a79bb04177 branch: main author: Guido van Rossum committer: gvanrossum date: 2023-02-08T16:23:19-08:00 summary: gh-98831: Use opcode metadata for stack_effect() (#101704) * Write output and metadata in a single run This halves the time to run the cases generator (most of the time goes into parsing the input). * Declare or define opcode metadata based on NEED_OPCODE_TABLES * Use generated metadata for stack_effect() * compile.o depends on opcode_metadata.h * Return -1 from _PyOpcode_num_popped/pushed for unknown opcode files: M Makefile.pre.in M Python/compile.c M Python/opcode_metadata.h M Tools/cases_generator/generate_cases.py diff --git a/Makefile.pre.in b/Makefile.pre.in index 2559df8e7495..7a84b953d979 100644 --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -1445,24 +1445,21 @@ regen-opcode-targets: .PHONY: regen-cases regen-cases: - # Regenerate Python/generated_cases.c.h from Python/bytecodes.c + # Regenerate Python/generated_cases.c.h + # and Python/opcode_metadata.h + # from Python/bytecodes.c # using Tools/cases_generator/generate_cases.py PYTHONPATH=$(srcdir)/Tools/cases_generator \ $(PYTHON_FOR_REGEN) \ $(srcdir)/Tools/cases_generator/generate_cases.py \ -i $(srcdir)/Python/bytecodes.c \ - -o $(srcdir)/Python/generated_cases.c.h.new + -o $(srcdir)/Python/generated_cases.c.h.new \ + -m $(srcdir)/Python/opcode_metadata.h.new $(UPDATE_FILE) $(srcdir)/Python/generated_cases.c.h $(srcdir)/Python/generated_cases.c.h.new - # Regenerate Python/opcode_metadata.h from Python/bytecodes.c - # using Tools/cases_generator/generate_cases.py --metadata - PYTHONPATH=$(srcdir)/Tools/cases_generator \ - $(PYTHON_FOR_REGEN) \ - $(srcdir)/Tools/cases_generator/generate_cases.py \ - --metadata \ - -i $(srcdir)/Python/bytecodes.c \ - -o $(srcdir)/Python/opcode_metadata.h.new $(UPDATE_FILE) $(srcdir)/Python/opcode_metadata.h $(srcdir)/Python/opcode_metadata.h.new +Python/compile.o: $(srcdir)/Python/opcode_metadata.h + Python/ceval.o: \ $(srcdir)/Python/ceval_macros.h \ $(srcdir)/Python/condvar.h \ diff --git a/Python/compile.c b/Python/compile.c index df2dffb95bbd..a3c915c3c14a 100644 --- a/Python/compile.c +++ b/Python/compile.c @@ -1074,135 +1074,49 @@ basicblock_next_instr(basicblock *b) static int stack_effect(int opcode, int oparg, int jump) { - switch (opcode) { - case NOP: - case EXTENDED_ARG: - case RESUME: - case CACHE: - return 0; - - /* Stack manipulation */ - case POP_TOP: - return -1; - case SWAP: - return 0; - case END_FOR: - return -2; - - /* Unary operators */ - case UNARY_NEGATIVE: - case UNARY_NOT: - case UNARY_INVERT: - return 0; - - case SET_ADD: - case LIST_APPEND: - return -1; - case MAP_ADD: - return -2; - - case BINARY_SUBSCR: - return -1; - case BINARY_SLICE: - return -2; - case STORE_SUBSCR: - return -3; - case STORE_SLICE: - return -4; - case DELETE_SUBSCR: - return -2; - - case GET_ITER: - return 0; - - case LOAD_BUILD_CLASS: - return 1; + if (0 <= opcode && opcode <= MAX_REAL_OPCODE) { + if (_PyOpcode_Deopt[opcode] != opcode) { + // Specialized instructions are not supported. 
+ return PY_INVALID_STACK_EFFECT; + } + int popped, pushed; + if (jump > 0) { + popped = _PyOpcode_num_popped(opcode, oparg, true); + pushed = _PyOpcode_num_pushed(opcode, oparg, true); + } + else { + popped = _PyOpcode_num_popped(opcode, oparg, false); + pushed = _PyOpcode_num_pushed(opcode, oparg, false); + } + if (popped < 0 || pushed < 0) { + return PY_INVALID_STACK_EFFECT; + } + if (jump >= 0) { + return pushed - popped; + } + if (jump < 0) { + // Compute max(pushed - popped, alt_pushed - alt_popped) + int alt_popped = _PyOpcode_num_popped(opcode, oparg, true); + int alt_pushed = _PyOpcode_num_pushed(opcode, oparg, true); + if (alt_popped < 0 || alt_pushed < 0) { + return PY_INVALID_STACK_EFFECT; + } + int diff = pushed - popped; + int alt_diff = alt_pushed - alt_popped; + if (alt_diff > diff) { + return alt_diff; + } + return diff; + } + } - case RETURN_VALUE: - return -1; - case RETURN_CONST: - return 0; - case SETUP_ANNOTATIONS: - return 0; - case YIELD_VALUE: - return 0; + // Pseudo ops + switch (opcode) { case POP_BLOCK: - return 0; - case POP_EXCEPT: - return -1; - - case STORE_NAME: - return -1; - case DELETE_NAME: - return 0; - case UNPACK_SEQUENCE: - return oparg-1; - case UNPACK_EX: - return (oparg&0xFF) + (oparg>>8); - case FOR_ITER: - return 1; - case SEND: - return jump > 0 ? -1 : 0; - case STORE_ATTR: - return -2; - case DELETE_ATTR: - return -1; - case STORE_GLOBAL: - return -1; - case DELETE_GLOBAL: - return 0; - case LOAD_CONST: - return 1; - case LOAD_NAME: - return 1; - case BUILD_TUPLE: - case BUILD_LIST: - case BUILD_SET: - case BUILD_STRING: - return 1-oparg; - case BUILD_MAP: - return 1 - 2*oparg; - case BUILD_CONST_KEY_MAP: - return -oparg; - case LOAD_ATTR: - return (oparg & 1); - case COMPARE_OP: - case IS_OP: - case CONTAINS_OP: - return -1; - case CHECK_EXC_MATCH: - return 0; - case CHECK_EG_MATCH: - return 0; - case IMPORT_NAME: - return -1; - case IMPORT_FROM: - return 1; - - /* Jumps */ - case JUMP_FORWARD: - case JUMP_BACKWARD: case JUMP: - case JUMP_BACKWARD_NO_INTERRUPT: case JUMP_NO_INTERRUPT: return 0; - case JUMP_IF_TRUE_OR_POP: - case JUMP_IF_FALSE_OR_POP: - return jump ? 0 : -1; - - case POP_JUMP_IF_NONE: - case POP_JUMP_IF_NOT_NONE: - case POP_JUMP_IF_FALSE: - case POP_JUMP_IF_TRUE: - return -1; - - case COMPARE_AND_BRANCH: - return -2; - - case LOAD_GLOBAL: - return (oparg & 1) + 1; - /* Exception handling pseudo-instructions */ case SETUP_FINALLY: /* 0 in the normal flow. @@ -1218,109 +1132,13 @@ stack_effect(int opcode, int oparg, int jump) * of __(a)enter__ and push 2 values before jumping to the handler * if an exception be raised. */ return jump ? 
1 : 0; - case PREP_RERAISE_STAR: - return -1; - case RERAISE: - return -1; - case PUSH_EXC_INFO: - return 1; - - case WITH_EXCEPT_START: - return 1; - - case LOAD_FAST: - case LOAD_FAST_CHECK: - return 1; - case STORE_FAST: - return -1; - case DELETE_FAST: - return 0; - - case RETURN_GENERATOR: - return 0; - case RAISE_VARARGS: - return -oparg; - - /* Functions and calls */ - case KW_NAMES: - return 0; - case CALL: - return -1-oparg; - case CALL_INTRINSIC_1: - return 0; - case CALL_FUNCTION_EX: - return -2 - ((oparg & 0x01) != 0); - case MAKE_FUNCTION: - return 0 - ((oparg & 0x01) != 0) - ((oparg & 0x02) != 0) - - ((oparg & 0x04) != 0) - ((oparg & 0x08) != 0); - case BUILD_SLICE: - if (oparg == 3) - return -2; - else - return -1; - - /* Closures */ - case MAKE_CELL: - case COPY_FREE_VARS: - return 0; - case LOAD_CLOSURE: - return 1; - case LOAD_DEREF: - case LOAD_CLASSDEREF: - return 1; - case STORE_DEREF: - return -1; - case DELETE_DEREF: - return 0; - - /* Iterators and generators */ - case GET_AWAITABLE: - return 0; - - case BEFORE_ASYNC_WITH: - case BEFORE_WITH: - return 1; - case GET_AITER: - return 0; - case GET_ANEXT: - return 1; - case GET_YIELD_FROM_ITER: - return 0; - case END_ASYNC_FOR: - return -2; - case CLEANUP_THROW: - return -2; - case FORMAT_VALUE: - /* If there's a fmt_spec on the stack, we go from 2->1, - else 1->1. */ - return (oparg & FVS_MASK) == FVS_HAVE_SPEC ? -1 : 0; case LOAD_METHOD: return 1; - case LOAD_ASSERTION_ERROR: - return 1; - case LIST_EXTEND: - case SET_UPDATE: - case DICT_MERGE: - case DICT_UPDATE: - return -1; - case MATCH_CLASS: - return -2; - case GET_LEN: - case MATCH_MAPPING: - case MATCH_SEQUENCE: - case MATCH_KEYS: - return 1; - case COPY: - case PUSH_NULL: - return 1; - case BINARY_OP: - return -1; - case INTERPRETER_EXIT: - return -1; default: return PY_INVALID_STACK_EFFECT; } + return PY_INVALID_STACK_EFFECT; /* not reachable */ } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index 98791043f552..db1dfd37a901 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -2,8 +2,10 @@ // from Python/bytecodes.c // Do not edit! 
-#ifndef NDEBUG -static int +#ifndef NEED_OPCODE_TABLES +extern int _PyOpcode_num_popped(int opcode, int oparg, bool jump); +#else +int _PyOpcode_num_popped(int opcode, int oparg, bool jump) { switch(opcode) { case NOP: @@ -345,13 +347,15 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { case CACHE: return 0; default: - Py_UNREACHABLE(); + return -1; } } #endif -#ifndef NDEBUG -static int +#ifndef NEED_OPCODE_TABLES +extern int _PyOpcode_num_pushed(int opcode, int oparg, bool jump); +#else +int _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { switch(opcode) { case NOP: @@ -693,10 +697,11 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case CACHE: return 0; default: - Py_UNREACHABLE(); + return -1; } } #endif + enum Direction { DIR_NONE, DIR_READ, DIR_WRITE }; enum InstructionFormat { INSTR_FMT_IB, INSTR_FMT_IBC, INSTR_FMT_IBC0, INSTR_FMT_IBC000, INSTR_FMT_IBC0000, INSTR_FMT_IBC00000000, INSTR_FMT_IBIB, INSTR_FMT_IX, INSTR_FMT_IXC, INSTR_FMT_IXC000 }; struct opcode_metadata { @@ -705,7 +710,12 @@ struct opcode_metadata { enum Direction dir_op3; bool valid_entry; enum InstructionFormat instr_format; -} _PyOpcode_opcode_metadata[256] = { +}; + +#ifndef NEED_OPCODE_TABLES +extern const struct opcode_metadata _PyOpcode_opcode_metadata[256]; +#else +const struct opcode_metadata _PyOpcode_opcode_metadata[256] = { [NOP] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [RESUME] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [LOAD_CLOSURE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, @@ -876,3 +886,4 @@ struct opcode_metadata { [EXTENDED_ARG] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [CACHE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, }; +#endif diff --git a/Tools/cases_generator/generate_cases.py b/Tools/cases_generator/generate_cases.py index 1fcfbb677090..aa8e14075c87 100644 --- a/Tools/cases_generator/generate_cases.py +++ b/Tools/cases_generator/generate_cases.py @@ -43,10 +43,7 @@ "-o", "--output", type=str, help="Generated code", default=DEFAULT_OUTPUT ) arg_parser.add_argument( - "-m", - "--metadata", - action="store_true", - help=f"Generate metadata instead, changes output default to {DEFAULT_METADATA_OUTPUT}", + "-m", "--metadata", type=str, help="Generated metadata", default=DEFAULT_METADATA_OUTPUT ) @@ -498,13 +495,15 @@ class Analyzer: filename: str output_filename: str + metadata_filename: str src: str errors: int = 0 - def __init__(self, filename: str, output_filename: str): + def __init__(self, filename: str, output_filename: str, metadata_filename: str): """Read the input file.""" self.filename = filename self.output_filename = output_filename + self.metadata_filename = metadata_filename with open(filename) as f: self.src = f.read() @@ -889,21 +888,25 @@ def write_stack_effect_functions(self) -> None: def write_function( direction: str, data: list[tuple[AnyInstruction, str]] ) -> None: - self.out.emit("\n#ifndef NDEBUG") - self.out.emit("static int") + self.out.emit("") + self.out.emit("#ifndef NEED_OPCODE_TABLES") + self.out.emit(f"extern int _PyOpcode_num_{direction}(int opcode, int oparg, bool jump);") + self.out.emit("#else") + self.out.emit("int") self.out.emit(f"_PyOpcode_num_{direction}(int opcode, int oparg, bool jump) {{") self.out.emit(" switch(opcode) {") for instr, effect in data: self.out.emit(f" case {instr.name}:") self.out.emit(f" return {effect};") self.out.emit(" default:") - self.out.emit(" Py_UNREACHABLE();") + self.out.emit(" return -1;") self.out.emit(" }") self.out.emit("}") 
self.out.emit("#endif") write_function("popped", popped_data) write_function("pushed", pushed_data) + self.out.emit("") def write_metadata(self) -> None: """Write instruction metadata to output file.""" @@ -924,7 +927,7 @@ def write_metadata(self) -> None: # Turn it into a list of enum definitions. format_enums = [INSTR_FMT_PREFIX + format for format in sorted(all_formats)] - with open(self.output_filename, "w") as f: + with open(self.metadata_filename, "w") as f: # Write provenance header f.write(f"// This file is generated by {THIS} --metadata\n") f.write(f"// from {os.path.relpath(self.filename, ROOT)}\n") @@ -935,7 +938,7 @@ def write_metadata(self) -> None: self.write_stack_effect_functions() - # Write variable definition + # Write type definitions self.out.emit("enum Direction { DIR_NONE, DIR_READ, DIR_WRITE };") self.out.emit(f"enum InstructionFormat {{ {', '.join(format_enums)} }};") self.out.emit("struct opcode_metadata {") @@ -945,7 +948,14 @@ def write_metadata(self) -> None: self.out.emit("enum Direction dir_op3;") self.out.emit("bool valid_entry;") self.out.emit("enum InstructionFormat instr_format;") - self.out.emit("} _PyOpcode_opcode_metadata[256] = {") + self.out.emit("};") + self.out.emit("") + + # Write metadata array declaration + self.out.emit("#ifndef NEED_OPCODE_TABLES") + self.out.emit("extern const struct opcode_metadata _PyOpcode_opcode_metadata[256];") + self.out.emit("#else") + self.out.emit("const struct opcode_metadata _PyOpcode_opcode_metadata[256] = {") # Write metadata for each instruction for thing in self.everything: @@ -962,6 +972,7 @@ def write_metadata(self) -> None: # Write end of array self.out.emit("};") + self.out.emit("#endif") def write_metadata_for_inst(self, instr: Instruction) -> None: """Write metadata for a single instruction.""" @@ -1184,18 +1195,13 @@ def variable_used(node: parser.Node, name: str) -> bool: def main(): """Parse command line, parse input, analyze, write output.""" args = arg_parser.parse_args() # Prints message and sys.exit(2) on error - if args.metadata: - if args.output == DEFAULT_OUTPUT: - args.output = DEFAULT_METADATA_OUTPUT - a = Analyzer(args.input, args.output) # Raises OSError if input unreadable + a = Analyzer(args.input, args.output, args.metadata) # Raises OSError if input unreadable a.parse() # Raises SyntaxError on failure a.analyze() # Prints messages and sets a.errors on failure if a.errors: sys.exit(f"Found {a.errors} errors") - if args.metadata: - a.write_metadata() - else: - a.write_instructions() # Raises OSError if output can't be written + a.write_instructions() # Raises OSError if output can't be written + a.write_metadata() if __name__ == "__main__": From webhook-mailer at python.org Wed Feb 8 20:00:48 2023 From: webhook-mailer at python.org (gpshead) Date: Thu, 09 Feb 2023 01:00:48 -0000 Subject: [Python-checkins] gh-85984: Remove legacy Lib/pty.py code. (#92365) Message-ID: https://github.com/python/cpython/commit/244d4cd9d22d73fb3c0938937c4f435bd68f32d4 commit: 244d4cd9d22d73fb3c0938937c4f435bd68f32d4 branch: main author: Soumendra Ganguly <67527439+8vasu at users.noreply.github.com> committer: gpshead date: 2023-02-08T17:00:17-08:00 summary: gh-85984: Remove legacy Lib/pty.py code. (#92365) Refactored the implementation of pty.fork to use os.login_tty. A DeprecationWarning is now raised by pty.master_open() and pty.slave_open(). They were undocumented and deprecated long long ago in the docstring in favor of pty.openpty. 
Signed-off-by: Soumendra Ganguly Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com> Co-authored-by: Gregory P. Smith files: A Misc/NEWS.d/next/Library/2023-02-05-21-40-15.gh-issue-85984.Kfzbb2.rst M Doc/whatsnew/3.12.rst M Lib/pty.py diff --git a/Doc/whatsnew/3.12.rst b/Doc/whatsnew/3.12.rst index b723b70154f0..45a5e5062d55 100644 --- a/Doc/whatsnew/3.12.rst +++ b/Doc/whatsnew/3.12.rst @@ -512,6 +512,10 @@ Pending Removal in Python 3.14 :func:`~multiprocessing.set_start_method` APIs to explicitly specify when your code *requires* ``'fork'``. See :ref:`multiprocessing-start-methods`. +* :mod:`pty` has two undocumented ``master_open()`` and ``slave_open()`` + functions that have been deprecated since Python 2 but only gained a + proper :exc:`DeprecationWarning` in 3.12. Remove them in 3.14. + Pending Removal in Future Versions ---------------------------------- diff --git a/Lib/pty.py b/Lib/pty.py index 03073f07c92c..6571050886bd 100644 --- a/Lib/pty.py +++ b/Lib/pty.py @@ -40,6 +40,9 @@ def master_open(): Open a pty master and return the fd, and the filename of the slave end. Deprecated, use openpty() instead.""" + import warnings + warnings.warn("Use pty.openpty() instead.", DeprecationWarning, stacklevel=2) # Remove API in 3.14 + try: master_fd, slave_fd = os.openpty() except (AttributeError, OSError): @@ -69,6 +72,9 @@ def slave_open(tty_name): opened filedescriptor. Deprecated, use openpty() instead.""" + import warnings + warnings.warn("Use pty.openpty() instead.", DeprecationWarning, stacklevel=2) # Remove API in 3.14 + result = os.open(tty_name, os.O_RDWR) try: from fcntl import ioctl, I_PUSH @@ -101,20 +107,8 @@ def fork(): master_fd, slave_fd = openpty() pid = os.fork() if pid == CHILD: - # Establish a new session. - os.setsid() os.close(master_fd) - - # Slave becomes stdin/stdout/stderr of child. - os.dup2(slave_fd, STDIN_FILENO) - os.dup2(slave_fd, STDOUT_FILENO) - os.dup2(slave_fd, STDERR_FILENO) - if slave_fd > STDERR_FILENO: - os.close(slave_fd) - - # Explicitly open the tty to make it become a controlling tty. - tmp_fd = os.open(os.ttyname(STDOUT_FILENO), os.O_RDWR) - os.close(tmp_fd) + os.login_tty(slave_fd) else: os.close(slave_fd) diff --git a/Misc/NEWS.d/next/Library/2023-02-05-21-40-15.gh-issue-85984.Kfzbb2.rst b/Misc/NEWS.d/next/Library/2023-02-05-21-40-15.gh-issue-85984.Kfzbb2.rst new file mode 100644 index 000000000000..c91829f2c739 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-05-21-40-15.gh-issue-85984.Kfzbb2.rst @@ -0,0 +1,4 @@ +Refactored the implementation of :func:`pty.fork` to use :func:`os.login_tty`. + +A :exc:`DeprecationWarning` is now raised by ``pty.master_open()`` and ``pty.slave_open()``. They were +undocumented and deprecated long long ago in the docstring in favor of :func:`pty.openpty`. From webhook-mailer at python.org Thu Feb 9 03:41:00 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 09 Feb 2023 08:41:00 -0000 Subject: [Python-checkins] gh-101678: refactor the math module to use special functions from c11 (GH-101679) Message-ID: https://github.com/python/cpython/commit/58395759b04273edccf3d199606088e0703ae6b1 commit: 58395759b04273edccf3d199606088e0703ae6b1 branch: main author: Sergey B Kirpichev committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-09T00:40:52-08:00 summary: gh-101678: refactor the math module to use special functions from c11 (GH-101679) Shouldn't affect users, hence no news. 
Automerge-Triggered-By: GH:mdickinson files: M Modules/_math.h M Modules/mathmodule.c diff --git a/Modules/_math.h b/Modules/_math.h index 4a6bc223ef5f..2285b64747c0 100644 --- a/Modules/_math.h +++ b/Modules/_math.h @@ -7,8 +7,9 @@ static double _Py_log1p(double x) { - /* Some platforms supply a log1p function but don't respect the sign of - zero: log1p(-0.0) gives 0.0 instead of the correct result of -0.0. + /* Some platforms (e.g. MacOS X 10.8, see gh-59682) supply a log1p function + but don't respect the sign of zero: log1p(-0.0) gives 0.0 instead of + the correct result of -0.0. To save fiddling with configure tests and platform checks, we handle the special case of zero input directly on all platforms. diff --git a/Modules/mathmodule.c b/Modules/mathmodule.c index 992957e675a7..939954c95d9f 100644 --- a/Modules/mathmodule.c +++ b/Modules/mathmodule.c @@ -101,10 +101,6 @@ get_math_module_state(PyObject *module) static const double pi = 3.141592653589793238462643383279502884197; static const double logpi = 1.144729885849400174143427351353058711647; -#if !defined(HAVE_ERF) || !defined(HAVE_ERFC) -static const double sqrtpi = 1.772453850905516027298167483341145182798; -#endif /* !defined(HAVE_ERF) || !defined(HAVE_ERFC) */ - /* Version of PyFloat_AsDouble() with in-line fast paths for exact floats and integers. Gives a substantial @@ -162,7 +158,9 @@ m_sinpi(double x) return copysign(1.0, x)*r; } -/* Implementation of the real gamma function. In extensive but non-exhaustive +/* Implementation of the real gamma function. Kept here to work around + issues (see e.g. gh-70309) with quality of libm's tgamma/lgamma implementations + on various platforms (Windows, MacOS). In extensive but non-exhaustive random tests, this function proved accurate to within <= 10 ulps across the entire float domain. Note that accuracy may depend on the quality of the system math functions, the pow function in particular. Special cases @@ -458,163 +456,6 @@ m_lgamma(double x) return r; } -#if !defined(HAVE_ERF) || !defined(HAVE_ERFC) - -/* - Implementations of the error function erf(x) and the complementary error - function erfc(x). - - Method: we use a series approximation for erf for small x, and a continued - fraction approximation for erfc(x) for larger x; - combined with the relations erf(-x) = -erf(x) and erfc(x) = 1.0 - erf(x), - this gives us erf(x) and erfc(x) for all x. - - The series expansion used is: - - erf(x) = x*exp(-x*x)/sqrt(pi) * [ - 2/1 + 4/3 x**2 + 8/15 x**4 + 16/105 x**6 + ...] - - The coefficient of x**(2k-2) here is 4**k*factorial(k)/factorial(2*k). - This series converges well for smallish x, but slowly for larger x. - - The continued fraction expansion used is: - - erfc(x) = x*exp(-x*x)/sqrt(pi) * [1/(0.5 + x**2 -) 0.5/(2.5 + x**2 - ) - 3.0/(4.5 + x**2 - ) 7.5/(6.5 + x**2 - ) ...] - - after the first term, the general term has the form: - - k*(k-0.5)/(2*k+0.5 + x**2 - ...). - - This expansion converges fast for larger x, but convergence becomes - infinitely slow as x approaches 0.0. The (somewhat naive) continued - fraction evaluation algorithm used below also risks overflow for large x; - but for large x, erfc(x) == 0.0 to within machine precision. (For - example, erfc(30.0) is approximately 2.56e-393). - - Parameters: use series expansion for abs(x) < ERF_SERIES_CUTOFF and - continued fraction expansion for ERF_SERIES_CUTOFF <= abs(x) < - ERFC_CONTFRAC_CUTOFF. ERFC_SERIES_TERMS and ERFC_CONTFRAC_TERMS are the - numbers of terms to use for the relevant expansions. 
*/ - -#define ERF_SERIES_CUTOFF 1.5 -#define ERF_SERIES_TERMS 25 -#define ERFC_CONTFRAC_CUTOFF 30.0 -#define ERFC_CONTFRAC_TERMS 50 - -/* - Error function, via power series. - - Given a finite float x, return an approximation to erf(x). - Converges reasonably fast for small x. -*/ - -static double -m_erf_series(double x) -{ - double x2, acc, fk, result; - int i, saved_errno; - - x2 = x * x; - acc = 0.0; - fk = (double)ERF_SERIES_TERMS + 0.5; - for (i = 0; i < ERF_SERIES_TERMS; i++) { - acc = 2.0 + x2 * acc / fk; - fk -= 1.0; - } - /* Make sure the exp call doesn't affect errno; - see m_erfc_contfrac for more. */ - saved_errno = errno; - result = acc * x * exp(-x2) / sqrtpi; - errno = saved_errno; - return result; -} - -/* - Complementary error function, via continued fraction expansion. - - Given a positive float x, return an approximation to erfc(x). Converges - reasonably fast for x large (say, x > 2.0), and should be safe from - overflow if x and nterms are not too large. On an IEEE 754 machine, with x - <= 30.0, we're safe up to nterms = 100. For x >= 30.0, erfc(x) is smaller - than the smallest representable nonzero float. */ - -static double -m_erfc_contfrac(double x) -{ - double x2, a, da, p, p_last, q, q_last, b, result; - int i, saved_errno; - - if (x >= ERFC_CONTFRAC_CUTOFF) - return 0.0; - - x2 = x*x; - a = 0.0; - da = 0.5; - p = 1.0; p_last = 0.0; - q = da + x2; q_last = 1.0; - for (i = 0; i < ERFC_CONTFRAC_TERMS; i++) { - double temp; - a += da; - da += 2.0; - b = da + x2; - temp = p; p = b*p - a*p_last; p_last = temp; - temp = q; q = b*q - a*q_last; q_last = temp; - } - /* Issue #8986: On some platforms, exp sets errno on underflow to zero; - save the current errno value so that we can restore it later. */ - saved_errno = errno; - result = p / q * x * exp(-x2) / sqrtpi; - errno = saved_errno; - return result; -} - -#endif /* !defined(HAVE_ERF) || !defined(HAVE_ERFC) */ - -/* Error function erf(x), for general x */ - -static double -m_erf(double x) -{ -#ifdef HAVE_ERF - return erf(x); -#else - double absx, cf; - - if (Py_IS_NAN(x)) - return x; - absx = fabs(x); - if (absx < ERF_SERIES_CUTOFF) - return m_erf_series(x); - else { - cf = m_erfc_contfrac(absx); - return x > 0.0 ? 1.0 - cf : cf - 1.0; - } -#endif -} - -/* Complementary error function erfc(x), for general x. */ - -static double -m_erfc(double x) -{ -#ifdef HAVE_ERFC - return erfc(x); -#else - double absx, cf; - - if (Py_IS_NAN(x)) - return x; - absx = fabs(x); - if (absx < ERF_SERIES_CUTOFF) - return 1.0 - m_erf_series(x); - else { - cf = m_erfc_contfrac(absx); - return x > 0.0 ? cf : 2.0 - cf; - } -#endif -} - /* wrapper for atan2 that deals directly with special cases before delegating to the platform libm for the remaining cases. This @@ -801,25 +642,7 @@ m_log2(double x) } if (x > 0.0) { -#ifdef HAVE_LOG2 return log2(x); -#else - double m; - int e; - m = frexp(x, &e); - /* We want log2(m * 2**e) == log(m) / log(2) + e. Care is needed when - * x is just greater than 1.0: in that case e is 1, log(m) is negative, - * and we get significant cancellation error from the addition of - * log(m) / log(2) to e. The slight rewrite of the expression below - * avoids this problem. 
- */ - if (x >= 1.0) { - return log(2.0 * m) / log(2.0) + (e - 1); - } - else { - return log(m) / log(2.0) + e; - } -#endif } else if (x == 0.0) { errno = EDOM; @@ -1261,10 +1084,10 @@ FUNC1(cos, cos, 0, FUNC1(cosh, cosh, 1, "cosh($module, x, /)\n--\n\n" "Return the hyperbolic cosine of x.") -FUNC1A(erf, m_erf, +FUNC1A(erf, erf, "erf($module, x, /)\n--\n\n" "Error function at x.") -FUNC1A(erfc, m_erfc, +FUNC1A(erfc, erfc, "erfc($module, x, /)\n--\n\n" "Complementary error function at x.") FUNC1(exp, exp, 1, From webhook-mailer at python.org Thu Feb 9 04:40:23 2023 From: webhook-mailer at python.org (mdickinson) Date: Thu, 09 Feb 2023 09:40:23 -0000 Subject: [Python-checkins] gh-101678: Merge math_1_to_whatever() and math_1() (#101730) Message-ID: https://github.com/python/cpython/commit/45fa12aec8f508c224a1521cfe3ae597f1026264 commit: 45fa12aec8f508c224a1521cfe3ae597f1026264 branch: main author: Sergey B Kirpichev committer: mdickinson date: 2023-02-09T09:40:13Z summary: gh-101678: Merge math_1_to_whatever() and math_1() (#101730) `math_1_to_whatever()` is no longer useful, since all existing uses of it convert to `float`. Earlier versions of Python used `math_1_to_whatever` with an integer target; see gh-16991 for the PR where that use was removed. files: M Modules/mathmodule.c diff --git a/Modules/mathmodule.c b/Modules/mathmodule.c index 939954c95d9f..544560e8322c 100644 --- a/Modules/mathmodule.c +++ b/Modules/mathmodule.c @@ -875,9 +875,7 @@ is_error(double x) */ static PyObject * -math_1_to_whatever(PyObject *arg, double (*func) (double), - PyObject *(*from_double_func) (double), - int can_overflow) +math_1(PyObject *arg, double (*func) (double), int can_overflow) { double x, r; x = PyFloat_AsDouble(arg); @@ -903,7 +901,7 @@ math_1_to_whatever(PyObject *arg, double (*func) (double), /* this branch unnecessary on most platforms */ return NULL; - return (*from_double_func)(r); + return PyFloat_FromDouble(r); } /* variant of math_1, to be used when the function being wrapped is known to @@ -951,12 +949,6 @@ math_1a(PyObject *arg, double (*func) (double)) OverflowError. */ -static PyObject * -math_1(PyObject *arg, double (*func) (double), int can_overflow) -{ - return math_1_to_whatever(arg, func, PyFloat_FromDouble, can_overflow); -} - static PyObject * math_2(PyObject *const *args, Py_ssize_t nargs, double (*func) (double, double), const char *funcname) From webhook-mailer at python.org Thu Feb 9 04:59:48 2023 From: webhook-mailer at python.org (ambv) Date: Thu, 09 Feb 2023 09:59:48 -0000 Subject: [Python-checkins] [3.9] gh-101283: Improved fallback logic for subprocess with shell=True on Windows (GH-101286) (#101709) Message-ID: https://github.com/python/cpython/commit/04cc4270251e4e76e210d032cea966cedbd486e8 commit: 04cc4270251e4e76e210d032cea966cedbd486e8 branch: 3.9 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: ambv date: 2023-02-09T10:59:40+01:00 summary: [3.9] gh-101283: Improved fallback logic for subprocess with shell=True on Windows (GH-101286) (#101709) Co-authored-by: Oleg Iarygin Co-authored-by: Steve Dower files: A Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst M Doc/library/subprocess.rst M Lib/subprocess.py diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index 10e40635e248..370ea5839b6f 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -111,6 +111,14 @@ compatibility with older versions, see the :ref:`call-function-trio` section. 
Added the *text* parameter, as a more understandable alias of *universal_newlines*. Added the *capture_output* parameter. + .. versionchanged:: 3.9.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. class:: CompletedProcess The return value from :func:`run`, representing a process that has finished. @@ -468,6 +476,14 @@ functions. *executable* parameter accepts a bytes and :term:`path-like object` on Windows. + .. versionchanged:: 3.9.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + *stdin*, *stdout* and *stderr* specify the executed program's standard input, standard output and standard error file handles, respectively. Valid values are :data:`PIPE`, :data:`DEVNULL`, an existing file descriptor (a positive @@ -1126,6 +1142,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. versionchanged:: 3.9.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. function:: check_call(args, *, stdin=None, stdout=None, stderr=None, \ shell=False, cwd=None, timeout=None, \ **other_popen_kwargs) @@ -1158,6 +1182,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. versionchanged:: 3.9.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. function:: check_output(args, *, stdin=None, stderr=None, shell=False, \ cwd=None, encoding=None, errors=None, \ @@ -1213,6 +1245,14 @@ calls these functions. .. versionadded:: 3.7 *text* was added as a more readable alias for *universal_newlines*. + .. versionchanged:: 3.9.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. _subprocess-replacements: diff --git a/Lib/subprocess.py b/Lib/subprocess.py index 4effc1d8b3fb..8579d3f29ee5 100644 --- a/Lib/subprocess.py +++ b/Lib/subprocess.py @@ -1407,7 +1407,23 @@ def _execute_child(self, args, executable, preexec_fn, close_fds, if shell: startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW startupinfo.wShowWindow = _winapi.SW_HIDE - comspec = os.environ.get("COMSPEC", "cmd.exe") + if not executable: + # gh-101283: without a fully-qualified path, before Windows + # checks the system directories, it first looks in the + # application directory, and also the current directory if + # NeedCurrentDirectoryForExePathW(ExeName) is true, so try + # to avoid executing unqualified "cmd.exe". 
+ comspec = os.environ.get('ComSpec') + if not comspec: + system_root = os.environ.get('SystemRoot', '') + comspec = os.path.join(system_root, 'System32', 'cmd.exe') + if not os.path.isabs(comspec): + raise FileNotFoundError('shell not found: neither %ComSpec% nor %SystemRoot% is set') + if os.path.isabs(comspec): + executable = comspec + else: + comspec = executable + args = '{} /c "{}"'.format (comspec, args) if cwd is not None: diff --git a/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst new file mode 100644 index 000000000000..0efdfa102341 --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst @@ -0,0 +1,3 @@ +:class:`subprocess.Popen` now uses a safer approach to find +``cmd.exe`` when launching with ``shell=True``. Patch by Eryk Sun, +based on a patch by Oleg Iarygin. From webhook-mailer at python.org Thu Feb 9 05:01:04 2023 From: webhook-mailer at python.org (ambv) Date: Thu, 09 Feb 2023 10:01:04 -0000 Subject: [Python-checkins] [3.8] gh-101283: Improved fallback logic for subprocess with shell=True on Windows (GH-101286) (#101710) Message-ID: https://github.com/python/cpython/commit/32a1a619516be1a25128d1991dc8b6cb8863e2b7 commit: 32a1a619516be1a25128d1991dc8b6cb8863e2b7 branch: 3.8 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: ambv date: 2023-02-09T11:00:51+01:00 summary: [3.8] gh-101283: Improved fallback logic for subprocess with shell=True on Windows (GH-101286) (#101710) Co-authored-by: Oleg Iarygin Co-authored-by: Steve Dower files: A Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst M Doc/library/subprocess.rst M Lib/subprocess.py diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index 7ce274652d39..8eea5a474eda 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -111,6 +111,14 @@ compatibility with older versions, see the :ref:`call-function-trio` section. Added the *text* parameter, as a more understandable alias of *universal_newlines*. Added the *capture_output* parameter. + .. versionchanged:: 3.8.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. class:: CompletedProcess The return value from :func:`run`, representing a process that has finished. @@ -459,6 +467,14 @@ functions. *executable* parameter accepts a bytes and :term:`path-like object` on Windows. + .. versionchanged:: 3.8.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + *stdin*, *stdout* and *stderr* specify the executed program's standard input, standard output and standard error file handles, respectively. Valid values are :data:`PIPE`, :data:`DEVNULL`, an existing file descriptor (a positive @@ -1077,6 +1093,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. versionchanged:: 3.8.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. 
As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. function:: check_call(args, *, stdin=None, stdout=None, stderr=None, \ shell=False, cwd=None, timeout=None, \ **other_popen_kwargs) @@ -1107,6 +1131,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. versionchanged:: 3.8.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. function:: check_output(args, *, stdin=None, stderr=None, shell=False, \ cwd=None, encoding=None, errors=None, \ @@ -1162,6 +1194,14 @@ calls these functions. .. versionadded:: 3.7 *text* was added as a more readable alias for *universal_newlines*. + .. versionchanged:: 3.8.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. _subprocess-replacements: diff --git a/Lib/subprocess.py b/Lib/subprocess.py index d4d04a5c3436..7d363fa3153f 100644 --- a/Lib/subprocess.py +++ b/Lib/subprocess.py @@ -1298,7 +1298,23 @@ def _execute_child(self, args, executable, preexec_fn, close_fds, if shell: startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW startupinfo.wShowWindow = _winapi.SW_HIDE - comspec = os.environ.get("COMSPEC", "cmd.exe") + if not executable: + # gh-101283: without a fully-qualified path, before Windows + # checks the system directories, it first looks in the + # application directory, and also the current directory if + # NeedCurrentDirectoryForExePathW(ExeName) is true, so try + # to avoid executing unqualified "cmd.exe". + comspec = os.environ.get('ComSpec') + if not comspec: + system_root = os.environ.get('SystemRoot', '') + comspec = os.path.join(system_root, 'System32', 'cmd.exe') + if not os.path.isabs(comspec): + raise FileNotFoundError('shell not found: neither %ComSpec% nor %SystemRoot% is set') + if os.path.isabs(comspec): + executable = comspec + else: + comspec = executable + args = '{} /c "{}"'.format (comspec, args) if cwd is not None: diff --git a/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst new file mode 100644 index 000000000000..0efdfa102341 --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst @@ -0,0 +1,3 @@ +:class:`subprocess.Popen` now uses a safer approach to find +``cmd.exe`` when launching with ``shell=True``. Patch by Eryk Sun, +based on a patch by Oleg Iarygin. 
From webhook-mailer at python.org Thu Feb 9 05:35:04 2023 From: webhook-mailer at python.org (ambv) Date: Thu, 09 Feb 2023 10:35:04 -0000 Subject: [Python-checkins] [3.7] gh-101283: Improved fallback logic for subprocess with shell=True on Windows (GH-101286) (#101713) Message-ID: https://github.com/python/cpython/commit/c7fdc9c743d23b3c77ab4b1261748b8e395c298a commit: c7fdc9c743d23b3c77ab4b1261748b8e395c298a branch: 3.7 author: Steve Dower committer: ambv date: 2023-02-09T11:34:52+01:00 summary: [3.7] gh-101283: Improved fallback logic for subprocess with shell=True on Windows (GH-101286) (#101713) Co-authored-by: Oleg Iarygin Co-authored-by: ?ukasz Langa Co-authored-by: Oleg Iarygin files: A Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst M Doc/library/subprocess.rst M Lib/subprocess.py diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index 9f2a05662860..4c51f4f1f03c 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -111,6 +111,14 @@ compatibility with older versions, see the :ref:`call-function-trio` section. Added the *text* parameter, as a more understandable alias of *universal_newlines*. Added the *capture_output* parameter. + .. versionchanged:: 3.7.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. class:: CompletedProcess The return value from :func:`run`, representing a process that has finished. @@ -442,6 +450,17 @@ functions. :program:`ps`. If ``shell=True``, on POSIX the *executable* argument specifies a replacement shell for the default :file:`/bin/sh`. + .. versionchanged:: 3.6 + *executable* parameter accepts a :term:`path-like object` on POSIX. + + .. versionchanged:: 3.7.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + *stdin*, *stdout* and *stderr* specify the executed program's standard input, standard output and standard error file handles, respectively. Valid values are :data:`PIPE`, :data:`DEVNULL`, an existing file descriptor (a positive @@ -1032,6 +1051,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. versionchanged:: 3.7.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. function:: check_call(args, *, stdin=None, stdout=None, stderr=None, \ shell=False, cwd=None, timeout=None, \ **other_popen_kwargs) @@ -1062,6 +1089,14 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. + .. versionchanged:: 3.7.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. function:: check_output(args, *, stdin=None, stderr=None, shell=False, \ cwd=None, encoding=None, errors=None, \ @@ -1116,6 +1151,14 @@ calls these functions. .. 
versionadded:: 3.7 *text* was added as a more readable alias for *universal_newlines*. + .. versionchanged:: 3.7.17 + + Changed Windows shell search order for ``shell=True``. The current + directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and + ``%SystemRoot%\System32\cmd.exe``. As a result, dropping a + malicious program named ``cmd.exe`` into a current directory no + longer works. + .. _subprocess-replacements: diff --git a/Lib/subprocess.py b/Lib/subprocess.py index 3f99be551c51..fd67945b7e4b 100644 --- a/Lib/subprocess.py +++ b/Lib/subprocess.py @@ -1192,7 +1192,23 @@ def _execute_child(self, args, executable, preexec_fn, close_fds, if shell: startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW startupinfo.wShowWindow = _winapi.SW_HIDE - comspec = os.environ.get("COMSPEC", "cmd.exe") + if not executable: + # gh-101283: without a fully-qualified path, before Windows + # checks the system directories, it first looks in the + # application directory, and also the current directory if + # NeedCurrentDirectoryForExePathW(ExeName) is true, so try + # to avoid executing unqualified "cmd.exe". + comspec = os.environ.get('ComSpec') + if not comspec: + system_root = os.environ.get('SystemRoot', '') + comspec = os.path.join(system_root, 'System32', 'cmd.exe') + if not os.path.isabs(comspec): + raise FileNotFoundError('shell not found: neither %ComSpec% nor %SystemRoot% is set') + if os.path.isabs(comspec): + executable = comspec + else: + comspec = executable + args = '{} /c "{}"'.format (comspec, args) # Start the process diff --git a/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst new file mode 100644 index 000000000000..0efdfa102341 --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-01-24-16-12-00.gh-issue-101283.9tqu39.rst @@ -0,0 +1,3 @@ +:class:`subprocess.Popen` now uses a safer approach to find +``cmd.exe`` when launching with ``shell=True``. Patch by Eryk Sun, +based on a patch by Oleg Iarygin. From webhook-mailer at python.org Thu Feb 9 07:01:39 2023 From: webhook-mailer at python.org (corona10) Date: Thu, 09 Feb 2023 12:01:39 -0000 Subject: [Python-checkins] no-issue: Add Dong-hee Na as the cjkcodecs codeowner (gh-101731) Message-ID: https://github.com/python/cpython/commit/1c49e61b9b18d550b9c5cff69a1dd3bb218e544a commit: 1c49e61b9b18d550b9c5cff69a1dd3bb218e544a branch: main author: Dong-hee Na committer: corona10 date: 2023-02-09T21:01:32+09:00 summary: no-issue: Add Dong-hee Na as the cjkcodecs codeowner (gh-101731) files: M .github/CODEOWNERS diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 8dd07d911f5b..fc1bb3388976 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -151,6 +151,8 @@ Lib/ast.py @isidentical **/*sysconfig* @FFY00 +**/*cjkcodecs* @corona10 + # macOS /Mac/ @python/macos-team **/*osx_support* @python/macos-team From webhook-mailer at python.org Thu Feb 9 08:06:24 2023 From: webhook-mailer at python.org (Yhg1s) Date: Thu, 09 Feb 2023 13:06:24 -0000 Subject: [Python-checkins] GH-99293: Document that `Py_TPFLAGS_VALID_VERSION_TAG` shouldn't be used. (#GH-101736) Message-ID: https://github.com/python/cpython/commit/ecfd2d37c529c1952dc11fabe1758156924de67a commit: ecfd2d37c529c1952dc11fabe1758156924de67a branch: main author: Mark Shannon committer: Yhg1s date: 2023-02-09T14:05:53+01:00 summary: GH-99293: Document that `Py_TPFLAGS_VALID_VERSION_TAG` shouldn't be used. (#GH-101736) Document that Py_TPFLAGS_VALID_VERSION_TAG shouldn't be used. 
files: A Misc/NEWS.d/next/C API/2023-02-09-10-38-20.gh-issue-99293.mFqfpp.rst M Doc/c-api/typeobj.rst diff --git a/Doc/c-api/typeobj.rst b/Doc/c-api/typeobj.rst index 644830b940b4..fd8f49ccb1ca 100644 --- a/Doc/c-api/typeobj.rst +++ b/Doc/c-api/typeobj.rst @@ -1313,6 +1313,16 @@ and :c:type:`PyType_Type` effectively act as defaults.) .. versionadded:: 3.10 + .. data:: Py_TPFLAGS_VALID_VERSION_TAG + + Internal. Do not set or unset this flag. + To indicate that a class has changed call :c:func:`PyType_Modified` + + .. warning:: + This flag is present in header files, but is an internal feature and should + not be used. It will be removed in a future version of CPython + + .. c:member:: const char* PyTypeObject.tp_doc An optional pointer to a NUL-terminated C string giving the docstring for this diff --git a/Misc/NEWS.d/next/C API/2023-02-09-10-38-20.gh-issue-99293.mFqfpp.rst b/Misc/NEWS.d/next/C API/2023-02-09-10-38-20.gh-issue-99293.mFqfpp.rst new file mode 100644 index 000000000000..8c0f05543747 --- /dev/null +++ b/Misc/NEWS.d/next/C API/2023-02-09-10-38-20.gh-issue-99293.mFqfpp.rst @@ -0,0 +1,2 @@ +Document that the Py_TPFLAGS_VALID_VERSION_TAG is an internal feature, +should not be used, and will be removed. From webhook-mailer at python.org Thu Feb 9 11:49:29 2023 From: webhook-mailer at python.org (hugovk) Date: Thu, 09 Feb 2023 16:49:29 -0000 Subject: [Python-checkins] gh-101670: typo fix in PyImport_ExtendInittab() (#101723) Message-ID: https://github.com/python/cpython/commit/cb2411886a181d25e0cff2c870f331d2949874d5 commit: cb2411886a181d25e0cff2c870f331d2949874d5 branch: main author: Sergey B Kirpichev committer: hugovk date: 2023-02-09T18:49:02+02:00 summary: gh-101670: typo fix in PyImport_ExtendInittab() (#101723) Co-authored-by: Eric Snow files: M Python/import.c diff --git a/Python/import.c b/Python/import.c index 795d36896648..302255d76edc 100644 --- a/Python/import.c +++ b/Python/import.c @@ -2651,7 +2651,7 @@ PyImport_ExtendInittab(struct _inittab *newtab) int res = 0; if (_PyRuntime.imports.inittab != NULL) { - Py_FatalError("PyImport_ExtendInittab() may be be called after Py_Initialize()"); + Py_FatalError("PyImport_ExtendInittab() may not be called after Py_Initialize()"); } /* Count the number of entries in both tables */ From webhook-mailer at python.org Thu Feb 9 12:15:36 2023 From: webhook-mailer at python.org (zooba) Date: Thu, 09 Feb 2023 17:15:36 -0000 Subject: [Python-checkins] LibFFI build requires x64 Cygwin, and skip the ARM build (GH-101743) Message-ID: https://github.com/python/cpython/commit/f23371fbc9bb283207ecebf8efd81a22538b4327 commit: f23371fbc9bb283207ecebf8efd81a22538b4327 branch: main author: Steve Dower committer: zooba date: 2023-02-09T17:15:19Z summary: LibFFI build requires x64 Cygwin, and skip the ARM build (GH-101743) files: M PCbuild/prepare_libffi.bat diff --git a/PCbuild/prepare_libffi.bat b/PCbuild/prepare_libffi.bat index 7e7842a2fc97..ef36c36e058a 100644 --- a/PCbuild/prepare_libffi.bat +++ b/PCbuild/prepare_libffi.bat @@ -60,7 +60,7 @@ goto :Usage if NOT DEFINED BUILD_X64 if NOT DEFINED BUILD_X86 if NOT DEFINED BUILD_ARM32 if NOT DEFINED BUILD_ARM64 ( set BUILD_X64=1 set BUILD_X86=1 - set BUILD_ARM32=1 + set BUILD_ARM32=0 set BUILD_ARM64=1 set COPY_LICENSE=1 ) @@ -204,7 +204,7 @@ if NOT DEFINED CYG_CACHE (set CYG_CACHE=C:/cygwin/var/cache/setup) if NOT DEFINED CYG_MIRROR (set CYG_MIRROR=http://mirrors.kernel.org/sourceware/cygwin/) powershell -c "md $env:CYG_ROOT -ErrorAction SilentlyContinue" -powershell -c "$setup = 
$env:CYG_ROOT+'/setup.exe'; if (!(Test-Path $setup)){invoke-webrequest https://cygwin.com/setup-x86.exe -outfile $setup} +powershell -c "$setup = $env:CYG_ROOT+'/setup.exe'; if (!(Test-Path $setup)){invoke-webrequest https://cygwin.com/setup-x86_64.exe -outfile $setup} %CYG_ROOT%/setup.exe -qnNdO -R "%CYG_ROOT%" -s "%CYG_MIRROR%" -l "%CYG_CACHE%" -P make -P autoconf -P automake -P libtool -P dejagnu endlocal From webhook-mailer at python.org Thu Feb 9 12:36:49 2023 From: webhook-mailer at python.org (zooba) Date: Thu, 09 Feb 2023 17:36:49 -0000 Subject: [Python-checkins] gh-101283: Fix 'versionchanged' for the shell=True fallback on Windows in 3.12 (GH-101728) Message-ID: https://github.com/python/cpython/commit/6d92373f500eb81a175516b3abb16e21f0806c1f commit: 6d92373f500eb81a175516b3abb16e21f0806c1f branch: main author: Oleg Iarygin committer: zooba date: 2023-02-09T17:36:24Z summary: gh-101283: Fix 'versionchanged' for the shell=True fallback on Windows in 3.12 (GH-101728) files: M Doc/library/subprocess.rst diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst index d792a43eeb27..ccc431b2d92e 100644 --- a/Doc/library/subprocess.rst +++ b/Doc/library/subprocess.rst @@ -111,7 +111,7 @@ underlying :class:`Popen` interface can be used directly. Added the *text* parameter, as a more understandable alias of *universal_newlines*. Added the *capture_output* parameter. - .. versionchanged:: 3.11.3 + .. versionchanged:: 3.12 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -495,7 +495,7 @@ functions. *executable* parameter accepts a bytes and :term:`path-like object` on Windows. - .. versionchanged:: 3.11.3 + .. versionchanged:: 3.12 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -1174,7 +1174,7 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. - .. versionchanged:: 3.11.3 + .. versionchanged:: 3.12 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -1214,7 +1214,7 @@ calls these functions. .. versionchanged:: 3.3 *timeout* was added. - .. versionchanged:: 3.11.3 + .. versionchanged:: 3.12 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and @@ -1277,7 +1277,7 @@ calls these functions. .. versionadded:: 3.7 *text* was added as a more readable alias for *universal_newlines*. - .. versionchanged:: 3.11.3 + .. versionchanged:: 3.12 Changed Windows shell search order for ``shell=True``. The current directory and ``%PATH%`` are replaced with ``%COMSPEC%`` and From webhook-mailer at python.org Thu Feb 9 12:40:58 2023 From: webhook-mailer at python.org (zooba) Date: Thu, 09 Feb 2023 17:40:58 -0000 Subject: [Python-checkins] gh-101726: Update the OpenSSL version to 1.1.1t (GH-101727) Message-ID: https://github.com/python/cpython/commit/b41c47cd0606e8273aef4813e83fe2deaf9ab33b commit: b41c47cd0606e8273aef4813e83fe2deaf9ab33b branch: main author: Gregory P. Smith committer: zooba date: 2023-02-09T17:40:51Z summary: gh-101726: Update the OpenSSL version to 1.1.1t (GH-101727) Fixes CVE-2023-0286 (High) and a couple of Medium security issues. 
https://www.openssl.org/news/secadv/20230207.txt files: A Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst M .azure-pipelines/ci.yml M .azure-pipelines/pr.yml M .github/workflows/build.yml M Mac/BuildScript/build-installer.py M PCbuild/get_externals.bat M PCbuild/python.props M PCbuild/readme.txt M Tools/ssl/multissltests.py diff --git a/.azure-pipelines/ci.yml b/.azure-pipelines/ci.yml index e45dc2d43659..6302b5479821 100644 --- a/.azure-pipelines/ci.yml +++ b/.azure-pipelines/ci.yml @@ -57,7 +57,7 @@ jobs: variables: testRunTitle: '$(build.sourceBranchName)-linux' testRunPlatform: linux - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml @@ -83,7 +83,7 @@ jobs: variables: testRunTitle: '$(Build.SourceBranchName)-linux-coverage' testRunPlatform: linux-coverage - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml diff --git a/.azure-pipelines/pr.yml b/.azure-pipelines/pr.yml index af94ebf78c84..5f7218768c18 100644 --- a/.azure-pipelines/pr.yml +++ b/.azure-pipelines/pr.yml @@ -57,7 +57,7 @@ jobs: variables: testRunTitle: '$(system.pullRequest.TargetBranch)-linux' testRunPlatform: linux - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml @@ -83,7 +83,7 @@ jobs: variables: testRunTitle: '$(Build.SourceBranchName)-linux-coverage' testRunPlatform: linux-coverage - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index f798992d8af6..97ea2d94598e 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -176,7 +176,7 @@ jobs: needs: check_source if: needs.check_source.outputs.run_tests == 'true' env: - OPENSSL_VER: 1.1.1s + OPENSSL_VER: 1.1.1t PYTHONSTRICTEXTENSIONBUILD: 1 steps: - uses: actions/checkout at v3 @@ -235,7 +235,7 @@ jobs: strategy: fail-fast: false matrix: - openssl_ver: [1.1.1s, 3.0.7, 3.1.0-beta1] + openssl_ver: [1.1.1t, 3.0.8, 3.1.0-beta1] env: OPENSSL_VER: ${{ matrix.openssl_ver }} MULTISSL_DIR: ${{ github.workspace }}/multissl @@ -282,7 +282,7 @@ jobs: needs: check_source if: needs.check_source.outputs.run_tests == 'true' env: - OPENSSL_VER: 1.1.1s + OPENSSL_VER: 1.1.1t PYTHONSTRICTEXTENSIONBUILD: 1 ASAN_OPTIONS: detect_leaks=0:allocator_may_return_null=1:handle_segv=0 steps: diff --git a/Mac/BuildScript/build-installer.py b/Mac/BuildScript/build-installer.py index cf97b5558c2d..048cb0137960 100755 --- a/Mac/BuildScript/build-installer.py +++ b/Mac/BuildScript/build-installer.py @@ -246,9 +246,9 @@ def library_recipes(): result.extend([ dict( - name="OpenSSL 1.1.1s", - url="https://www.openssl.org/source/openssl-1.1.1s.tar.gz", - checksum='c5ac01e760ee6ff0dab61d6b2bbd30146724d063eb322180c6f18a6f74e4b6aa', + name="OpenSSL 1.1.1t", + url="https://www.openssl.org/source/openssl-1.1.1t.tar.gz", + checksum='8dee9b24bdb1dcbf0c3d1e9b02fb8f6bf22165e807f45adeb7c9677536859d3b', buildrecipe=build_universal_openssl, configure=None, install=None, diff --git a/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst b/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst new file mode 100644 index 000000000000..43acc82063fd --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst @@ -0,0 +1,4 @@ +Updated the OpenSSL version used in Windows and macOS binary release builds +to 1.1.1t to address CVE-2023-0286, CVE-2022-4303, and CVE-2022-4303 per +`the OpenSSL 
2023-02-07 security advisory +`_. diff --git a/PCbuild/get_externals.bat b/PCbuild/get_externals.bat index 0a41d131a3e8..d4d96bd49d72 100644 --- a/PCbuild/get_externals.bat +++ b/PCbuild/get_externals.bat @@ -53,7 +53,7 @@ echo.Fetching external libraries... set libraries= set libraries=%libraries% bzip2-1.0.8 if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.4.3 -if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1s +if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1t set libraries=%libraries% sqlite-3.39.4.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.13.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tk-8.6.13.0 @@ -77,7 +77,7 @@ echo.Fetching external binaries... set binaries= if NOT "%IncludeLibffi%"=="false" set binaries=%binaries% libffi-3.4.3 -if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1s +if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1t if NOT "%IncludeTkinter%"=="false" set binaries=%binaries% tcltk-8.6.13.0 if NOT "%IncludeSSLSrc%"=="false" set binaries=%binaries% nasm-2.11.06 diff --git a/PCbuild/python.props b/PCbuild/python.props index 57360e57baba..5926c7ded470 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -74,8 +74,8 @@ $(ExternalsDir)libffi-3.4.3\ $(libffiDir)$(ArchName)\ $(libffiOutDir)include - $(ExternalsDir)openssl-1.1.1s\ - $(ExternalsDir)openssl-bin-1.1.1s\$(ArchName)\ + $(ExternalsDir)openssl-1.1.1t\ + $(ExternalsDir)openssl-bin-1.1.1t\$(ArchName)\ $(opensslOutDir)include $(ExternalsDir)\nasm-2.11.06\ $(ExternalsDir)\zlib-1.2.13\ diff --git a/PCbuild/readme.txt b/PCbuild/readme.txt index 3ed26a47b066..347be8aeeca3 100644 --- a/PCbuild/readme.txt +++ b/PCbuild/readme.txt @@ -169,7 +169,7 @@ _lzma Homepage: https://tukaani.org/xz/ _ssl - Python wrapper for version 1.1.1q of the OpenSSL secure sockets + Python wrapper for version 1.1.1t of the OpenSSL secure sockets library, which is downloaded from our binaries repository at https://github.com/python/cpython-bin-deps. 
diff --git a/Tools/ssl/multissltests.py b/Tools/ssl/multissltests.py index 5ad597c8347e..c0fbee9ca6f9 100755 --- a/Tools/ssl/multissltests.py +++ b/Tools/ssl/multissltests.py @@ -46,8 +46,8 @@ ] OPENSSL_RECENT_VERSIONS = [ - "1.1.1s", - "3.0.7" + "1.1.1t", + "3.0.8" ] LIBRESSL_OLD_VERSIONS = [ From webhook-mailer at python.org Thu Feb 9 12:46:05 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Thu, 09 Feb 2023 17:46:05 -0000 Subject: [Python-checkins] Fix typo in `test_fstring.py` (#101600) Message-ID: https://github.com/python/cpython/commit/272da55affe6f2b3b73ff5474e1246089517d051 commit: 272da55affe6f2b3b73ff5474e1246089517d051 branch: main author: Ikko Eltociear Ashimine committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-09T23:15:58+05:30 summary: Fix typo in `test_fstring.py` (#101600) files: M Lib/test/test_fstring.py diff --git a/Lib/test/test_fstring.py b/Lib/test/test_fstring.py index 318f38a6ed5b..a50056da116e 100644 --- a/Lib/test/test_fstring.py +++ b/Lib/test/test_fstring.py @@ -667,7 +667,7 @@ def test_missing_expression(self): "f'''{\t\f\r\n}'''", ]) - # Different error messeges are raised when a specfier ('!', ':' or '=') is used after an empty expression + # Different error messages are raised when a specfier ('!', ':' or '=') is used after an empty expression self.assertAllRaise(SyntaxError, "f-string: expression required before '!'", ["f'{!r}'", "f'{ !r}'", From webhook-mailer at python.org Thu Feb 9 12:46:47 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Thu, 09 Feb 2023 17:46:47 -0000 Subject: [Python-checkins] GH-101228: Fix typo in docstring for read method of `_io.TextIOWrapper` class (#101227) Message-ID: https://github.com/python/cpython/commit/f1f3af7b8245e61a2e0abef03b2c6c5902ed7df8 commit: f1f3af7b8245e61a2e0abef03b2c6c5902ed7df8 branch: main author: Partha P. Mukherjee committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-09T23:16:40+05:30 summary: GH-101228: Fix typo in docstring for read method of `_io.TextIOWrapper` class (#101227) files: M Modules/_io/textio.c diff --git a/Modules/_io/textio.c b/Modules/_io/textio.c index 32ab8a44c621..ea2ea32c3369 100644 --- a/Modules/_io/textio.c +++ b/Modules/_io/textio.c @@ -56,10 +56,10 @@ textiobase_detach(PyObject *self, PyObject *Py_UNUSED(ignored)) } PyDoc_STRVAR(textiobase_read_doc, - "Read at most n characters from stream.\n" + "Read at most size characters from stream.\n" "\n" - "Read from underlying buffer until we have n characters or we hit EOF.\n" - "If n is negative or omitted, read until EOF.\n" + "Read from underlying buffer until we have size characters or we hit EOF.\n" + "If size is negative or omitted, read until EOF.\n" ); static PyObject * From webhook-mailer at python.org Thu Feb 9 14:29:21 2023 From: webhook-mailer at python.org (zooba) Date: Thu, 09 Feb 2023 19:29:21 -0000 Subject: [Python-checkins] gh-101726: Update the OpenSSL version to 1.1.1t (GH-101727) Message-ID: https://github.com/python/cpython/commit/b8149a9d7e770c03475042b34bf81d33d046bc50 commit: b8149a9d7e770c03475042b34bf81d33d046bc50 branch: 3.10 author: Steve Dower committer: zooba date: 2023-02-09T19:29:14Z summary: gh-101726: Update the OpenSSL version to 1.1.1t (GH-101727) Fixes CVE-2023-0286 (High) and a couple of Medium security issues. https://www.openssl.org/news/secadv/20230207.txt --------- Co-authored-by: Gregory P. 
Smith files: A Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst M .azure-pipelines/ci.yml M .azure-pipelines/pr.yml M .github/workflows/build.yml M Mac/BuildScript/build-installer.py M PCbuild/get_externals.bat M PCbuild/python.props M PCbuild/readme.txt M Tools/ssl/multissltests.py diff --git a/.azure-pipelines/ci.yml b/.azure-pipelines/ci.yml index 133699d46535..92f3f41a31ad 100644 --- a/.azure-pipelines/ci.yml +++ b/.azure-pipelines/ci.yml @@ -57,7 +57,7 @@ jobs: variables: testRunTitle: '$(build.sourceBranchName)-linux' testRunPlatform: linux - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml @@ -83,7 +83,7 @@ jobs: variables: testRunTitle: '$(Build.SourceBranchName)-linux-coverage' testRunPlatform: linux-coverage - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml diff --git a/.azure-pipelines/pr.yml b/.azure-pipelines/pr.yml index d84684e4c030..654d32540c2a 100644 --- a/.azure-pipelines/pr.yml +++ b/.azure-pipelines/pr.yml @@ -57,7 +57,7 @@ jobs: variables: testRunTitle: '$(system.pullRequest.TargetBranch)-linux' testRunPlatform: linux - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml @@ -83,7 +83,7 @@ jobs: variables: testRunTitle: '$(Build.SourceBranchName)-linux-coverage' testRunPlatform: linux-coverage - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index e47f0eb51bce..9f039d71b3c2 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -196,7 +196,7 @@ jobs: needs: check_source if: needs.check_source.outputs.run_tests == 'true' env: - OPENSSL_VER: 1.1.1q + OPENSSL_VER: 1.1.1t PYTHONSTRICTEXTENSIONBUILD: 1 steps: - uses: actions/checkout at v3 @@ -240,7 +240,7 @@ jobs: strategy: fail-fast: false matrix: - openssl_ver: [1.1.1q, 3.0.5] + openssl_ver: [1.1.1t, 3.0.8, 3.1.0-beta1] env: OPENSSL_VER: ${{ matrix.openssl_ver }} MULTISSL_DIR: ${{ github.workspace }}/multissl diff --git a/Mac/BuildScript/build-installer.py b/Mac/BuildScript/build-installer.py index 73a725cb44f5..c3f92e97611d 100755 --- a/Mac/BuildScript/build-installer.py +++ b/Mac/BuildScript/build-installer.py @@ -246,9 +246,9 @@ def library_recipes(): result.extend([ dict( - name="OpenSSL 1.1.1s", - url="https://www.openssl.org/source/openssl-1.1.1s.tar.gz", - checksum='c5ac01e760ee6ff0dab61d6b2bbd30146724d063eb322180c6f18a6f74e4b6aa', + name="OpenSSL 1.1.1t", + url="https://www.openssl.org/source/openssl-1.1.1t.tar.gz", + checksum='8dee9b24bdb1dcbf0c3d1e9b02fb8f6bf22165e807f45adeb7c9677536859d3b', buildrecipe=build_universal_openssl, configure=None, install=None, diff --git a/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst b/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst new file mode 100644 index 000000000000..43acc82063fd --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst @@ -0,0 +1,4 @@ +Updated the OpenSSL version used in Windows and macOS binary release builds +to 1.1.1t to address CVE-2023-0286, CVE-2022-4303, and CVE-2022-4303 per +`the OpenSSL 2023-02-07 security advisory +`_. diff --git a/PCbuild/get_externals.bat b/PCbuild/get_externals.bat index 6452bbdd7760..f700b9539228 100644 --- a/PCbuild/get_externals.bat +++ b/PCbuild/get_externals.bat @@ -53,7 +53,7 @@ echo.Fetching external libraries... 
set libraries= set libraries=%libraries% bzip2-1.0.8 if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.3.0 -if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1s +if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1t set libraries=%libraries% sqlite-3.39.4.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.12.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tk-8.6.12.0 @@ -77,7 +77,7 @@ echo.Fetching external binaries... set binaries= if NOT "%IncludeLibffi%"=="false" set binaries=%binaries% libffi-3.3.0 -if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1s +if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1t if NOT "%IncludeTkinter%"=="false" set binaries=%binaries% tcltk-8.6.12.0 if NOT "%IncludeSSLSrc%"=="false" set binaries=%binaries% nasm-2.11.06 diff --git a/PCbuild/python.props b/PCbuild/python.props index 1f9b0e773634..98dbf7486d5d 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -70,8 +70,8 @@ $(ExternalsDir)libffi-3.3.0\ $(libffiDir)$(ArchName)\ $(libffiOutDir)include - $(ExternalsDir)openssl-1.1.1s\ - $(ExternalsDir)openssl-bin-1.1.1s\$(ArchName)\ + $(ExternalsDir)openssl-1.1.1t\ + $(ExternalsDir)openssl-bin-1.1.1t\$(ArchName)\ $(opensslOutDir)include $(ExternalsDir)\nasm-2.11.06\ $(ExternalsDir)\zlib-1.2.13\ diff --git a/PCbuild/readme.txt b/PCbuild/readme.txt index 155212625ccc..d39a0708a10a 100644 --- a/PCbuild/readme.txt +++ b/PCbuild/readme.txt @@ -167,7 +167,7 @@ _lzma Homepage: https://tukaani.org/xz/ _ssl - Python wrapper for version 1.1.1q of the OpenSSL secure sockets + Python wrapper for version 1.1.1t of the OpenSSL secure sockets library, which is downloaded from our binaries repository at https://github.com/python/cpython-bin-deps. diff --git a/Tools/ssl/multissltests.py b/Tools/ssl/multissltests.py index 91d6f558bc51..27e9032395fd 100755 --- a/Tools/ssl/multissltests.py +++ b/Tools/ssl/multissltests.py @@ -47,8 +47,8 @@ ] OPENSSL_RECENT_VERSIONS = [ - "1.1.1q", - "3.0.5" + "1.1.1t", + "3.0.8" ] LIBRESSL_OLD_VERSIONS = [ From webhook-mailer at python.org Thu Feb 9 14:29:21 2023 From: webhook-mailer at python.org (zooba) Date: Thu, 09 Feb 2023 19:29:21 -0000 Subject: [Python-checkins] gh-101726: Update the OpenSSL version to 1.1.1t (GH-101727) Message-ID: https://github.com/python/cpython/commit/52a03a000628f1882e39e8ed82bd8d703084f6e8 commit: 52a03a000628f1882e39e8ed82bd8d703084f6e8 branch: 3.11 author: Steve Dower committer: zooba date: 2023-02-09T19:28:59Z summary: gh-101726: Update the OpenSSL version to 1.1.1t (GH-101727) Fixes CVE-2023-0286 (High) and a couple of Medium security issues. https://www.openssl.org/news/secadv/20230207.txt Co-authored-by: Gregory P. 
Smith files: A Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst M .azure-pipelines/ci.yml M .azure-pipelines/pr.yml M .github/workflows/build.yml M Mac/BuildScript/build-installer.py M PCbuild/get_externals.bat M PCbuild/python.props M PCbuild/readme.txt M Tools/ssl/multissltests.py diff --git a/.azure-pipelines/ci.yml b/.azure-pipelines/ci.yml index e45dc2d43659..6302b5479821 100644 --- a/.azure-pipelines/ci.yml +++ b/.azure-pipelines/ci.yml @@ -57,7 +57,7 @@ jobs: variables: testRunTitle: '$(build.sourceBranchName)-linux' testRunPlatform: linux - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml @@ -83,7 +83,7 @@ jobs: variables: testRunTitle: '$(Build.SourceBranchName)-linux-coverage' testRunPlatform: linux-coverage - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml diff --git a/.azure-pipelines/pr.yml b/.azure-pipelines/pr.yml index af94ebf78c84..5f7218768c18 100644 --- a/.azure-pipelines/pr.yml +++ b/.azure-pipelines/pr.yml @@ -57,7 +57,7 @@ jobs: variables: testRunTitle: '$(system.pullRequest.TargetBranch)-linux' testRunPlatform: linux - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml @@ -83,7 +83,7 @@ jobs: variables: testRunTitle: '$(Build.SourceBranchName)-linux-coverage' testRunPlatform: linux-coverage - openssl_version: 1.1.1q + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 5647d6b29b13..32535dc6a4e6 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -209,7 +209,7 @@ jobs: needs: check_source if: needs.check_source.outputs.run_tests == 'true' env: - OPENSSL_VER: 1.1.1q + OPENSSL_VER: 1.1.1t PYTHONSTRICTEXTENSIONBUILD: 1 steps: - uses: actions/checkout at v3 @@ -268,7 +268,7 @@ jobs: strategy: fail-fast: false matrix: - openssl_ver: [1.1.1s, 3.0.7, 3.1.0-beta1] + openssl_ver: [1.1.1t, 3.0.8, 3.1.0-beta1] env: OPENSSL_VER: ${{ matrix.openssl_ver }} MULTISSL_DIR: ${{ github.workspace }}/multissl @@ -315,7 +315,7 @@ jobs: needs: check_source if: needs.check_source.outputs.run_tests == 'true' env: - OPENSSL_VER: 1.1.1q + OPENSSL_VER: 1.1.1t PYTHONSTRICTEXTENSIONBUILD: 1 ASAN_OPTIONS: detect_leaks=0:allocator_may_return_null=1:handle_segv=0 steps: diff --git a/Mac/BuildScript/build-installer.py b/Mac/BuildScript/build-installer.py index f756a9943759..91089864dc24 100755 --- a/Mac/BuildScript/build-installer.py +++ b/Mac/BuildScript/build-installer.py @@ -246,9 +246,9 @@ def library_recipes(): result.extend([ dict( - name="OpenSSL 1.1.1s", - url="https://www.openssl.org/source/openssl-1.1.1s.tar.gz", - checksum='c5ac01e760ee6ff0dab61d6b2bbd30146724d063eb322180c6f18a6f74e4b6aa', + name="OpenSSL 1.1.1t", + url="https://www.openssl.org/source/openssl-1.1.1t.tar.gz", + checksum='8dee9b24bdb1dcbf0c3d1e9b02fb8f6bf22165e807f45adeb7c9677536859d3b', buildrecipe=build_universal_openssl, configure=None, install=None, diff --git a/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst b/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst new file mode 100644 index 000000000000..43acc82063fd --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst @@ -0,0 +1,4 @@ +Updated the OpenSSL version used in Windows and macOS binary release builds +to 1.1.1t to address CVE-2023-0286, CVE-2022-4303, and CVE-2022-4303 per +`the OpenSSL 2023-02-07 security advisory +`_. 
diff --git a/PCbuild/get_externals.bat b/PCbuild/get_externals.bat index 7efdeb2d30a7..20c10ffbd818 100644 --- a/PCbuild/get_externals.bat +++ b/PCbuild/get_externals.bat @@ -53,7 +53,7 @@ echo.Fetching external libraries... set libraries= set libraries=%libraries% bzip2-1.0.8 if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.4.3 -if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1s +if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1t set libraries=%libraries% sqlite-3.39.4.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.12.1 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tk-8.6.12.1 @@ -77,7 +77,7 @@ echo.Fetching external binaries... set binaries= if NOT "%IncludeLibffi%"=="false" set binaries=%binaries% libffi-3.4.3 -if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1s +if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1t if NOT "%IncludeTkinter%"=="false" set binaries=%binaries% tcltk-8.6.12.1 if NOT "%IncludeSSLSrc%"=="false" set binaries=%binaries% nasm-2.11.06 diff --git a/PCbuild/python.props b/PCbuild/python.props index 57360e57baba..5926c7ded470 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -74,8 +74,8 @@ $(ExternalsDir)libffi-3.4.3\ $(libffiDir)$(ArchName)\ $(libffiOutDir)include - $(ExternalsDir)openssl-1.1.1s\ - $(ExternalsDir)openssl-bin-1.1.1s\$(ArchName)\ + $(ExternalsDir)openssl-1.1.1t\ + $(ExternalsDir)openssl-bin-1.1.1t\$(ArchName)\ $(opensslOutDir)include $(ExternalsDir)\nasm-2.11.06\ $(ExternalsDir)\zlib-1.2.13\ diff --git a/PCbuild/readme.txt b/PCbuild/readme.txt index 5ba3e3940627..c8a35ce38978 100644 --- a/PCbuild/readme.txt +++ b/PCbuild/readme.txt @@ -168,7 +168,7 @@ _lzma Homepage: https://tukaani.org/xz/ _ssl - Python wrapper for version 1.1.1q of the OpenSSL secure sockets + Python wrapper for version 1.1.1t of the OpenSSL secure sockets library, which is downloaded from our binaries repository at https://github.com/python/cpython-bin-deps. diff --git a/Tools/ssl/multissltests.py b/Tools/ssl/multissltests.py index b1aacfa550f1..94d6b19a099c 100755 --- a/Tools/ssl/multissltests.py +++ b/Tools/ssl/multissltests.py @@ -47,8 +47,8 @@ ] OPENSSL_RECENT_VERSIONS = [ - "1.1.1q", - "3.0.5" + "1.1.1t", + "3.0.8" ] LIBRESSL_OLD_VERSIONS = [ From webhook-mailer at python.org Thu Feb 9 14:58:06 2023 From: webhook-mailer at python.org (ned-deily) Date: Thu, 09 Feb 2023 19:58:06 -0000 Subject: [Python-checkins] [3.7] gh-101726: Update the OpenSSL version to 1.1.1t (GH-101727) (GH-101753) Message-ID: https://github.com/python/cpython/commit/cbd192b6ff137aec25913cc80b2bf7bf1ef3755b commit: cbd192b6ff137aec25913cc80b2bf7bf1ef3755b branch: 3.7 author: Steve Dower committer: ned-deily date: 2023-02-09T14:57:59-05:00 summary: [3.7] gh-101726: Update the OpenSSL version to 1.1.1t (GH-101727) (GH-101753) Fixes CVE-2023-0286 (High) and a couple of Medium security issues. https://www.openssl.org/news/secadv/20230207.txt Co-authored-by: Gregory P. 
Smith Co-authored-by: Ned Deily files: A Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst M .azure-pipelines/ci.yml M .azure-pipelines/pr.yml M .github/workflows/build.yml M Mac/BuildScript/build-installer.py M PCbuild/get_externals.bat M PCbuild/python.props M PCbuild/readme.txt M Tools/ssl/multissltests.py diff --git a/.azure-pipelines/ci.yml b/.azure-pipelines/ci.yml index 3a12fe4bfa43..c68d571f70ce 100644 --- a/.azure-pipelines/ci.yml +++ b/.azure-pipelines/ci.yml @@ -61,7 +61,7 @@ jobs: variables: testRunTitle: '$(build.sourceBranchName)-linux' testRunPlatform: linux - openssl_version: 1.1.1n + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml @@ -118,7 +118,7 @@ jobs: variables: testRunTitle: '$(Build.SourceBranchName)-linux-coverage' testRunPlatform: linux-coverage - openssl_version: 1.1.1n + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml diff --git a/.azure-pipelines/pr.yml b/.azure-pipelines/pr.yml index 91607b931a36..ba61efe62d69 100644 --- a/.azure-pipelines/pr.yml +++ b/.azure-pipelines/pr.yml @@ -61,7 +61,7 @@ jobs: variables: testRunTitle: '$(system.pullRequest.TargetBranch)-linux' testRunPlatform: linux - openssl_version: 1.1.1n + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml @@ -118,7 +118,7 @@ jobs: variables: testRunTitle: '$(Build.SourceBranchName)-linux-coverage' testRunPlatform: linux-coverage - openssl_version: 1.1.1n + openssl_version: 1.1.1t steps: - template: ./posix-steps.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 1b1951ddb1af..a1f63747aa2a 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -142,7 +142,7 @@ jobs: needs: check_source if: needs.check_source.outputs.run_tests == 'true' env: - OPENSSL_VER: 1.1.1n + OPENSSL_VER: 1.1.1t steps: - uses: actions/checkout at v2 - name: Install Dependencies diff --git a/Mac/BuildScript/build-installer.py b/Mac/BuildScript/build-installer.py index 76078dcd22c3..7ba26a738a9e 100755 --- a/Mac/BuildScript/build-installer.py +++ b/Mac/BuildScript/build-installer.py @@ -209,9 +209,9 @@ def library_recipes(): result.extend([ dict( - name="OpenSSL 1.1.1n", - url="https://www.openssl.org/source/openssl-1.1.1n.tar.gz", - checksum='2aad5635f9bb338bc2c6b7d19cbc9676', + name="OpenSSL 1.1.1t", + url="https://www.openssl.org/source/openssl-1.1.1t.tar.gz", + checksum='1cfee919e0eac6be62c88c5ae8bcd91e', buildrecipe=build_universal_openssl, configure=None, install=None, diff --git a/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst b/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst new file mode 100644 index 000000000000..43acc82063fd --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-02-08-22-03-04.gh-issue-101727.9P5eZz.rst @@ -0,0 +1,4 @@ +Updated the OpenSSL version used in Windows and macOS binary release builds +to 1.1.1t to address CVE-2023-0286, CVE-2022-4303, and CVE-2022-4303 per +`the OpenSSL 2023-02-07 security advisory +`_. diff --git a/PCbuild/get_externals.bat b/PCbuild/get_externals.bat index 8f7a6aae7833..4f23edd45ae4 100644 --- a/PCbuild/get_externals.bat +++ b/PCbuild/get_externals.bat @@ -49,7 +49,7 @@ echo.Fetching external libraries... 
set libraries= set libraries=%libraries% bzip2-1.0.8 -if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1s +if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1t set libraries=%libraries% sqlite-3.31.1.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.9.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tk-8.6.9.0 @@ -72,7 +72,7 @@ for %%e in (%libraries%) do ( echo.Fetching external binaries... set binaries= -if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1s +if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1t if NOT "%IncludeTkinter%"=="false" set binaries=%binaries% tcltk-8.6.9.0 if NOT "%IncludeSSLSrc%"=="false" set binaries=%binaries% nasm-2.11.06 diff --git a/PCbuild/python.props b/PCbuild/python.props index 53d106935b5d..4f198aa44e5f 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -46,15 +46,22 @@ $(EXTERNALS_DIR) $([System.IO.Path]::GetFullPath(`$(PySourcePath)externals`)) $(ExternalsDir)\ - $(ExternalsDir)sqlite-3.31.1.0\ - $(ExternalsDir)bzip2-1.0.8\ - $(ExternalsDir)xz-5.2.2\ - $(ExternalsDir)openssl-1.1.1s\ - $(ExternalsDir)openssl-bin-1.1.1s\$(ArchName)\ - $(opensslOutDir)include - $(ExternalsDir)\nasm-2.11.06\ - $(ExternalsDir)\zlib-1.2.12\ - + + + + + + $(ExternalsDir)sqlite-3.31.1.0\ + $(ExternalsDir)bzip2-1.0.8\ + $(ExternalsDir)xz-5.2.2\ + $(ExternalsDir)openssl-1.1.1t\ + $(ExternalsDir)openssl-bin-1.1.1t\$(ArchName)\ + $(opensslOutDir)include + $(ExternalsDir)\nasm-2.11.06\ + $(ExternalsDir)\zlib-1.2.12\ + + + _d diff --git a/PCbuild/readme.txt b/PCbuild/readme.txt index 5e57a9590cb6..7c87b7a57994 100644 --- a/PCbuild/readme.txt +++ b/PCbuild/readme.txt @@ -165,7 +165,7 @@ _lzma Homepage: http://tukaani.org/xz/ _ssl - Python wrapper for version 1.1.1c of the OpenSSL secure sockets + Python wrapper for version 1.1.1t of the OpenSSL secure sockets library, which is downloaded from our binaries repository at https://github.com/python/cpython-bin-deps. diff --git a/Tools/ssl/multissltests.py b/Tools/ssl/multissltests.py index dc74f8d401ee..2a4bbf4552cb 100755 --- a/Tools/ssl/multissltests.py +++ b/Tools/ssl/multissltests.py @@ -48,8 +48,8 @@ ] OPENSSL_RECENT_VERSIONS = [ - "1.1.1n", - # "3.0.0-alpha2" + "1.1.1t", + "3.0.8" ] LIBRESSL_OLD_VERSIONS = [ From webhook-mailer at python.org Thu Feb 9 18:30:20 2023 From: webhook-mailer at python.org (corona10) Date: Thu, 09 Feb 2023 23:30:20 -0000 Subject: [Python-checkins] gh-101430: Update tracemalloc to handle presize properly. (gh-101745) Message-ID: https://github.com/python/cpython/commit/5b946d371979a772120e6ee7d37f9b735769d433 commit: 5b946d371979a772120e6ee7d37f9b735769d433 branch: main author: Dong-hee Na committer: corona10 date: 2023-02-10T08:30:03+09:00 summary: gh-101430: Update tracemalloc to handle presize properly. (gh-101745) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-10-01-15-57.gh-issue-101430.T3Gegb.rst M Modules/_tracemalloc.c M Objects/object.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-10-01-15-57.gh-issue-101430.T3Gegb.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-10-01-15-57.gh-issue-101430.T3Gegb.rst new file mode 100644 index 000000000000..e617d8524214 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-10-01-15-57.gh-issue-101430.T3Gegb.rst @@ -0,0 +1,2 @@ +Update :mod:`tracemalloc` to handle presize of object properly. Patch by +Dong-hee Na. 
diff --git a/Modules/_tracemalloc.c b/Modules/_tracemalloc.c index 9826ad2935be..d69c5636486d 100644 --- a/Modules/_tracemalloc.c +++ b/Modules/_tracemalloc.c @@ -2,6 +2,7 @@ #include "pycore_fileutils.h" // _Py_write_noraise() #include "pycore_gc.h" // PyGC_Head #include "pycore_hashtable.h" // _Py_hashtable_t +#include "pycore_object.h" // _PyType_PreHeaderSize #include "pycore_pymem.h" // _Py_tracemalloc_config #include "pycore_runtime.h" // _Py_ID() #include "pycore_traceback.h" @@ -1400,20 +1401,16 @@ _tracemalloc__get_object_traceback(PyObject *module, PyObject *obj) /*[clinic end generated code: output=41ee0553a658b0aa input=29495f1b21c53212]*/ { PyTypeObject *type; - void *ptr; traceback_t *traceback; type = Py_TYPE(obj); - if (PyType_IS_GC(type)) { - ptr = (void *)((char *)obj - sizeof(PyGC_Head)); - } - else { - ptr = (void *)obj; - } + const size_t presize = _PyType_PreHeaderSize(type); + uintptr_t ptr = (uintptr_t)((char *)obj - presize); - traceback = tracemalloc_get_traceback(DEFAULT_DOMAIN, (uintptr_t)ptr); - if (traceback == NULL) + traceback = tracemalloc_get_traceback(DEFAULT_DOMAIN, ptr); + if (traceback == NULL) { Py_RETURN_NONE; + } return traceback_to_pyobject(traceback, NULL); } @@ -1723,14 +1720,9 @@ _PyTraceMalloc_NewReference(PyObject *op) return -1; } - uintptr_t ptr; PyTypeObject *type = Py_TYPE(op); - if (PyType_IS_GC(type)) { - ptr = (uintptr_t)((char *)op - sizeof(PyGC_Head)); - } - else { - ptr = (uintptr_t)op; - } + const size_t presize = _PyType_PreHeaderSize(type); + uintptr_t ptr = (uintptr_t)((char *)op - presize); int res = -1; diff --git a/Objects/object.c b/Objects/object.c index 7817c04ef5f5..446c7b1f5f03 100644 --- a/Objects/object.c +++ b/Objects/object.c @@ -2387,14 +2387,9 @@ _PyObject_AssertFailed(PyObject *obj, const char *expr, const char *msg, /* Display the traceback where the object has been allocated. Do it before dumping repr(obj), since repr() is more likely to crash than dumping the traceback. 
*/ - void *ptr; PyTypeObject *type = Py_TYPE(obj); - if (_PyType_IS_GC(type)) { - ptr = (void *)((char *)obj - sizeof(PyGC_Head)); - } - else { - ptr = (void *)obj; - } + const size_t presize = _PyType_PreHeaderSize(type); + void *ptr = (void *)((char *)obj - presize); _PyMem_DumpTraceback(fileno(stderr), ptr); /* This might succeed or fail, but we're about to abort, so at least From webhook-mailer at python.org Thu Feb 9 22:01:11 2023 From: webhook-mailer at python.org (rhettinger) Date: Fri, 10 Feb 2023 03:01:11 -0000 Subject: [Python-checkins] gh-101747: Fix refleak in new `OrderedDict` repr (GH-101748) Message-ID: https://github.com/python/cpython/commit/34c50ceb1e2d40f7faab673d2033ecaafd3c611a commit: 34c50ceb1e2d40f7faab673d2033ecaafd3c611a branch: main author: Nikita Sobolev committer: rhettinger date: 2023-02-09T21:00:58-06:00 summary: gh-101747: Fix refleak in new `OrderedDict` repr (GH-101748) files: M Objects/odictobject.c diff --git a/Objects/odictobject.c b/Objects/odictobject.c index ab2bbed35873..215a8af54fb2 100644 --- a/Objects/odictobject.c +++ b/Objects/odictobject.c @@ -1385,6 +1385,7 @@ odict_repr(PyODictObject *self) result = PyUnicode_FromFormat("%s(%R)", _PyType_Name(Py_TYPE(self)), dcopy); + Py_DECREF(dcopy); Done: Py_ReprLeave((PyObject *)self); From webhook-mailer at python.org Thu Feb 9 22:10:53 2023 From: webhook-mailer at python.org (rhettinger) Date: Fri, 10 Feb 2023 03:10:53 -0000 Subject: [Python-checkins] Fix some typos in asdl_c.py (GH-101757) Message-ID: https://github.com/python/cpython/commit/448c7d154e72506158d0a7a766e9f1cb8adf3dec commit: 448c7d154e72506158d0a7a766e9f1cb8adf3dec branch: main author: abel1502 <32196516+abel1502 at users.noreply.github.com> committer: rhettinger date: 2023-02-09T21:10:46-06:00 summary: Fix some typos in asdl_c.py (GH-101757) files: M Parser/asdl_c.py diff --git a/Parser/asdl_c.py b/Parser/asdl_c.py index 3e307610b635..db0e597b7a5a 100755 --- a/Parser/asdl_c.py +++ b/Parser/asdl_c.py @@ -73,7 +73,7 @@ def reflow_c_string(s, depth): def is_simple(sum_type): """Return True if a sum is a simple. - A sum is simple if it's types have no fields and itself + A sum is simple if its types have no fields and itself doesn't have any attributes. Instances of these types are cached at C level, and they act like singletons when propagating parser generated nodes into Python level, e.g. @@ -352,7 +352,7 @@ def visitSum(self, sum, name): self.visit(t, name, sum.attributes) def get_args(self, fields): - """Return list of C argument into, one for each field. + """Return list of C argument info, one for each field. Argument info is 3-tuple of a C type, variable name, and flag that is true if type can be NULL. From webhook-mailer at python.org Fri Feb 10 02:25:09 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Fri, 10 Feb 2023 07:25:09 -0000 Subject: [Python-checkins] gh-101759: Update macOS installer to SQLite 3.40.1 (#101761) Message-ID: https://github.com/python/cpython/commit/d40a23c0a11060ba7fa076d50980c18a11a13a40 commit: d40a23c0a11060ba7fa076d50980c18a11a13a40 branch: main author: Erlend E. 
Aasland committer: erlend-aasland date: 2023-02-10T08:25:02+01:00 summary: gh-101759: Update macOS installer to SQLite 3.40.1 (#101761) files: A Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst M Mac/BuildScript/build-installer.py diff --git a/Mac/BuildScript/build-installer.py b/Mac/BuildScript/build-installer.py index 048cb0137960..5ea2cfc5183e 100755 --- a/Mac/BuildScript/build-installer.py +++ b/Mac/BuildScript/build-installer.py @@ -359,9 +359,9 @@ def library_recipes(): ), ), dict( - name="SQLite 3.39.4", - url="https://sqlite.org/2022/sqlite-autoconf-3390400.tar.gz", - checksum="44b7e6691b0954086f717a6c43b622a5", + name="SQLite 3.40.1", + url="https://sqlite.org/2022/sqlite-autoconf-3400100.tar.gz", + checksum="5498af3a357753d473ee713e363fa5b7", extra_cflags=('-Os ' '-DSQLITE_ENABLE_FTS5 ' '-DSQLITE_ENABLE_FTS4 ' diff --git a/Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst b/Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst new file mode 100644 index 000000000000..fc53d08bffc4 --- /dev/null +++ b/Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst @@ -0,0 +1 @@ +Update macOS installer to SQLite 3.40.1. From webhook-mailer at python.org Fri Feb 10 02:49:00 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 10 Feb 2023 07:49:00 -0000 Subject: [Python-checkins] gh-101759: Update macOS installer to SQLite 3.40.1 (GH-101761) Message-ID: https://github.com/python/cpython/commit/4b8d2a1b40b88e4c658b3f5f450c146c78f2e6bd commit: 4b8d2a1b40b88e4c658b3f5f450c146c78f2e6bd branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-09T23:48:53-08:00 summary: gh-101759: Update macOS installer to SQLite 3.40.1 (GH-101761) (cherry picked from commit d40a23c0a11060ba7fa076d50980c18a11a13a40) Co-authored-by: Erlend E. Aasland files: A Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst M Mac/BuildScript/build-installer.py diff --git a/Mac/BuildScript/build-installer.py b/Mac/BuildScript/build-installer.py index 91089864dc24..01e06013f3d1 100755 --- a/Mac/BuildScript/build-installer.py +++ b/Mac/BuildScript/build-installer.py @@ -358,9 +358,9 @@ def library_recipes(): ), ), dict( - name="SQLite 3.39.4", - url="https://sqlite.org/2022/sqlite-autoconf-3390400.tar.gz", - checksum="44b7e6691b0954086f717a6c43b622a5", + name="SQLite 3.40.1", + url="https://sqlite.org/2022/sqlite-autoconf-3400100.tar.gz", + checksum="5498af3a357753d473ee713e363fa5b7", extra_cflags=('-Os ' '-DSQLITE_ENABLE_FTS5 ' '-DSQLITE_ENABLE_FTS4 ' diff --git a/Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst b/Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst new file mode 100644 index 000000000000..fc53d08bffc4 --- /dev/null +++ b/Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst @@ -0,0 +1 @@ +Update macOS installer to SQLite 3.40.1. 
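The SQLite commits above only change which SQLite tarball the macOS installer builds against; the sqlite3 module itself is untouched. As an illustrative check (not part of the patches), the SQLite library a given Python build actually links against can be confirmed at runtime from the sqlite3 module's version attributes:

    import sqlite3

    # Version of the SQLite C library this interpreter loaded at runtime;
    # an installer built from the commits above should report 3.40.1.
    print(sqlite3.sqlite_version)        # e.g. '3.40.1'
    print(sqlite3.sqlite_version_info)   # e.g. (3, 40, 1)
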
From webhook-mailer at python.org Fri Feb 10 02:55:05 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 10 Feb 2023 07:55:05 -0000 Subject: [Python-checkins] gh-101759: Update macOS installer to SQLite 3.40.1 (GH-101761) Message-ID: https://github.com/python/cpython/commit/b653fced316c14eec669ee94e57240fffdfaf1f4 commit: b653fced316c14eec669ee94e57240fffdfaf1f4 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-09T23:54:58-08:00 summary: gh-101759: Update macOS installer to SQLite 3.40.1 (GH-101761) (cherry picked from commit d40a23c0a11060ba7fa076d50980c18a11a13a40) Co-authored-by: Erlend E. Aasland files: A Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst M Mac/BuildScript/build-installer.py diff --git a/Mac/BuildScript/build-installer.py b/Mac/BuildScript/build-installer.py index c3f92e97611d..7daead587c22 100755 --- a/Mac/BuildScript/build-installer.py +++ b/Mac/BuildScript/build-installer.py @@ -358,9 +358,9 @@ def library_recipes(): ), ), dict( - name="SQLite 3.39.4", - url="https://sqlite.org/2022/sqlite-autoconf-3390400.tar.gz", - checksum="44b7e6691b0954086f717a6c43b622a5", + name="SQLite 3.40.1", + url="https://sqlite.org/2022/sqlite-autoconf-3400100.tar.gz", + checksum="5498af3a357753d473ee713e363fa5b7", extra_cflags=('-Os ' '-DSQLITE_ENABLE_FTS5 ' '-DSQLITE_ENABLE_FTS4 ' diff --git a/Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst b/Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst new file mode 100644 index 000000000000..fc53d08bffc4 --- /dev/null +++ b/Misc/NEWS.d/next/macOS/2023-02-09-22-07-17.gh-issue-101759.B0JP2H.rst @@ -0,0 +1 @@ +Update macOS installer to SQLite 3.40.1. From webhook-mailer at python.org Fri Feb 10 06:58:21 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 10 Feb 2023 11:58:21 -0000 Subject: [Python-checkins] gh-101277: Finalise isolating itertools (GH-101305) Message-ID: https://github.com/python/cpython/commit/826bf0e6957fd0addc321d1baee1fa846e9457eb commit: 826bf0e6957fd0addc321d1baee1fa846e9457eb branch: main author: Erlend E. Aasland committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-10T03:58:14-08:00 summary: gh-101277: Finalise isolating itertools (GH-101305) Add repeat, islice, chain, tee, teedataobject, and batched types to module state. Automerge-Triggered-By: GH:erlend-aasland files: A Misc/NEWS.d/next/Library/2023-01-25-00-14-52.gh-issue-101277.FceHX7.rst M Modules/clinic/itertoolsmodule.c.h M Modules/itertoolsmodule.c M Tools/c-analyzer/cpython/globals-to-fix.tsv diff --git a/Misc/NEWS.d/next/Library/2023-01-25-00-14-52.gh-issue-101277.FceHX7.rst b/Misc/NEWS.d/next/Library/2023-01-25-00-14-52.gh-issue-101277.FceHX7.rst new file mode 100644 index 000000000000..e09c0e09fb38 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-01-25-00-14-52.gh-issue-101277.FceHX7.rst @@ -0,0 +1,2 @@ +Remove global state from :mod:`itertools` module (:pep:`687`). Patches by +Erlend E. Aasland. 
diff --git a/Modules/clinic/itertoolsmodule.c.h b/Modules/clinic/itertoolsmodule.c.h index d15d5f0890ca..32278bf715aa 100644 --- a/Modules/clinic/itertoolsmodule.c.h +++ b/Modules/clinic/itertoolsmodule.c.h @@ -232,7 +232,7 @@ static PyObject * itertools_teedataobject(PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *return_value = NULL; - PyTypeObject *base_tp = &teedataobject_type; + PyTypeObject *base_tp = clinic_state()->teedataobject_type; PyObject *it; PyObject *values; PyObject *next; @@ -270,7 +270,7 @@ static PyObject * itertools__tee(PyTypeObject *type, PyObject *args, PyObject *kwargs) { PyObject *return_value = NULL; - PyTypeObject *base_tp = &tee_type; + PyTypeObject *base_tp = clinic_state()->tee_type; PyObject *iterable; if ((type == base_tp || type->tp_init == base_tp->tp_init) && @@ -913,4 +913,4 @@ itertools_count(PyTypeObject *type, PyObject *args, PyObject *kwargs) exit: return return_value; } -/*[clinic end generated code: output=a08b58d7dac825da input=a9049054013a1b77]*/ +/*[clinic end generated code: output=111cbd102c2a23c9 input=a9049054013a1b77]*/ diff --git a/Modules/itertoolsmodule.c b/Modules/itertoolsmodule.c index ce8720d0fd92..6986695e47b1 100644 --- a/Modules/itertoolsmodule.c +++ b/Modules/itertoolsmodule.c @@ -5,6 +5,7 @@ #include "pycore_moduleobject.h" // _PyModule_GetState() #include "pycore_object.h" // _PyObject_GC_TRACK() #include "pycore_tuple.h" // _PyTuple_ITEMS() +#include "structmember.h" // PyMemberDef #include // offsetof() /* Itertools module written and maintained @@ -13,6 +14,8 @@ typedef struct { PyTypeObject *accumulate_type; + PyTypeObject *batched_type; + PyTypeObject *chain_type; PyTypeObject *combinations_type; PyTypeObject *compress_type; PyTypeObject *count_type; @@ -22,11 +25,15 @@ typedef struct { PyTypeObject *filterfalse_type; PyTypeObject *groupby_type; PyTypeObject *_grouper_type; + PyTypeObject *islice_type; PyTypeObject *pairwise_type; PyTypeObject *permutations_type; PyTypeObject *product_type; + PyTypeObject *repeat_type; PyTypeObject *starmap_type; PyTypeObject *takewhile_type; + PyTypeObject *tee_type; + PyTypeObject *teedataobject_type; PyTypeObject *ziplongest_type; } itertools_state; @@ -60,14 +67,14 @@ find_state_by_type(PyTypeObject *tp) module itertools class itertools.groupby "groupbyobject *" "clinic_state()->groupby_type" class itertools._grouper "_grouperobject *" "clinic_state()->_grouper_type" -class itertools.teedataobject "teedataobject *" "&teedataobject_type" -class itertools._tee "teeobject *" "&tee_type" -class itertools.batched "batchedobject *" "&batched_type" +class itertools.teedataobject "teedataobject *" "clinic_state()->teedataobject_type" +class itertools._tee "teeobject *" "clinic_state()->tee_type" +class itertools.batched "batchedobject *" "clinic_state()->batched_type" class itertools.cycle "cycleobject *" "clinic_state()->cycle_type" class itertools.dropwhile "dropwhileobject *" "clinic_state()->dropwhile_type" class itertools.takewhile "takewhileobject *" "clinic_state()->takewhile_type" class itertools.starmap "starmapobject *" "clinic_state()->starmap_type" -class itertools.chain "chainobject *" "&chain_type" +class itertools.chain "chainobject *" "clinic_state()->chain_type" class itertools.combinations "combinationsobject *" "clinic_state()->combinations_type" class itertools.combinations_with_replacement "cwr_object *" "clinic_state()->cwr_type" class itertools.permutations "permutationsobject *" "clinic_state()->permutations_type" @@ -77,11 +84,7 @@ class 
itertools.filterfalse "filterfalseobject *" "clinic_state()->filterfalse_t class itertools.count "countobject *" "clinic_state()->count_type" class itertools.pairwise "pairwiseobject *" "clinic_state()->pairwise_type" [clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=28ffff5c0c93eed7]*/ - -static PyTypeObject teedataobject_type; -static PyTypeObject tee_type; -static PyTypeObject batched_type; +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=aa48fe4de9d4080f]*/ #define clinic_state() (find_state_by_type(type)) #define clinic_state_by_cls() (get_module_state_by_cls(base_tp)) @@ -162,17 +165,18 @@ batched_new_impl(PyTypeObject *type, PyObject *iterable, Py_ssize_t n) static void batched_dealloc(batchedobject *bo) { + PyTypeObject *tp = Py_TYPE(bo); PyObject_GC_UnTrack(bo); Py_XDECREF(bo->it); - Py_TYPE(bo)->tp_free(bo); + tp->tp_free(bo); + Py_DECREF(tp); } static int batched_traverse(batchedobject *bo, visitproc visit, void *arg) { - if (bo->it != NULL) { - Py_VISIT(bo->it); - } + Py_VISIT(Py_TYPE(bo)); + Py_VISIT(bo->it); return 0; } @@ -222,48 +226,25 @@ batched_next(batchedobject *bo) return result; } -static PyTypeObject batched_type = { - PyVarObject_HEAD_INIT(&PyType_Type, 0) - "itertools.batched", /* tp_name */ - sizeof(batchedobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)batched_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - batched_new__doc__, /* tp_doc */ - (traverseproc)batched_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)batched_next, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - PyType_GenericAlloc, /* tp_alloc */ - batched_new, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot batched_slots[] = { + {Py_tp_dealloc, batched_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)batched_new__doc__}, + {Py_tp_traverse, batched_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, batched_next}, + {Py_tp_alloc, PyType_GenericAlloc}, + {Py_tp_new, batched_new}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec batched_spec = { + .name = "itertools.batched", + .basicsize = sizeof(batchedobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = batched_slots, }; @@ -737,14 +718,15 @@ typedef struct { teedataobject *dataobj; int index; /* 0 <= index <= LINKCELLS */ PyObject *weakreflist; + itertools_state *state; } teeobject; static PyObject * -teedataobject_newinternal(PyObject *it) +teedataobject_newinternal(itertools_state *state, PyObject *it) { teedataobject *tdo; - tdo = PyObject_GC_New(teedataobject, &teedataobject_type); + tdo = PyObject_GC_New(teedataobject, state->teedataobject_type); if (tdo == NULL) return NULL; @@ -757,10 +739,10 @@ teedataobject_newinternal(PyObject *it) } static 
PyObject * -teedataobject_jumplink(teedataobject *tdo) +teedataobject_jumplink(itertools_state *state, teedataobject *tdo) { if (tdo->nextlink == NULL) - tdo->nextlink = teedataobject_newinternal(tdo->it); + tdo->nextlink = teedataobject_newinternal(state, tdo->it); return Py_XNewRef(tdo->nextlink); } @@ -796,6 +778,7 @@ teedataobject_traverse(teedataobject *tdo, visitproc visit, void * arg) { int i; + Py_VISIT(Py_TYPE(tdo)); Py_VISIT(tdo->it); for (i = 0; i < tdo->numread; i++) Py_VISIT(tdo->values[i]); @@ -804,9 +787,9 @@ teedataobject_traverse(teedataobject *tdo, visitproc visit, void * arg) } static void -teedataobject_safe_decref(PyObject *obj) +teedataobject_safe_decref(PyObject *obj, PyTypeObject *tdo_type) { - while (obj && Py_IS_TYPE(obj, &teedataobject_type) && + while (obj && Py_IS_TYPE(obj, tdo_type) && Py_REFCNT(obj) == 1) { PyObject *nextlink = ((teedataobject *)obj)->nextlink; ((teedataobject *)obj)->nextlink = NULL; @@ -826,16 +809,19 @@ teedataobject_clear(teedataobject *tdo) Py_CLEAR(tdo->values[i]); tmp = tdo->nextlink; tdo->nextlink = NULL; - teedataobject_safe_decref(tmp); + itertools_state *state = get_module_state_by_cls(Py_TYPE(tdo)); + teedataobject_safe_decref(tmp, state->teedataobject_type); return 0; } static void teedataobject_dealloc(teedataobject *tdo) { + PyTypeObject *tp = Py_TYPE(tdo); PyObject_GC_UnTrack(tdo); teedataobject_clear(tdo); PyObject_GC_Del(tdo); + Py_DECREF(tp); } static PyObject * @@ -874,9 +860,10 @@ itertools_teedataobject_impl(PyTypeObject *type, PyObject *it, teedataobject *tdo; Py_ssize_t i, len; - assert(type == &teedataobject_type); + itertools_state *state = get_module_state_by_cls(type); + assert(type == state->teedataobject_type); - tdo = (teedataobject *)teedataobject_newinternal(it); + tdo = (teedataobject *)teedataobject_newinternal(state, it); if (!tdo) return NULL; @@ -892,7 +879,7 @@ itertools_teedataobject_impl(PyTypeObject *type, PyObject *it, if (len == LINKCELLS) { if (next != Py_None) { - if (!Py_IS_TYPE(next, &teedataobject_type)) + if (!Py_IS_TYPE(next, state->teedataobject_type)) goto err; assert(tdo->nextlink == NULL); tdo->nextlink = Py_NewRef(next); @@ -915,47 +902,24 @@ static PyMethodDef teedataobject_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject teedataobject_type = { - PyVarObject_HEAD_INIT(0, 0) /* Must fill in type value later */ - "itertools._tee_dataobject", /* tp_name */ - sizeof(teedataobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)teedataobject_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */ - itertools_teedataobject__doc__, /* tp_doc */ - (traverseproc)teedataobject_traverse, /* tp_traverse */ - (inquiry)teedataobject_clear, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - teedataobject_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools_teedataobject, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot teedataobject_slots[] = { 
+ {Py_tp_dealloc, teedataobject_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)itertools_teedataobject__doc__}, + {Py_tp_traverse, teedataobject_traverse}, + {Py_tp_clear, teedataobject_clear}, + {Py_tp_methods, teedataobject_methods}, + {Py_tp_new, itertools_teedataobject}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec teedataobject_spec = { + .name = "itertools._tee_dataobject", + .basicsize = sizeof(teedataobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = teedataobject_slots, }; @@ -965,7 +929,7 @@ tee_next(teeobject *to) PyObject *value, *link; if (to->index >= LINKCELLS) { - link = teedataobject_jumplink(to->dataobj); + link = teedataobject_jumplink(to->state, to->dataobj); if (link == NULL) return NULL; Py_SETREF(to->dataobj, (teedataobject *)link); @@ -981,6 +945,7 @@ tee_next(teeobject *to) static int tee_traverse(teeobject *to, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(to)); Py_VISIT((PyObject *)to->dataobj); return 0; } @@ -990,12 +955,13 @@ tee_copy(teeobject *to, PyObject *Py_UNUSED(ignored)) { teeobject *newto; - newto = PyObject_GC_New(teeobject, &tee_type); + newto = PyObject_GC_New(teeobject, Py_TYPE(to)); if (newto == NULL) return NULL; newto->dataobj = (teedataobject*)Py_NewRef(to->dataobj); newto->index = to->index; newto->weakreflist = NULL; + newto->state = to->state; PyObject_GC_Track(newto); return (PyObject *)newto; } @@ -1003,7 +969,7 @@ tee_copy(teeobject *to, PyObject *Py_UNUSED(ignored)) PyDoc_STRVAR(teecopy_doc, "Returns an independent iterator."); static PyObject * -tee_fromiterable(PyObject *iterable) +tee_fromiterable(itertools_state *state, PyObject *iterable) { teeobject *to; PyObject *it; @@ -1011,17 +977,17 @@ tee_fromiterable(PyObject *iterable) it = PyObject_GetIter(iterable); if (it == NULL) return NULL; - if (PyObject_TypeCheck(it, &tee_type)) { + if (PyObject_TypeCheck(it, state->tee_type)) { to = (teeobject *)tee_copy((teeobject *)it, NULL); goto done; } - PyObject *dataobj = teedataobject_newinternal(it); + PyObject *dataobj = teedataobject_newinternal(state, it); if (!dataobj) { to = NULL; goto done; } - to = PyObject_GC_New(teeobject, &tee_type); + to = PyObject_GC_New(teeobject, state->tee_type); if (to == NULL) { Py_DECREF(dataobj); goto done; @@ -1029,6 +995,7 @@ tee_fromiterable(PyObject *iterable) to->dataobj = (teedataobject *)dataobj; to->index = 0; to->weakreflist = NULL; + to->state = state; PyObject_GC_Track(to); done: Py_DECREF(it); @@ -1047,7 +1014,8 @@ static PyObject * itertools__tee_impl(PyTypeObject *type, PyObject *iterable) /*[clinic end generated code: output=b02d3fd26c810c3f input=adc0779d2afe37a2]*/ { - return tee_fromiterable(iterable); + itertools_state *state = get_module_state_by_cls(type); + return tee_fromiterable(state, iterable); } static int @@ -1062,9 +1030,11 @@ tee_clear(teeobject *to) static void tee_dealloc(teeobject *to) { + PyTypeObject *tp = Py_TYPE(to); PyObject_GC_UnTrack(to); tee_clear(to); PyObject_GC_Del(to); + Py_DECREF(tp); } static PyObject * @@ -1082,7 +1052,8 @@ tee_setstate(teeobject *to, PyObject *state) PyErr_SetString(PyExc_TypeError, "state is not a tuple"); return NULL; } - if (!PyArg_ParseTuple(state, "O!i", &teedataobject_type, &tdo, &index)) { + PyTypeObject *tdo_type = to->state->teedataobject_type; + if (!PyArg_ParseTuple(state, "O!i", tdo_type, &tdo, &index)) { return NULL; } if (index < 0 || index > LINKCELLS) { @@ -1102,47 +1073,31 @@ static PyMethodDef tee_methods[] = { 
{NULL, NULL} /* sentinel */ }; -static PyTypeObject tee_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools._tee", /* tp_name */ - sizeof(teeobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)tee_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */ - itertools__tee__doc__, /* tp_doc */ - (traverseproc)tee_traverse, /* tp_traverse */ - (inquiry)tee_clear, /* tp_clear */ - 0, /* tp_richcompare */ - offsetof(teeobject, weakreflist), /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)tee_next, /* tp_iternext */ - tee_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - itertools__tee, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyMemberDef tee_members[] = { + {"__weaklistoffset__", T_PYSSIZET, offsetof(teeobject, weakreflist), READONLY}, + {NULL}, +}; + +static PyType_Slot tee_slots[] = { + {Py_tp_dealloc, tee_dealloc}, + {Py_tp_doc, (void *)itertools__tee__doc__}, + {Py_tp_traverse, tee_traverse}, + {Py_tp_clear, tee_clear}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, tee_next}, + {Py_tp_methods, tee_methods}, + {Py_tp_members, tee_members}, + {Py_tp_new, itertools__tee}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec tee_spec = { + .name = "itertools._tee", + .basicsize = sizeof(teeobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = tee_slots, }; /*[clinic input] @@ -1184,7 +1139,8 @@ itertools_tee_impl(PyObject *module, PyObject *iterable, Py_ssize_t n) copyable = it; } else { - copyable = tee_fromiterable(it); + itertools_state *state = get_module_state(module); + copyable = tee_fromiterable(state, it); Py_DECREF(it); if (copyable == NULL) { Py_DECREF(result); @@ -1682,8 +1638,6 @@ typedef struct { Py_ssize_t cnt; } isliceobject; -static PyTypeObject islice_type; - static PyObject * islice_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { @@ -1693,7 +1647,9 @@ islice_new(PyTypeObject *type, PyObject *args, PyObject *kwds) Py_ssize_t numargs; isliceobject *lz; - if ((type == &islice_type || type->tp_init == islice_type.tp_init) && + itertools_state *st = find_state_by_type(type); + PyTypeObject *islice_type = st->islice_type; + if ((type == islice_type || type->tp_init == islice_type->tp_init) && !_PyArg_NoKeywords("islice", kwds)) return NULL; @@ -1772,14 +1728,17 @@ islice_new(PyTypeObject *type, PyObject *args, PyObject *kwds) static void islice_dealloc(isliceobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->it); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static int islice_traverse(isliceobject *lz, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->it); return 0; } @@ -1885,48 +1844,25 @@ specified as another value, step determines how many values are\n\ skipped between successive calls. 
Works like a slice() on a list\n\ but returns an iterator."); -static PyTypeObject islice_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.islice", /* tp_name */ - sizeof(isliceobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)islice_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - islice_doc, /* tp_doc */ - (traverseproc)islice_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)islice_next, /* tp_iternext */ - islice_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - islice_new, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot islice_slots[] = { + {Py_tp_dealloc, islice_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)islice_doc}, + {Py_tp_traverse, islice_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, islice_next}, + {Py_tp_methods, islice_methods}, + {Py_tp_new, islice_new}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec islice_spec = { + .name = "itertools.islice", + .basicsize = sizeof(isliceobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = islice_slots, }; @@ -2056,8 +1992,6 @@ typedef struct { PyObject *active; /* Currently running input iterator */ } chainobject; -static PyTypeObject chain_type; - static PyObject * chain_new_internal(PyTypeObject *type, PyObject *source) { @@ -2079,7 +2013,9 @@ chain_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { PyObject *source; - if ((type == &chain_type || type->tp_init == chain_type.tp_init) && + itertools_state *state = find_state_by_type(type); + PyTypeObject *chain_type = state->chain_type; + if ((type == chain_type || type->tp_init == chain_type->tp_init) && !_PyArg_NoKeywords("chain", kwds)) return NULL; @@ -2114,15 +2050,18 @@ itertools_chain_from_iterable(PyTypeObject *type, PyObject *arg) static void chain_dealloc(chainobject *lz) { + PyTypeObject *tp = Py_TYPE(lz); PyObject_GC_UnTrack(lz); Py_XDECREF(lz->active); Py_XDECREF(lz->source); - Py_TYPE(lz)->tp_free(lz); + tp->tp_free(lz); + Py_DECREF(tp); } static int chain_traverse(chainobject *lz, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(lz)); Py_VISIT(lz->source); Py_VISIT(lz->active); return 0; @@ -2227,48 +2166,25 @@ static PyMethodDef chain_methods[] = { {NULL, NULL} /* sentinel */ }; -static PyTypeObject chain_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.chain", /* tp_name */ - sizeof(chainobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)chain_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* 
tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - chain_doc, /* tp_doc */ - (traverseproc)chain_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)chain_next, /* tp_iternext */ - chain_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - chain_new, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot chain_slots[] = { + {Py_tp_dealloc, chain_dealloc}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)chain_doc}, + {Py_tp_traverse, chain_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, chain_next}, + {Py_tp_methods, chain_methods}, + {Py_tp_new, chain_new}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec chain_spec = { + .name = "itertools.chain", + .basicsize = sizeof(chainobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = chain_slots, }; @@ -3574,6 +3490,7 @@ typedef struct { PyObject *it; PyObject *binop; PyObject *initial; + itertools_state *state; } accumulateobject; /*[clinic input] @@ -3612,6 +3529,7 @@ itertools_accumulate_impl(PyTypeObject *type, PyObject *iterable, lz->total = NULL; lz->it = it; lz->initial = Py_XNewRef(initial); + lz->state = find_state_by_type(type); return (PyObject *)lz; } @@ -3674,13 +3592,13 @@ accumulate_next(accumulateobject *lz) static PyObject * accumulate_reduce(accumulateobject *lz, PyObject *Py_UNUSED(ignored)) { + itertools_state *state = lz->state; + if (lz->initial != Py_None) { PyObject *it; assert(lz->total == NULL); - if (PyType_Ready(&chain_type) < 0) - return NULL; - it = PyObject_CallFunction((PyObject *)&chain_type, "(O)O", + it = PyObject_CallFunction((PyObject *)(state->chain_type), "(O)O", lz->initial, lz->it); if (it == NULL) return NULL; @@ -3690,11 +3608,7 @@ accumulate_reduce(accumulateobject *lz, PyObject *Py_UNUSED(ignored)) if (lz->total == Py_None) { PyObject *it; - if (PyType_Ready(&chain_type) < 0) - return NULL; - if (PyType_Ready(&islice_type) < 0) - return NULL; - it = PyObject_CallFunction((PyObject *)&chain_type, "(O)O", + it = PyObject_CallFunction((PyObject *)(state->chain_type), "(O)O", lz->total, lz->it); if (it == NULL) return NULL; @@ -3702,7 +3616,8 @@ accumulate_reduce(accumulateobject *lz, PyObject *Py_UNUSED(ignored)) it, lz->binop ? 
lz->binop : Py_None); if (it == NULL) return NULL; - return Py_BuildValue("O(NiO)", &islice_type, it, 1, Py_None); + + return Py_BuildValue("O(NiO)", state->islice_type, it, 1, Py_None); } return Py_BuildValue("O(OO)O", Py_TYPE(lz), lz->it, lz->binop?lz->binop:Py_None, @@ -4261,8 +4176,6 @@ typedef struct { Py_ssize_t cnt; } repeatobject; -static PyTypeObject repeat_type; - static PyObject * repeat_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { @@ -4292,14 +4205,17 @@ repeat_new(PyTypeObject *type, PyObject *args, PyObject *kwds) static void repeat_dealloc(repeatobject *ro) { + PyTypeObject *tp = Py_TYPE(ro); PyObject_GC_UnTrack(ro); Py_XDECREF(ro->element); - Py_TYPE(ro)->tp_free(ro); + tp->tp_free(ro); + Py_DECREF(tp); } static int repeat_traverse(repeatobject *ro, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(ro)); Py_VISIT(ro->element); return 0; } @@ -4361,48 +4277,26 @@ PyDoc_STRVAR(repeat_doc, for the specified number of times. If not specified, returns the object\n\ endlessly."); -static PyTypeObject repeat_type = { - PyVarObject_HEAD_INIT(NULL, 0) - "itertools.repeat", /* tp_name */ - sizeof(repeatobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)repeat_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - (reprfunc)repeat_repr, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | - Py_TPFLAGS_BASETYPE, /* tp_flags */ - repeat_doc, /* tp_doc */ - (traverseproc)repeat_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)repeat_next, /* tp_iternext */ - repeat_methods, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - repeat_new, /* tp_new */ - PyObject_GC_Del, /* tp_free */ +static PyType_Slot repeat_slots[] = { + {Py_tp_dealloc, repeat_dealloc}, + {Py_tp_repr, repeat_repr}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_doc, (void *)repeat_doc}, + {Py_tp_traverse, repeat_traverse}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, repeat_next}, + {Py_tp_methods, repeat_methods}, + {Py_tp_new, repeat_new}, + {Py_tp_free, PyObject_GC_Del}, + {0, NULL}, +}; + +static PyType_Spec repeat_spec = { + .name = "itertools.repeat", + .basicsize = sizeof(repeatobject), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = repeat_slots, }; @@ -4695,6 +4589,8 @@ itertoolsmodule_traverse(PyObject *mod, visitproc visit, void *arg) { itertools_state *state = get_module_state(mod); Py_VISIT(state->accumulate_type); + Py_VISIT(state->batched_type); + Py_VISIT(state->chain_type); Py_VISIT(state->combinations_type); Py_VISIT(state->compress_type); Py_VISIT(state->count_type); @@ -4704,11 +4600,15 @@ itertoolsmodule_traverse(PyObject *mod, visitproc visit, void *arg) Py_VISIT(state->filterfalse_type); Py_VISIT(state->groupby_type); Py_VISIT(state->_grouper_type); + Py_VISIT(state->islice_type); Py_VISIT(state->pairwise_type); Py_VISIT(state->permutations_type); Py_VISIT(state->product_type); + Py_VISIT(state->repeat_type); 
Py_VISIT(state->starmap_type); Py_VISIT(state->takewhile_type); + Py_VISIT(state->tee_type); + Py_VISIT(state->teedataobject_type); Py_VISIT(state->ziplongest_type); return 0; } @@ -4718,6 +4618,8 @@ itertoolsmodule_clear(PyObject *mod) { itertools_state *state = get_module_state(mod); Py_CLEAR(state->accumulate_type); + Py_CLEAR(state->batched_type); + Py_CLEAR(state->chain_type); Py_CLEAR(state->combinations_type); Py_CLEAR(state->compress_type); Py_CLEAR(state->count_type); @@ -4727,11 +4629,15 @@ itertoolsmodule_clear(PyObject *mod) Py_CLEAR(state->filterfalse_type); Py_CLEAR(state->groupby_type); Py_CLEAR(state->_grouper_type); + Py_CLEAR(state->islice_type); Py_CLEAR(state->pairwise_type); Py_CLEAR(state->permutations_type); Py_CLEAR(state->product_type); + Py_CLEAR(state->repeat_type); Py_CLEAR(state->starmap_type); Py_CLEAR(state->takewhile_type); + Py_CLEAR(state->tee_type); + Py_CLEAR(state->teedataobject_type); Py_CLEAR(state->ziplongest_type); return 0; } @@ -4758,6 +4664,8 @@ itertoolsmodule_exec(PyObject *mod) { itertools_state *state = get_module_state(mod); ADD_TYPE(mod, state->accumulate_type, &accumulate_spec); + ADD_TYPE(mod, state->batched_type, &batched_spec); + ADD_TYPE(mod, state->chain_type, &chain_spec); ADD_TYPE(mod, state->combinations_type, &combinations_spec); ADD_TYPE(mod, state->compress_type, &compress_spec); ADD_TYPE(mod, state->count_type, &count_spec); @@ -4767,30 +4675,18 @@ itertoolsmodule_exec(PyObject *mod) ADD_TYPE(mod, state->filterfalse_type, &filterfalse_spec); ADD_TYPE(mod, state->groupby_type, &groupby_spec); ADD_TYPE(mod, state->_grouper_type, &_grouper_spec); + ADD_TYPE(mod, state->islice_type, &islice_spec); ADD_TYPE(mod, state->pairwise_type, &pairwise_spec); ADD_TYPE(mod, state->permutations_type, &permutations_spec); ADD_TYPE(mod, state->product_type, &product_spec); + ADD_TYPE(mod, state->repeat_type, &repeat_spec); ADD_TYPE(mod, state->starmap_type, &starmap_spec); ADD_TYPE(mod, state->takewhile_type, &takewhile_spec); + ADD_TYPE(mod, state->tee_type, &tee_spec); + ADD_TYPE(mod, state->teedataobject_type, &teedataobject_spec); ADD_TYPE(mod, state->ziplongest_type, &ziplongest_spec); - PyTypeObject *typelist[] = { - &batched_type, - &islice_type, - &chain_type, - &repeat_type, - &tee_type, - &teedataobject_type - }; - - Py_SET_TYPE(&teedataobject_type, &PyType_Type); - - for (size_t i = 0; i < Py_ARRAY_LENGTH(typelist); i++) { - if (PyModule_AddType(mod, typelist[i]) < 0) { - return -1; - } - } - + Py_SET_TYPE(state->teedataobject_type, &PyType_Type); return 0; } diff --git a/Tools/c-analyzer/cpython/globals-to-fix.tsv b/Tools/c-analyzer/cpython/globals-to-fix.tsv index cf2d5c368f1b..52ea0b4901d4 100644 --- a/Tools/c-analyzer/cpython/globals-to-fix.tsv +++ b/Tools/c-analyzer/cpython/globals-to-fix.tsv @@ -335,28 +335,6 @@ Modules/_testcapi/vectorcall.c - MethodDescriptorBase_Type - Modules/_testcapi/vectorcall.c - MethodDescriptorDerived_Type - Modules/_testcapi/vectorcall.c - MethodDescriptorNopGet_Type - Modules/_testcapi/vectorcall.c - MethodDescriptor2_Type - -Modules/itertoolsmodule.c - _grouper_type - -Modules/itertoolsmodule.c - accumulate_type - -Modules/itertoolsmodule.c - batched_type - -Modules/itertoolsmodule.c - chain_type - -Modules/itertoolsmodule.c - combinations_type - -Modules/itertoolsmodule.c - compress_type - -Modules/itertoolsmodule.c - count_type - -Modules/itertoolsmodule.c - cwr_type - -Modules/itertoolsmodule.c - cycle_type - -Modules/itertoolsmodule.c - dropwhile_type - -Modules/itertoolsmodule.c - 
filterfalse_type - -Modules/itertoolsmodule.c - groupby_type - -Modules/itertoolsmodule.c - islice_type - -Modules/itertoolsmodule.c - pairwise_type - -Modules/itertoolsmodule.c - permutations_type - -Modules/itertoolsmodule.c - product_type - -Modules/itertoolsmodule.c - repeat_type - -Modules/itertoolsmodule.c - starmap_type - -Modules/itertoolsmodule.c - takewhile_type - -Modules/itertoolsmodule.c - tee_type - -Modules/itertoolsmodule.c - teedataobject_type - -Modules/itertoolsmodule.c - ziplongest_type - ################################## From webhook-mailer at python.org Fri Feb 10 11:38:34 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Fri, 10 Feb 2023 16:38:34 -0000 Subject: [Python-checkins] gh-101759: Update Windows installer to SQLite 3.40.1 (#101762) Message-ID: https://github.com/python/cpython/commit/5d15224011217487e1a174c144af0e5f5826c17c commit: 5d15224011217487e1a174c144af0e5f5826c17c branch: main author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-10T17:38:26+01:00 summary: gh-101759: Update Windows installer to SQLite 3.40.1 (#101762) files: A Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst M PCbuild/get_externals.bat M PCbuild/python.props M PCbuild/readme.txt diff --git a/Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst b/Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst new file mode 100644 index 000000000000..62bcac34397d --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst @@ -0,0 +1 @@ +Update Windows installer to SQLite 3.40.1. diff --git a/PCbuild/get_externals.bat b/PCbuild/get_externals.bat index d4d96bd49d72..2c424517eae4 100644 --- a/PCbuild/get_externals.bat +++ b/PCbuild/get_externals.bat @@ -54,7 +54,7 @@ set libraries= set libraries=%libraries% bzip2-1.0.8 if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.4.3 if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1t -set libraries=%libraries% sqlite-3.39.4.0 +set libraries=%libraries% sqlite-3.40.1.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.13.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tk-8.6.13.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tix-8.4.3.6 diff --git a/PCbuild/python.props b/PCbuild/python.props index 5926c7ded470..28ee74d85947 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -68,7 +68,7 @@ - $(ExternalsDir)sqlite-3.39.4.0\ + $(ExternalsDir)sqlite-3.40.1.0\ $(ExternalsDir)bzip2-1.0.8\ $(ExternalsDir)xz-5.2.5\ $(ExternalsDir)libffi-3.4.3\ diff --git a/PCbuild/readme.txt b/PCbuild/readme.txt index 347be8aeeca3..4c799b64c461 100644 --- a/PCbuild/readme.txt +++ b/PCbuild/readme.txt @@ -188,7 +188,7 @@ _ssl again when building. 
_sqlite3 - Wraps SQLite 3.39.4, which is itself built by sqlite3.vcxproj + Wraps SQLite 3.40.1, which is itself built by sqlite3.vcxproj Homepage: https://www.sqlite.org/ _tkinter From webhook-mailer at python.org Fri Feb 10 11:49:36 2023 From: webhook-mailer at python.org (iritkatriel) Date: Fri, 10 Feb 2023 16:49:36 -0000 Subject: [Python-checkins] gh-101517: make bdb avoid looking up in linecache with lineno=None (#101787) Message-ID: https://github.com/python/cpython/commit/366b94905869d680b3f1d4801fb497e78811e511 commit: 366b94905869d680b3f1d4801fb497e78811e511 branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-10T16:49:29Z summary: gh-101517: make bdb avoid looking up in linecache with lineno=None (#101787) files: A Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst M Lib/bdb.py M Lib/test/test_bdb.py diff --git a/Lib/bdb.py b/Lib/bdb.py index 81fbb8514acb..7f9b09514ffd 100644 --- a/Lib/bdb.py +++ b/Lib/bdb.py @@ -570,9 +570,10 @@ def format_stack_entry(self, frame_lineno, lprefix=': '): rv = frame.f_locals['__return__'] s += '->' s += reprlib.repr(rv) - line = linecache.getline(filename, lineno, frame.f_globals) - if line: - s += lprefix + line.strip() + if lineno is not None: + line = linecache.getline(filename, lineno, frame.f_globals) + if line: + s += lprefix + line.strip() return s # The following methods can be called by clients to use diff --git a/Lib/test/test_bdb.py b/Lib/test/test_bdb.py index 87a5ac308a12..042c2daea7f7 100644 --- a/Lib/test/test_bdb.py +++ b/Lib/test/test_bdb.py @@ -1203,5 +1203,11 @@ def main(): tracer.runcall(tfunc_import) +class TestRegressions(unittest.TestCase): + def test_format_stack_entry_no_lineno(self): + # See gh-101517 + Bdb().format_stack_entry((sys._getframe(), None)) + + if __name__ == "__main__": unittest.main() diff --git a/Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst b/Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst new file mode 100644 index 000000000000..a5f6bdfa5ac2 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst @@ -0,0 +1 @@ +Fixed bug where :mod:`bdb` looks up the source line with :mod:`linecache` with a ``lineno=None``, which causes it to fail with an unhandled exception. From webhook-mailer at python.org Fri Feb 10 11:57:37 2023 From: webhook-mailer at python.org (zooba) Date: Fri, 10 Feb 2023 16:57:37 -0000 Subject: [Python-checkins] gh-101763: Update bundled copy of libffi to 3.4.4 on Windows (GH-101784) Message-ID: https://github.com/python/cpython/commit/e1aadedf099e645fd2eb1aa8bdcde5a105cee95d commit: e1aadedf099e645fd2eb1aa8bdcde5a105cee95d branch: main author: Steve Dower committer: zooba date: 2023-02-10T16:57:30Z summary: gh-101763: Update bundled copy of libffi to 3.4.4 on Windows (GH-101784) files: A Misc/NEWS.d/next/Windows/2023-02-10-14-26-05.gh-issue-101763.RPaj7r.rst M PCbuild/get_externals.bat M PCbuild/python.props diff --git a/Misc/NEWS.d/next/Windows/2023-02-10-14-26-05.gh-issue-101763.RPaj7r.rst b/Misc/NEWS.d/next/Windows/2023-02-10-14-26-05.gh-issue-101763.RPaj7r.rst new file mode 100644 index 000000000000..e7e5a73afeb5 --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-10-14-26-05.gh-issue-101763.RPaj7r.rst @@ -0,0 +1 @@ +Updates copy of libffi bundled with Windows installs to 3.4.4. 
diff --git a/PCbuild/get_externals.bat b/PCbuild/get_externals.bat index 2c424517eae4..128241393f9f 100644 --- a/PCbuild/get_externals.bat +++ b/PCbuild/get_externals.bat @@ -52,7 +52,7 @@ echo.Fetching external libraries... set libraries= set libraries=%libraries% bzip2-1.0.8 -if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.4.3 +if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.4.4 if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1t set libraries=%libraries% sqlite-3.40.1.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.13.0 @@ -76,7 +76,7 @@ for %%e in (%libraries%) do ( echo.Fetching external binaries... set binaries= -if NOT "%IncludeLibffi%"=="false" set binaries=%binaries% libffi-3.4.3 +if NOT "%IncludeLibffi%"=="false" set binaries=%binaries% libffi-3.4.4 if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1t if NOT "%IncludeTkinter%"=="false" set binaries=%binaries% tcltk-8.6.13.0 if NOT "%IncludeSSLSrc%"=="false" set binaries=%binaries% nasm-2.11.06 diff --git a/PCbuild/python.props b/PCbuild/python.props index 28ee74d85947..7994fbe7cd5e 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -71,7 +71,7 @@ $(ExternalsDir)sqlite-3.40.1.0\ $(ExternalsDir)bzip2-1.0.8\ $(ExternalsDir)xz-5.2.5\ - $(ExternalsDir)libffi-3.4.3\ + $(ExternalsDir)libffi-3.4.4\ $(libffiDir)$(ArchName)\ $(libffiOutDir)include $(ExternalsDir)openssl-1.1.1t\ From webhook-mailer at python.org Fri Feb 10 12:24:38 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 10 Feb 2023 17:24:38 -0000 Subject: [Python-checkins] gh-101517: make bdb avoid looking up in linecache with lineno=None (GH-101787) Message-ID: https://github.com/python/cpython/commit/6d8ef9680689823229106c22aa74e1c549a250ec commit: 6d8ef9680689823229106c22aa74e1c549a250ec branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-10T09:24:30-08:00 summary: gh-101517: make bdb avoid looking up in linecache with lineno=None (GH-101787) (cherry picked from commit 366b94905869d680b3f1d4801fb497e78811e511) Co-authored-by: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> files: A Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst M Lib/bdb.py M Lib/test/test_bdb.py diff --git a/Lib/bdb.py b/Lib/bdb.py index 81fbb8514acb..7f9b09514ffd 100644 --- a/Lib/bdb.py +++ b/Lib/bdb.py @@ -570,9 +570,10 @@ def format_stack_entry(self, frame_lineno, lprefix=': '): rv = frame.f_locals['__return__'] s += '->' s += reprlib.repr(rv) - line = linecache.getline(filename, lineno, frame.f_globals) - if line: - s += lprefix + line.strip() + if lineno is not None: + line = linecache.getline(filename, lineno, frame.f_globals) + if line: + s += lprefix + line.strip() return s # The following methods can be called by clients to use diff --git a/Lib/test/test_bdb.py b/Lib/test/test_bdb.py index 87a5ac308a12..042c2daea7f7 100644 --- a/Lib/test/test_bdb.py +++ b/Lib/test/test_bdb.py @@ -1203,5 +1203,11 @@ def main(): tracer.runcall(tfunc_import) +class TestRegressions(unittest.TestCase): + def test_format_stack_entry_no_lineno(self): + # See gh-101517 + Bdb().format_stack_entry((sys._getframe(), None)) + + if __name__ == "__main__": unittest.main() diff --git a/Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst 
b/Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst new file mode 100644 index 000000000000..a5f6bdfa5ac2 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst @@ -0,0 +1 @@ +Fixed bug where :mod:`bdb` looks up the source line with :mod:`linecache` with a ``lineno=None``, which causes it to fail with an unhandled exception. From webhook-mailer at python.org Fri Feb 10 12:26:34 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 10 Feb 2023 17:26:34 -0000 Subject: [Python-checkins] gh-101517: make bdb avoid looking up in linecache with lineno=None (GH-101787) Message-ID: https://github.com/python/cpython/commit/b0bba7ad14a7f1774339032feae0a95af0f56483 commit: b0bba7ad14a7f1774339032feae0a95af0f56483 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-10T09:26:27-08:00 summary: gh-101517: make bdb avoid looking up in linecache with lineno=None (GH-101787) (cherry picked from commit 366b94905869d680b3f1d4801fb497e78811e511) Co-authored-by: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> files: A Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst M Lib/bdb.py M Lib/test/test_bdb.py diff --git a/Lib/bdb.py b/Lib/bdb.py index 81fbb8514acb..7f9b09514ffd 100644 --- a/Lib/bdb.py +++ b/Lib/bdb.py @@ -570,9 +570,10 @@ def format_stack_entry(self, frame_lineno, lprefix=': '): rv = frame.f_locals['__return__'] s += '->' s += reprlib.repr(rv) - line = linecache.getline(filename, lineno, frame.f_globals) - if line: - s += lprefix + line.strip() + if lineno is not None: + line = linecache.getline(filename, lineno, frame.f_globals) + if line: + s += lprefix + line.strip() return s # The following methods can be called by clients to use diff --git a/Lib/test/test_bdb.py b/Lib/test/test_bdb.py index 87a5ac308a12..042c2daea7f7 100644 --- a/Lib/test/test_bdb.py +++ b/Lib/test/test_bdb.py @@ -1203,5 +1203,11 @@ def main(): tracer.runcall(tfunc_import) +class TestRegressions(unittest.TestCase): + def test_format_stack_entry_no_lineno(self): + # See gh-101517 + Bdb().format_stack_entry((sys._getframe(), None)) + + if __name__ == "__main__": unittest.main() diff --git a/Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst b/Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst new file mode 100644 index 000000000000..a5f6bdfa5ac2 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-10-16-02-29.gh-issue-101517.r7S2u8.rst @@ -0,0 +1 @@ +Fixed bug where :mod:`bdb` looks up the source line with :mod:`linecache` with a ``lineno=None``, which causes it to fail with an unhandled exception. 
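For illustration only (a sketch, not part of either commit): the regression covered by the new TestRegressions test can be reproduced with a (frame, lineno) pair whose line number is None. Before this change the linecache lookup raised an unhandled TypeError; with the guard in place the source line is simply left out of the formatted entry.

    import sys
    from bdb import Bdb

    # A (frame, lineno) pair with lineno=None, mirroring
    # test_format_stack_entry_no_lineno above.
    entry = (sys._getframe(), None)

    # With the fix this prints the formatted entry, e.g.
    # "/path/to/script.py(None)<module>()", without trying to read
    # the source line from linecache.
    print(Bdb().format_stack_entry(entry))
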
From webhook-mailer at python.org Fri Feb 10 12:28:01 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 10 Feb 2023 17:28:01 -0000 Subject: [Python-checkins] gh-101763: Update bundled copy of libffi to 3.4.4 on Windows (GH-101784) Message-ID: https://github.com/python/cpython/commit/7ca9da93161fa26060411e7a3f448384e1435fa6 commit: 7ca9da93161fa26060411e7a3f448384e1435fa6 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-10T09:27:53-08:00 summary: gh-101763: Update bundled copy of libffi to 3.4.4 on Windows (GH-101784) (cherry picked from commit e1aadedf099e645fd2eb1aa8bdcde5a105cee95d) Co-authored-by: Steve Dower files: A Misc/NEWS.d/next/Windows/2023-02-10-14-26-05.gh-issue-101763.RPaj7r.rst M PCbuild/get_externals.bat M PCbuild/python.props diff --git a/Misc/NEWS.d/next/Windows/2023-02-10-14-26-05.gh-issue-101763.RPaj7r.rst b/Misc/NEWS.d/next/Windows/2023-02-10-14-26-05.gh-issue-101763.RPaj7r.rst new file mode 100644 index 000000000000..e7e5a73afeb5 --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-10-14-26-05.gh-issue-101763.RPaj7r.rst @@ -0,0 +1 @@ +Updates copy of libffi bundled with Windows installs to 3.4.4. diff --git a/PCbuild/get_externals.bat b/PCbuild/get_externals.bat index 20c10ffbd818..048243e8c28b 100644 --- a/PCbuild/get_externals.bat +++ b/PCbuild/get_externals.bat @@ -52,7 +52,7 @@ echo.Fetching external libraries... set libraries= set libraries=%libraries% bzip2-1.0.8 -if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.4.3 +if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.4.4 if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1t set libraries=%libraries% sqlite-3.39.4.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.12.1 @@ -76,7 +76,7 @@ for %%e in (%libraries%) do ( echo.Fetching external binaries... set binaries= -if NOT "%IncludeLibffi%"=="false" set binaries=%binaries% libffi-3.4.3 +if NOT "%IncludeLibffi%"=="false" set binaries=%binaries% libffi-3.4.4 if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1t if NOT "%IncludeTkinter%"=="false" set binaries=%binaries% tcltk-8.6.12.1 if NOT "%IncludeSSLSrc%"=="false" set binaries=%binaries% nasm-2.11.06 diff --git a/PCbuild/python.props b/PCbuild/python.props index 5926c7ded470..49c098da242c 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -71,7 +71,7 @@ $(ExternalsDir)sqlite-3.39.4.0\ $(ExternalsDir)bzip2-1.0.8\ $(ExternalsDir)xz-5.2.5\ - $(ExternalsDir)libffi-3.4.3\ + $(ExternalsDir)libffi-3.4.4\ $(libffiDir)$(ArchName)\ $(libffiOutDir)include $(ExternalsDir)openssl-1.1.1t\ From webhook-mailer at python.org Fri Feb 10 12:54:13 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Fri, 10 Feb 2023 17:54:13 -0000 Subject: [Python-checkins] Docs: use parameter list for sqlite3.Cursor.execute* (#101782) Message-ID: https://github.com/python/cpython/commit/2037ebf81bd4bbe5421421b822bd57cfd665a1e9 commit: 2037ebf81bd4bbe5421421b822bd57cfd665a1e9 branch: main author: Erlend E. 
Aasland committer: erlend-aasland date: 2023-02-10T18:54:04+01:00 summary: Docs: use parameter list for sqlite3.Cursor.execute* (#101782) Co-authored-by: Alex Waygood files: M Doc/library/sqlite3.rst diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst index bbdc891c930c..8ffc0aad9199 100644 --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -1418,15 +1418,22 @@ Cursor objects .. method:: execute(sql, parameters=(), /) - Execute SQL statement *sql*. - Bind values to the statement using :ref:`placeholders - ` that map to the :term:`sequence` or :class:`dict` - *parameters*. + Execute SQL a single SQL statement, + optionally binding Python values using + :ref:`placeholders `. - :meth:`execute` will only execute a single SQL statement. If you try to execute - more than one statement with it, it will raise a :exc:`ProgrammingError`. Use - :meth:`executescript` if you want to execute multiple SQL statements with one - call. + :param str sql: + A single SQL statement. + + :param parameters: + Python values to bind to placeholders in *sql*. + A :class:`!dict` if named placeholders are used. + A :term:`!sequence` if unnamed placeholders are used. + See :ref:`sqlite3-placeholders`. + :type parameters: :class:`dict` | :term:`sequence` + + :raises ProgrammingError: + If *sql* contains more than one SQL statement. If :attr:`~Connection.autocommit` is :data:`LEGACY_TRANSACTION_CONTROL`, @@ -1435,15 +1442,29 @@ Cursor objects and there is no open transaction, a transaction is implicitly opened before executing *sql*. + Use :meth:`executescript` to execute multiple SQL statements. .. method:: executemany(sql, parameters, /) - Execute :ref:`parameterized ` SQL statement *sql* - against all parameter sequences or mappings found in the sequence - *parameters*. It is also possible to use an - :term:`iterator` yielding parameters instead of a sequence. + For every item in *parameters*, + repeatedly execute the :ref:`parameterized ` + SQL statement *sql*. + Uses the same implicit transaction handling as :meth:`~Cursor.execute`. + :param str sql: + A single SQL :abbr:`DML (Data Manipulation Language)` statement. + + :param parameters: + An :term:`!iterable` of parameters to bind with + the placeholders in *sql*. + See :ref:`sqlite3-placeholders`. + :type parameters: :term:`iterable` + + :raises ProgrammingError: + If *sql* contains more than one SQL statement, + or is not a DML statment. + Example: .. testcode:: sqlite3.cursor From webhook-mailer at python.org Fri Feb 10 13:01:55 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 10 Feb 2023 18:01:55 -0000 Subject: [Python-checkins] Docs: use parameter list for sqlite3.Cursor.execute* (GH-101782) Message-ID: https://github.com/python/cpython/commit/18313ecb09fe29849724738ddd3cde9510ffd50b commit: 18313ecb09fe29849724738ddd3cde9510ffd50b branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-10T10:01:38-08:00 summary: Docs: use parameter list for sqlite3.Cursor.execute* (GH-101782) (cherry picked from commit 2037ebf81bd4bbe5421421b822bd57cfd665a1e9) Co-authored-by: Erlend E. Aasland Co-authored-by: Alex Waygood files: M Doc/library/sqlite3.rst diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst index 8a0d84d1c4b1..fd8a032aff73 100644 --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -1335,30 +1335,51 @@ Cursor objects .. 
method:: execute(sql, parameters=(), /) - Execute SQL statement *sql*. - Bind values to the statement using :ref:`placeholders - ` that map to the :term:`sequence` or :class:`dict` - *parameters*. + Execute SQL a single SQL statement, + optionally binding Python values using + :ref:`placeholders `. - :meth:`execute` will only execute a single SQL statement. If you try to execute - more than one statement with it, it will raise a :exc:`ProgrammingError`. Use - :meth:`executescript` if you want to execute multiple SQL statements with one - call. + :param str sql: + A single SQL statement. + + :param parameters: + Python values to bind to placeholders in *sql*. + A :class:`!dict` if named placeholders are used. + A :term:`!sequence` if unnamed placeholders are used. + See :ref:`sqlite3-placeholders`. + :type parameters: :class:`dict` | :term:`sequence` + + :raises ProgrammingError: + If *sql* contains more than one SQL statement. If :attr:`~Connection.isolation_level` is not ``None``, *sql* is an ``INSERT``, ``UPDATE``, ``DELETE``, or ``REPLACE`` statement, and there is no open transaction, a transaction is implicitly opened before executing *sql*. + Use :meth:`executescript` to execute multiple SQL statements. .. method:: executemany(sql, parameters, /) - Execute :ref:`parameterized ` SQL statement *sql* - against all parameter sequences or mappings found in the sequence - *parameters*. It is also possible to use an - :term:`iterator` yielding parameters instead of a sequence. + For every item in *parameters*, + repeatedly execute the :ref:`parameterized ` + SQL statement *sql*. + Uses the same implicit transaction handling as :meth:`~Cursor.execute`. + :param str sql: + A single SQL :abbr:`DML (Data Manipulation Language)` statement. + + :param parameters: + An :term:`!iterable` of parameters to bind with + the placeholders in *sql*. + See :ref:`sqlite3-placeholders`. + :type parameters: :term:`iterable` + + :raises ProgrammingError: + If *sql* contains more than one SQL statement, + or is not a DML statment. + Example: .. testcode:: sqlite3.cursor From webhook-mailer at python.org Fri Feb 10 13:46:20 2023 From: webhook-mailer at python.org (hugovk) Date: Fri, 10 Feb 2023 18:46:20 -0000 Subject: [Python-checkins] Docs: Fix getstatus() -> getcode() typos (#101296) Message-ID: https://github.com/python/cpython/commit/61f2be08661949e2f6dfc94143436297e60d47de commit: 61f2be08661949e2f6dfc94143436297e60d47de branch: main author: Hugo van Kemenade committer: hugovk date: 2023-02-10T20:46:12+02:00 summary: Docs: Fix getstatus() -> getcode() typos (#101296) files: M Doc/library/http.client.rst M Doc/library/urllib.request.rst diff --git a/Doc/library/http.client.rst b/Doc/library/http.client.rst index 48582219695b..ad3416135e30 100644 --- a/Doc/library/http.client.rst +++ b/Doc/library/http.client.rst @@ -532,7 +532,7 @@ statement. .. deprecated:: 3.9 Deprecated in favor of :attr:`~HTTPResponse.headers`. -.. method:: HTTPResponse.getstatus() +.. method:: HTTPResponse.getcode() .. deprecated:: 3.9 Deprecated in favor of :attr:`~HTTPResponse.status`. diff --git a/Doc/library/urllib.request.rst b/Doc/library/urllib.request.rst index 59e1f2da828a..64cc9c388ec3 100644 --- a/Doc/library/urllib.request.rst +++ b/Doc/library/urllib.request.rst @@ -1630,7 +1630,7 @@ The typical response object is a :class:`urllib.response.addinfourl` instance: .. deprecated:: 3.9 Deprecated in favor of :attr:`~addinfourl.status`. - .. method:: getstatus() + .. method:: getcode() .. 
deprecated:: 3.9 Deprecated in favor of :attr:`~addinfourl.status`. From webhook-mailer at python.org Fri Feb 10 16:21:27 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Fri, 10 Feb 2023 21:21:27 -0000 Subject: [Python-checkins] [3.10] Docs: use parameter list for sqlite3.Cursor.execute* (GH-101782) (#101808) Message-ID: https://github.com/python/cpython/commit/a565cd5b10bcb676dfb623a1a209be76e7fbecf3 commit: a565cd5b10bcb676dfb623a1a209be76e7fbecf3 branch: 3.10 author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-10T22:21:20+01:00 summary: [3.10] Docs: use parameter list for sqlite3.Cursor.execute* (GH-101782) (#101808) Co-authored-by: Alex Waygood (cherry picked from commit 2037ebf81bd4bbe5421421b822bd57cfd665a1e9) files: M Doc/library/sqlite3.rst diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst index 3a6925473b78..dc8a48b0c05c 100644 --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -1044,30 +1044,53 @@ Cursor objects .. method:: execute(sql, parameters=(), /) - Execute SQL statement *sql*. - Bind values to the statement using :ref:`placeholders - ` that map to the :term:`sequence` or :class:`dict` - *parameters*. + Execute SQL a single SQL statement, + optionally binding Python values using + :ref:`placeholders `. - :meth:`execute` will only execute a single SQL statement. If you try to execute - more than one statement with it, it will raise a :exc:`Warning`. Use - :meth:`executescript` if you want to execute multiple SQL statements with one - call. + :param str sql: + A single SQL statement. + + :param parameters: + Python values to bind to placeholders in *sql*. + A :class:`!dict` if named placeholders are used. + A :term:`!sequence` if unnamed placeholders are used. + See :ref:`sqlite3-placeholders`. + :type parameters: :class:`dict` | :term:`sequence` + + :raises Warning: + If *sql* contains more than one SQL statement. If :attr:`~Connection.isolation_level` is not ``None``, *sql* is an ``INSERT``, ``UPDATE``, ``DELETE``, or ``REPLACE`` statement, and there is no open transaction, a transaction is implicitly opened before executing *sql*. + Use :meth:`executescript` to execute multiple SQL statements. .. method:: executemany(sql, parameters, /) - Execute :ref:`parameterized ` SQL statement *sql* - against all parameter sequences or mappings found in the sequence - *parameters*. It is also possible to use an - :term:`iterator` yielding parameters instead of a sequence. + For every item in *parameters*, + repeatedly execute the :ref:`parameterized ` + SQL statement *sql*. + Uses the same implicit transaction handling as :meth:`~Cursor.execute`. + :param str sql: + A single SQL :abbr:`DML (Data Manipulation Language)` statement. + + :param parameters: + An :term:`!iterable` of parameters to bind with + the placeholders in *sql*. + See :ref:`sqlite3-placeholders`. + :type parameters: :term:`iterable` + + :raises ProgrammingError: + If *sql* is not a DML statment. + + :raises Warning: + If *sql* contains more than one SQL statement. + Example: .. testcode:: sqlite3.cursor From webhook-mailer at python.org Fri Feb 10 16:22:02 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Fri, 10 Feb 2023 21:22:02 -0000 Subject: [Python-checkins] [3.11] gh-101759: Update Windows installer to SQLite 3.40.1 (GH-101762) (#101791) Message-ID: https://github.com/python/cpython/commit/836098857bfb407c97fcd5ccd04fff1cbb2c626a commit: 836098857bfb407c97fcd5ccd04fff1cbb2c626a branch: 3.11 author: Erlend E. 
Aasland committer: erlend-aasland date: 2023-02-10T22:21:55+01:00 summary: [3.11] gh-101759: Update Windows installer to SQLite 3.40.1 (GH-101762) (#101791) (cherry picked from commit 5d15224011217487e1a174c144af0e5f5826c17c) files: A Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst M PCbuild/get_externals.bat M PCbuild/python.props M PCbuild/readme.txt diff --git a/Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst b/Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst new file mode 100644 index 000000000000..62bcac34397d --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst @@ -0,0 +1 @@ +Update Windows installer to SQLite 3.40.1. diff --git a/PCbuild/get_externals.bat b/PCbuild/get_externals.bat index 048243e8c28b..8507b5e52889 100644 --- a/PCbuild/get_externals.bat +++ b/PCbuild/get_externals.bat @@ -54,7 +54,7 @@ set libraries= set libraries=%libraries% bzip2-1.0.8 if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.4.4 if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1t -set libraries=%libraries% sqlite-3.39.4.0 +set libraries=%libraries% sqlite-3.40.1.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.12.1 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tk-8.6.12.1 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tix-8.4.3.6 diff --git a/PCbuild/python.props b/PCbuild/python.props index 49c098da242c..7994fbe7cd5e 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -68,7 +68,7 @@ - $(ExternalsDir)sqlite-3.39.4.0\ + $(ExternalsDir)sqlite-3.40.1.0\ $(ExternalsDir)bzip2-1.0.8\ $(ExternalsDir)xz-5.2.5\ $(ExternalsDir)libffi-3.4.4\ diff --git a/PCbuild/readme.txt b/PCbuild/readme.txt index c8a35ce38978..cce2632a6cdd 100644 --- a/PCbuild/readme.txt +++ b/PCbuild/readme.txt @@ -187,7 +187,7 @@ _ssl again when building. _sqlite3 - Wraps SQLite 3.39.4, which is itself built by sqlite3.vcxproj + Wraps SQLite 3.40.1, which is itself built by sqlite3.vcxproj Homepage: https://www.sqlite.org/ _tkinter From webhook-mailer at python.org Fri Feb 10 16:25:10 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Fri, 10 Feb 2023 21:25:10 -0000 Subject: [Python-checkins] [3.10] gh-101759: Update Windows installer to SQLite 3.40.1 (GH-101762) (#101792) Message-ID: https://github.com/python/cpython/commit/207fa11febbe0438d9c12be5353003d3c19f60c3 commit: 207fa11febbe0438d9c12be5353003d3c19f60c3 branch: 3.10 author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-10T22:23:00+01:00 summary: [3.10] gh-101759: Update Windows installer to SQLite 3.40.1 (GH-101762) (#101792) (cherry picked from commit 5d15224011217487e1a174c144af0e5f5826c17c) files: A Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst M PCbuild/get_externals.bat M PCbuild/python.props M PCbuild/readme.txt diff --git a/Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst b/Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst new file mode 100644 index 000000000000..62bcac34397d --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-09-22-09-27.gh-issue-101759.zFlqSH.rst @@ -0,0 +1 @@ +Update Windows installer to SQLite 3.40.1. 
diff --git a/PCbuild/get_externals.bat b/PCbuild/get_externals.bat index f700b9539228..aee6471ea272 100644 --- a/PCbuild/get_externals.bat +++ b/PCbuild/get_externals.bat @@ -54,7 +54,7 @@ set libraries= set libraries=%libraries% bzip2-1.0.8 if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.3.0 if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1t -set libraries=%libraries% sqlite-3.39.4.0 +set libraries=%libraries% sqlite-3.40.1.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.12.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tk-8.6.12.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tix-8.4.3.6 diff --git a/PCbuild/python.props b/PCbuild/python.props index 98dbf7486d5d..c93f25261e73 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -64,7 +64,7 @@ - $(ExternalsDir)sqlite-3.39.4.0\ + $(ExternalsDir)sqlite-3.40.1.0\ $(ExternalsDir)bzip2-1.0.8\ $(ExternalsDir)xz-5.2.5\ $(ExternalsDir)libffi-3.3.0\ diff --git a/PCbuild/readme.txt b/PCbuild/readme.txt index d39a0708a10a..3dcd5bb7f327 100644 --- a/PCbuild/readme.txt +++ b/PCbuild/readme.txt @@ -186,7 +186,7 @@ _ssl again when building. _sqlite3 - Wraps SQLite 3.39.4, which is itself built by sqlite3.vcxproj + Wraps SQLite 3.40.1, which is itself built by sqlite3.vcxproj Homepage: https://www.sqlite.org/ _tkinter From webhook-mailer at python.org Fri Feb 10 17:26:42 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 10 Feb 2023 22:26:42 -0000 Subject: [Python-checkins] Docs: Fix getstatus() -> getcode() typos (GH-101296) Message-ID: https://github.com/python/cpython/commit/2e7ff1fcf67dcbcf872c699d95cc8b01771c0698 commit: 2e7ff1fcf67dcbcf872c699d95cc8b01771c0698 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-10T14:26:34-08:00 summary: Docs: Fix getstatus() -> getcode() typos (GH-101296) (cherry picked from commit 61f2be08661949e2f6dfc94143436297e60d47de) Co-authored-by: Hugo van Kemenade files: M Doc/library/http.client.rst M Doc/library/urllib.request.rst diff --git a/Doc/library/http.client.rst b/Doc/library/http.client.rst index dd7a0c459acc..00d76cc2bda7 100644 --- a/Doc/library/http.client.rst +++ b/Doc/library/http.client.rst @@ -528,7 +528,7 @@ statement. .. deprecated:: 3.9 Deprecated in favor of :attr:`~HTTPResponse.headers`. -.. method:: HTTPResponse.getstatus() +.. method:: HTTPResponse.getcode() .. deprecated:: 3.9 Deprecated in favor of :attr:`~HTTPResponse.status`. diff --git a/Doc/library/urllib.request.rst b/Doc/library/urllib.request.rst index 9c2c37a5c9a7..7659d802b361 100644 --- a/Doc/library/urllib.request.rst +++ b/Doc/library/urllib.request.rst @@ -1619,7 +1619,7 @@ The typical response object is a :class:`urllib.response.addinfourl` instance: .. deprecated:: 3.9 Deprecated in favor of :attr:`~addinfourl.status`. - .. method:: getstatus() + .. method:: getcode() .. deprecated:: 3.9 Deprecated in favor of :attr:`~addinfourl.status`. 
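As a usage note on the accessor documented above (an illustrative sketch, not part of the commit; the URL is a placeholder): getcode() still works, but it has been deprecated since Python 3.9 in favor of the status attribute, so new code should read the attribute directly.

    from urllib.request import urlopen

    # Placeholder URL; requires network access when actually run.
    with urlopen("https://example.org/") as response:
        # The deprecated method and the preferred attribute agree.
        assert response.getcode() == response.status
        print(response.status)
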
From webhook-mailer at python.org Fri Feb 10 17:26:48 2023 From: webhook-mailer at python.org (hugovk) Date: Fri, 10 Feb 2023 22:26:48 -0000 Subject: [Python-checkins] [3.11] Docs: Fix getstatus() -> getcode() typos (GH-101296) (#101805) Message-ID: https://github.com/python/cpython/commit/c485f0e39e2dd4afdb8608662ae3b15bcecf5e0e commit: c485f0e39e2dd4afdb8608662ae3b15bcecf5e0e branch: 3.11 author: Hugo van Kemenade committer: hugovk date: 2023-02-11T00:26:41+02:00 summary: [3.11] Docs: Fix getstatus() -> getcode() typos (GH-101296) (#101805) files: M Doc/library/http.client.rst M Doc/library/urllib.request.rst diff --git a/Doc/library/http.client.rst b/Doc/library/http.client.rst index 06f92510a5e4..bf0bf0f8c530 100644 --- a/Doc/library/http.client.rst +++ b/Doc/library/http.client.rst @@ -530,7 +530,7 @@ statement. .. deprecated:: 3.9 Deprecated in favor of :attr:`~HTTPResponse.headers`. -.. method:: HTTPResponse.getstatus() +.. method:: HTTPResponse.getcode() .. deprecated:: 3.9 Deprecated in favor of :attr:`~HTTPResponse.status`. diff --git a/Doc/library/urllib.request.rst b/Doc/library/urllib.request.rst index 59e1f2da828a..64cc9c388ec3 100644 --- a/Doc/library/urllib.request.rst +++ b/Doc/library/urllib.request.rst @@ -1630,7 +1630,7 @@ The typical response object is a :class:`urllib.response.addinfourl` instance: .. deprecated:: 3.9 Deprecated in favor of :attr:`~addinfourl.status`. - .. method:: getstatus() + .. method:: getcode() .. deprecated:: 3.9 Deprecated in favor of :attr:`~addinfourl.status`. From webhook-mailer at python.org Fri Feb 10 18:29:31 2023 From: webhook-mailer at python.org (brettcannon) Date: Fri, 10 Feb 2023 23:29:31 -0000 Subject: [Python-checkins] gh-101390: Fix docs for `imporlib.util.LazyLoader.factory` to properly call it a class method (GH-101391) Message-ID: https://github.com/python/cpython/commit/17143e2c30ae5e51945e04eeaec7ebb0e1f07fb5 commit: 17143e2c30ae5e51945e04eeaec7ebb0e1f07fb5 branch: main author: busywhitespace committer: brettcannon date: 2023-02-10T15:29:24-08:00 summary: gh-101390: Fix docs for `imporlib.util.LazyLoader.factory` to properly call it a class method (GH-101391) files: M Doc/library/importlib.rst diff --git a/Doc/library/importlib.rst b/Doc/library/importlib.rst index 3fc1531c0cdf..89efa64c6b52 100644 --- a/Doc/library/importlib.rst +++ b/Doc/library/importlib.rst @@ -1387,7 +1387,7 @@ an :term:`importer`. .. classmethod:: factory(loader) - A static method which returns a callable that creates a lazy loader. This + A class method which returns a callable that creates a lazy loader. This is meant to be used in situations where the loader is passed by class instead of by instance. 
:: From webhook-mailer at python.org Fri Feb 10 19:24:45 2023 From: webhook-mailer at python.org (brettcannon) Date: Sat, 11 Feb 2023 00:24:45 -0000 Subject: [Python-checkins] [3.11] gh-101390: Fix docs for `imporlib.util.LazyLoader.factory` to properly call it a class method (GH-101391) (#GH-101813) Message-ID: https://github.com/python/cpython/commit/91fb7c36a35e61d44387c49d4e65f7779c9892cf commit: 91fb7c36a35e61d44387c49d4e65f7779c9892cf branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: brettcannon date: 2023-02-10T16:24:28-08:00 summary: [3.11] gh-101390: Fix docs for `imporlib.util.LazyLoader.factory` to properly call it a class method (GH-101391) (#GH-101813) gh-101390: Fix docs for `imporlib.util.LazyLoader.factory` to properly call it a class method (GH-101391) (cherry picked from commit 17143e2c30ae5e51945e04eeaec7ebb0e1f07fb5) Co-authored-by: busywhitespace files: M Doc/library/importlib.rst diff --git a/Doc/library/importlib.rst b/Doc/library/importlib.rst index c29d69c143cf..55668dacfe49 100644 --- a/Doc/library/importlib.rst +++ b/Doc/library/importlib.rst @@ -1462,7 +1462,7 @@ an :term:`importer`. .. classmethod:: factory(loader) - A static method which returns a callable that creates a lazy loader. This + A class method which returns a callable that creates a lazy loader. This is meant to be used in situations where the loader is passed by class instead of by instance. :: From webhook-mailer at python.org Fri Feb 10 19:24:49 2023 From: webhook-mailer at python.org (brettcannon) Date: Sat, 11 Feb 2023 00:24:49 -0000 Subject: [Python-checkins] [3.10] gh-101390: Fix docs for `imporlib.util.LazyLoader.factory` to properly call it a class method (GH-101391) (GH-101814) Message-ID: https://github.com/python/cpython/commit/6c0e3dc8ef958e1d2eb2fc4e59a9152e85e65da0 commit: 6c0e3dc8ef958e1d2eb2fc4e59a9152e85e65da0 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: brettcannon date: 2023-02-10T16:24:42-08:00 summary: [3.10] gh-101390: Fix docs for `imporlib.util.LazyLoader.factory` to properly call it a class method (GH-101391) (GH-101814) gh-101390: Fix docs for `imporlib.util.LazyLoader.factory` to properly call it a class method (GH-101391) (cherry picked from commit 17143e2c30ae5e51945e04eeaec7ebb0e1f07fb5) Co-authored-by: busywhitespace files: M Doc/library/importlib.rst diff --git a/Doc/library/importlib.rst b/Doc/library/importlib.rst index f1cf9eec5d54..20305150d648 100644 --- a/Doc/library/importlib.rst +++ b/Doc/library/importlib.rst @@ -1722,7 +1722,7 @@ an :term:`importer`. .. classmethod:: factory(loader) - A static method which returns a callable that creates a lazy loader. This + A class method which returns a callable that creates a lazy loader. This is meant to be used in situations where the loader is passed by class instead of by instance. :: From webhook-mailer at python.org Sat Feb 11 03:30:56 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Sat, 11 Feb 2023 08:30:56 -0000 Subject: [Python-checkins] [3.11] GH-101696: invalidate type version tag in `_PyStaticType_Dealloc` (GH-101697) (#101722) Message-ID: https://github.com/python/cpython/commit/c5c12381b38494ebc2346bb01d3426160e068d35 commit: c5c12381b38494ebc2346bb01d3426160e068d35 branch: 3.11 author: Erlend E. 
Aasland committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-11T14:00:42+05:30 summary: [3.11] GH-101696: invalidate type version tag in `_PyStaticType_Dealloc` (GH-101697) (#101722) [3.11] GH-101696: invalidate type version tag in `_PyStaticType_Dealloc` (GH-101697). (cherry picked from commit d9de0792482d2ded364b0c7d2867b97a5da41b12) Co-authored-by: Kumar Aditya <59607654+kumaraditya303 at users.noreply.github.com> files: A Misc/NEWS.d/next/Core and Builtins/2023-02-08-17-13-31.gh-issue-101696.seJhTt.rst M Objects/typeobject.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-08-17-13-31.gh-issue-101696.seJhTt.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-08-17-13-31.gh-issue-101696.seJhTt.rst new file mode 100644 index 000000000000..ff2bbb4b5642 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-08-17-13-31.gh-issue-101696.seJhTt.rst @@ -0,0 +1 @@ +Invalidate type version tag in ``_PyStaticType_Dealloc`` for static types, avoiding bug where a false cache hit could crash the interpreter. Patch by Kumar Aditya. diff --git a/Objects/typeobject.c b/Objects/typeobject.c index 9ee57105204e..90c6425ffdfe 100644 --- a/Objects/typeobject.c +++ b/Objects/typeobject.c @@ -4080,6 +4080,8 @@ _PyStaticType_Dealloc(PyTypeObject *type) } type->tp_flags &= ~Py_TPFLAGS_READY; + type->tp_flags &= ~Py_TPFLAGS_VALID_VERSION_TAG; + type->tp_version_tag = 0; } From webhook-mailer at python.org Sat Feb 11 03:37:46 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Sat, 11 Feb 2023 08:37:46 -0000 Subject: [Python-checkins] GH-101797: allocate `PyExpat_CAPI` capsule on heap (#101798) Message-ID: https://github.com/python/cpython/commit/b652d40f1c88fcd8595cd401513f6b7f8e499471 commit: b652d40f1c88fcd8595cd401513f6b7f8e499471 branch: main author: Kumar Aditya <59607654+kumaraditya303 at users.noreply.github.com> committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-11T14:07:39+05:30 summary: GH-101797: allocate `PyExpat_CAPI` capsule on heap (#101798) files: M Modules/pyexpat.c diff --git a/Modules/pyexpat.c b/Modules/pyexpat.c index 63a3392d5efe..0a744998b6c5 100644 --- a/Modules/pyexpat.c +++ b/Modules/pyexpat.c @@ -1878,6 +1878,18 @@ add_features(PyObject *mod) } #endif +static void +pyexpat_capsule_destructor(PyObject *capsule) +{ + void *p = PyCapsule_GetPointer(capsule, PyExpat_CAPSULE_NAME); + if (p == NULL) { + PyErr_WriteUnraisable(capsule); + return; + } + PyMem_Free(p); +} + + static int pyexpat_exec(PyObject *mod) { @@ -1965,40 +1977,46 @@ pyexpat_exec(PyObject *mod) MYCONST(XML_PARAM_ENTITY_PARSING_ALWAYS); #undef MYCONST - static struct PyExpat_CAPI capi; + struct PyExpat_CAPI *capi = PyMem_Malloc(sizeof(*capi)); + if (capi == NULL) { + PyErr_NoMemory(); + return -1; + } /* initialize pyexpat dispatch table */ - capi.size = sizeof(capi); - capi.magic = PyExpat_CAPI_MAGIC; - capi.MAJOR_VERSION = XML_MAJOR_VERSION; - capi.MINOR_VERSION = XML_MINOR_VERSION; - capi.MICRO_VERSION = XML_MICRO_VERSION; - capi.ErrorString = XML_ErrorString; - capi.GetErrorCode = XML_GetErrorCode; - capi.GetErrorColumnNumber = XML_GetErrorColumnNumber; - capi.GetErrorLineNumber = XML_GetErrorLineNumber; - capi.Parse = XML_Parse; - capi.ParserCreate_MM = XML_ParserCreate_MM; - capi.ParserFree = XML_ParserFree; - capi.SetCharacterDataHandler = XML_SetCharacterDataHandler; - capi.SetCommentHandler = XML_SetCommentHandler; - capi.SetDefaultHandlerExpand = XML_SetDefaultHandlerExpand; - capi.SetElementHandler = 
XML_SetElementHandler; - capi.SetNamespaceDeclHandler = XML_SetNamespaceDeclHandler; - capi.SetProcessingInstructionHandler = XML_SetProcessingInstructionHandler; - capi.SetUnknownEncodingHandler = XML_SetUnknownEncodingHandler; - capi.SetUserData = XML_SetUserData; - capi.SetStartDoctypeDeclHandler = XML_SetStartDoctypeDeclHandler; - capi.SetEncoding = XML_SetEncoding; - capi.DefaultUnknownEncodingHandler = PyUnknownEncodingHandler; + capi->size = sizeof(*capi); + capi->magic = PyExpat_CAPI_MAGIC; + capi->MAJOR_VERSION = XML_MAJOR_VERSION; + capi->MINOR_VERSION = XML_MINOR_VERSION; + capi->MICRO_VERSION = XML_MICRO_VERSION; + capi->ErrorString = XML_ErrorString; + capi->GetErrorCode = XML_GetErrorCode; + capi->GetErrorColumnNumber = XML_GetErrorColumnNumber; + capi->GetErrorLineNumber = XML_GetErrorLineNumber; + capi->Parse = XML_Parse; + capi->ParserCreate_MM = XML_ParserCreate_MM; + capi->ParserFree = XML_ParserFree; + capi->SetCharacterDataHandler = XML_SetCharacterDataHandler; + capi->SetCommentHandler = XML_SetCommentHandler; + capi->SetDefaultHandlerExpand = XML_SetDefaultHandlerExpand; + capi->SetElementHandler = XML_SetElementHandler; + capi->SetNamespaceDeclHandler = XML_SetNamespaceDeclHandler; + capi->SetProcessingInstructionHandler = XML_SetProcessingInstructionHandler; + capi->SetUnknownEncodingHandler = XML_SetUnknownEncodingHandler; + capi->SetUserData = XML_SetUserData; + capi->SetStartDoctypeDeclHandler = XML_SetStartDoctypeDeclHandler; + capi->SetEncoding = XML_SetEncoding; + capi->DefaultUnknownEncodingHandler = PyUnknownEncodingHandler; #if XML_COMBINED_VERSION >= 20100 - capi.SetHashSalt = XML_SetHashSalt; + capi->SetHashSalt = XML_SetHashSalt; #else - capi.SetHashSalt = NULL; + capi->SetHashSalt = NULL; #endif /* export using capsule */ - PyObject *capi_object = PyCapsule_New(&capi, PyExpat_CAPSULE_NAME, NULL); + PyObject *capi_object = PyCapsule_New(capi, PyExpat_CAPSULE_NAME, + pyexpat_capsule_destructor); if (capi_object == NULL) { + PyMem_Free(capi); return -1; } From webhook-mailer at python.org Sat Feb 11 10:34:22 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Sat, 11 Feb 2023 15:34:22 -0000 Subject: [Python-checkins] Fix typo in test_fstring.py (#101823) Message-ID: https://github.com/python/cpython/commit/3eb12df8b526aa5a2ca6b43f21a1c5e7d38ee634 commit: 3eb12df8b526aa5a2ca6b43f21a1c5e7d38ee634 branch: main author: mjoerg committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-11T21:04:15+05:30 summary: Fix typo in test_fstring.py (#101823) files: M Lib/test/test_fstring.py diff --git a/Lib/test/test_fstring.py b/Lib/test/test_fstring.py index a50056da116e..b3f6ef41d77b 100644 --- a/Lib/test/test_fstring.py +++ b/Lib/test/test_fstring.py @@ -667,7 +667,7 @@ def test_missing_expression(self): "f'''{\t\f\r\n}'''", ]) - # Different error messages are raised when a specfier ('!', ':' or '=') is used after an empty expression + # Different error messages are raised when a specifier ('!', ':' or '=') is used after an empty expression self.assertAllRaise(SyntaxError, "f-string: expression required before '!'", ["f'{!r}'", "f'{ !r}'", From webhook-mailer at python.org Sat Feb 11 23:54:34 2023 From: webhook-mailer at python.org (gpshead) Date: Sun, 12 Feb 2023 04:54:34 -0000 Subject: [Python-checkins] gh-89792: Prevent test_tools from copying 1000M of "source" in freeze test (#101837) Message-ID: https://github.com/python/cpython/commit/1d194235e4d5981b5fea25c75318d61189103a58 commit: 
1d194235e4d5981b5fea25c75318d61189103a58 branch: main author: Gregory P. Smith committer: gpshead date: 2023-02-11T20:54:28-08:00 summary: gh-89792: Prevent test_tools from copying 1000M of "source" in freeze test (#101837) Prevent test_tools from copying 1000M of "source" It doesn't need a git repo, just the checkout. We skip .git metadata, Doc/build, Doc/venv, and `__pycache__` subdirs, that developers often have in their clients to reduce the size of the source tree copy ten-fold. This should significantly reduce IO and presumably time on buildbots during this long test. files: A Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst M Tools/freeze/test/freeze.py diff --git a/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst b/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst new file mode 100644 index 000000000000..a3a3070d7f37 --- /dev/null +++ b/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst @@ -0,0 +1,3 @@ +``test_tools`` now copies up to 10x less source data to a temporary +directory during the ``freeze`` test by ignoring git metadata and other +artifacts. diff --git a/Tools/freeze/test/freeze.py b/Tools/freeze/test/freeze.py index ddbfd7fc9c2f..0ae983b15c98 100644 --- a/Tools/freeze/test/freeze.py +++ b/Tools/freeze/test/freeze.py @@ -80,7 +80,19 @@ def copy_source_tree(newroot, oldroot): if newroot == SRCDIR: raise Exception('this probably isn\'t what you wanted') shutil.rmtree(newroot) - shutil.copytree(oldroot, newroot) + + def ignore_non_src(src, names): + """Turns what could be a 1000M copy into a 100M copy.""" + # Don't copy the ~600M+ of needless git repo metadata. + # source only, ignore cached .pyc files. + subdirs_to_skip = {'.git', '__pycache__'} + if os.path.basename(src) == 'Doc': + # Another potential ~250M+ of non test related data. + subdirs_to_skip.add('build') + subdirs_to_skip.add('venv') + return subdirs_to_skip + + shutil.copytree(oldroot, newroot, ignore=ignore_non_src) if os.path.exists(os.path.join(newroot, 'Makefile')): _run_quiet([MAKE, 'clean'], newroot) From webhook-mailer at python.org Sun Feb 12 00:18:18 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 12 Feb 2023 05:18:18 -0000 Subject: [Python-checkins] gh-89792: Prevent test_tools from copying 1000M of "source" in freeze test (GH-101837) Message-ID: https://github.com/python/cpython/commit/d17cc3dfeb9c3d0eecea4405b1ee3d9e36a7e299 commit: d17cc3dfeb9c3d0eecea4405b1ee3d9e36a7e299 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-11T21:18:07-08:00 summary: gh-89792: Prevent test_tools from copying 1000M of "source" in freeze test (GH-101837) Prevent test_tools from copying 1000M of "source" It doesn't need a git repo, just the checkout. We skip .git metadata, Doc/build, Doc/venv, and `__pycache__` subdirs, that developers often have in their clients to reduce the size of the source tree copy ten-fold. This should significantly reduce IO and presumably time on buildbots during this long test. (cherry picked from commit 1d194235e4d5981b5fea25c75318d61189103a58) Co-authored-by: Gregory P. 
Smith files: A Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst M Tools/freeze/test/freeze.py diff --git a/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst b/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst new file mode 100644 index 000000000000..a3a3070d7f37 --- /dev/null +++ b/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst @@ -0,0 +1,3 @@ +``test_tools`` now copies up to 10x less source data to a temporary +directory during the ``freeze`` test by ignoring git metadata and other +artifacts. diff --git a/Tools/freeze/test/freeze.py b/Tools/freeze/test/freeze.py index ddbfd7fc9c2f..0ae983b15c98 100644 --- a/Tools/freeze/test/freeze.py +++ b/Tools/freeze/test/freeze.py @@ -80,7 +80,19 @@ def copy_source_tree(newroot, oldroot): if newroot == SRCDIR: raise Exception('this probably isn\'t what you wanted') shutil.rmtree(newroot) - shutil.copytree(oldroot, newroot) + + def ignore_non_src(src, names): + """Turns what could be a 1000M copy into a 100M copy.""" + # Don't copy the ~600M+ of needless git repo metadata. + # source only, ignore cached .pyc files. + subdirs_to_skip = {'.git', '__pycache__'} + if os.path.basename(src) == 'Doc': + # Another potential ~250M+ of non test related data. + subdirs_to_skip.add('build') + subdirs_to_skip.add('venv') + return subdirs_to_skip + + shutil.copytree(oldroot, newroot, ignore=ignore_non_src) if os.path.exists(os.path.join(newroot, 'Makefile')): _run_quiet([MAKE, 'clean'], newroot) From webhook-mailer at python.org Sun Feb 12 00:24:50 2023 From: webhook-mailer at python.org (gpshead) Date: Sun, 12 Feb 2023 05:24:50 -0000 Subject: [Python-checkins] gh-85984: Utilize new "winsize" functions from termios in pty tests. (#101831) Message-ID: https://github.com/python/cpython/commit/da2fb9264315dc30ac3012b4dbf5ba76d3f34433 commit: da2fb9264315dc30ac3012b4dbf5ba76d3f34433 branch: main author: Soumendra Ganguly <67527439+8vasu at users.noreply.github.com> committer: gpshead date: 2023-02-11T21:24:43-08:00 summary: gh-85984: Utilize new "winsize" functions from termios in pty tests. (#101831) Utilize new functions termios.tcgetwinsize() and termios.tcsetwinsize in test_pty.py. Signed-off-by: Soumendra Ganguly Co-authored-by: Gregory P. Smith files: A Misc/NEWS.d/next/Tests/2023-02-11-22-36-10.gh-issue-85984.EVXjT9.rst M Lib/test/test_pty.py diff --git a/Lib/test/test_pty.py b/Lib/test/test_pty.py index fa0dbcc16f3c..c723bb362c5d 100644 --- a/Lib/test/test_pty.py +++ b/Lib/test/test_pty.py @@ -3,6 +3,8 @@ # Skip these tests if termios or fcntl are not available import_module('termios') +# fcntl is a proxy for not being one of the wasm32 platforms even though we +# don't use this module... a proper check for what crashes those is needed. 
import_module("fcntl") import errno @@ -15,20 +17,12 @@ import socket import io # readline import unittest - -import struct -import fcntl import warnings TEST_STRING_1 = b"I wish to buy a fish license.\n" TEST_STRING_2 = b"For my pet fish, Eric.\n" -try: - _TIOCGWINSZ = tty.TIOCGWINSZ - _TIOCSWINSZ = tty.TIOCSWINSZ - _HAVE_WINSZ = True -except AttributeError: - _HAVE_WINSZ = False +_HAVE_WINSZ = hasattr(tty, "TIOCGWINSZ") and hasattr(tty, "TIOCSWINSZ") if verbose: def debug(msg): @@ -82,14 +76,6 @@ def expectedFailureIfStdinIsTTY(fun): pass return fun -def _get_term_winsz(fd): - s = struct.pack("HHHH", 0, 0, 0, 0) - return fcntl.ioctl(fd, _TIOCGWINSZ, s) - -def _set_term_winsz(fd, winsz): - fcntl.ioctl(fd, _TIOCSWINSZ, winsz) - - # Marginal testing of pty suite. Cannot do extensive 'do or fail' testing # because pty code is not too portable. class PtyTest(unittest.TestCase): @@ -105,18 +91,14 @@ def setUp(self): self.addCleanup(signal.alarm, 0) signal.alarm(10) - # Save original stdin window size - self.stdin_rows = None - self.stdin_cols = None + # Save original stdin window size. + self.stdin_dim = None if _HAVE_WINSZ: try: - stdin_dim = os.get_terminal_size(pty.STDIN_FILENO) - self.stdin_rows = stdin_dim.lines - self.stdin_cols = stdin_dim.columns - old_stdin_winsz = struct.pack("HHHH", self.stdin_rows, - self.stdin_cols, 0, 0) - self.addCleanup(_set_term_winsz, pty.STDIN_FILENO, old_stdin_winsz) - except OSError: + self.stdin_dim = tty.tcgetwinsize(pty.STDIN_FILENO) + self.addCleanup(tty.tcsetwinsize, pty.STDIN_FILENO, + self.stdin_dim) + except tty.error: pass def handle_sig(self, sig, frame): @@ -131,41 +113,40 @@ def test_openpty(self): try: mode = tty.tcgetattr(pty.STDIN_FILENO) except tty.error: - # not a tty or bad/closed fd + # Not a tty or bad/closed fd. debug("tty.tcgetattr(pty.STDIN_FILENO) failed") mode = None - new_stdin_winsz = None - if self.stdin_rows is not None and self.stdin_cols is not None: + new_dim = None + if self.stdin_dim: try: # Modify pty.STDIN_FILENO window size; we need to # check if pty.openpty() is able to set pty slave # window size accordingly. - debug("Setting pty.STDIN_FILENO window size") - debug(f"original size: (rows={self.stdin_rows}, cols={self.stdin_cols})") - target_stdin_rows = self.stdin_rows + 1 - target_stdin_cols = self.stdin_cols + 1 - debug(f"target size: (rows={target_stdin_rows}, cols={target_stdin_cols})") - target_stdin_winsz = struct.pack("HHHH", target_stdin_rows, - target_stdin_cols, 0, 0) - _set_term_winsz(pty.STDIN_FILENO, target_stdin_winsz) + debug("Setting pty.STDIN_FILENO window size.") + debug(f"original size: (row, col) = {self.stdin_dim}") + target_dim = (self.stdin_dim[0] + 1, self.stdin_dim[1] + 1) + debug(f"target size: (row, col) = {target_dim}") + tty.tcsetwinsize(pty.STDIN_FILENO, target_dim) # Were we able to set the window size # of pty.STDIN_FILENO successfully? 
- new_stdin_winsz = _get_term_winsz(pty.STDIN_FILENO) - self.assertEqual(new_stdin_winsz, target_stdin_winsz, + new_dim = tty.tcgetwinsize(pty.STDIN_FILENO) + self.assertEqual(new_dim, target_dim, "pty.STDIN_FILENO window size unchanged") except OSError: - warnings.warn("Failed to set pty.STDIN_FILENO window size") + warnings.warn("Failed to set pty.STDIN_FILENO window size.") pass try: debug("Calling pty.openpty()") try: - master_fd, slave_fd = pty.openpty(mode, new_stdin_winsz) + master_fd, slave_fd, slave_name = pty.openpty(mode, new_dim, + True) except TypeError: master_fd, slave_fd = pty.openpty() - debug(f"Got master_fd '{master_fd}', slave_fd '{slave_fd}'") + slave_name = None + debug(f"Got {master_fd=}, {slave_fd=}, {slave_name=}") except OSError: # " An optional feature could not be imported " ... ? raise unittest.SkipTest("Pseudo-terminals (seemingly) not functional.") @@ -181,8 +162,8 @@ def test_openpty(self): if mode: self.assertEqual(tty.tcgetattr(slave_fd), mode, "openpty() failed to set slave termios") - if new_stdin_winsz: - self.assertEqual(_get_term_winsz(slave_fd), new_stdin_winsz, + if new_dim: + self.assertEqual(tty.tcgetwinsize(slave_fd), new_dim, "openpty() failed to set slave window size") # Ensure the fd is non-blocking in case there's nothing to read. @@ -367,9 +348,8 @@ def _socketpair(self): self.files.extend(socketpair) return socketpair - def _mock_select(self, rfds, wfds, xfds, timeout=0): + def _mock_select(self, rfds, wfds, xfds): # This will raise IndexError when no more expected calls exist. - # This ignores the timeout self.assertEqual(self.select_rfds_lengths.pop(0), len(rfds)) return self.select_rfds_results.pop(0), [], [] @@ -409,28 +389,6 @@ def test__copy_to_each(self): self.assertEqual(os.read(read_from_stdout_fd, 20), b'from master') self.assertEqual(os.read(masters[1], 20), b'from stdin') - def test__copy_eof_on_all(self): - """Test the empty read EOF case on both master_fd and stdin.""" - read_from_stdout_fd, mock_stdout_fd = self._pipe() - pty.STDOUT_FILENO = mock_stdout_fd - mock_stdin_fd, write_to_stdin_fd = self._pipe() - pty.STDIN_FILENO = mock_stdin_fd - socketpair = self._socketpair() - masters = [s.fileno() for s in socketpair] - - socketpair[1].close() - os.close(write_to_stdin_fd) - - pty.select = self._mock_select - self.select_rfds_lengths.append(2) - self.select_rfds_results.append([mock_stdin_fd, masters[0]]) - # We expect that both fds were removed from the fds list as they - # both encountered an EOF before the second select call. - self.select_rfds_lengths.append(0) - - # We expect the function to return without error. - self.assertEqual(pty._copy(masters[0]), None) - def test__restore_tty_mode_normal_return(self): """Test that spawn resets the tty mode no when _copy returns normally.""" diff --git a/Misc/NEWS.d/next/Tests/2023-02-11-22-36-10.gh-issue-85984.EVXjT9.rst b/Misc/NEWS.d/next/Tests/2023-02-11-22-36-10.gh-issue-85984.EVXjT9.rst new file mode 100644 index 000000000000..402f99ea6c6e --- /dev/null +++ b/Misc/NEWS.d/next/Tests/2023-02-11-22-36-10.gh-issue-85984.EVXjT9.rst @@ -0,0 +1 @@ +Utilize new "winsize" functions from termios in pty tests. 
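For context on the change above: the rewritten test leans on the termios window-size helpers that are re-exported through the tty module, instead of hand-rolling struct.pack("HHHH", ...) plus fcntl.ioctl(TIOCGWINSZ/TIOCSWINSZ). A minimal illustrative sketch of that round trip (assumes Python 3.11+ on a POSIX system with stdin attached to a real terminal; otherwise tty.error/OSError is the expected outcome):

import pty
import tty   # re-exports termios.tcgetwinsize() / tcsetwinsize()

# Read the current (rows, cols) of the controlling terminal, grow it by one
# in each dimension, check the kernel reports the new size, then restore it.
old_dim = tty.tcgetwinsize(pty.STDIN_FILENO)
target = (old_dim[0] + 1, old_dim[1] + 1)
try:
    tty.tcsetwinsize(pty.STDIN_FILENO, target)
    assert tty.tcgetwinsize(pty.STDIN_FILENO) == target
finally:
    tty.tcsetwinsize(pty.STDIN_FILENO, old_dim)   # put the size back
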
From webhook-mailer at python.org Sun Feb 12 01:07:58 2023 From: webhook-mailer at python.org (gpshead) Date: Sun, 12 Feb 2023 06:07:58 -0000 Subject: [Python-checkins] gh-89792: Limit test_tools freeze test build parallelism based on the number of cores (#101841) Message-ID: https://github.com/python/cpython/commit/dfc2e065a2e71011017077e549cd2f9bf4944c54 commit: dfc2e065a2e71011017077e549cd2f9bf4944c54 branch: main author: Gregory P. Smith committer: gpshead date: 2023-02-11T22:07:52-08:00 summary: gh-89792: Limit test_tools freeze test build parallelism based on the number of cores (#101841) unhardcode freeze test build parallelism. base it on the number of cpus, don't use more than max(2, os.cpu_count()/3). files: M Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst M Tools/freeze/test/freeze.py diff --git a/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst b/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst index a3a3070d7f37..9de278919ef2 100644 --- a/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst +++ b/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst @@ -1,3 +1,4 @@ -``test_tools`` now copies up to 10x less source data to a temporary -directory during the ``freeze`` test by ignoring git metadata and other -artifacts. +``test_tools`` now copies up to 10x less source data to a temporary directory +during the ``freeze`` test by ignoring git metadata and other artifacts. It +also limits its python build parallelism based on os.cpu_count instead of hard +coding it as 8 cores. diff --git a/Tools/freeze/test/freeze.py b/Tools/freeze/test/freeze.py index 0ae983b15c98..b4c76ff36a87 100644 --- a/Tools/freeze/test/freeze.py +++ b/Tools/freeze/test/freeze.py @@ -163,16 +163,25 @@ def prepare(script=None, outdir=None): if not MAKE: raise UnsupportedError('make') + cores = os.cpu_count() + if cores and cores >= 3: + # this test is most often run as part of the whole suite with a lot + # of other tests running in parallel, from 1-2 vCPU systems up to + # people's NNN core beasts. Don't attempt to use it all. + parallel = f'-j{cores*2//3}' + else: + parallel = '-j2' + # Build python. - print(f'building python in {builddir}...') + print(f'building python {parallel=} in {builddir}...') if os.path.exists(os.path.join(srcdir, 'Makefile')): # Out-of-tree builds require a clean srcdir. _run_quiet([MAKE, '-C', srcdir, 'clean']) - _run_quiet([MAKE, '-C', builddir, '-j8']) + _run_quiet([MAKE, '-C', builddir, parallel]) # Install the build. 
print(f'installing python into {prefix}...') - _run_quiet([MAKE, '-C', builddir, '-j8', 'install']) + _run_quiet([MAKE, '-C', builddir, 'install']) python = os.path.join(prefix, 'bin', 'python3') return outdir, scriptfile, python From webhook-mailer at python.org Sun Feb 12 01:33:20 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 12 Feb 2023 06:33:20 -0000 Subject: [Python-checkins] gh-89792: Limit test_tools freeze test build parallelism based on the number of cores (GH-101841) Message-ID: https://github.com/python/cpython/commit/cec99ed1a7f76dda36816be0656e6f64e1d3ae1d commit: cec99ed1a7f76dda36816be0656e6f64e1d3ae1d branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-11T22:33:13-08:00 summary: gh-89792: Limit test_tools freeze test build parallelism based on the number of cores (GH-101841) unhardcode freeze test build parallelism. base it on the number of cpus, don't use more than max(2, os.cpu_count()/3). (cherry picked from commit dfc2e065a2e71011017077e549cd2f9bf4944c54) Co-authored-by: Gregory P. Smith files: M Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst M Tools/freeze/test/freeze.py diff --git a/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst b/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst index a3a3070d7f37..9de278919ef2 100644 --- a/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst +++ b/Misc/NEWS.d/next/Tests/2023-02-11-20-28-08.gh-issue-89792.S-Y5BZ.rst @@ -1,3 +1,4 @@ -``test_tools`` now copies up to 10x less source data to a temporary -directory during the ``freeze`` test by ignoring git metadata and other -artifacts. +``test_tools`` now copies up to 10x less source data to a temporary directory +during the ``freeze`` test by ignoring git metadata and other artifacts. It +also limits its python build parallelism based on os.cpu_count instead of hard +coding it as 8 cores. diff --git a/Tools/freeze/test/freeze.py b/Tools/freeze/test/freeze.py index 0ae983b15c98..b4c76ff36a87 100644 --- a/Tools/freeze/test/freeze.py +++ b/Tools/freeze/test/freeze.py @@ -163,16 +163,25 @@ def prepare(script=None, outdir=None): if not MAKE: raise UnsupportedError('make') + cores = os.cpu_count() + if cores and cores >= 3: + # this test is most often run as part of the whole suite with a lot + # of other tests running in parallel, from 1-2 vCPU systems up to + # people's NNN core beasts. Don't attempt to use it all. + parallel = f'-j{cores*2//3}' + else: + parallel = '-j2' + # Build python. - print(f'building python in {builddir}...') + print(f'building python {parallel=} in {builddir}...') if os.path.exists(os.path.join(srcdir, 'Makefile')): # Out-of-tree builds require a clean srcdir. _run_quiet([MAKE, '-C', srcdir, 'clean']) - _run_quiet([MAKE, '-C', builddir, '-j8']) + _run_quiet([MAKE, '-C', builddir, parallel]) # Install the build. 
print(f'installing python into {prefix}...') - _run_quiet([MAKE, '-C', builddir, '-j8', 'install']) + _run_quiet([MAKE, '-C', builddir, 'install']) python = os.path.join(prefix, 'bin', 'python3') return outdir, scriptfile, python From webhook-mailer at python.org Sun Feb 12 09:20:21 2023 From: webhook-mailer at python.org (JulienPalard) Date: Sun, 12 Feb 2023 14:20:21 -0000 Subject: [Python-checkins] gh-101845: pyspecific: Fix i18n for availability directive (GH-101846) Message-ID: https://github.com/python/cpython/commit/6ef6915d3530e844243893f91bf4bd702dfef570 commit: 6ef6915d3530e844243893f91bf4bd702dfef570 branch: main author: Jean Abou-Samra committer: JulienPalard date: 2023-02-12T15:20:11+01:00 summary: gh-101845: pyspecific: Fix i18n for availability directive (GH-101846) pyspecific: Fix i18n for availability directive If the directive has content, the previous code would nest paragraph nodes from that content inside a general paragraph node, which confuses Sphinx and leads it to drop the content when translating. Instead, use a container node for the body. Also use set_source_info so that any warnings have location info. files: M Doc/tools/extensions/pyspecific.py diff --git a/Doc/tools/extensions/pyspecific.py b/Doc/tools/extensions/pyspecific.py index db7bb3b44219..d659a4a54b9d 100644 --- a/Doc/tools/extensions/pyspecific.py +++ b/Doc/tools/extensions/pyspecific.py @@ -28,6 +28,7 @@ from sphinx.environment import NoUri from sphinx.locale import _ as sphinx_gettext from sphinx.util import status_iterator, logging +from sphinx.util.docutils import SphinxDirective from sphinx.util.nodes import split_explicit_title from sphinx.writers.text import TextWriter, TextTranslator @@ -119,7 +120,7 @@ def run(self): # Support for documenting platform availability -class Availability(Directive): +class Availability(SphinxDirective): has_content = True required_arguments = 1 @@ -139,18 +140,19 @@ class Availability(Directive): def run(self): availability_ref = ':ref:`Availability `: ' + avail_nodes, avail_msgs = self.state.inline_text( + availability_ref + self.arguments[0], + self.lineno) pnode = nodes.paragraph(availability_ref + self.arguments[0], - classes=["availability"],) - n, m = self.state.inline_text(availability_ref, self.lineno) - pnode.extend(n + m) - n, m = self.state.inline_text(self.arguments[0], self.lineno) - pnode.extend(n + m) + '', *avail_nodes, *avail_msgs) + self.set_source_info(pnode) + cnode = nodes.container("", pnode, classes=["availability"]) + self.set_source_info(cnode) if self.content: - self.state.nested_parse(self.content, self.content_offset, pnode) - + self.state.nested_parse(self.content, self.content_offset, cnode) self.parse_platforms() - return [pnode] + return [cnode] def parse_platforms(self): """Parse platform information from arguments From webhook-mailer at python.org Sun Feb 12 12:28:27 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 12 Feb 2023 17:28:27 -0000 Subject: [Python-checkins] gh-101845: pyspecific: Fix i18n for availability directive (GH-101846) Message-ID: https://github.com/python/cpython/commit/1b736838e6ae1b4ef42cdd27c2708face908f92c commit: 1b736838e6ae1b4ef42cdd27c2708face908f92c branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-12T09:28:20-08:00 summary: gh-101845: pyspecific: Fix i18n for availability directive (GH-101846) pyspecific: Fix i18n for availability directive If 
the directive has content, the previous code would nest paragraph nodes from that content inside a general paragraph node, which confuses Sphinx and leads it to drop the content when translating. Instead, use a container node for the body. Also use set_source_info so that any warnings have location info. (cherry picked from commit 6ef6915d3530e844243893f91bf4bd702dfef570) Co-authored-by: Jean Abou-Samra files: M Doc/tools/extensions/pyspecific.py diff --git a/Doc/tools/extensions/pyspecific.py b/Doc/tools/extensions/pyspecific.py index 6c383ea57c8a..b4fea983818d 100644 --- a/Doc/tools/extensions/pyspecific.py +++ b/Doc/tools/extensions/pyspecific.py @@ -28,6 +28,7 @@ from sphinx.environment import NoUri from sphinx.locale import _ as sphinx_gettext from sphinx.util import status_iterator, logging +from sphinx.util.docutils import SphinxDirective from sphinx.util.nodes import split_explicit_title from sphinx.writers.text import TextWriter, TextTranslator from sphinx.writers.latex import LaTeXTranslator @@ -124,7 +125,7 @@ def run(self): # Support for documenting platform availability -class Availability(Directive): +class Availability(SphinxDirective): has_content = True required_arguments = 1 @@ -144,18 +145,19 @@ class Availability(Directive): def run(self): availability_ref = ':ref:`Availability `: ' + avail_nodes, avail_msgs = self.state.inline_text( + availability_ref + self.arguments[0], + self.lineno) pnode = nodes.paragraph(availability_ref + self.arguments[0], - classes=["availability"],) - n, m = self.state.inline_text(availability_ref, self.lineno) - pnode.extend(n + m) - n, m = self.state.inline_text(self.arguments[0], self.lineno) - pnode.extend(n + m) + '', *avail_nodes, *avail_msgs) + self.set_source_info(pnode) + cnode = nodes.container("", pnode, classes=["availability"]) + self.set_source_info(cnode) if self.content: - self.state.nested_parse(self.content, self.content_offset, pnode) - + self.state.nested_parse(self.content, self.content_offset, cnode) self.parse_platforms() - return [pnode] + return [cnode] def parse_platforms(self): """Parse platform information from arguments From webhook-mailer at python.org Mon Feb 13 04:11:54 2023 From: webhook-mailer at python.org (hugovk) Date: Mon, 13 Feb 2023 09:11:54 -0000 Subject: [Python-checkins] Correct trivial grammar in reset_mock docs (#101861) Message-ID: https://github.com/python/cpython/commit/a1f08f5f19753c7c9295f51b5ae1262c7a1c838f commit: a1f08f5f19753c7c9295f51b5ae1262c7a1c838f branch: main author: Steve Kowalik committer: hugovk date: 2023-02-13T11:11:43+02:00 summary: Correct trivial grammar in reset_mock docs (#101861) files: M Doc/library/unittest.mock.rst diff --git a/Doc/library/unittest.mock.rst b/Doc/library/unittest.mock.rst index e009f303fef3..d6d8e5e9557d 100644 --- a/Doc/library/unittest.mock.rst +++ b/Doc/library/unittest.mock.rst @@ -406,7 +406,7 @@ the *new_callable* argument to :func:`patch`. False .. versionchanged:: 3.6 - Added two keyword only argument to the reset_mock function. + Added two keyword-only arguments to the reset_mock function. This can be useful where you want to make a series of assertions that reuse the same object. Note that :meth:`reset_mock` *doesn't* clear the @@ -416,8 +416,8 @@ the *new_callable* argument to :func:`patch`. parameter as ``True``. Child mocks and the return value mock (if any) are reset as well. - .. note:: *return_value*, and :attr:`side_effect` are keyword only - argument. + .. note:: *return_value*, and :attr:`side_effect` are keyword-only + arguments. .. 
method:: mock_add_spec(spec, spec_set=False) From webhook-mailer at python.org Mon Feb 13 04:19:43 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 13 Feb 2023 09:19:43 -0000 Subject: [Python-checkins] Correct trivial grammar in reset_mock docs (GH-101861) Message-ID: https://github.com/python/cpython/commit/59852bbcc38ed4b5d6da7208e0e6b96456028523 commit: 59852bbcc38ed4b5d6da7208e0e6b96456028523 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-13T01:19:20-08:00 summary: Correct trivial grammar in reset_mock docs (GH-101861) (cherry picked from commit a1f08f5f19753c7c9295f51b5ae1262c7a1c838f) Co-authored-by: Steve Kowalik files: M Doc/library/unittest.mock.rst diff --git a/Doc/library/unittest.mock.rst b/Doc/library/unittest.mock.rst index 7947f097eeb1..3b5087a3fd3b 100644 --- a/Doc/library/unittest.mock.rst +++ b/Doc/library/unittest.mock.rst @@ -406,7 +406,7 @@ the *new_callable* argument to :func:`patch`. False .. versionchanged:: 3.6 - Added two keyword only argument to the reset_mock function. + Added two keyword-only arguments to the reset_mock function. This can be useful where you want to make a series of assertions that reuse the same object. Note that :meth:`reset_mock` *doesn't* clear the @@ -416,8 +416,8 @@ the *new_callable* argument to :func:`patch`. parameter as ``True``. Child mocks and the return value mock (if any) are reset as well. - .. note:: *return_value*, and :attr:`side_effect` are keyword only - argument. + .. note:: *return_value*, and :attr:`side_effect` are keyword-only + arguments. .. method:: mock_add_spec(spec, spec_set=False) From webhook-mailer at python.org Mon Feb 13 04:20:25 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 13 Feb 2023 09:20:25 -0000 Subject: [Python-checkins] Correct trivial grammar in reset_mock docs (GH-101861) Message-ID: https://github.com/python/cpython/commit/01b21c320b9dbd123f5848e33b4a9da952a11730 commit: 01b21c320b9dbd123f5848e33b4a9da952a11730 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-13T01:20:18-08:00 summary: Correct trivial grammar in reset_mock docs (GH-101861) (cherry picked from commit a1f08f5f19753c7c9295f51b5ae1262c7a1c838f) Co-authored-by: Steve Kowalik files: M Doc/library/unittest.mock.rst diff --git a/Doc/library/unittest.mock.rst b/Doc/library/unittest.mock.rst index b768557e6075..4f06e836d572 100644 --- a/Doc/library/unittest.mock.rst +++ b/Doc/library/unittest.mock.rst @@ -406,7 +406,7 @@ the *new_callable* argument to :func:`patch`. False .. versionchanged:: 3.6 - Added two keyword only argument to the reset_mock function. + Added two keyword-only arguments to the reset_mock function. This can be useful where you want to make a series of assertions that reuse the same object. Note that :meth:`reset_mock` *doesn't* clear the @@ -416,8 +416,8 @@ the *new_callable* argument to :func:`patch`. parameter as ``True``. Child mocks and the return value mock (if any) are reset as well. - .. note:: *return_value*, and :attr:`side_effect` are keyword only - argument. + .. note:: *return_value*, and :attr:`side_effect` are keyword-only + arguments. .. 
method:: mock_add_spec(spec, spec_set=False) From webhook-mailer at python.org Mon Feb 13 06:25:02 2023 From: webhook-mailer at python.org (markshannon) Date: Mon, 13 Feb 2023 11:25:02 -0000 Subject: [Python-checkins] GH-87849: Simplify stack effect of SEND and specialize it for generators and coroutines. (GH-101788) Message-ID: https://github.com/python/cpython/commit/160f2fe2b90ed5ec7838cb4141dd35768422891f commit: 160f2fe2b90ed5ec7838cb4141dd35768422891f branch: main author: Mark Shannon committer: markshannon date: 2023-02-13T11:24:55Z summary: GH-87849: Simplify stack effect of SEND and specialize it for generators and coroutines. (GH-101788) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-10-15-54-57.gh-issue-87849.IUVvPz.rst M Include/internal/pycore_code.h M Include/internal/pycore_opcode.h M Include/opcode.h M Lib/dis.py M Lib/importlib/_bootstrap_external.py M Lib/opcode.py M Lib/test/test_dis.py M Objects/frameobject.c M Python/bytecodes.c M Python/compile.c M Python/generated_cases.c.h M Python/opcode_metadata.h M Python/opcode_targets.h M Python/specialize.c diff --git a/Include/internal/pycore_code.h b/Include/internal/pycore_code.h index a287250acc19..10f1e320a12f 100644 --- a/Include/internal/pycore_code.h +++ b/Include/internal/pycore_code.h @@ -92,6 +92,12 @@ typedef struct { #define INLINE_CACHE_ENTRIES_FOR_ITER CACHE_ENTRIES(_PyForIterCache) +typedef struct { + uint16_t counter; +} _PySendCache; + +#define INLINE_CACHE_ENTRIES_SEND CACHE_ENTRIES(_PySendCache) + // Borrowed references to common callables: struct callable_cache { PyObject *isinstance; @@ -233,6 +239,7 @@ extern void _Py_Specialize_CompareAndBranch(PyObject *lhs, PyObject *rhs, extern void _Py_Specialize_UnpackSequence(PyObject *seq, _Py_CODEUNIT *instr, int oparg); extern void _Py_Specialize_ForIter(PyObject *iter, _Py_CODEUNIT *instr, int oparg); +extern void _Py_Specialize_Send(PyObject *receiver, _Py_CODEUNIT *instr); /* Finalizer function for static codeobjects used in deepfreeze.py */ extern void _PyStaticCode_Fini(PyCodeObject *co); diff --git a/Include/internal/pycore_opcode.h b/Include/internal/pycore_opcode.h index 47c847213351..5e65adee9e00 100644 --- a/Include/internal/pycore_opcode.h +++ b/Include/internal/pycore_opcode.h @@ -50,6 +50,7 @@ const uint8_t _PyOpcode_Caches[256] = { [COMPARE_OP] = 1, [LOAD_GLOBAL] = 5, [BINARY_OP] = 1, + [SEND] = 1, [COMPARE_AND_BRANCH] = 1, [CALL] = 4, }; @@ -196,6 +197,7 @@ const uint8_t _PyOpcode_Deopt[256] = { [RETURN_GENERATOR] = RETURN_GENERATOR, [RETURN_VALUE] = RETURN_VALUE, [SEND] = SEND, + [SEND_GEN] = SEND, [SETUP_ANNOTATIONS] = SETUP_ANNOTATIONS, [SET_ADD] = SET_ADD, [SET_UPDATE] = SET_UPDATE, @@ -395,7 +397,7 @@ static const char *const _PyOpcode_OpName[263] = { [SET_UPDATE] = "SET_UPDATE", [DICT_MERGE] = "DICT_MERGE", [DICT_UPDATE] = "DICT_UPDATE", - [166] = "<166>", + [SEND_GEN] = "SEND_GEN", [167] = "<167>", [168] = "<168>", [169] = "<169>", @@ -496,7 +498,6 @@ static const char *const _PyOpcode_OpName[263] = { #endif #define EXTRA_CASES \ - case 166: \ case 167: \ case 168: \ case 169: \ diff --git a/Include/opcode.h b/Include/opcode.h index 77ad7c22440d..d643741c3c3a 100644 --- a/Include/opcode.h +++ b/Include/opcode.h @@ -187,6 +187,7 @@ extern "C" { #define UNPACK_SEQUENCE_LIST 159 #define UNPACK_SEQUENCE_TUPLE 160 #define UNPACK_SEQUENCE_TWO_TUPLE 161 +#define SEND_GEN 166 #define DO_TRACING 255 #define HAS_ARG(op) ((((op) >= HAVE_ARGUMENT) && (!IS_PSEUDO_OPCODE(op)))\ diff --git a/Lib/dis.py b/Lib/dis.py index a6921008d9d0..9edde6ae8258 
100644 --- a/Lib/dis.py +++ b/Lib/dis.py @@ -39,6 +39,7 @@ BINARY_OP = opmap['BINARY_OP'] JUMP_BACKWARD = opmap['JUMP_BACKWARD'] FOR_ITER = opmap['FOR_ITER'] +SEND = opmap['SEND'] LOAD_ATTR = opmap['LOAD_ATTR'] CACHE = opmap["CACHE"] @@ -453,6 +454,7 @@ def _get_instructions_bytes(code, varname_from_oparg=None, argrepr = '' positions = Positions(*next(co_positions, ())) deop = _deoptop(op) + caches = _inline_cache_entries[deop] if arg is not None: # Set argval to the dereferenced value of the argument when # available, and argrepr to the string representation of argval. @@ -478,8 +480,7 @@ def _get_instructions_bytes(code, varname_from_oparg=None, elif deop in hasjrel: signed_arg = -arg if _is_backward_jump(deop) else arg argval = offset + 2 + signed_arg*2 - if deop == FOR_ITER: - argval += 2 + argval += 2 * caches argrepr = "to " + repr(argval) elif deop in haslocal or deop in hasfree: argval, argrepr = _get_name_info(arg, varname_from_oparg) @@ -633,12 +634,12 @@ def findlabels(code): for offset, op, arg in _unpack_opargs(code): if arg is not None: deop = _deoptop(op) + caches = _inline_cache_entries[deop] if deop in hasjrel: if _is_backward_jump(deop): arg = -arg label = offset + 2 + arg*2 - if deop == FOR_ITER: - label += 2 + label += 2 * caches elif deop in hasjabs: label = arg*2 else: diff --git a/Lib/importlib/_bootstrap_external.py b/Lib/importlib/_bootstrap_external.py index 933c8c7d7e05..38d4a384c2cc 100644 --- a/Lib/importlib/_bootstrap_external.py +++ b/Lib/importlib/_bootstrap_external.py @@ -432,6 +432,7 @@ def _write_atomic(path, data, mode=0o666): # Python 3.12a5 3516 (Add COMPARE_AND_BRANCH instruction) # Python 3.12a5 3517 (Change YIELD_VALUE oparg to exception block depth) # Python 3.12a5 3518 (Add RETURN_CONST instruction) +# Python 3.12a5 3519 (Modify SEND instruction) # Python 3.13 will start with 3550 @@ -444,7 +445,7 @@ def _write_atomic(path, data, mode=0o666): # Whenever MAGIC_NUMBER is changed, the ranges in the magic_values array # in PC/launcher.c must also be updated. 
-MAGIC_NUMBER = (3518).to_bytes(2, 'little') + b'\r\n' +MAGIC_NUMBER = (3519).to_bytes(2, 'little') + b'\r\n' _RAW_MAGIC_NUMBER = int.from_bytes(MAGIC_NUMBER, 'little') # For import.c diff --git a/Lib/opcode.py b/Lib/opcode.py index 5f163d2ccb80..b69cd1bbdd61 100644 --- a/Lib/opcode.py +++ b/Lib/opcode.py @@ -167,7 +167,7 @@ def pseudo_op(name, op, real_ops): def_op('RETURN_CONST', 121) hasconst.append(121) def_op('BINARY_OP', 122) -jrel_op('SEND', 123) # Number of bytes to skip +jrel_op('SEND', 123) # Number of words to skip def_op('LOAD_FAST', 124) # Local variable number, no null check haslocal.append(124) def_op('STORE_FAST', 125) # Local variable number @@ -370,6 +370,9 @@ def pseudo_op(name, op, real_ops): "UNPACK_SEQUENCE_TUPLE", "UNPACK_SEQUENCE_TWO_TUPLE", ], + "SEND": [ + "SEND_GEN", + ], } _specialized_instructions = [ opcode for family in _specializations.values() for opcode in family @@ -429,6 +432,9 @@ def pseudo_op(name, op, real_ops): "STORE_SUBSCR": { "counter": 1, }, + "SEND": { + "counter": 1, + }, } _inline_cache_entries = [ diff --git a/Lib/test/test_dis.py b/Lib/test/test_dis.py index 1050b15e16ea..9086824dd6f4 100644 --- a/Lib/test/test_dis.py +++ b/Lib/test/test_dis.py @@ -475,11 +475,13 @@ async def _asyncwith(c): BEFORE_ASYNC_WITH GET_AWAITABLE 1 LOAD_CONST 0 (None) - >> SEND 3 (to 22) + >> SEND 3 (to 24) YIELD_VALUE 2 RESUME 3 - JUMP_BACKWARD_NO_INTERRUPT 4 (to 14) - >> POP_TOP + JUMP_BACKWARD_NO_INTERRUPT 5 (to 14) + >> SWAP 2 + POP_TOP + POP_TOP %3d LOAD_CONST 1 (1) STORE_FAST 1 (x) @@ -490,30 +492,33 @@ async def _asyncwith(c): CALL 2 GET_AWAITABLE 2 LOAD_CONST 0 (None) - >> SEND 3 (to 56) + >> SEND 3 (to 64) YIELD_VALUE 2 RESUME 3 - JUMP_BACKWARD_NO_INTERRUPT 4 (to 48) + JUMP_BACKWARD_NO_INTERRUPT 5 (to 54) >> POP_TOP + POP_TOP %3d LOAD_CONST 2 (2) STORE_FAST 2 (y) RETURN_CONST 0 (None) %3d >> CLEANUP_THROW - JUMP_BACKWARD 23 (to 22) + JUMP_BACKWARD 27 (to 24) >> CLEANUP_THROW - JUMP_BACKWARD 8 (to 56) + JUMP_BACKWARD 9 (to 64) >> PUSH_EXC_INFO WITH_EXCEPT_START GET_AWAITABLE 2 LOAD_CONST 0 (None) - >> SEND 4 (to 90) + >> SEND 4 (to 102) YIELD_VALUE 3 RESUME 3 - JUMP_BACKWARD_NO_INTERRUPT 4 (to 80) + JUMP_BACKWARD_NO_INTERRUPT 5 (to 90) >> CLEANUP_THROW - >> POP_JUMP_IF_TRUE 1 (to 94) + >> SWAP 2 + POP_TOP + POP_JUMP_IF_TRUE 1 (to 110) RERAISE 2 >> POP_TOP POP_EXCEPT diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-10-15-54-57.gh-issue-87849.IUVvPz.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-10-15-54-57.gh-issue-87849.IUVvPz.rst new file mode 100644 index 000000000000..da5f3ff79fd5 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-10-15-54-57.gh-issue-87849.IUVvPz.rst @@ -0,0 +1,3 @@ +Change the ``SEND`` instruction to leave the receiver on the stack. This +allows the specialized form of ``SEND`` to skip the chain of C calls and jump +directly to the ``RESUME`` in the generator or coroutine. 
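As background for the SEND change (an illustrative aside, not part of the patch): SEND is the instruction that drives ``yield from`` and ``await`` delegation, forwarding a value into the sub-iterator on each pass of the delegation loop, and the new SEND_GEN form lets that loop jump straight into a suspended generator or coroutine frame. The instruction is easy to spot in the disassembly of any delegating generator:

import dis

def inner():
    yield 1
    yield 2

def delegate():
    # 'yield from' compiles to a SEND / YIELD_VALUE loop; with specialization
    # enabled the interpreter may rewrite SEND to SEND_GEN at run time when
    # the receiver is a Python generator or coroutine.
    yield from inner()

dis.dis(delegate)          # the SEND instruction appears in the output
print(list(delegate()))    # -> [1, 2]
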
diff --git a/Objects/frameobject.c b/Objects/frameobject.c index 0e52a3e2399c..581ed2d214c4 100644 --- a/Objects/frameobject.c +++ b/Objects/frameobject.c @@ -334,10 +334,10 @@ mark_stacks(PyCodeObject *code_obj, int len) break; } case SEND: - j = get_arg(code, i) + i + 1; + j = get_arg(code, i) + i + INLINE_CACHE_ENTRIES_SEND + 1; assert(j < len); - assert(stacks[j] == UNINITIALIZED || stacks[j] == pop_value(next_stack)); - stacks[j] = pop_value(next_stack); + assert(stacks[j] == UNINITIALIZED || stacks[j] == next_stack); + stacks[j] = next_stack; stacks[i+1] = next_stack; break; case JUMP_FORWARD: diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 2b9f12fefa14..429cd7fdafa1 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -680,51 +680,66 @@ dummy_func( PREDICT(LOAD_CONST); } - inst(SEND, (receiver, v -- receiver if (!jump), retval)) { + family(for_iter, INLINE_CACHE_ENTRIES_FOR_ITER) = { + SEND, + SEND_GEN, + }; + + inst(SEND, (unused/1, receiver, v -- receiver, retval)) { + #if ENABLE_SPECIALIZATION + _PySendCache *cache = (_PySendCache *)next_instr; + if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { + assert(cframe.use_tracing == 0); + next_instr--; + _Py_Specialize_Send(receiver, next_instr); + DISPATCH_SAME_OPARG(); + } + STAT_INC(SEND, deferred); + DECREMENT_ADAPTIVE_COUNTER(cache->counter); + #endif /* ENABLE_SPECIALIZATION */ assert(frame != &entry_frame); - bool jump = false; - PySendResult gen_status; - if (tstate->c_tracefunc == NULL) { - gen_status = PyIter_Send(receiver, v, &retval); - } else { - if (Py_IsNone(v) && PyIter_Check(receiver)) { - retval = Py_TYPE(receiver)->tp_iternext(receiver); - } - else { - retval = PyObject_CallMethodOneArg(receiver, &_Py_ID(send), v); - } - if (retval == NULL) { - if (tstate->c_tracefunc != NULL - && _PyErr_ExceptionMatches(tstate, PyExc_StopIteration)) - call_exc_trace(tstate->c_tracefunc, tstate->c_traceobj, tstate, frame); - if (_PyGen_FetchStopIterationValue(&retval) == 0) { - gen_status = PYGEN_RETURN; - } - else { - gen_status = PYGEN_ERROR; - } + if (Py_IsNone(v) && PyIter_Check(receiver)) { + retval = Py_TYPE(receiver)->tp_iternext(receiver); + } + else { + retval = PyObject_CallMethodOneArg(receiver, &_Py_ID(send), v); + } + if (retval == NULL) { + if (tstate->c_tracefunc != NULL + && _PyErr_ExceptionMatches(tstate, PyExc_StopIteration)) + call_exc_trace(tstate->c_tracefunc, tstate->c_traceobj, tstate, frame); + if (_PyGen_FetchStopIterationValue(&retval) == 0) { + assert(retval != NULL); + JUMPBY(oparg); } else { - gen_status = PYGEN_NEXT; + assert(retval == NULL); + goto error; } } - if (gen_status == PYGEN_ERROR) { - assert(retval == NULL); - goto error; - } - Py_DECREF(v); - if (gen_status == PYGEN_RETURN) { - assert(retval != NULL); - Py_DECREF(receiver); - JUMPBY(oparg); - jump = true; - } else { - assert(gen_status == PYGEN_NEXT); assert(retval != NULL); } } + inst(SEND_GEN, (unused/1, receiver, v -- receiver)) { + assert(cframe.use_tracing == 0); + PyGenObject *gen = (PyGenObject *)receiver; + DEOPT_IF(Py_TYPE(gen) != &PyGen_Type && + Py_TYPE(gen) != &PyCoro_Type, SEND); + DEOPT_IF(gen->gi_frame_state >= FRAME_EXECUTING, SEND); + STAT_INC(SEND, hit); + _PyInterpreterFrame *gen_frame = (_PyInterpreterFrame *)gen->gi_iframe; + frame->yield_offset = oparg; + STACK_SHRINK(1); + _PyFrame_StackPush(gen_frame, v); + gen->gi_frame_state = FRAME_EXECUTING; + gen->gi_exc_state.previous_item = tstate->exc_info; + tstate->exc_info = &gen->gi_exc_state; + JUMPBY(INLINE_CACHE_ENTRIES_SEND + oparg); + 
DISPATCH_INLINED(gen_frame); + } + inst(YIELD_VALUE, (retval -- unused)) { // NOTE: It's important that YIELD_VALUE never raises an exception! // The compiler treats any exception raised here as a failed close() @@ -796,12 +811,13 @@ dummy_func( } } - inst(CLEANUP_THROW, (sub_iter, last_sent_val, exc_value -- value)) { + inst(CLEANUP_THROW, (sub_iter, last_sent_val, exc_value -- none, value)) { assert(throwflag); assert(exc_value && PyExceptionInstance_Check(exc_value)); if (PyErr_GivenExceptionMatches(exc_value, PyExc_StopIteration)) { value = Py_NewRef(((PyStopIterationObject *)exc_value)->value); DECREF_INPUTS(); + none = Py_NewRef(Py_None); } else { _PyErr_SetRaisedException(tstate, Py_NewRef(exc_value)); diff --git a/Python/compile.c b/Python/compile.c index a3c915c3c14a..b49eda314eee 100644 --- a/Python/compile.c +++ b/Python/compile.c @@ -1789,6 +1789,8 @@ compiler_add_yield_from(struct compiler *c, location loc, int await) ADDOP(c, loc, CLEANUP_THROW); USE_LABEL(c, exit); + ADDOP_I(c, loc, SWAP, 2); + ADDOP(c, loc, POP_TOP); return SUCCESS; } diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index a224d4eb8927..093ebff026b5 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -882,57 +882,69 @@ } TARGET(SEND) { + PREDICTED(SEND); PyObject *v = PEEK(1); PyObject *receiver = PEEK(2); PyObject *retval; + #if ENABLE_SPECIALIZATION + _PySendCache *cache = (_PySendCache *)next_instr; + if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { + assert(cframe.use_tracing == 0); + next_instr--; + _Py_Specialize_Send(receiver, next_instr); + DISPATCH_SAME_OPARG(); + } + STAT_INC(SEND, deferred); + DECREMENT_ADAPTIVE_COUNTER(cache->counter); + #endif /* ENABLE_SPECIALIZATION */ assert(frame != &entry_frame); - bool jump = false; - PySendResult gen_status; - if (tstate->c_tracefunc == NULL) { - gen_status = PyIter_Send(receiver, v, &retval); - } else { - if (Py_IsNone(v) && PyIter_Check(receiver)) { - retval = Py_TYPE(receiver)->tp_iternext(receiver); - } - else { - retval = PyObject_CallMethodOneArg(receiver, &_Py_ID(send), v); - } - if (retval == NULL) { - if (tstate->c_tracefunc != NULL - && _PyErr_ExceptionMatches(tstate, PyExc_StopIteration)) - call_exc_trace(tstate->c_tracefunc, tstate->c_traceobj, tstate, frame); - if (_PyGen_FetchStopIterationValue(&retval) == 0) { - gen_status = PYGEN_RETURN; - } - else { - gen_status = PYGEN_ERROR; - } + if (Py_IsNone(v) && PyIter_Check(receiver)) { + retval = Py_TYPE(receiver)->tp_iternext(receiver); + } + else { + retval = PyObject_CallMethodOneArg(receiver, &_Py_ID(send), v); + } + if (retval == NULL) { + if (tstate->c_tracefunc != NULL + && _PyErr_ExceptionMatches(tstate, PyExc_StopIteration)) + call_exc_trace(tstate->c_tracefunc, tstate->c_traceobj, tstate, frame); + if (_PyGen_FetchStopIterationValue(&retval) == 0) { + assert(retval != NULL); + JUMPBY(oparg); } else { - gen_status = PYGEN_NEXT; + assert(retval == NULL); + goto error; } } - if (gen_status == PYGEN_ERROR) { - assert(retval == NULL); - goto error; - } - Py_DECREF(v); - if (gen_status == PYGEN_RETURN) { - assert(retval != NULL); - Py_DECREF(receiver); - JUMPBY(oparg); - jump = true; - } else { - assert(gen_status == PYGEN_NEXT); assert(retval != NULL); } - STACK_SHRINK(1); - STACK_GROW(((!jump) ? 
1 : 0)); POKE(1, retval); + JUMPBY(1); DISPATCH(); } + TARGET(SEND_GEN) { + PyObject *v = PEEK(1); + PyObject *receiver = PEEK(2); + assert(cframe.use_tracing == 0); + PyGenObject *gen = (PyGenObject *)receiver; + DEOPT_IF(Py_TYPE(gen) != &PyGen_Type && + Py_TYPE(gen) != &PyCoro_Type, SEND); + DEOPT_IF(gen->gi_frame_state >= FRAME_EXECUTING, SEND); + STAT_INC(SEND, hit); + _PyInterpreterFrame *gen_frame = (_PyInterpreterFrame *)gen->gi_iframe; + frame->yield_offset = oparg; + STACK_SHRINK(1); + _PyFrame_StackPush(gen_frame, v); + gen->gi_frame_state = FRAME_EXECUTING; + gen->gi_exc_state.previous_item = tstate->exc_info; + tstate->exc_info = &gen->gi_exc_state; + JUMPBY(INLINE_CACHE_ENTRIES_SEND + oparg); + DISPATCH_INLINED(gen_frame); + } + TARGET(YIELD_VALUE) { PyObject *retval = PEEK(1); // NOTE: It's important that YIELD_VALUE never raises an exception! @@ -1026,6 +1038,7 @@ PyObject *exc_value = PEEK(1); PyObject *last_sent_val = PEEK(2); PyObject *sub_iter = PEEK(3); + PyObject *none; PyObject *value; assert(throwflag); assert(exc_value && PyExceptionInstance_Check(exc_value)); @@ -1034,13 +1047,15 @@ Py_DECREF(sub_iter); Py_DECREF(last_sent_val); Py_DECREF(exc_value); + none = Py_NewRef(Py_None); } else { _PyErr_SetRaisedException(tstate, Py_NewRef(exc_value)); goto exception_unwind; } - STACK_SHRINK(2); + STACK_SHRINK(1); POKE(1, value); + POKE(2, none); DISPATCH(); } diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index db1dfd37a901..d622eb12c8cb 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -104,6 +104,8 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { return 1; case SEND: return 2; + case SEND_GEN: + return 2; case YIELD_VALUE: return 1; case POP_EXCEPT: @@ -453,7 +455,9 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case GET_AWAITABLE: return 1; case SEND: - return ((!jump) ? 
1 : 0) + 1; + return 2; + case SEND_GEN: + return 1; case YIELD_VALUE: return 1; case POP_EXCEPT: @@ -465,7 +469,7 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { case END_ASYNC_FOR: return 0; case CLEANUP_THROW: - return 1; + return 2; case LOAD_ASSERTION_ERROR: return 1; case LOAD_BUILD_CLASS: @@ -763,7 +767,8 @@ const struct opcode_metadata _PyOpcode_opcode_metadata[256] = { [GET_AITER] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [GET_ANEXT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [GET_AWAITABLE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [SEND] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, + [SEND] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, + [SEND_GEN] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IBC }, [YIELD_VALUE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [POP_EXCEPT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [RERAISE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, diff --git a/Python/opcode_targets.h b/Python/opcode_targets.h index eceb246fac49..301ec6e005da 100644 --- a/Python/opcode_targets.h +++ b/Python/opcode_targets.h @@ -165,7 +165,7 @@ static void *opcode_targets[256] = { &&TARGET_SET_UPDATE, &&TARGET_DICT_MERGE, &&TARGET_DICT_UPDATE, - &&_unknown_opcode, + &&TARGET_SEND_GEN, &&_unknown_opcode, &&_unknown_opcode, &&_unknown_opcode, diff --git a/Python/specialize.c b/Python/specialize.c index 908ad6dceb57..4ede3122d380 100644 --- a/Python/specialize.c +++ b/Python/specialize.c @@ -128,6 +128,7 @@ print_spec_stats(FILE *out, OpcodeStats *stats) fprintf(out, "opcode[%d].specializable : 1\n", BINARY_SLICE); fprintf(out, "opcode[%d].specializable : 1\n", COMPARE_OP); fprintf(out, "opcode[%d].specializable : 1\n", STORE_SLICE); + fprintf(out, "opcode[%d].specializable : 1\n", SEND); for (int i = 0; i < 256; i++) { if (_PyOpcode_Caches[i]) { fprintf(out, "opcode[%d].specializable : 1\n", i); @@ -1084,7 +1085,7 @@ PyObject *descr, DescriptorClassification kind) if (dict) { SPECIALIZATION_FAIL(LOAD_ATTR, SPEC_FAIL_ATTR_NOT_MANAGED_DICT); return 0; - } + } assert(owner_cls->tp_dictoffset > 0); assert(owner_cls->tp_dictoffset <= INT16_MAX); _py_set_opcode(instr, LOAD_ATTR_METHOD_LAZY_DICT); @@ -2183,3 +2184,25 @@ _Py_Specialize_ForIter(PyObject *iter, _Py_CODEUNIT *instr, int oparg) STAT_INC(FOR_ITER, success); cache->counter = adaptive_counter_cooldown(); } + +void +_Py_Specialize_Send(PyObject *receiver, _Py_CODEUNIT *instr) +{ + assert(ENABLE_SPECIALIZATION); + assert(_PyOpcode_Caches[SEND] == INLINE_CACHE_ENTRIES_SEND); + _PySendCache *cache = (_PySendCache *)(instr + 1); + PyTypeObject *tp = Py_TYPE(receiver); + if (tp == &PyGen_Type || tp == &PyCoro_Type) { + _py_set_opcode(instr, SEND_GEN); + goto success; + } + SPECIALIZATION_FAIL(SEND, + _PySpecialization_ClassifyIterator(receiver)); + STAT_INC(SEND, failure); + _py_set_opcode(instr, SEND); + cache->counter = adaptive_counter_backoff(cache->counter); + return; +success: + STAT_INC(SEND, success); + cache->counter = adaptive_counter_cooldown(); +} From webhook-mailer at python.org Mon Feb 13 06:31:23 2023 From: webhook-mailer at python.org (markshannon) Date: Mon, 13 Feb 2023 11:31:23 -0000 Subject: [Python-checkins] GH-100987: Refactor `_PyInterpreterFrame` a bit, to assist generator improvement. 
(GH-100988) Message-ID: https://github.com/python/cpython/commit/d9199175c7386a95aaac91822a2197b9365eb0e8 commit: d9199175c7386a95aaac91822a2197b9365eb0e8 branch: main author: Mark Shannon committer: markshannon date: 2023-02-13T11:31:15Z summary: GH-100987: Refactor `_PyInterpreterFrame` a bit, to assist generator improvement. (GH-100988) Refactor _PyInterpreterFrame a bit, to assist generator improvement. files: M Include/internal/pycore_frame.h diff --git a/Include/internal/pycore_frame.h b/Include/internal/pycore_frame.h index f12b225ebfcc..81d16b219c30 100644 --- a/Include/internal/pycore_frame.h +++ b/Include/internal/pycore_frame.h @@ -47,15 +47,13 @@ enum _frameowner { }; typedef struct _PyInterpreterFrame { - /* "Specials" section */ + PyCodeObject *f_code; /* Strong reference */ + struct _PyInterpreterFrame *previous; PyObject *f_funcobj; /* Strong reference. Only valid if not on C stack */ PyObject *f_globals; /* Borrowed reference. Only valid if not on C stack */ PyObject *f_builtins; /* Borrowed reference. Only valid if not on C stack */ PyObject *f_locals; /* Strong reference, may be NULL. Only valid if not on C stack */ - PyCodeObject *f_code; /* Strong reference */ PyFrameObject *frame_obj; /* Strong reference, may be NULL. Only valid if not on C stack */ - /* Linkage section */ - struct _PyInterpreterFrame *previous; // NOTE: This is not necessarily the last instruction started in the given // frame. Rather, it is the code unit *prior to* the *next* instruction. For // example, it may be an inline CACHE entry, an instruction we just jumped From webhook-mailer at python.org Mon Feb 13 07:37:11 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Mon, 13 Feb 2023 12:37:11 -0000 Subject: [Python-checkins] gh-92547: Purge sqlite3_enable_shared_cache() detection from configure (#101873) Message-ID: https://github.com/python/cpython/commit/2db2c4b45501eebef5b3ff89118554bd5eb92ed4 commit: 2db2c4b45501eebef5b3ff89118554bd5eb92ed4 branch: main author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-13T13:36:42+01:00 summary: gh-92547: Purge sqlite3_enable_shared_cache() detection from configure (#101873) files: M configure M configure.ac diff --git a/configure b/configure index 97694c602d1c..35088f9e5caf 100755 --- a/configure +++ b/configure @@ -13689,57 +13689,6 @@ fi - { $as_echo "$as_me:${as_lineno-$LINENO}: checking for sqlite3_enable_shared_cache in -lsqlite3" >&5 -$as_echo_n "checking for sqlite3_enable_shared_cache in -lsqlite3... " >&6; } -if ${ac_cv_lib_sqlite3_sqlite3_enable_shared_cache+:} false; then : - $as_echo_n "(cached) " >&6 -else - ac_check_lib_save_LIBS=$LIBS -LIBS="-lsqlite3 $LIBS" -cat confdefs.h - <<_ACEOF >conftest.$ac_ext -/* end confdefs.h. */ - -/* Override any GCC internal prototype to avoid an error. - Use char because int might match the return type of a GCC - builtin and then its argument prototype would still apply. 
*/ -#ifdef __cplusplus -extern "C" -#endif -char sqlite3_enable_shared_cache (); -int -main () -{ -return sqlite3_enable_shared_cache (); - ; - return 0; -} -_ACEOF -if ac_fn_c_try_link "$LINENO"; then : - ac_cv_lib_sqlite3_sqlite3_enable_shared_cache=yes -else - ac_cv_lib_sqlite3_sqlite3_enable_shared_cache=no -fi -rm -f core conftest.err conftest.$ac_objext \ - conftest$ac_exeext conftest.$ac_ext -LIBS=$ac_check_lib_save_LIBS -fi -{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_sqlite3_sqlite3_enable_shared_cache" >&5 -$as_echo "$ac_cv_lib_sqlite3_sqlite3_enable_shared_cache" >&6; } -if test "x$ac_cv_lib_sqlite3_sqlite3_enable_shared_cache" = xyes; then : - cat >>confdefs.h <<_ACEOF -#define HAVE_LIBSQLITE3 1 -_ACEOF - - LIBS="-lsqlite3 $LIBS" - -else - - have_supported_sqlite3=no - -fi - - - { $as_echo "$as_me:${as_lineno-$LINENO}: checking for sqlite3_progress_handler in -lsqlite3" >&5 $as_echo_n "checking for sqlite3_progress_handler in -lsqlite3... " >&6; } if ${ac_cv_lib_sqlite3_sqlite3_progress_handler+:} false; then : diff --git a/configure.ac b/configure.ac index 09369b985b33..1ab48e0d1c16 100644 --- a/configure.ac +++ b/configure.ac @@ -3867,7 +3867,6 @@ dnl hence CPPFLAGS instead of CFLAGS. PY_CHECK_SQLITE_FUNC([sqlite3_column_decltype]) PY_CHECK_SQLITE_FUNC([sqlite3_column_double]) PY_CHECK_SQLITE_FUNC([sqlite3_complete]) - PY_CHECK_SQLITE_FUNC([sqlite3_enable_shared_cache]) PY_CHECK_SQLITE_FUNC([sqlite3_progress_handler]) PY_CHECK_SQLITE_FUNC([sqlite3_result_double]) PY_CHECK_SQLITE_FUNC([sqlite3_set_authorizer]) From webhook-mailer at python.org Mon Feb 13 08:49:52 2023 From: webhook-mailer at python.org (zooba) Date: Mon, 13 Feb 2023 13:49:52 -0000 Subject: [Python-checkins] gh-101810: Remove duplicated st_ino calculation (GH-101811) Message-ID: https://github.com/python/cpython/commit/95cbb3d908175ccd855078b3fab7f99e7d0bca88 commit: 95cbb3d908175ccd855078b3fab7f99e7d0bca88 branch: main author: James Lee <49257044+juria90 at users.noreply.github.com> committer: zooba date: 2023-02-13T13:49:44Z summary: gh-101810: Remove duplicated st_ino calculation (GH-101811) files: M Python/fileutils.c diff --git a/Python/fileutils.c b/Python/fileutils.c index 244bd899b3bd..22b2257a56d0 100644 --- a/Python/fileutils.c +++ b/Python/fileutils.c @@ -1162,8 +1162,6 @@ _Py_fstat_noraise(int fd, struct _Py_stat_struct *status) } _Py_attribute_data_to_stat(&info, 0, status); - /* specific to fstat() */ - status->st_ino = (((uint64_t)info.nFileIndexHigh) << 32) + info.nFileIndexLow; return 0; #else return fstat(fd, status); From webhook-mailer at python.org Mon Feb 13 15:33:57 2023 From: webhook-mailer at python.org (zooba) Date: Mon, 13 Feb 2023 20:33:57 -0000 Subject: [Python-checkins] gh-101849: Add upgrade codes for old versions of launcher that ended up with later version numbers (GH-101877) Message-ID: https://github.com/python/cpython/commit/0c6fe81dce9d6bb1dce5e4503f1b42bc5355ba24 commit: 0c6fe81dce9d6bb1dce5e4503f1b42bc5355ba24 branch: main author: Steve Dower committer: zooba date: 2023-02-13T20:33:48Z summary: gh-101849: Add upgrade codes for old versions of launcher that ended up with later version numbers (GH-101877) files: A Misc/NEWS.d/next/Windows/2023-02-13-16-32-50.gh-issue-101849.7lm_53.rst M Tools/msi/common.wxs M Tools/msi/launcher/launcher.wxs diff --git a/Misc/NEWS.d/next/Windows/2023-02-13-16-32-50.gh-issue-101849.7lm_53.rst b/Misc/NEWS.d/next/Windows/2023-02-13-16-32-50.gh-issue-101849.7lm_53.rst new file mode 100644 index 000000000000..861d4de9f9a6 --- 
/dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-13-16-32-50.gh-issue-101849.7lm_53.rst @@ -0,0 +1 @@ +Ensures installer will correctly upgrade existing ``py.exe`` launcher installs. diff --git a/Tools/msi/common.wxs b/Tools/msi/common.wxs index 55cb44860d02..54fa749ab17c 100644 --- a/Tools/msi/common.wxs +++ b/Tools/msi/common.wxs @@ -25,7 +25,6 @@ - @@ -42,6 +41,7 @@ UPGRADE + diff --git a/Tools/msi/launcher/launcher.wxs b/Tools/msi/launcher/launcher.wxs index b83058c63bf6..49f1f7b8c176 100644 --- a/Tools/msi/launcher/launcher.wxs +++ b/Tools/msi/launcher/launcher.wxs @@ -34,13 +34,34 @@ NOT Installed AND NOT ALLUSERS=1 NOT Installed AND ALLUSERS=1 + + UPGRADE or REMOVE_350_LAUNCHER or REMOVE_360A1_LAUNCHER or UPGRADE_3_11_0 or UPGRADE_3_11_1 + UPGRADE or REMOVE_350_LAUNCHER or REMOVE_360A1_LAUNCHER + + + Installed OR NOT DOWNGRADE OR UPGRADE_3_11_0 OR UPGRADE_3_11_1 + + Installed OR NOT DOWNGRADE + + + + + + + From webhook-mailer at python.org Mon Feb 13 15:59:19 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 13 Feb 2023 20:59:19 -0000 Subject: [Python-checkins] gh-101849: Add upgrade codes for old versions of launcher that ended up with later version numbers (GH-101877) Message-ID: https://github.com/python/cpython/commit/fd155b91392c5bd238781bb11e040cf46c94725c commit: fd155b91392c5bd238781bb11e040cf46c94725c branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-13T12:59:12-08:00 summary: gh-101849: Add upgrade codes for old versions of launcher that ended up with later version numbers (GH-101877) (cherry picked from commit 0c6fe81dce9d6bb1dce5e4503f1b42bc5355ba24) Co-authored-by: Steve Dower files: A Misc/NEWS.d/next/Windows/2023-02-13-16-32-50.gh-issue-101849.7lm_53.rst M Tools/msi/common.wxs M Tools/msi/launcher/launcher.wxs diff --git a/Misc/NEWS.d/next/Windows/2023-02-13-16-32-50.gh-issue-101849.7lm_53.rst b/Misc/NEWS.d/next/Windows/2023-02-13-16-32-50.gh-issue-101849.7lm_53.rst new file mode 100644 index 000000000000..861d4de9f9a6 --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-13-16-32-50.gh-issue-101849.7lm_53.rst @@ -0,0 +1 @@ +Ensures installer will correctly upgrade existing ``py.exe`` launcher installs. 
diff --git a/Tools/msi/common.wxs b/Tools/msi/common.wxs index b819d320ee94..0d61c1e61be7 100644 --- a/Tools/msi/common.wxs +++ b/Tools/msi/common.wxs @@ -25,7 +25,6 @@ - @@ -42,6 +41,7 @@ UPGRADE + diff --git a/Tools/msi/launcher/launcher.wxs b/Tools/msi/launcher/launcher.wxs index b83058c63bf6..49f1f7b8c176 100644 --- a/Tools/msi/launcher/launcher.wxs +++ b/Tools/msi/launcher/launcher.wxs @@ -34,13 +34,34 @@ NOT Installed AND NOT ALLUSERS=1 NOT Installed AND ALLUSERS=1 + + UPGRADE or REMOVE_350_LAUNCHER or REMOVE_360A1_LAUNCHER or UPGRADE_3_11_0 or UPGRADE_3_11_1 + UPGRADE or REMOVE_350_LAUNCHER or REMOVE_360A1_LAUNCHER + + + Installed OR NOT DOWNGRADE OR UPGRADE_3_11_0 OR UPGRADE_3_11_1 + + Installed OR NOT DOWNGRADE + + + + + + + From webhook-mailer at python.org Mon Feb 13 20:38:10 2023 From: webhook-mailer at python.org (gpshead) Date: Tue, 14 Feb 2023 01:38:10 -0000 Subject: [Python-checkins] gh-74895: getaddrinfo no longer raises OverflowError (#2435) Message-ID: https://github.com/python/cpython/commit/928752ce4c23f47d3175dd47ecacf08d86a99c9d commit: 928752ce4c23f47d3175dd47ecacf08d86a99c9d branch: main author: Radek Smejkal committer: gpshead date: 2023-02-13T17:37:34-08:00 summary: gh-74895: getaddrinfo no longer raises OverflowError (#2435) `socket.getaddrinfo()` no longer raises `OverflowError` based on the **port** argument. Error reporting (or not) for its value is left up to the underlying C library `getaddrinfo()` implementation. files: A Misc/NEWS.d/next/Core and Builtins/2023-02-13-22-21-58.gh-issue-74895.esMNtq.rst M Lib/test/test_socket.py M Misc/ACKS M Modules/getaddrinfo.c M Modules/socketmodule.c diff --git a/Lib/test/test_socket.py b/Lib/test/test_socket.py index f1b4018c265e..a313da29b4a4 100644 --- a/Lib/test/test_socket.py +++ b/Lib/test/test_socket.py @@ -1600,6 +1600,54 @@ def testGetaddrinfo(self): except socket.gaierror: pass + def test_getaddrinfo_int_port_overflow(self): + # gh-74895: Test that getaddrinfo does not raise OverflowError on port. + # + # POSIX getaddrinfo() never specify the valid range for "service" + # decimal port number values. For IPv4 and IPv6 they are technically + # unsigned 16-bit values, but the API is protocol agnostic. Which values + # trigger an error from the C library function varies by platform as + # they do not all perform validation. + + # The key here is that we don't want to produce OverflowError as Python + # prior to 3.12 did for ints outside of a [LONG_MIN, LONG_MAX] range. + # Leave the error up to the underlying string based platform C API. + + from _testcapi import ULONG_MAX, LONG_MAX, LONG_MIN + try: + socket.getaddrinfo(None, ULONG_MAX + 1) + except OverflowError: + # Platforms differ as to what values consitute a getaddrinfo() error + # return. Some fail for LONG_MAX+1, others ULONG_MAX+1, and Windows + # silently accepts such huge "port" aka "service" numeric values. + self.fail("Either no error or socket.gaierror expected.") + except socket.gaierror: + pass + + try: + socket.getaddrinfo(None, LONG_MAX + 1) + except OverflowError: + self.fail("Either no error or socket.gaierror expected.") + except socket.gaierror: + pass + + try: + socket.getaddrinfo(None, LONG_MAX - 0xffff + 1) + except OverflowError: + self.fail("Either no error or socket.gaierror expected.") + except socket.gaierror: + pass + + try: + socket.getaddrinfo(None, LONG_MIN - 1) + except OverflowError: + self.fail("Either no error or socket.gaierror expected.") + except socket.gaierror: + pass + + socket.getaddrinfo(None, 0) # No error expected. 
+ socket.getaddrinfo(None, 0xffff) # No error expected. + def test_getnameinfo(self): # only IP addresses are allowed self.assertRaises(OSError, socket.getnameinfo, ('mail.python.org',0), 0) diff --git a/Misc/ACKS b/Misc/ACKS index e12cbea0ebd6..ca92608868f2 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1688,6 +1688,7 @@ Roman Skurikhin Ville Skytt? Michael Sloan Nick Sloan +Radek Smejkal V?clav ?milauer Casper W. Smet Allen W. Smith diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-13-22-21-58.gh-issue-74895.esMNtq.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-13-22-21-58.gh-issue-74895.esMNtq.rst new file mode 100644 index 000000000000..adbbb601634a --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-13-22-21-58.gh-issue-74895.esMNtq.rst @@ -0,0 +1,5 @@ +:mod:`socket.getaddrinfo` no longer raises :class:`OverflowError` for +:class:`int` **port** values outside of the C long range. Out of range values +are left up to the underlying string based C library API to report. A +:class:`socket.gaierror` ``SAI_SERVICE`` may occur instead, or no error at all +as not all platform C libraries generate an error. diff --git a/Modules/getaddrinfo.c b/Modules/getaddrinfo.c index 0b4620ed683d..f1c28d7d9312 100644 --- a/Modules/getaddrinfo.c +++ b/Modules/getaddrinfo.c @@ -342,7 +342,11 @@ getaddrinfo(const char*hostname, const char*servname, pai->ai_socktype = SOCK_DGRAM; pai->ai_protocol = IPPROTO_UDP; } - port = htons((u_short)atoi(servname)); + long maybe_port = strtol(servname, NULL, 10); + if (maybe_port < 0 || maybe_port > 0xffff) { + ERR(EAI_SERVICE); + } + port = htons((u_short)maybe_port); } else { struct servent *sp; const char *proto; diff --git a/Modules/socketmodule.c b/Modules/socketmodule.c index 0a9e46512b15..2d300f19436b 100644 --- a/Modules/socketmodule.c +++ b/Modules/socketmodule.c @@ -6650,7 +6650,7 @@ socket_getaddrinfo(PyObject *self, PyObject *args, PyObject* kwargs) struct addrinfo *res0 = NULL; PyObject *hobj = NULL; PyObject *pobj = (PyObject *)NULL; - char pbuf[30]; + PyObject *pstr = NULL; const char *hptr, *pptr; int family, socktype, protocol, flags; int error; @@ -6680,11 +6680,13 @@ socket_getaddrinfo(PyObject *self, PyObject *args, PyObject* kwargs) return NULL; } if (PyLong_CheckExact(pobj)) { - long value = PyLong_AsLong(pobj); - if (value == -1 && PyErr_Occurred()) + pstr = PyObject_Str(pobj); + if (pstr == NULL) + goto err; + assert(PyUnicode_Check(pstr)); + pptr = PyUnicode_AsUTF8(pstr); + if (pptr == NULL) goto err; - PyOS_snprintf(pbuf, sizeof(pbuf), "%ld", value); - pptr = pbuf; } else if (PyUnicode_Check(pobj)) { pptr = PyUnicode_AsUTF8(pobj); if (pptr == NULL) @@ -6750,12 +6752,14 @@ socket_getaddrinfo(PyObject *self, PyObject *args, PyObject* kwargs) Py_DECREF(single); } Py_XDECREF(idna); + Py_XDECREF(pstr); if (res0) freeaddrinfo(res0); return all; err: Py_XDECREF(all); Py_XDECREF(idna); + Py_XDECREF(pstr); if (res0) freeaddrinfo(res0); return (PyObject *)NULL; From webhook-mailer at python.org Tue Feb 14 02:22:12 2023 From: webhook-mailer at python.org (gpshead) Date: Tue, 14 Feb 2023 07:22:12 -0000 Subject: [Python-checkins] gh-101857: Allow xattr detection on musl libc (#101858) Message-ID: https://github.com/python/cpython/commit/8be8101bca34b60481ec3d7ecaea4a3379fb7dbb commit: 8be8101bca34b60481ec3d7ecaea4a3379fb7dbb branch: main author: Sam James committer: gpshead date: 2023-02-13T23:21:58-08:00 summary: gh-101857: Allow xattr detection on musl libc (#101858) Previously, we checked exclusively for `__GLIBC__` (AND'd with some other 
conditions). Checking for `__linux__` instead should be fine. This fixes using e.g. `os.listxattr()` on systems using musl libc. Bug: https://bugs.gentoo.org/894130 Co-authored-by: Gregory P. Smith files: A Misc/NEWS.d/next/Core and Builtins/2023-02-12-22-40-22.gh-issue-101857._bribG.rst M Modules/posixmodule.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-12-22-40-22.gh-issue-101857._bribG.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-12-22-40-22.gh-issue-101857._bribG.rst new file mode 100644 index 000000000000..832cc300fa94 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-12-22-40-22.gh-issue-101857._bribG.rst @@ -0,0 +1 @@ +Fix xattr support detection on Linux systems by widening the check to linux, not just glibc. This fixes support for musl. diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c index cba6cea48b77..d9e93473aead 100644 --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -274,8 +274,9 @@ corresponding Unix manual entries for more information on calls."); # undef HAVE_SCHED_SETAFFINITY #endif -#if defined(HAVE_SYS_XATTR_H) && defined(__GLIBC__) && !defined(__FreeBSD_kernel__) && !defined(__GNU__) +#if defined(HAVE_SYS_XATTR_H) && defined(__linux__) && !defined(__FreeBSD_kernel__) && !defined(__GNU__) # define USE_XATTRS +# include // Needed for XATTR_SIZE_MAX on musl libc. #endif #ifdef USE_XATTRS From webhook-mailer at python.org Tue Feb 14 04:25:24 2023 From: webhook-mailer at python.org (gpshead) Date: Tue, 14 Feb 2023 09:25:24 -0000 Subject: [Python-checkins] gh-99108: Import SHA2-384/512 from HACL* (#101707) Message-ID: https://github.com/python/cpython/commit/e5da9ab2c82c6b4e4f8ffa699a9a609ea1bea255 commit: e5da9ab2c82c6b4e4f8ffa699a9a609ea1bea255 branch: main author: Jonathan Protzenko committer: gpshead date: 2023-02-14T01:25:16-08:00 summary: gh-99108: Import SHA2-384/512 from HACL* (#101707) Replace the builtin hashlib implementations of SHA2-384 and SHA2-512 originally from LibTomCrypt with formally verified, side-channel resistant code from the [HACL*](https://github.com/hacl-star/hacl-star/) project. The builtins remain a fallback only used when OpenSSL does not provide them. 
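At the Python level this swap is meant to be transparent: hashlib keeps the same constructors and streaming interface whether a digest comes from OpenSSL or from the HACL*-based builtin fallback. A minimal sanity-check sketch, assuming the optional _sha512 accelerator module was compiled in (it may be absent on some builds), could look like this:

import hashlib

# Streaming updates and one-shot hashing must agree, whichever backend is in use.
h = hashlib.sha512()
for chunk in (b"abc", b"def"):
    h.update(chunk)
assert h.digest() == hashlib.sha512(b"abcdef").digest()

# The builtin fallback exposes the same constructors as the OpenSSL-backed ones.
try:
    import _sha512   # present only when the optional builtin module was built
except ImportError:
    pass
else:
    assert _sha512.sha512(b"abcdef").digest() == hashlib.sha512(b"abcdef").digest()
    assert _sha512.sha384(b"abcdef").digest() == hashlib.sha384(b"abcdef").digest()
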
files: A Misc/NEWS.d/next/Security/2023-02-08-12-57-35.gh-issue-99108.6tnmhA.rst A Modules/_hacl/include/krml/FStar_UInt128_Verified.h A Modules/_hacl/include/krml/fstar_uint128_struct_endianness.h A Modules/_hacl/include/krml/types.h M Makefile.pre.in M Modules/Setup.stdlib.in M Modules/_hacl/Hacl_Streaming_SHA2.c M Modules/_hacl/Hacl_Streaming_SHA2.h M Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h M Modules/_hacl/include/python_hacl_namespaces.h M Modules/_hacl/internal/Hacl_SHA2_Generic.h M Modules/_hacl/refresh.sh M Modules/sha256module.c M Modules/sha512module.c M configure M configure.ac diff --git a/Makefile.pre.in b/Makefile.pre.in index 7a84b953d979..d42d4d8a3c1c 100644 --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -2608,7 +2608,7 @@ MODULE__MD5_DEPS=$(srcdir)/Modules/hashlib.h MODULE__SHA1_DEPS=$(srcdir)/Modules/hashlib.h MODULE__SHA256_DEPS=$(srcdir)/Modules/hashlib.h $(srcdir)/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h $(srcdir)/Modules/_hacl/include/krml/lowstar_endianness.h $(srcdir)/Modules/_hacl/include/krml/internal/target.h $(srcdir)/Modules/_hacl/Hacl_Streaming_SHA2.h MODULE__SHA3_DEPS=$(srcdir)/Modules/_sha3/sha3.c $(srcdir)/Modules/_sha3/sha3.h $(srcdir)/Modules/hashlib.h -MODULE__SHA512_DEPS=$(srcdir)/Modules/hashlib.h +MODULE__SHA512_DEPS=$(srcdir)/Modules/hashlib.h $(srcdir)/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h $(srcdir)/Modules/_hacl/include/krml/lowstar_endianness.h $(srcdir)/Modules/_hacl/include/krml/internal/target.h $(srcdir)/Modules/_hacl/Hacl_Streaming_SHA2.h MODULE__SOCKET_DEPS=$(srcdir)/Modules/socketmodule.h $(srcdir)/Modules/addrinfo.h $(srcdir)/Modules/getaddrinfo.c $(srcdir)/Modules/getnameinfo.c MODULE__SSL_DEPS=$(srcdir)/Modules/_ssl.h $(srcdir)/Modules/_ssl/cert.c $(srcdir)/Modules/_ssl/debughelpers.c $(srcdir)/Modules/_ssl/misc.c $(srcdir)/Modules/_ssl_data.h $(srcdir)/Modules/_ssl_data_111.h $(srcdir)/Modules/_ssl_data_300.h $(srcdir)/Modules/socketmodule.h MODULE__TESTCAPI_DEPS=$(srcdir)/Modules/_testcapi/testcapi_long.h $(srcdir)/Modules/_testcapi/parts.h diff --git a/Misc/NEWS.d/next/Security/2023-02-08-12-57-35.gh-issue-99108.6tnmhA.rst b/Misc/NEWS.d/next/Security/2023-02-08-12-57-35.gh-issue-99108.6tnmhA.rst new file mode 100644 index 000000000000..6a7a309dad5d --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-02-08-12-57-35.gh-issue-99108.6tnmhA.rst @@ -0,0 +1,4 @@ +Replace the builtin :mod:`hashlib` implementations of SHA2-384 and SHA2-512 +originally from LibTomCrypt with formally verified, side-channel resistant +code from the `HACL* `_ project. +The builtins remain a fallback only used when OpenSSL does not provide them. 
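The streaming functions added in the diffs below (create_in_512, update_512, finish_512, copy_512 and their 384 counterparts) are what the rewritten _sha512 module calls into, and the same semantics show through hashlib objects: finishing operates on an internal copy, so the state stays usable, and copies are deep. A rough Python-level illustration (inputs here are purely illustrative):

import hashlib

h = hashlib.sha384(b"header")
intermediate = h.digest()   # does not invalidate the streaming state
h.update(b"body")

branch = h.copy()           # independent deep copy of the streaming state
branch.update(b"variant-a")
h.update(b"variant-b")

assert branch.digest() == hashlib.sha384(b"headerbodyvariant-a").digest()
assert h.digest() == hashlib.sha384(b"headerbodyvariant-b").digest()
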
diff --git a/Modules/Setup.stdlib.in b/Modules/Setup.stdlib.in index f72783810f94..b6d13e04d3fa 100644 --- a/Modules/Setup.stdlib.in +++ b/Modules/Setup.stdlib.in @@ -80,7 +80,7 @@ @MODULE__MD5_TRUE at _md5 md5module.c @MODULE__SHA1_TRUE at _sha1 sha1module.c @MODULE__SHA256_TRUE at _sha256 sha256module.c _hacl/Hacl_Streaming_SHA2.c - at MODULE__SHA512_TRUE@_sha512 sha512module.c + at MODULE__SHA512_TRUE@_sha512 sha512module.c _hacl/Hacl_Streaming_SHA2.c @MODULE__SHA3_TRUE at _sha3 _sha3/sha3module.c @MODULE__BLAKE2_TRUE at _blake2 _blake2/blake2module.c _blake2/blake2b_impl.c _blake2/blake2s_impl.c diff --git a/Modules/_hacl/Hacl_Streaming_SHA2.c b/Modules/_hacl/Hacl_Streaming_SHA2.c index 84566571792a..8169c7a35673 100644 --- a/Modules/_hacl/Hacl_Streaming_SHA2.c +++ b/Modules/_hacl/Hacl_Streaming_SHA2.c @@ -250,6 +250,229 @@ static inline void sha224_finish(uint32_t *st, uint8_t *h) memcpy(h, hbuf, (uint32_t)28U * sizeof (uint8_t)); } +void Hacl_SHA2_Scalar32_sha512_init(uint64_t *hash) +{ + KRML_MAYBE_FOR8(i, + (uint32_t)0U, + (uint32_t)8U, + (uint32_t)1U, + uint64_t *os = hash; + uint64_t x = Hacl_Impl_SHA2_Generic_h512[i]; + os[i] = x;); +} + +static inline void sha512_update(uint8_t *b, uint64_t *hash) +{ + uint64_t hash_old[8U] = { 0U }; + uint64_t ws[16U] = { 0U }; + memcpy(hash_old, hash, (uint32_t)8U * sizeof (uint64_t)); + uint8_t *b10 = b; + uint64_t u = load64_be(b10); + ws[0U] = u; + uint64_t u0 = load64_be(b10 + (uint32_t)8U); + ws[1U] = u0; + uint64_t u1 = load64_be(b10 + (uint32_t)16U); + ws[2U] = u1; + uint64_t u2 = load64_be(b10 + (uint32_t)24U); + ws[3U] = u2; + uint64_t u3 = load64_be(b10 + (uint32_t)32U); + ws[4U] = u3; + uint64_t u4 = load64_be(b10 + (uint32_t)40U); + ws[5U] = u4; + uint64_t u5 = load64_be(b10 + (uint32_t)48U); + ws[6U] = u5; + uint64_t u6 = load64_be(b10 + (uint32_t)56U); + ws[7U] = u6; + uint64_t u7 = load64_be(b10 + (uint32_t)64U); + ws[8U] = u7; + uint64_t u8 = load64_be(b10 + (uint32_t)72U); + ws[9U] = u8; + uint64_t u9 = load64_be(b10 + (uint32_t)80U); + ws[10U] = u9; + uint64_t u10 = load64_be(b10 + (uint32_t)88U); + ws[11U] = u10; + uint64_t u11 = load64_be(b10 + (uint32_t)96U); + ws[12U] = u11; + uint64_t u12 = load64_be(b10 + (uint32_t)104U); + ws[13U] = u12; + uint64_t u13 = load64_be(b10 + (uint32_t)112U); + ws[14U] = u13; + uint64_t u14 = load64_be(b10 + (uint32_t)120U); + ws[15U] = u14; + KRML_MAYBE_FOR5(i0, + (uint32_t)0U, + (uint32_t)5U, + (uint32_t)1U, + KRML_MAYBE_FOR16(i, + (uint32_t)0U, + (uint32_t)16U, + (uint32_t)1U, + uint64_t k_t = Hacl_Impl_SHA2_Generic_k384_512[(uint32_t)16U * i0 + i]; + uint64_t ws_t = ws[i]; + uint64_t a0 = hash[0U]; + uint64_t b0 = hash[1U]; + uint64_t c0 = hash[2U]; + uint64_t d0 = hash[3U]; + uint64_t e0 = hash[4U]; + uint64_t f0 = hash[5U]; + uint64_t g0 = hash[6U]; + uint64_t h02 = hash[7U]; + uint64_t k_e_t = k_t; + uint64_t + t1 = + h02 + + + ((e0 << (uint32_t)50U | e0 >> (uint32_t)14U) + ^ + ((e0 << (uint32_t)46U | e0 >> (uint32_t)18U) + ^ (e0 << (uint32_t)23U | e0 >> (uint32_t)41U))) + + ((e0 & f0) ^ (~e0 & g0)) + + k_e_t + + ws_t; + uint64_t + t2 = + ((a0 << (uint32_t)36U | a0 >> (uint32_t)28U) + ^ + ((a0 << (uint32_t)30U | a0 >> (uint32_t)34U) + ^ (a0 << (uint32_t)25U | a0 >> (uint32_t)39U))) + + ((a0 & b0) ^ ((a0 & c0) ^ (b0 & c0))); + uint64_t a1 = t1 + t2; + uint64_t b1 = a0; + uint64_t c1 = b0; + uint64_t d1 = c0; + uint64_t e1 = d0 + t1; + uint64_t f1 = e0; + uint64_t g1 = f0; + uint64_t h12 = g0; + hash[0U] = a1; + hash[1U] = b1; + hash[2U] = c1; + hash[3U] = d1; + hash[4U] = e1; + 
hash[5U] = f1; + hash[6U] = g1; + hash[7U] = h12;); + if (i0 < (uint32_t)4U) + { + KRML_MAYBE_FOR16(i, + (uint32_t)0U, + (uint32_t)16U, + (uint32_t)1U, + uint64_t t16 = ws[i]; + uint64_t t15 = ws[(i + (uint32_t)1U) % (uint32_t)16U]; + uint64_t t7 = ws[(i + (uint32_t)9U) % (uint32_t)16U]; + uint64_t t2 = ws[(i + (uint32_t)14U) % (uint32_t)16U]; + uint64_t + s1 = + (t2 << (uint32_t)45U | t2 >> (uint32_t)19U) + ^ ((t2 << (uint32_t)3U | t2 >> (uint32_t)61U) ^ t2 >> (uint32_t)6U); + uint64_t + s0 = + (t15 << (uint32_t)63U | t15 >> (uint32_t)1U) + ^ ((t15 << (uint32_t)56U | t15 >> (uint32_t)8U) ^ t15 >> (uint32_t)7U); + ws[i] = s1 + t7 + s0 + t16;); + }); + KRML_MAYBE_FOR8(i, + (uint32_t)0U, + (uint32_t)8U, + (uint32_t)1U, + uint64_t *os = hash; + uint64_t x = hash[i] + hash_old[i]; + os[i] = x;); +} + +static inline void sha512_update_nblocks(uint32_t len, uint8_t *b, uint64_t *st) +{ + uint32_t blocks = len / (uint32_t)128U; + for (uint32_t i = (uint32_t)0U; i < blocks; i++) + { + uint8_t *b0 = b; + uint8_t *mb = b0 + i * (uint32_t)128U; + sha512_update(mb, st); + } +} + +static inline void +sha512_update_last(FStar_UInt128_uint128 totlen, uint32_t len, uint8_t *b, uint64_t *hash) +{ + uint32_t blocks; + if (len + (uint32_t)16U + (uint32_t)1U <= (uint32_t)128U) + { + blocks = (uint32_t)1U; + } + else + { + blocks = (uint32_t)2U; + } + uint32_t fin = blocks * (uint32_t)128U; + uint8_t last[256U] = { 0U }; + uint8_t totlen_buf[16U] = { 0U }; + FStar_UInt128_uint128 total_len_bits = FStar_UInt128_shift_left(totlen, (uint32_t)3U); + store128_be(totlen_buf, total_len_bits); + uint8_t *b0 = b; + memcpy(last, b0, len * sizeof (uint8_t)); + last[len] = (uint8_t)0x80U; + memcpy(last + fin - (uint32_t)16U, totlen_buf, (uint32_t)16U * sizeof (uint8_t)); + uint8_t *last00 = last; + uint8_t *last10 = last + (uint32_t)128U; + uint8_t *l0 = last00; + uint8_t *l1 = last10; + uint8_t *lb0 = l0; + uint8_t *lb1 = l1; + uint8_t *last0 = lb0; + uint8_t *last1 = lb1; + sha512_update(last0, hash); + if (blocks > (uint32_t)1U) + { + sha512_update(last1, hash); + return; + } +} + +static inline void sha512_finish(uint64_t *st, uint8_t *h) +{ + uint8_t hbuf[64U] = { 0U }; + KRML_MAYBE_FOR8(i, + (uint32_t)0U, + (uint32_t)8U, + (uint32_t)1U, + store64_be(hbuf + i * (uint32_t)8U, st[i]);); + memcpy(h, hbuf, (uint32_t)64U * sizeof (uint8_t)); +} + +static inline void sha384_init(uint64_t *hash) +{ + KRML_MAYBE_FOR8(i, + (uint32_t)0U, + (uint32_t)8U, + (uint32_t)1U, + uint64_t *os = hash; + uint64_t x = Hacl_Impl_SHA2_Generic_h384[i]; + os[i] = x;); +} + +static inline void sha384_update_nblocks(uint32_t len, uint8_t *b, uint64_t *st) +{ + sha512_update_nblocks(len, b, st); +} + +static void +sha384_update_last(FStar_UInt128_uint128 totlen, uint32_t len, uint8_t *b, uint64_t *st) +{ + sha512_update_last(totlen, len, b, st); +} + +static inline void sha384_finish(uint64_t *st, uint8_t *h) +{ + uint8_t hbuf[64U] = { 0U }; + KRML_MAYBE_FOR8(i, + (uint32_t)0U, + (uint32_t)8U, + (uint32_t)1U, + store64_be(hbuf + i * (uint32_t)8U, st[i]);); + memcpy(h, hbuf, (uint32_t)48U * sizeof (uint8_t)); +} + /** Allocate initial state for the SHA2_256 hash. The state is to be freed by calling `free_256`. 
@@ -680,3 +903,434 @@ void Hacl_Streaming_SHA2_sha224(uint8_t *input, uint32_t input_len, uint8_t *dst sha224_finish(st, rb); } +Hacl_Streaming_SHA2_state_sha2_384 *Hacl_Streaming_SHA2_create_in_512(void) +{ + uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)128U, sizeof (uint8_t)); + uint64_t *block_state = (uint64_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint64_t)); + Hacl_Streaming_SHA2_state_sha2_384 + s = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + Hacl_Streaming_SHA2_state_sha2_384 + *p = + (Hacl_Streaming_SHA2_state_sha2_384 *)KRML_HOST_MALLOC(sizeof ( + Hacl_Streaming_SHA2_state_sha2_384 + )); + p[0U] = s; + Hacl_SHA2_Scalar32_sha512_init(block_state); + return p; +} + +/** +Copies the state passed as argument into a newly allocated state (deep copy). +The state is to be freed by calling `free_512`. Cloning the state this way is +useful, for instance, if your control-flow diverges and you need to feed +more (different) data into the hash in each branch. +*/ +Hacl_Streaming_SHA2_state_sha2_384 +*Hacl_Streaming_SHA2_copy_512(Hacl_Streaming_SHA2_state_sha2_384 *s0) +{ + Hacl_Streaming_SHA2_state_sha2_384 scrut = *s0; + uint64_t *block_state0 = scrut.block_state; + uint8_t *buf0 = scrut.buf; + uint64_t total_len0 = scrut.total_len; + uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)128U, sizeof (uint8_t)); + memcpy(buf, buf0, (uint32_t)128U * sizeof (uint8_t)); + uint64_t *block_state = (uint64_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint64_t)); + memcpy(block_state, block_state0, (uint32_t)8U * sizeof (uint64_t)); + Hacl_Streaming_SHA2_state_sha2_384 + s = { .block_state = block_state, .buf = buf, .total_len = total_len0 }; + Hacl_Streaming_SHA2_state_sha2_384 + *p = + (Hacl_Streaming_SHA2_state_sha2_384 *)KRML_HOST_MALLOC(sizeof ( + Hacl_Streaming_SHA2_state_sha2_384 + )); + p[0U] = s; + return p; +} + +void Hacl_Streaming_SHA2_init_512(Hacl_Streaming_SHA2_state_sha2_384 *s) +{ + Hacl_Streaming_SHA2_state_sha2_384 scrut = *s; + uint8_t *buf = scrut.buf; + uint64_t *block_state = scrut.block_state; + Hacl_SHA2_Scalar32_sha512_init(block_state); + Hacl_Streaming_SHA2_state_sha2_384 + tmp = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + s[0U] = tmp; +} + +static inline uint32_t +update_384_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *data, uint32_t len) +{ + Hacl_Streaming_SHA2_state_sha2_384 s = *p; + uint64_t total_len = s.total_len; + if ((uint64_t)len > (uint64_t)18446744073709551615U - total_len) + { + return (uint32_t)1U; + } + uint32_t sz; + if (total_len % (uint64_t)(uint32_t)128U == (uint64_t)0U && total_len > (uint64_t)0U) + { + sz = (uint32_t)128U; + } + else + { + sz = (uint32_t)(total_len % (uint64_t)(uint32_t)128U); + } + if (len <= (uint32_t)128U - sz) + { + Hacl_Streaming_SHA2_state_sha2_384 s1 = *p; + uint64_t *block_state1 = s1.block_state; + uint8_t *buf = s1.buf; + uint64_t total_len1 = s1.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)128U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)128U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)128U); + } + uint8_t *buf2 = buf + sz1; + memcpy(buf2, data, len * sizeof (uint8_t)); + uint64_t total_len2 = total_len1 + (uint64_t)len; + *p + = + ( + (Hacl_Streaming_SHA2_state_sha2_384){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len2 + } + ); + } + else if (sz == (uint32_t)0U) + { + Hacl_Streaming_SHA2_state_sha2_384 s1 = *p; + uint64_t *block_state1 = 
s1.block_state; + uint8_t *buf = s1.buf; + uint64_t total_len1 = s1.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)128U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)128U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)128U); + } + if (!(sz1 == (uint32_t)0U)) + { + sha512_update_nblocks((uint32_t)128U, buf, block_state1); + } + uint32_t ite; + if ((uint64_t)len % (uint64_t)(uint32_t)128U == (uint64_t)0U && (uint64_t)len > (uint64_t)0U) + { + ite = (uint32_t)128U; + } + else + { + ite = (uint32_t)((uint64_t)len % (uint64_t)(uint32_t)128U); + } + uint32_t n_blocks = (len - ite) / (uint32_t)128U; + uint32_t data1_len = n_blocks * (uint32_t)128U; + uint32_t data2_len = len - data1_len; + uint8_t *data1 = data; + uint8_t *data2 = data + data1_len; + sha512_update_nblocks(data1_len, data1, block_state1); + uint8_t *dst = buf; + memcpy(dst, data2, data2_len * sizeof (uint8_t)); + *p + = + ( + (Hacl_Streaming_SHA2_state_sha2_384){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len1 + (uint64_t)len + } + ); + } + else + { + uint32_t diff = (uint32_t)128U - sz; + uint8_t *data1 = data; + uint8_t *data2 = data + diff; + Hacl_Streaming_SHA2_state_sha2_384 s1 = *p; + uint64_t *block_state10 = s1.block_state; + uint8_t *buf0 = s1.buf; + uint64_t total_len10 = s1.total_len; + uint32_t sz10; + if (total_len10 % (uint64_t)(uint32_t)128U == (uint64_t)0U && total_len10 > (uint64_t)0U) + { + sz10 = (uint32_t)128U; + } + else + { + sz10 = (uint32_t)(total_len10 % (uint64_t)(uint32_t)128U); + } + uint8_t *buf2 = buf0 + sz10; + memcpy(buf2, data1, diff * sizeof (uint8_t)); + uint64_t total_len2 = total_len10 + (uint64_t)diff; + *p + = + ( + (Hacl_Streaming_SHA2_state_sha2_384){ + .block_state = block_state10, + .buf = buf0, + .total_len = total_len2 + } + ); + Hacl_Streaming_SHA2_state_sha2_384 s10 = *p; + uint64_t *block_state1 = s10.block_state; + uint8_t *buf = s10.buf; + uint64_t total_len1 = s10.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)128U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)128U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)128U); + } + if (!(sz1 == (uint32_t)0U)) + { + sha512_update_nblocks((uint32_t)128U, buf, block_state1); + } + uint32_t ite; + if + ( + (uint64_t)(len - diff) + % (uint64_t)(uint32_t)128U + == (uint64_t)0U + && (uint64_t)(len - diff) > (uint64_t)0U + ) + { + ite = (uint32_t)128U; + } + else + { + ite = (uint32_t)((uint64_t)(len - diff) % (uint64_t)(uint32_t)128U); + } + uint32_t n_blocks = (len - diff - ite) / (uint32_t)128U; + uint32_t data1_len = n_blocks * (uint32_t)128U; + uint32_t data2_len = len - diff - data1_len; + uint8_t *data11 = data2; + uint8_t *data21 = data2 + data1_len; + sha512_update_nblocks(data1_len, data11, block_state1); + uint8_t *dst = buf; + memcpy(dst, data21, data2_len * sizeof (uint8_t)); + *p + = + ( + (Hacl_Streaming_SHA2_state_sha2_384){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len1 + (uint64_t)(len - diff) + } + ); + } + return (uint32_t)0U; +} + +/** +Feed an arbitrary amount of data into the hash. This function returns 0 for +success, or 1 if the combined length of all of the data passed to `update_512` +(since the last call to `init_512`) exceeds 2^125-1 bytes. + +This function is identical to the update function for SHA2_384. 
+*/ +uint32_t +Hacl_Streaming_SHA2_update_512( + Hacl_Streaming_SHA2_state_sha2_384 *p, + uint8_t *input, + uint32_t input_len +) +{ + return update_384_512(p, input, input_len); +} + +/** +Write the resulting hash into `dst`, an array of 64 bytes. The state remains +valid after a call to `finish_512`, meaning the user may feed more data into +the hash via `update_512`. (The finish_512 function operates on an internal copy of +the state and therefore does not invalidate the client-held state `p`.) +*/ +void Hacl_Streaming_SHA2_finish_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *dst) +{ + Hacl_Streaming_SHA2_state_sha2_384 scrut = *p; + uint64_t *block_state = scrut.block_state; + uint8_t *buf_ = scrut.buf; + uint64_t total_len = scrut.total_len; + uint32_t r; + if (total_len % (uint64_t)(uint32_t)128U == (uint64_t)0U && total_len > (uint64_t)0U) + { + r = (uint32_t)128U; + } + else + { + r = (uint32_t)(total_len % (uint64_t)(uint32_t)128U); + } + uint8_t *buf_1 = buf_; + uint64_t tmp_block_state[8U] = { 0U }; + memcpy(tmp_block_state, block_state, (uint32_t)8U * sizeof (uint64_t)); + uint32_t ite; + if (r % (uint32_t)128U == (uint32_t)0U && r > (uint32_t)0U) + { + ite = (uint32_t)128U; + } + else + { + ite = r % (uint32_t)128U; + } + uint8_t *buf_last = buf_1 + r - ite; + uint8_t *buf_multi = buf_1; + sha512_update_nblocks((uint32_t)0U, buf_multi, tmp_block_state); + uint64_t prev_len_last = total_len - (uint64_t)r; + sha512_update_last(FStar_UInt128_add(FStar_UInt128_uint64_to_uint128(prev_len_last), + FStar_UInt128_uint64_to_uint128((uint64_t)r)), + r, + buf_last, + tmp_block_state); + sha512_finish(tmp_block_state, dst); +} + +/** +Free a state allocated with `create_in_512`. + +This function is identical to the free function for SHA2_384. +*/ +void Hacl_Streaming_SHA2_free_512(Hacl_Streaming_SHA2_state_sha2_384 *s) +{ + Hacl_Streaming_SHA2_state_sha2_384 scrut = *s; + uint8_t *buf = scrut.buf; + uint64_t *block_state = scrut.block_state; + KRML_HOST_FREE(block_state); + KRML_HOST_FREE(buf); + KRML_HOST_FREE(s); +} + +/** +Hash `input`, of len `input_len`, into `dst`, an array of 64 bytes. 
+*/ +void Hacl_Streaming_SHA2_sha512(uint8_t *input, uint32_t input_len, uint8_t *dst) +{ + uint8_t *ib = input; + uint8_t *rb = dst; + uint64_t st[8U] = { 0U }; + Hacl_SHA2_Scalar32_sha512_init(st); + uint32_t rem = input_len % (uint32_t)128U; + FStar_UInt128_uint128 len_ = FStar_UInt128_uint64_to_uint128((uint64_t)input_len); + sha512_update_nblocks(input_len, ib, st); + uint32_t rem1 = input_len % (uint32_t)128U; + uint8_t *b0 = ib; + uint8_t *lb = b0 + input_len - rem1; + sha512_update_last(len_, rem, lb, st); + sha512_finish(st, rb); +} + +Hacl_Streaming_SHA2_state_sha2_384 *Hacl_Streaming_SHA2_create_in_384(void) +{ + uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)128U, sizeof (uint8_t)); + uint64_t *block_state = (uint64_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint64_t)); + Hacl_Streaming_SHA2_state_sha2_384 + s = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + Hacl_Streaming_SHA2_state_sha2_384 + *p = + (Hacl_Streaming_SHA2_state_sha2_384 *)KRML_HOST_MALLOC(sizeof ( + Hacl_Streaming_SHA2_state_sha2_384 + )); + p[0U] = s; + sha384_init(block_state); + return p; +} + +void Hacl_Streaming_SHA2_init_384(Hacl_Streaming_SHA2_state_sha2_384 *s) +{ + Hacl_Streaming_SHA2_state_sha2_384 scrut = *s; + uint8_t *buf = scrut.buf; + uint64_t *block_state = scrut.block_state; + sha384_init(block_state); + Hacl_Streaming_SHA2_state_sha2_384 + tmp = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + s[0U] = tmp; +} + +uint32_t +Hacl_Streaming_SHA2_update_384( + Hacl_Streaming_SHA2_state_sha2_384 *p, + uint8_t *input, + uint32_t input_len +) +{ + return update_384_512(p, input, input_len); +} + +/** +Write the resulting hash into `dst`, an array of 48 bytes. The state remains +valid after a call to `finish_384`, meaning the user may feed more data into +the hash via `update_384`. +*/ +void Hacl_Streaming_SHA2_finish_384(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *dst) +{ + Hacl_Streaming_SHA2_state_sha2_384 scrut = *p; + uint64_t *block_state = scrut.block_state; + uint8_t *buf_ = scrut.buf; + uint64_t total_len = scrut.total_len; + uint32_t r; + if (total_len % (uint64_t)(uint32_t)128U == (uint64_t)0U && total_len > (uint64_t)0U) + { + r = (uint32_t)128U; + } + else + { + r = (uint32_t)(total_len % (uint64_t)(uint32_t)128U); + } + uint8_t *buf_1 = buf_; + uint64_t tmp_block_state[8U] = { 0U }; + memcpy(tmp_block_state, block_state, (uint32_t)8U * sizeof (uint64_t)); + uint32_t ite; + if (r % (uint32_t)128U == (uint32_t)0U && r > (uint32_t)0U) + { + ite = (uint32_t)128U; + } + else + { + ite = r % (uint32_t)128U; + } + uint8_t *buf_last = buf_1 + r - ite; + uint8_t *buf_multi = buf_1; + sha384_update_nblocks((uint32_t)0U, buf_multi, tmp_block_state); + uint64_t prev_len_last = total_len - (uint64_t)r; + sha384_update_last(FStar_UInt128_add(FStar_UInt128_uint64_to_uint128(prev_len_last), + FStar_UInt128_uint64_to_uint128((uint64_t)r)), + r, + buf_last, + tmp_block_state); + sha384_finish(tmp_block_state, dst); +} + +void Hacl_Streaming_SHA2_free_384(Hacl_Streaming_SHA2_state_sha2_384 *p) +{ + Hacl_Streaming_SHA2_free_512(p); +} + +/** +Hash `input`, of len `input_len`, into `dst`, an array of 48 bytes. 
+*/ +void Hacl_Streaming_SHA2_sha384(uint8_t *input, uint32_t input_len, uint8_t *dst) +{ + uint8_t *ib = input; + uint8_t *rb = dst; + uint64_t st[8U] = { 0U }; + sha384_init(st); + uint32_t rem = input_len % (uint32_t)128U; + FStar_UInt128_uint128 len_ = FStar_UInt128_uint64_to_uint128((uint64_t)input_len); + sha384_update_nblocks(input_len, ib, st); + uint32_t rem1 = input_len % (uint32_t)128U; + uint8_t *b0 = ib; + uint8_t *lb = b0 + input_len - rem1; + sha384_update_last(len_, rem, lb, st); + sha384_finish(st, rb); +} + diff --git a/Modules/_hacl/Hacl_Streaming_SHA2.h b/Modules/_hacl/Hacl_Streaming_SHA2.h index c83a835afe70..2c905854f336 100644 --- a/Modules/_hacl/Hacl_Streaming_SHA2.h +++ b/Modules/_hacl/Hacl_Streaming_SHA2.h @@ -32,13 +32,12 @@ extern "C" { #include #include "python_hacl_namespaces.h" -#include "krml/FStar_UInt_8_16_32_64.h" +#include "krml/types.h" #include "krml/lowstar_endianness.h" #include "krml/internal/target.h" - typedef struct Hacl_Streaming_SHA2_state_sha2_224_s { uint32_t *block_state; @@ -49,6 +48,16 @@ Hacl_Streaming_SHA2_state_sha2_224; typedef Hacl_Streaming_SHA2_state_sha2_224 Hacl_Streaming_SHA2_state_sha2_256; +typedef struct Hacl_Streaming_SHA2_state_sha2_384_s +{ + uint64_t *block_state; + uint8_t *buf; + uint64_t total_len; +} +Hacl_Streaming_SHA2_state_sha2_384; + +typedef Hacl_Streaming_SHA2_state_sha2_384 Hacl_Streaming_SHA2_state_sha2_512; + /** Allocate initial state for the SHA2_256 hash. The state is to be freed by calling `free_256`. @@ -128,6 +137,78 @@ Hash `input`, of len `input_len`, into `dst`, an array of 28 bytes. */ void Hacl_Streaming_SHA2_sha224(uint8_t *input, uint32_t input_len, uint8_t *dst); +Hacl_Streaming_SHA2_state_sha2_384 *Hacl_Streaming_SHA2_create_in_512(void); + +/** +Copies the state passed as argument into a newly allocated state (deep copy). +The state is to be freed by calling `free_512`. Cloning the state this way is +useful, for instance, if your control-flow diverges and you need to feed +more (different) data into the hash in each branch. +*/ +Hacl_Streaming_SHA2_state_sha2_384 +*Hacl_Streaming_SHA2_copy_512(Hacl_Streaming_SHA2_state_sha2_384 *s0); + +void Hacl_Streaming_SHA2_init_512(Hacl_Streaming_SHA2_state_sha2_384 *s); + +/** +Feed an arbitrary amount of data into the hash. This function returns 0 for +success, or 1 if the combined length of all of the data passed to `update_512` +(since the last call to `init_512`) exceeds 2^125-1 bytes. + +This function is identical to the update function for SHA2_384. +*/ +uint32_t +Hacl_Streaming_SHA2_update_512( + Hacl_Streaming_SHA2_state_sha2_384 *p, + uint8_t *input, + uint32_t input_len +); + +/** +Write the resulting hash into `dst`, an array of 64 bytes. The state remains +valid after a call to `finish_512`, meaning the user may feed more data into +the hash via `update_512`. (The finish_512 function operates on an internal copy of +the state and therefore does not invalidate the client-held state `p`.) +*/ +void Hacl_Streaming_SHA2_finish_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *dst); + +/** +Free a state allocated with `create_in_512`. + +This function is identical to the free function for SHA2_384. +*/ +void Hacl_Streaming_SHA2_free_512(Hacl_Streaming_SHA2_state_sha2_384 *s); + +/** +Hash `input`, of len `input_len`, into `dst`, an array of 64 bytes. 
+*/ +void Hacl_Streaming_SHA2_sha512(uint8_t *input, uint32_t input_len, uint8_t *dst); + +Hacl_Streaming_SHA2_state_sha2_384 *Hacl_Streaming_SHA2_create_in_384(void); + +void Hacl_Streaming_SHA2_init_384(Hacl_Streaming_SHA2_state_sha2_384 *s); + +uint32_t +Hacl_Streaming_SHA2_update_384( + Hacl_Streaming_SHA2_state_sha2_384 *p, + uint8_t *input, + uint32_t input_len +); + +/** +Write the resulting hash into `dst`, an array of 48 bytes. The state remains +valid after a call to `finish_384`, meaning the user may feed more data into +the hash via `update_384`. +*/ +void Hacl_Streaming_SHA2_finish_384(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *dst); + +void Hacl_Streaming_SHA2_free_384(Hacl_Streaming_SHA2_state_sha2_384 *p); + +/** +Hash `input`, of len `input_len`, into `dst`, an array of 48 bytes. +*/ +void Hacl_Streaming_SHA2_sha384(uint8_t *input, uint32_t input_len, uint8_t *dst); + #if defined(__cplusplus) } #endif diff --git a/Modules/_hacl/include/krml/FStar_UInt128_Verified.h b/Modules/_hacl/include/krml/FStar_UInt128_Verified.h new file mode 100644 index 000000000000..ee160193539e --- /dev/null +++ b/Modules/_hacl/include/krml/FStar_UInt128_Verified.h @@ -0,0 +1,347 @@ +/* + Copyright (c) INRIA and Microsoft Corporation. All rights reserved. + Licensed under the Apache 2.0 License. +*/ + + +#ifndef __FStar_UInt128_Verified_H +#define __FStar_UInt128_Verified_H + + + +#include "FStar_UInt_8_16_32_64.h" +#include +#include +#include "krml/types.h" +#include "krml/internal/target.h" +static inline uint64_t FStar_UInt128_constant_time_carry(uint64_t a, uint64_t b) +{ + return (a ^ ((a ^ b) | ((a - b) ^ b))) >> (uint32_t)63U; +} + +static inline uint64_t FStar_UInt128_carry(uint64_t a, uint64_t b) +{ + return FStar_UInt128_constant_time_carry(a, b); +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_add(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + FStar_UInt128_uint128 lit; + lit.low = a.low + b.low; + lit.high = a.high + b.high + FStar_UInt128_carry(a.low + b.low, b.low); + return lit; +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_add_underspec(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + FStar_UInt128_uint128 lit; + lit.low = a.low + b.low; + lit.high = a.high + b.high + FStar_UInt128_carry(a.low + b.low, b.low); + return lit; +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_add_mod(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + FStar_UInt128_uint128 lit; + lit.low = a.low + b.low; + lit.high = a.high + b.high + FStar_UInt128_carry(a.low + b.low, b.low); + return lit; +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_sub(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + FStar_UInt128_uint128 lit; + lit.low = a.low - b.low; + lit.high = a.high - b.high - FStar_UInt128_carry(a.low, a.low - b.low); + return lit; +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_sub_underspec(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + FStar_UInt128_uint128 lit; + lit.low = a.low - b.low; + lit.high = a.high - b.high - FStar_UInt128_carry(a.low, a.low - b.low); + return lit; +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_sub_mod_impl(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + FStar_UInt128_uint128 lit; + lit.low = a.low - b.low; + lit.high = a.high - b.high - FStar_UInt128_carry(a.low, a.low - b.low); + return lit; +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_sub_mod(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + return FStar_UInt128_sub_mod_impl(a, b); +} + +static inline 
FStar_UInt128_uint128 +FStar_UInt128_logand(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + FStar_UInt128_uint128 lit; + lit.low = a.low & b.low; + lit.high = a.high & b.high; + return lit; +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_logxor(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + FStar_UInt128_uint128 lit; + lit.low = a.low ^ b.low; + lit.high = a.high ^ b.high; + return lit; +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_logor(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + FStar_UInt128_uint128 lit; + lit.low = a.low | b.low; + lit.high = a.high | b.high; + return lit; +} + +static inline FStar_UInt128_uint128 FStar_UInt128_lognot(FStar_UInt128_uint128 a) +{ + FStar_UInt128_uint128 lit; + lit.low = ~a.low; + lit.high = ~a.high; + return lit; +} + +static uint32_t FStar_UInt128_u32_64 = (uint32_t)64U; + +static inline uint64_t FStar_UInt128_add_u64_shift_left(uint64_t hi, uint64_t lo, uint32_t s) +{ + return (hi << s) + (lo >> (FStar_UInt128_u32_64 - s)); +} + +static inline uint64_t +FStar_UInt128_add_u64_shift_left_respec(uint64_t hi, uint64_t lo, uint32_t s) +{ + return FStar_UInt128_add_u64_shift_left(hi, lo, s); +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_shift_left_small(FStar_UInt128_uint128 a, uint32_t s) +{ + if (s == (uint32_t)0U) + { + return a; + } + else + { + FStar_UInt128_uint128 lit; + lit.low = a.low << s; + lit.high = FStar_UInt128_add_u64_shift_left_respec(a.high, a.low, s); + return lit; + } +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_shift_left_large(FStar_UInt128_uint128 a, uint32_t s) +{ + FStar_UInt128_uint128 lit; + lit.low = (uint64_t)0U; + lit.high = a.low << (s - FStar_UInt128_u32_64); + return lit; +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_shift_left(FStar_UInt128_uint128 a, uint32_t s) +{ + if (s < FStar_UInt128_u32_64) + { + return FStar_UInt128_shift_left_small(a, s); + } + else + { + return FStar_UInt128_shift_left_large(a, s); + } +} + +static inline uint64_t FStar_UInt128_add_u64_shift_right(uint64_t hi, uint64_t lo, uint32_t s) +{ + return (lo >> s) + (hi << (FStar_UInt128_u32_64 - s)); +} + +static inline uint64_t +FStar_UInt128_add_u64_shift_right_respec(uint64_t hi, uint64_t lo, uint32_t s) +{ + return FStar_UInt128_add_u64_shift_right(hi, lo, s); +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_shift_right_small(FStar_UInt128_uint128 a, uint32_t s) +{ + if (s == (uint32_t)0U) + { + return a; + } + else + { + FStar_UInt128_uint128 lit; + lit.low = FStar_UInt128_add_u64_shift_right_respec(a.high, a.low, s); + lit.high = a.high >> s; + return lit; + } +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_shift_right_large(FStar_UInt128_uint128 a, uint32_t s) +{ + FStar_UInt128_uint128 lit; + lit.low = a.high >> (s - FStar_UInt128_u32_64); + lit.high = (uint64_t)0U; + return lit; +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_shift_right(FStar_UInt128_uint128 a, uint32_t s) +{ + if (s < FStar_UInt128_u32_64) + { + return FStar_UInt128_shift_right_small(a, s); + } + else + { + return FStar_UInt128_shift_right_large(a, s); + } +} + +static inline bool FStar_UInt128_eq(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + return a.low == b.low && a.high == b.high; +} + +static inline bool FStar_UInt128_gt(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + return a.high > b.high || (a.high == b.high && a.low > b.low); +} + +static inline bool FStar_UInt128_lt(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + return a.high < b.high || 
(a.high == b.high && a.low < b.low); +} + +static inline bool FStar_UInt128_gte(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + return a.high > b.high || (a.high == b.high && a.low >= b.low); +} + +static inline bool FStar_UInt128_lte(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + return a.high < b.high || (a.high == b.high && a.low <= b.low); +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_eq_mask(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + FStar_UInt128_uint128 lit; + lit.low = FStar_UInt64_eq_mask(a.low, b.low) & FStar_UInt64_eq_mask(a.high, b.high); + lit.high = FStar_UInt64_eq_mask(a.low, b.low) & FStar_UInt64_eq_mask(a.high, b.high); + return lit; +} + +static inline FStar_UInt128_uint128 +FStar_UInt128_gte_mask(FStar_UInt128_uint128 a, FStar_UInt128_uint128 b) +{ + FStar_UInt128_uint128 lit; + lit.low = + (FStar_UInt64_gte_mask(a.high, b.high) & ~FStar_UInt64_eq_mask(a.high, b.high)) + | (FStar_UInt64_eq_mask(a.high, b.high) & FStar_UInt64_gte_mask(a.low, b.low)); + lit.high = + (FStar_UInt64_gte_mask(a.high, b.high) & ~FStar_UInt64_eq_mask(a.high, b.high)) + | (FStar_UInt64_eq_mask(a.high, b.high) & FStar_UInt64_gte_mask(a.low, b.low)); + return lit; +} + +static inline FStar_UInt128_uint128 FStar_UInt128_uint64_to_uint128(uint64_t a) +{ + FStar_UInt128_uint128 lit; + lit.low = a; + lit.high = (uint64_t)0U; + return lit; +} + +static inline uint64_t FStar_UInt128_uint128_to_uint64(FStar_UInt128_uint128 a) +{ + return a.low; +} + +static inline uint64_t FStar_UInt128_u64_mod_32(uint64_t a) +{ + return a & (uint64_t)0xffffffffU; +} + +static uint32_t FStar_UInt128_u32_32 = (uint32_t)32U; + +static inline uint64_t FStar_UInt128_u32_combine(uint64_t hi, uint64_t lo) +{ + return lo + (hi << FStar_UInt128_u32_32); +} + +static inline FStar_UInt128_uint128 FStar_UInt128_mul32(uint64_t x, uint32_t y) +{ + FStar_UInt128_uint128 lit; + lit.low = + FStar_UInt128_u32_combine((x >> FStar_UInt128_u32_32) + * (uint64_t)y + + (FStar_UInt128_u64_mod_32(x) * (uint64_t)y >> FStar_UInt128_u32_32), + FStar_UInt128_u64_mod_32(FStar_UInt128_u64_mod_32(x) * (uint64_t)y)); + lit.high = + ((x >> FStar_UInt128_u32_32) + * (uint64_t)y + + (FStar_UInt128_u64_mod_32(x) * (uint64_t)y >> FStar_UInt128_u32_32)) + >> FStar_UInt128_u32_32; + return lit; +} + +static inline uint64_t FStar_UInt128_u32_combine_(uint64_t hi, uint64_t lo) +{ + return lo + (hi << FStar_UInt128_u32_32); +} + +static inline FStar_UInt128_uint128 FStar_UInt128_mul_wide(uint64_t x, uint64_t y) +{ + FStar_UInt128_uint128 lit; + lit.low = + FStar_UInt128_u32_combine_(FStar_UInt128_u64_mod_32(x) + * (y >> FStar_UInt128_u32_32) + + + FStar_UInt128_u64_mod_32((x >> FStar_UInt128_u32_32) + * FStar_UInt128_u64_mod_32(y) + + (FStar_UInt128_u64_mod_32(x) * FStar_UInt128_u64_mod_32(y) >> FStar_UInt128_u32_32)), + FStar_UInt128_u64_mod_32(FStar_UInt128_u64_mod_32(x) * FStar_UInt128_u64_mod_32(y))); + lit.high = + (x >> FStar_UInt128_u32_32) + * (y >> FStar_UInt128_u32_32) + + + (((x >> FStar_UInt128_u32_32) + * FStar_UInt128_u64_mod_32(y) + + (FStar_UInt128_u64_mod_32(x) * FStar_UInt128_u64_mod_32(y) >> FStar_UInt128_u32_32)) + >> FStar_UInt128_u32_32) + + + ((FStar_UInt128_u64_mod_32(x) + * (y >> FStar_UInt128_u32_32) + + + FStar_UInt128_u64_mod_32((x >> FStar_UInt128_u32_32) + * FStar_UInt128_u64_mod_32(y) + + (FStar_UInt128_u64_mod_32(x) * FStar_UInt128_u64_mod_32(y) >> FStar_UInt128_u32_32))) + >> FStar_UInt128_u32_32); + return lit; +} + + +#define __FStar_UInt128_Verified_H_DEFINED +#endif diff --git 
a/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h b/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h index 3e2e4b32b22f..965afc836fd1 100644 --- a/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h +++ b/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h @@ -14,7 +14,7 @@ #include #include "krml/lowstar_endianness.h" -#include "krml/FStar_UInt_8_16_32_64.h" +#include "krml/types.h" #include "krml/internal/target.h" static inline uint64_t FStar_UInt64_eq_mask(uint64_t a, uint64_t b) { diff --git a/Modules/_hacl/include/krml/fstar_uint128_struct_endianness.h b/Modules/_hacl/include/krml/fstar_uint128_struct_endianness.h new file mode 100644 index 000000000000..e2b6d62859a5 --- /dev/null +++ b/Modules/_hacl/include/krml/fstar_uint128_struct_endianness.h @@ -0,0 +1,68 @@ +/* Copyright (c) INRIA and Microsoft Corporation. All rights reserved. + Licensed under the Apache 2.0 License. */ + +#ifndef FSTAR_UINT128_STRUCT_ENDIANNESS_H +#define FSTAR_UINT128_STRUCT_ENDIANNESS_H + +/* Hand-written implementation of endianness-related uint128 functions + * for the extracted uint128 implementation */ + +/* Access 64-bit fields within the int128. */ +#define HIGH64_OF(x) ((x)->high) +#define LOW64_OF(x) ((x)->low) + +/* A series of definitions written using pointers. */ + +inline static void load128_le_(uint8_t *b, uint128_t *r) { + LOW64_OF(r) = load64_le(b); + HIGH64_OF(r) = load64_le(b + 8); +} + +inline static void store128_le_(uint8_t *b, uint128_t *n) { + store64_le(b, LOW64_OF(n)); + store64_le(b + 8, HIGH64_OF(n)); +} + +inline static void load128_be_(uint8_t *b, uint128_t *r) { + HIGH64_OF(r) = load64_be(b); + LOW64_OF(r) = load64_be(b + 8); +} + +inline static void store128_be_(uint8_t *b, uint128_t *n) { + store64_be(b, HIGH64_OF(n)); + store64_be(b + 8, LOW64_OF(n)); +} + +#ifndef KRML_NOSTRUCT_PASSING + +inline static uint128_t load128_le(uint8_t *b) { + uint128_t r; + load128_le_(b, &r); + return r; +} + +inline static void store128_le(uint8_t *b, uint128_t n) { + store128_le_(b, &n); +} + +inline static uint128_t load128_be(uint8_t *b) { + uint128_t r; + load128_be_(b, &r); + return r; +} + +inline static void store128_be(uint8_t *b, uint128_t n) { + store128_be_(b, &n); +} + +#else /* !defined(KRML_STRUCT_PASSING) */ + +# define print128 print128_ +# define load128_le load128_le_ +# define store128_le store128_le_ +# define load128_be load128_be_ +# define store128_be store128_be_ + +#endif /* KRML_STRUCT_PASSING */ + +#endif diff --git a/Modules/_hacl/include/krml/types.h b/Modules/_hacl/include/krml/types.h new file mode 100644 index 000000000000..509f555536e4 --- /dev/null +++ b/Modules/_hacl/include/krml/types.h @@ -0,0 +1,14 @@ +#pragma once + +#include + +typedef struct FStar_UInt128_uint128_s { + uint64_t low; + uint64_t high; +} FStar_UInt128_uint128, uint128_t; + +#define KRML_VERIFIED_UINT128 + +#include "krml/lowstar_endianness.h" +#include "krml/fstar_uint128_struct_endianness.h" +#include "krml/FStar_UInt128_Verified.h" diff --git a/Modules/_hacl/include/python_hacl_namespaces.h b/Modules/_hacl/include/python_hacl_namespaces.h index af390459311f..65608d1fd283 100644 --- a/Modules/_hacl/include/python_hacl_namespaces.h +++ b/Modules/_hacl/include/python_hacl_namespaces.h @@ -10,19 +10,36 @@ #define Hacl_Streaming_SHA2_state_sha2_224_s python_hashlib_Hacl_Streaming_SHA2_state_sha2_224_s #define Hacl_Streaming_SHA2_state_sha2_224 python_hashlib_Hacl_Streaming_SHA2_state_sha2_224 #define Hacl_Streaming_SHA2_state_sha2_256 python_hashlib_Hacl_Streaming_SHA2_state_sha2_256 
+#define Hacl_Streaming_SHA2_state_sha2_384_s python_hashlib_Hacl_Streaming_SHA2_state_sha2_384_s +#define Hacl_Streaming_SHA2_state_sha2_384 python_hashlib_Hacl_Streaming_SHA2_state_sha2_384 +#define Hacl_Streaming_SHA2_state_sha2_512 python_hashlib_Hacl_Streaming_SHA2_state_sha2_512 #define Hacl_Streaming_SHA2_create_in_256 python_hashlib_Hacl_Streaming_SHA2_create_in_256 #define Hacl_Streaming_SHA2_create_in_224 python_hashlib_Hacl_Streaming_SHA2_create_in_224 +#define Hacl_Streaming_SHA2_create_in_512 python_hashlib_Hacl_Streaming_SHA2_create_in_512 +#define Hacl_Streaming_SHA2_create_in_384 python_hashlib_Hacl_Streaming_SHA2_create_in_384 #define Hacl_Streaming_SHA2_copy_256 python_hashlib_Hacl_Streaming_SHA2_copy_256 #define Hacl_Streaming_SHA2_copy_224 python_hashlib_Hacl_Streaming_SHA2_copy_224 +#define Hacl_Streaming_SHA2_copy_512 python_hashlib_Hacl_Streaming_SHA2_copy_512 +#define Hacl_Streaming_SHA2_copy_384 python_hashlib_Hacl_Streaming_SHA2_copy_384 #define Hacl_Streaming_SHA2_init_256 python_hashlib_Hacl_Streaming_SHA2_init_256 #define Hacl_Streaming_SHA2_init_224 python_hashlib_Hacl_Streaming_SHA2_init_224 +#define Hacl_Streaming_SHA2_init_512 python_hashlib_Hacl_Streaming_SHA2_init_512 +#define Hacl_Streaming_SHA2_init_384 python_hashlib_Hacl_Streaming_SHA2_init_384 #define Hacl_Streaming_SHA2_update_256 python_hashlib_Hacl_Streaming_SHA2_update_256 #define Hacl_Streaming_SHA2_update_224 python_hashlib_Hacl_Streaming_SHA2_update_224 +#define Hacl_Streaming_SHA2_update_512 python_hashlib_Hacl_Streaming_SHA2_update_512 +#define Hacl_Streaming_SHA2_update_384 python_hashlib_Hacl_Streaming_SHA2_update_384 #define Hacl_Streaming_SHA2_finish_256 python_hashlib_Hacl_Streaming_SHA2_finish_256 #define Hacl_Streaming_SHA2_finish_224 python_hashlib_Hacl_Streaming_SHA2_finish_224 +#define Hacl_Streaming_SHA2_finish_512 python_hashlib_Hacl_Streaming_SHA2_finish_512 +#define Hacl_Streaming_SHA2_finish_384 python_hashlib_Hacl_Streaming_SHA2_finish_384 #define Hacl_Streaming_SHA2_free_256 python_hashlib_Hacl_Streaming_SHA2_free_256 #define Hacl_Streaming_SHA2_free_224 python_hashlib_Hacl_Streaming_SHA2_free_224 +#define Hacl_Streaming_SHA2_free_512 python_hashlib_Hacl_Streaming_SHA2_free_512 +#define Hacl_Streaming_SHA2_free_384 python_hashlib_Hacl_Streaming_SHA2_free_384 #define Hacl_Streaming_SHA2_sha256 python_hashlib_Hacl_Streaming_SHA2_sha256 #define Hacl_Streaming_SHA2_sha224 python_hashlib_Hacl_Streaming_SHA2_sha224 +#define Hacl_Streaming_SHA2_sha512 python_hashlib_Hacl_Streaming_SHA2_sha512 +#define Hacl_Streaming_SHA2_sha384 python_hashlib_Hacl_Streaming_SHA2_sha384 #endif // _PYTHON_HACL_NAMESPACES_H diff --git a/Modules/_hacl/internal/Hacl_SHA2_Generic.h b/Modules/_hacl/internal/Hacl_SHA2_Generic.h index 23f7cea1eb38..6ac47f3cf7ed 100644 --- a/Modules/_hacl/internal/Hacl_SHA2_Generic.h +++ b/Modules/_hacl/internal/Hacl_SHA2_Generic.h @@ -31,13 +31,10 @@ extern "C" { #endif #include -#include "krml/FStar_UInt_8_16_32_64.h" +#include "krml/types.h" #include "krml/lowstar_endianness.h" #include "krml/internal/target.h" - - - static const uint32_t Hacl_Impl_SHA2_Generic_h224[8U] = diff --git a/Modules/_hacl/refresh.sh b/Modules/_hacl/refresh.sh index 594873862a2d..dba8cb3972ea 100755 --- a/Modules/_hacl/refresh.sh +++ b/Modules/_hacl/refresh.sh @@ -22,7 +22,7 @@ fi # Update this when updating to a new version after verifying that the changes # the update brings in are good. 
-expected_hacl_star_rev=94aabbb4cf71347d3779a8db486c761403c6d036 +expected_hacl_star_rev=4751fc2b11639f651718abf8522fcc36902ca67c hacl_dir="$(realpath "$1")" cd "$(dirname "$0")" @@ -54,6 +54,8 @@ include_files=( declare -a lib_files lib_files=( krmllib/dist/minimal/FStar_UInt_8_16_32_64.h + krmllib/dist/minimal/fstar_uint128_struct_endianness.h + krmllib/dist/minimal/FStar_UInt128_Verified.h ) # C files for the algorithms themselves: current directory @@ -82,10 +84,27 @@ fi readarray -t all_files < <(find . -name '*.h' -or -name '*.c') -# types.h is a simple wrapper that defines the uint128 type then proceeds to -# include FStar_UInt_8_16_32_64.h; we jump the types.h step since our current -# selection of algorithms does not necessitate the use of uint128 -$sed -i 's!#include.*types.h"!#include "krml/FStar_UInt_8_16_32_64.h"!g' "${all_files[@]}" +# types.h originally contains a complex series of if-defs and auxiliary type +# definitions; here, we just need a proper uint128 type in scope +# is a simple wrapper that defines the uint128 type +cat > include/krml/types.h < + +typedef struct FStar_UInt128_uint128_s { + uint64_t low; + uint64_t high; +} FStar_UInt128_uint128, uint128_t; + +#define KRML_VERIFIED_UINT128 + +#include "krml/lowstar_endianness.h" +#include "krml/fstar_uint128_struct_endianness.h" +#include "krml/FStar_UInt128_Verified.h" +EOF +# Adjust the include path to reflect the local directory structure +$sed -i 's!#include.*types.h"!#include "krml/types.h"!g' "${all_files[@]}" $sed -i 's!#include.*compat.h"!!g' "${all_files[@]}" # FStar_UInt_8_16_32_64 contains definitions useful in the general case, but not @@ -103,22 +122,6 @@ $sed -i 's!#include.*Hacl_Krmllib.h"!!g' "${all_files[@]}" # included in $dist_files). $sed -i 's!#include.*internal/Hacl_Streaming_SHA2.h"!#include "Hacl_Streaming_SHA2.h"!g' "${all_files[@]}" -# The SHA2 file contains all variants of SHA2. We strip 384 and 512 for the time -# being, to be included later. -# This regexp matches a separator (two new lines), followed by: -# -# * -# ... 384 or 512 ... { -# * -# } -# -# The first non-empty lines are the comment block. The second ... may spill over -# the next following lines if the arguments are printed in one-per-line mode. -$sed -i -z 's/\n\n\([^\n]\+\n\)*[^\n]*\(384\|512\)[^{]*{\n\?\( [^\n]*\n\)*}//g' Hacl_Streaming_SHA2.c - -# Same thing with function prototypes -$sed -i -z 's/\n\n\([^\n]\+\n\)*[^\n]*\(384\|512\)[^;]*;//g' Hacl_Streaming_SHA2.h - # Use globally unique names for the Hacl_ C APIs to avoid linkage conflicts. $sed -i -z 's!#include \n!#include \n#include "python_hacl_namespaces.h"\n!' Hacl_Streaming_SHA2.h diff --git a/Modules/sha256module.c b/Modules/sha256module.c index 630e4bf03bbe..301c9837bb67 100644 --- a/Modules/sha256module.c +++ b/Modules/sha256module.c @@ -8,6 +8,7 @@ Andrew Kuchling (amk at amk.ca) Greg Stein (gstein at lyra.org) Trevor Perrin (trevp at trevp.net) + Jonathan Protzenko (jonathan at protzenko.fr) Copyright (C) 2005-2007 Gregory P. Smith (greg at krypto.org) Licensed to PSF under a Contributor Agreement. diff --git a/Modules/sha512module.c b/Modules/sha512module.c index bf4408b455f2..d7dfed4e5db0 100644 --- a/Modules/sha512module.c +++ b/Modules/sha512module.c @@ -8,6 +8,7 @@ Andrew Kuchling (amk at amk.ca) Greg Stein (gstein at lyra.org) Trevor Perrin (trevp at trevp.net) + Jonathan Protzenko (jonathan at protzenko.fr) Copyright (C) 2005-2007 Gregory P. Smith (greg at krypto.org) Licensed to PSF under a Contributor Agreement. 
@@ -31,400 +32,32 @@ class SHA512Type "SHAobject *" "&PyType_Type" [clinic start generated code]*/ /*[clinic end generated code: output=da39a3ee5e6b4b0d input=81a3ccde92bcfe8d]*/ -/* Some useful types */ - -typedef unsigned char SHA_BYTE; -typedef uint32_t SHA_INT32; /* 32-bit integer */ -typedef uint64_t SHA_INT64; /* 64-bit integer */ /* The SHA block size and message digest sizes, in bytes */ #define SHA_BLOCKSIZE 128 #define SHA_DIGESTSIZE 64 -/* The structure for storing SHA info */ +/* The SHA2-384 and SHA2-512 implementations defer to the HACL* verified + * library. */ + +#include "_hacl/Hacl_Streaming_SHA2.h" typedef struct { PyObject_HEAD - SHA_INT64 digest[8]; /* Message digest */ - SHA_INT32 count_lo, count_hi; /* 64-bit bit count */ - SHA_BYTE data[SHA_BLOCKSIZE]; /* SHA data buffer */ - int local; /* unprocessed amount in data */ int digestsize; + Hacl_Streaming_SHA2_state_sha2_512 *state; } SHAobject; #include "clinic/sha512module.c.h" -/* When run on a little-endian CPU we need to perform byte reversal on an - array of longwords. */ - -#if PY_LITTLE_ENDIAN -static void longReverse(SHA_INT64 *buffer, int byteCount) -{ - byteCount /= sizeof(*buffer); - for (; byteCount--; buffer++) { - *buffer = _Py_bswap64(*buffer); - } -} -#endif static void SHAcopy(SHAobject *src, SHAobject *dest) { - dest->local = src->local; dest->digestsize = src->digestsize; - dest->count_lo = src->count_lo; - dest->count_hi = src->count_hi; - memcpy(dest->digest, src->digest, sizeof(src->digest)); - memcpy(dest->data, src->data, sizeof(src->data)); -} - - -/* ------------------------------------------------------------------------ - * - * This code for the SHA-512 algorithm was noted as public domain. The - * original headers are pasted below. - * - * Several changes have been made to make it more compatible with the - * Python environment and desired interface. - * - */ - -/* LibTomCrypt, modular cryptographic library -- Tom St Denis - * - * LibTomCrypt is a library that provides various cryptographic - * algorithms in a highly modular and flexible manner. - * - * The library is free for all purposes without any express - * guarantee it works. 
- * - * Tom St Denis, tomstdenis at iahu.ca, https://www.libtom.net - */ - - -/* SHA512 by Tom St Denis */ - -/* Various logical functions */ -#define ROR64(x, y) \ - ( ((((x) & 0xFFFFFFFFFFFFFFFFULL)>>((unsigned long long)(y) & 63)) | \ - ((x)<<((unsigned long long)(64-((y) & 63))))) & 0xFFFFFFFFFFFFFFFFULL) -#define Ch(x,y,z) (z ^ (x & (y ^ z))) -#define Maj(x,y,z) (((x | y) & z) | (x & y)) -#define S(x, n) ROR64((x),(n)) -#define R(x, n) (((x) & 0xFFFFFFFFFFFFFFFFULL) >> ((unsigned long long)n)) -#define Sigma0(x) (S(x, 28) ^ S(x, 34) ^ S(x, 39)) -#define Sigma1(x) (S(x, 14) ^ S(x, 18) ^ S(x, 41)) -#define Gamma0(x) (S(x, 1) ^ S(x, 8) ^ R(x, 7)) -#define Gamma1(x) (S(x, 19) ^ S(x, 61) ^ R(x, 6)) - - -static void -sha512_transform(SHAobject *sha_info) -{ - int i; - SHA_INT64 S[8], W[80], t0, t1; - - memcpy(W, sha_info->data, sizeof(sha_info->data)); -#if PY_LITTLE_ENDIAN - longReverse(W, (int)sizeof(sha_info->data)); -#endif - - for (i = 16; i < 80; ++i) { - W[i] = Gamma1(W[i - 2]) + W[i - 7] + Gamma0(W[i - 15]) + W[i - 16]; - } - for (i = 0; i < 8; ++i) { - S[i] = sha_info->digest[i]; - } - - /* Compress */ -#define RND(a,b,c,d,e,f,g,h,i,ki) \ - t0 = h + Sigma1(e) + Ch(e, f, g) + ki + W[i]; \ - t1 = Sigma0(a) + Maj(a, b, c); \ - d += t0; \ - h = t0 + t1; - - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],0,0x428a2f98d728ae22ULL); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],1,0x7137449123ef65cdULL); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],2,0xb5c0fbcfec4d3b2fULL); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],3,0xe9b5dba58189dbbcULL); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],4,0x3956c25bf348b538ULL); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],5,0x59f111f1b605d019ULL); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],6,0x923f82a4af194f9bULL); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],7,0xab1c5ed5da6d8118ULL); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],8,0xd807aa98a3030242ULL); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],9,0x12835b0145706fbeULL); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],10,0x243185be4ee4b28cULL); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],11,0x550c7dc3d5ffb4e2ULL); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],12,0x72be5d74f27b896fULL); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],13,0x80deb1fe3b1696b1ULL); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],14,0x9bdc06a725c71235ULL); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],15,0xc19bf174cf692694ULL); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],16,0xe49b69c19ef14ad2ULL); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],17,0xefbe4786384f25e3ULL); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],18,0x0fc19dc68b8cd5b5ULL); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],19,0x240ca1cc77ac9c65ULL); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],20,0x2de92c6f592b0275ULL); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],21,0x4a7484aa6ea6e483ULL); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],22,0x5cb0a9dcbd41fbd4ULL); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],23,0x76f988da831153b5ULL); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],24,0x983e5152ee66dfabULL); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],25,0xa831c66d2db43210ULL); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],26,0xb00327c898fb213fULL); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],27,0xbf597fc7beef0ee4ULL); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],28,0xc6e00bf33da88fc2ULL); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],29,0xd5a79147930aa725ULL); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],30,0x06ca6351e003826fULL); - 
RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],31,0x142929670a0e6e70ULL); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],32,0x27b70a8546d22ffcULL); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],33,0x2e1b21385c26c926ULL); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],34,0x4d2c6dfc5ac42aedULL); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],35,0x53380d139d95b3dfULL); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],36,0x650a73548baf63deULL); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],37,0x766a0abb3c77b2a8ULL); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],38,0x81c2c92e47edaee6ULL); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],39,0x92722c851482353bULL); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],40,0xa2bfe8a14cf10364ULL); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],41,0xa81a664bbc423001ULL); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],42,0xc24b8b70d0f89791ULL); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],43,0xc76c51a30654be30ULL); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],44,0xd192e819d6ef5218ULL); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],45,0xd69906245565a910ULL); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],46,0xf40e35855771202aULL); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],47,0x106aa07032bbd1b8ULL); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],48,0x19a4c116b8d2d0c8ULL); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],49,0x1e376c085141ab53ULL); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],50,0x2748774cdf8eeb99ULL); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],51,0x34b0bcb5e19b48a8ULL); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],52,0x391c0cb3c5c95a63ULL); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],53,0x4ed8aa4ae3418acbULL); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],54,0x5b9cca4f7763e373ULL); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],55,0x682e6ff3d6b2b8a3ULL); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],56,0x748f82ee5defb2fcULL); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],57,0x78a5636f43172f60ULL); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],58,0x84c87814a1f0ab72ULL); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],59,0x8cc702081a6439ecULL); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],60,0x90befffa23631e28ULL); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],61,0xa4506cebde82bde9ULL); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],62,0xbef9a3f7b2c67915ULL); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],63,0xc67178f2e372532bULL); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],64,0xca273eceea26619cULL); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],65,0xd186b8c721c0c207ULL); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],66,0xeada7dd6cde0eb1eULL); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],67,0xf57d4f7fee6ed178ULL); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],68,0x06f067aa72176fbaULL); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],69,0x0a637dc5a2c898a6ULL); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],70,0x113f9804bef90daeULL); - RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],71,0x1b710b35131c471bULL); - RND(S[0],S[1],S[2],S[3],S[4],S[5],S[6],S[7],72,0x28db77f523047d84ULL); - RND(S[7],S[0],S[1],S[2],S[3],S[4],S[5],S[6],73,0x32caab7b40c72493ULL); - RND(S[6],S[7],S[0],S[1],S[2],S[3],S[4],S[5],74,0x3c9ebe0a15c9bebcULL); - RND(S[5],S[6],S[7],S[0],S[1],S[2],S[3],S[4],75,0x431d67c49c100d4cULL); - RND(S[4],S[5],S[6],S[7],S[0],S[1],S[2],S[3],76,0x4cc5d4becb3e42b6ULL); - RND(S[3],S[4],S[5],S[6],S[7],S[0],S[1],S[2],77,0x597f299cfc657e2aULL); - RND(S[2],S[3],S[4],S[5],S[6],S[7],S[0],S[1],78,0x5fcb6fab3ad6faecULL); - 
RND(S[1],S[2],S[3],S[4],S[5],S[6],S[7],S[0],79,0x6c44198c4a475817ULL); - -#undef RND - - /* feedback */ - for (i = 0; i < 8; i++) { - sha_info->digest[i] = sha_info->digest[i] + S[i]; - } - -} - - - -/* initialize the SHA digest */ - -static void -sha512_init(SHAobject *sha_info) -{ - sha_info->digest[0] = Py_ULL(0x6a09e667f3bcc908); - sha_info->digest[1] = Py_ULL(0xbb67ae8584caa73b); - sha_info->digest[2] = Py_ULL(0x3c6ef372fe94f82b); - sha_info->digest[3] = Py_ULL(0xa54ff53a5f1d36f1); - sha_info->digest[4] = Py_ULL(0x510e527fade682d1); - sha_info->digest[5] = Py_ULL(0x9b05688c2b3e6c1f); - sha_info->digest[6] = Py_ULL(0x1f83d9abfb41bd6b); - sha_info->digest[7] = Py_ULL(0x5be0cd19137e2179); - sha_info->count_lo = 0L; - sha_info->count_hi = 0L; - sha_info->local = 0; - sha_info->digestsize = 64; -} - -static void -sha384_init(SHAobject *sha_info) -{ - sha_info->digest[0] = Py_ULL(0xcbbb9d5dc1059ed8); - sha_info->digest[1] = Py_ULL(0x629a292a367cd507); - sha_info->digest[2] = Py_ULL(0x9159015a3070dd17); - sha_info->digest[3] = Py_ULL(0x152fecd8f70e5939); - sha_info->digest[4] = Py_ULL(0x67332667ffc00b31); - sha_info->digest[5] = Py_ULL(0x8eb44a8768581511); - sha_info->digest[6] = Py_ULL(0xdb0c2e0d64f98fa7); - sha_info->digest[7] = Py_ULL(0x47b5481dbefa4fa4); - sha_info->count_lo = 0L; - sha_info->count_hi = 0L; - sha_info->local = 0; - sha_info->digestsize = 48; -} - - -/* update the SHA digest */ - -static void -sha512_update(SHAobject *sha_info, SHA_BYTE *buffer, Py_ssize_t count) -{ - Py_ssize_t i; - SHA_INT32 clo; - - clo = sha_info->count_lo + ((SHA_INT32) count << 3); - if (clo < sha_info->count_lo) { - ++sha_info->count_hi; - } - sha_info->count_lo = clo; - sha_info->count_hi += (SHA_INT32) count >> 29; - if (sha_info->local) { - i = SHA_BLOCKSIZE - sha_info->local; - if (i > count) { - i = count; - } - memcpy(((SHA_BYTE *) sha_info->data) + sha_info->local, buffer, i); - count -= i; - buffer += i; - sha_info->local += (int)i; - if (sha_info->local == SHA_BLOCKSIZE) { - sha512_transform(sha_info); - } - else { - return; - } - } - while (count >= SHA_BLOCKSIZE) { - memcpy(sha_info->data, buffer, SHA_BLOCKSIZE); - buffer += SHA_BLOCKSIZE; - count -= SHA_BLOCKSIZE; - sha512_transform(sha_info); - } - memcpy(sha_info->data, buffer, count); - sha_info->local = (int)count; + dest->state = Hacl_Streaming_SHA2_copy_512(src->state); } -/* finish computing the SHA digest */ - -static void -sha512_final(unsigned char digest[SHA_DIGESTSIZE], SHAobject *sha_info) -{ - int count; - SHA_INT32 lo_bit_count, hi_bit_count; - - lo_bit_count = sha_info->count_lo; - hi_bit_count = sha_info->count_hi; - count = (int) ((lo_bit_count >> 3) & 0x7f); - ((SHA_BYTE *) sha_info->data)[count++] = 0x80; - if (count > SHA_BLOCKSIZE - 16) { - memset(((SHA_BYTE *) sha_info->data) + count, 0, - SHA_BLOCKSIZE - count); - sha512_transform(sha_info); - memset((SHA_BYTE *) sha_info->data, 0, SHA_BLOCKSIZE - 16); - } - else { - memset(((SHA_BYTE *) sha_info->data) + count, 0, - SHA_BLOCKSIZE - 16 - count); - } - - /* GJS: note that we add the hi/lo in big-endian. sha512_transform will - swap these values into host-order. 
*/ - sha_info->data[112] = 0; - sha_info->data[113] = 0; - sha_info->data[114] = 0; - sha_info->data[115] = 0; - sha_info->data[116] = 0; - sha_info->data[117] = 0; - sha_info->data[118] = 0; - sha_info->data[119] = 0; - sha_info->data[120] = (hi_bit_count >> 24) & 0xff; - sha_info->data[121] = (hi_bit_count >> 16) & 0xff; - sha_info->data[122] = (hi_bit_count >> 8) & 0xff; - sha_info->data[123] = (hi_bit_count >> 0) & 0xff; - sha_info->data[124] = (lo_bit_count >> 24) & 0xff; - sha_info->data[125] = (lo_bit_count >> 16) & 0xff; - sha_info->data[126] = (lo_bit_count >> 8) & 0xff; - sha_info->data[127] = (lo_bit_count >> 0) & 0xff; - sha512_transform(sha_info); - digest[ 0] = (unsigned char) ((sha_info->digest[0] >> 56) & 0xff); - digest[ 1] = (unsigned char) ((sha_info->digest[0] >> 48) & 0xff); - digest[ 2] = (unsigned char) ((sha_info->digest[0] >> 40) & 0xff); - digest[ 3] = (unsigned char) ((sha_info->digest[0] >> 32) & 0xff); - digest[ 4] = (unsigned char) ((sha_info->digest[0] >> 24) & 0xff); - digest[ 5] = (unsigned char) ((sha_info->digest[0] >> 16) & 0xff); - digest[ 6] = (unsigned char) ((sha_info->digest[0] >> 8) & 0xff); - digest[ 7] = (unsigned char) ((sha_info->digest[0] ) & 0xff); - digest[ 8] = (unsigned char) ((sha_info->digest[1] >> 56) & 0xff); - digest[ 9] = (unsigned char) ((sha_info->digest[1] >> 48) & 0xff); - digest[10] = (unsigned char) ((sha_info->digest[1] >> 40) & 0xff); - digest[11] = (unsigned char) ((sha_info->digest[1] >> 32) & 0xff); - digest[12] = (unsigned char) ((sha_info->digest[1] >> 24) & 0xff); - digest[13] = (unsigned char) ((sha_info->digest[1] >> 16) & 0xff); - digest[14] = (unsigned char) ((sha_info->digest[1] >> 8) & 0xff); - digest[15] = (unsigned char) ((sha_info->digest[1] ) & 0xff); - digest[16] = (unsigned char) ((sha_info->digest[2] >> 56) & 0xff); - digest[17] = (unsigned char) ((sha_info->digest[2] >> 48) & 0xff); - digest[18] = (unsigned char) ((sha_info->digest[2] >> 40) & 0xff); - digest[19] = (unsigned char) ((sha_info->digest[2] >> 32) & 0xff); - digest[20] = (unsigned char) ((sha_info->digest[2] >> 24) & 0xff); - digest[21] = (unsigned char) ((sha_info->digest[2] >> 16) & 0xff); - digest[22] = (unsigned char) ((sha_info->digest[2] >> 8) & 0xff); - digest[23] = (unsigned char) ((sha_info->digest[2] ) & 0xff); - digest[24] = (unsigned char) ((sha_info->digest[3] >> 56) & 0xff); - digest[25] = (unsigned char) ((sha_info->digest[3] >> 48) & 0xff); - digest[26] = (unsigned char) ((sha_info->digest[3] >> 40) & 0xff); - digest[27] = (unsigned char) ((sha_info->digest[3] >> 32) & 0xff); - digest[28] = (unsigned char) ((sha_info->digest[3] >> 24) & 0xff); - digest[29] = (unsigned char) ((sha_info->digest[3] >> 16) & 0xff); - digest[30] = (unsigned char) ((sha_info->digest[3] >> 8) & 0xff); - digest[31] = (unsigned char) ((sha_info->digest[3] ) & 0xff); - digest[32] = (unsigned char) ((sha_info->digest[4] >> 56) & 0xff); - digest[33] = (unsigned char) ((sha_info->digest[4] >> 48) & 0xff); - digest[34] = (unsigned char) ((sha_info->digest[4] >> 40) & 0xff); - digest[35] = (unsigned char) ((sha_info->digest[4] >> 32) & 0xff); - digest[36] = (unsigned char) ((sha_info->digest[4] >> 24) & 0xff); - digest[37] = (unsigned char) ((sha_info->digest[4] >> 16) & 0xff); - digest[38] = (unsigned char) ((sha_info->digest[4] >> 8) & 0xff); - digest[39] = (unsigned char) ((sha_info->digest[4] ) & 0xff); - digest[40] = (unsigned char) ((sha_info->digest[5] >> 56) & 0xff); - digest[41] = (unsigned char) ((sha_info->digest[5] >> 48) & 0xff); - digest[42] = 
(unsigned char) ((sha_info->digest[5] >> 40) & 0xff); - digest[43] = (unsigned char) ((sha_info->digest[5] >> 32) & 0xff); - digest[44] = (unsigned char) ((sha_info->digest[5] >> 24) & 0xff); - digest[45] = (unsigned char) ((sha_info->digest[5] >> 16) & 0xff); - digest[46] = (unsigned char) ((sha_info->digest[5] >> 8) & 0xff); - digest[47] = (unsigned char) ((sha_info->digest[5] ) & 0xff); - digest[48] = (unsigned char) ((sha_info->digest[6] >> 56) & 0xff); - digest[49] = (unsigned char) ((sha_info->digest[6] >> 48) & 0xff); - digest[50] = (unsigned char) ((sha_info->digest[6] >> 40) & 0xff); - digest[51] = (unsigned char) ((sha_info->digest[6] >> 32) & 0xff); - digest[52] = (unsigned char) ((sha_info->digest[6] >> 24) & 0xff); - digest[53] = (unsigned char) ((sha_info->digest[6] >> 16) & 0xff); - digest[54] = (unsigned char) ((sha_info->digest[6] >> 8) & 0xff); - digest[55] = (unsigned char) ((sha_info->digest[6] ) & 0xff); - digest[56] = (unsigned char) ((sha_info->digest[7] >> 56) & 0xff); - digest[57] = (unsigned char) ((sha_info->digest[7] >> 48) & 0xff); - digest[58] = (unsigned char) ((sha_info->digest[7] >> 40) & 0xff); - digest[59] = (unsigned char) ((sha_info->digest[7] >> 32) & 0xff); - digest[60] = (unsigned char) ((sha_info->digest[7] >> 24) & 0xff); - digest[61] = (unsigned char) ((sha_info->digest[7] >> 16) & 0xff); - digest[62] = (unsigned char) ((sha_info->digest[7] >> 8) & 0xff); - digest[63] = (unsigned char) ((sha_info->digest[7] ) & 0xff); -} - -/* - * End of copied SHA code. - * - * ------------------------------------------------------------------------ - */ - typedef struct { PyTypeObject* sha384_type; PyTypeObject* sha512_type; @@ -463,14 +96,31 @@ SHA_traverse(PyObject *ptr, visitproc visit, void *arg) } static void -SHA512_dealloc(PyObject *ptr) +SHA512_dealloc(SHAobject *ptr) { + Hacl_Streaming_SHA2_free_512(ptr->state); PyTypeObject *tp = Py_TYPE(ptr); PyObject_GC_UnTrack(ptr); PyObject_GC_Del(ptr); Py_DECREF(tp); } +/* HACL* takes a uint32_t for the length of its parameter, but Py_ssize_t can be + * 64 bits. */ +static void update_512(Hacl_Streaming_SHA2_state_sha2_512 *state, uint8_t *buf, Py_ssize_t len) { + /* Note: we explicitly ignore the error code on the basis that it would take > + * 1 billion years to overflow the maximum admissible length for this API + * (namely, 2^64-1 bytes). */ + while (len > UINT32_MAX) { + Hacl_Streaming_SHA2_update_512(state, buf, UINT32_MAX); + len -= UINT32_MAX; + buf += UINT32_MAX; + } + /* Cast to uint32_t is safe: upon exiting the loop, len <= UINT32_MAX, and + * therefore fits in a uint32_t */ + Hacl_Streaming_SHA2_update_512(state, buf, (uint32_t) len); +} + /* External methods for a hash object */ @@ -514,11 +164,10 @@ static PyObject * SHA512Type_digest_impl(SHAobject *self) /*[clinic end generated code: output=1080bbeeef7dde1b input=f6470dd359071f4b]*/ { - unsigned char digest[SHA_DIGESTSIZE]; - SHAobject temp; - - SHAcopy(self, &temp); - sha512_final(digest, &temp); + uint8_t digest[SHA_DIGESTSIZE]; + // HACL performs copies under the hood so that self->state remains valid + // after this call. 
+ Hacl_Streaming_SHA2_finish_512(self->state, digest); return PyBytes_FromStringAndSize((const char *)digest, self->digestsize); } @@ -532,13 +181,8 @@ static PyObject * SHA512Type_hexdigest_impl(SHAobject *self) /*[clinic end generated code: output=7373305b8601e18b input=498b877b25cbe0a2]*/ { - unsigned char digest[SHA_DIGESTSIZE]; - SHAobject temp; - - /* Get the raw (binary) digest value */ - SHAcopy(self, &temp); - sha512_final(digest, &temp); - + uint8_t digest[SHA_DIGESTSIZE]; + Hacl_Streaming_SHA2_finish_512(self->state, digest); return _Py_strhex((const char *)digest, self->digestsize); } @@ -559,7 +203,7 @@ SHA512Type_update(SHAobject *self, PyObject *obj) GET_BUFFER_VIEW_OR_ERROUT(obj, &buf); - sha512_update(self, buf.buf, buf.len); + update_512(self->state, buf.buf, buf.len); PyBuffer_Release(&buf); Py_RETURN_NONE; @@ -622,15 +266,6 @@ static PyType_Spec sha512_sha384_type_spec = { .slots = sha512_sha384_type_slots }; -static PyType_Slot sha512_sha512_type_slots[] = { - {Py_tp_dealloc, SHA512_dealloc}, - {Py_tp_methods, SHA_methods}, - {Py_tp_members, SHA_members}, - {Py_tp_getset, SHA_getseters}, - {Py_tp_traverse, SHA_traverse}, - {0,0} -}; - // Using PyType_GetModuleState() on this type is safe since // it cannot be subclassed: it does not have the Py_TPFLAGS_BASETYPE flag. static PyType_Spec sha512_sha512_type_spec = { @@ -638,7 +273,7 @@ static PyType_Spec sha512_sha512_type_spec = { .basicsize = sizeof(SHAobject), .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_DISALLOW_INSTANTIATION | Py_TPFLAGS_IMMUTABLETYPE | Py_TPFLAGS_HAVE_GC), - .slots = sha512_sha512_type_slots + .slots = sha512_sha384_type_slots }; /* The single module-level function: new() */ @@ -671,7 +306,8 @@ _sha512_sha512_impl(PyObject *module, PyObject *string, int usedforsecurity) return NULL; } - sha512_init(new); + new->state = Hacl_Streaming_SHA2_create_in_512(); + new->digestsize = 64; if (PyErr_Occurred()) { Py_DECREF(new); @@ -680,7 +316,7 @@ _sha512_sha512_impl(PyObject *module, PyObject *string, int usedforsecurity) return NULL; } if (string) { - sha512_update(new, buf.buf, buf.len); + update_512(new->state, buf.buf, buf.len); PyBuffer_Release(&buf); } @@ -715,7 +351,8 @@ _sha512_sha384_impl(PyObject *module, PyObject *string, int usedforsecurity) return NULL; } - sha384_init(new); + new->state = Hacl_Streaming_SHA2_create_in_384(); + new->digestsize = 48; if (PyErr_Occurred()) { Py_DECREF(new); @@ -724,7 +361,7 @@ _sha512_sha384_impl(PyObject *module, PyObject *string, int usedforsecurity) return NULL; } if (string) { - sha512_update(new, buf.buf, buf.len); + update_512(new->state, buf.buf, buf.len); PyBuffer_Release(&buf); } diff --git a/configure b/configure index 35088f9e5caf..c00a1e1d2ec9 100755 --- a/configure +++ b/configure @@ -26950,7 +26950,7 @@ fi as_fn_append MODULE_BLOCK "MODULE__SHA512_STATE=$py_cv_module__sha512$as_nl" if test "x$py_cv_module__sha512" = xyes; then : - + as_fn_append MODULE_BLOCK "MODULE__SHA512_CFLAGS=-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE$as_nl" fi diff --git a/configure.ac b/configure.ac index 1ab48e0d1c16..92a05c011026 100644 --- a/configure.ac +++ b/configure.ac @@ -7200,7 +7200,9 @@ PY_STDLIB_MOD([_sha1], [test "$with_builtin_sha1" = yes]) PY_STDLIB_MOD([_sha256], [test "$with_builtin_sha256" = yes], [], [-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE]) -PY_STDLIB_MOD([_sha512], [test "$with_builtin_sha512" = yes]) +PY_STDLIB_MOD([_sha512], + [test 
"$with_builtin_sha512" = yes], [], + [-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE]) PY_STDLIB_MOD([_sha3], [test "$with_builtin_sha3" = yes]) PY_STDLIB_MOD([_blake2], [test "$with_builtin_blake2" = yes], [], From webhook-mailer at python.org Tue Feb 14 05:20:19 2023 From: webhook-mailer at python.org (abalkin) Date: Tue, 14 Feb 2023 10:20:19 -0000 Subject: [Python-checkins] GH-101898: Fix missing term references for hashable definition (#101899) Message-ID: https://github.com/python/cpython/commit/3690688149dca11589af59b7704541336613199a commit: 3690688149dca11589af59b7704541336613199a branch: main author: Furkan Onder committer: abalkin date: 2023-02-14T14:20:11+04:00 summary: GH-101898: Fix missing term references for hashable definition (#101899) Fix missing term references for hashable definition files: M Doc/c-api/dict.rst M Doc/c-api/object.rst M Doc/faq/programming.rst M Doc/library/abc.rst M Doc/library/collections.abc.rst M Doc/library/collections.rst M Doc/library/datetime.rst M Doc/library/fractions.rst M Doc/library/functools.rst M Doc/library/graphlib.rst M Doc/library/inspect.rst M Doc/library/operator.rst M Doc/library/pathlib.rst M Doc/library/stdtypes.rst M Doc/library/typing.rst M Doc/reference/datamodel.rst diff --git a/Doc/c-api/dict.rst b/Doc/c-api/dict.rst index e5f28b59a701..34106ee6b1f3 100644 --- a/Doc/c-api/dict.rst +++ b/Doc/c-api/dict.rst @@ -80,7 +80,7 @@ Dictionary Objects .. c:function:: int PyDict_DelItem(PyObject *p, PyObject *key) - Remove the entry in dictionary *p* with key *key*. *key* must be hashable; + Remove the entry in dictionary *p* with key *key*. *key* must be :term:`hashable`; if it isn't, :exc:`TypeError` is raised. If *key* is not in the dictionary, :exc:`KeyError` is raised. Return ``0`` on success or ``-1`` on failure. diff --git a/Doc/c-api/object.rst b/Doc/c-api/object.rst index 5a25a2b6c9d3..84c72e7e108b 100644 --- a/Doc/c-api/object.rst +++ b/Doc/c-api/object.rst @@ -281,7 +281,7 @@ Object Protocol .. c:function:: Py_hash_t PyObject_HashNotImplemented(PyObject *o) - Set a :exc:`TypeError` indicating that ``type(o)`` is not hashable and return ``-1``. + Set a :exc:`TypeError` indicating that ``type(o)`` is not :term:`hashable` and return ``-1``. This function receives special treatment when stored in a ``tp_hash`` slot, allowing a type to explicitly indicate to the interpreter that it is not hashable. diff --git a/Doc/faq/programming.rst b/Doc/faq/programming.rst index ba42289f3466..38f9b171618b 100644 --- a/Doc/faq/programming.rst +++ b/Doc/faq/programming.rst @@ -1979,7 +1979,7 @@ method result will be released right away. The disadvantage is that if instances accumulate, so too will the accumulated method results. They can grow without bound. -The *lru_cache* approach works with methods that have hashable +The *lru_cache* approach works with methods that have :term:`hashable` arguments. It creates a reference to the instance unless special efforts are made to pass in weak references. diff --git a/Doc/library/abc.rst b/Doc/library/abc.rst index 3b74622e7ff4..274b2d69f0ab 100644 --- a/Doc/library/abc.rst +++ b/Doc/library/abc.rst @@ -21,7 +21,7 @@ The :mod:`collections` module has some concrete classes that derive from ABCs; these can, of course, be further derived. In addition, the :mod:`collections.abc` submodule has some ABCs that can be used to test whether a class or instance provides a particular interface, for example, if it is -hashable or if it is a mapping. 
+:term:`hashable` or if it is a mapping. This module provides the metaclass :class:`ABCMeta` for defining ABCs and diff --git a/Doc/library/collections.abc.rst b/Doc/library/collections.abc.rst index 132b0ce7192a..1ada0d352a0c 100644 --- a/Doc/library/collections.abc.rst +++ b/Doc/library/collections.abc.rst @@ -22,7 +22,7 @@ This module provides :term:`abstract base classes ` that can be used to test whether a class provides a particular interface; for -example, whether it is hashable or whether it is a mapping. +example, whether it is :term:`hashable` or whether it is a mapping. An :func:`issubclass` or :func:`isinstance` test for an interface works in one of three ways. @@ -406,7 +406,7 @@ Notes on using :class:`Set` and :class:`MutableSet` as a mixin: (3) The :class:`Set` mixin provides a :meth:`_hash` method to compute a hash value for the set; however, :meth:`__hash__` is not defined because not all sets - are hashable or immutable. To add set hashability using mixins, + are :term:`hashable` or immutable. To add set hashability using mixins, inherit from both :meth:`Set` and :meth:`Hashable`, then define ``__hash__ = Set._hash``. diff --git a/Doc/library/collections.rst b/Doc/library/collections.rst index 2cffc2300a22..bb46782c06e1 100644 --- a/Doc/library/collections.rst +++ b/Doc/library/collections.rst @@ -25,7 +25,7 @@ Python's general purpose built-in containers, :class:`dict`, :class:`list`, :func:`namedtuple` factory function for creating tuple subclasses with named fields :class:`deque` list-like container with fast appends and pops on either end :class:`ChainMap` dict-like class for creating a single view of multiple mappings -:class:`Counter` dict subclass for counting hashable objects +:class:`Counter` dict subclass for counting :term:`hashable` objects :class:`OrderedDict` dict subclass that remembers the order entries were added :class:`defaultdict` dict subclass that calls a factory function to supply missing values :class:`UserDict` wrapper around dictionary objects for easier dict subclassing @@ -242,7 +242,7 @@ For example:: .. class:: Counter([iterable-or-mapping]) - A :class:`Counter` is a :class:`dict` subclass for counting hashable objects. + A :class:`Counter` is a :class:`dict` subclass for counting :term:`hashable` objects. It is a collection where elements are stored as dictionary keys and their counts are stored as dictionary values. Counts are allowed to be any integer value including zero or negative counts. The :class:`Counter` diff --git a/Doc/library/datetime.rst b/Doc/library/datetime.rst index 2f1ab7c3dd4b..50827b27ebea 100644 --- a/Doc/library/datetime.rst +++ b/Doc/library/datetime.rst @@ -160,7 +160,7 @@ The :class:`date`, :class:`.datetime`, :class:`.time`, and :class:`timezone` typ share these common features: - Objects of these types are immutable. -- Objects of these types are hashable, meaning that they can be used as +- Objects of these types are :term:`hashable`, meaning that they can be used as dictionary keys. - Objects of these types support efficient pickling via the :mod:`pickle` module. diff --git a/Doc/library/fractions.rst b/Doc/library/fractions.rst index 06b0e038c89b..c61bbac892e0 100644 --- a/Doc/library/fractions.rst +++ b/Doc/library/fractions.rst @@ -77,7 +77,7 @@ another rational number, or from a string. The :class:`Fraction` class inherits from the abstract base class :class:`numbers.Rational`, and implements all of the methods and - operations from that class. 
:class:`Fraction` instances are hashable, + operations from that class. :class:`Fraction` instances are :term:`hashable`, and should be treated as immutable. In addition, :class:`Fraction` has the following properties and methods: diff --git a/Doc/library/functools.rst b/Doc/library/functools.rst index 2f0a9bd8be88..80a405e87d8d 100644 --- a/Doc/library/functools.rst +++ b/Doc/library/functools.rst @@ -147,7 +147,7 @@ The :mod:`functools` module defines the following functions: threads. Since a dictionary is used to cache results, the positional and keyword - arguments to the function must be hashable. + arguments to the function must be :term:`hashable`. Distinct argument patterns may be considered to be distinct calls with separate cache entries. For example, ``f(a=1, b=2)`` and ``f(b=2, a=1)`` diff --git a/Doc/library/graphlib.rst b/Doc/library/graphlib.rst index 2bc80da4ead2..fe7932e7a61c 100644 --- a/Doc/library/graphlib.rst +++ b/Doc/library/graphlib.rst @@ -17,7 +17,7 @@ .. class:: TopologicalSorter(graph=None) - Provides functionality to topologically sort a graph of hashable nodes. + Provides functionality to topologically sort a graph of :term:`hashable` nodes. A topological order is a linear ordering of the vertices in a graph such that for every directed edge u -> v from vertex u to vertex v, vertex u comes @@ -85,7 +85,7 @@ .. method:: add(node, *predecessors) Add a new node and its predecessors to the graph. Both the *node* and all - elements in *predecessors* must be hashable. + elements in *predecessors* must be :term:`hashable`. If called multiple times with the same node argument, the set of dependencies will be the union of all dependencies passed in. diff --git a/Doc/library/inspect.rst b/Doc/library/inspect.rst index 58b84a35a890..9c3be5a250a6 100644 --- a/Doc/library/inspect.rst +++ b/Doc/library/inspect.rst @@ -689,7 +689,7 @@ function. modified copy. .. versionchanged:: 3.5 - Signature objects are picklable and hashable. + Signature objects are picklable and :term:`hashable`. .. attribute:: Signature.empty @@ -768,7 +768,7 @@ function. you can use :meth:`Parameter.replace` to create a modified copy. .. versionchanged:: 3.5 - Parameter objects are picklable and hashable. + Parameter objects are picklable and :term:`hashable`. .. attribute:: Parameter.empty diff --git a/Doc/library/operator.rst b/Doc/library/operator.rst index 35e9b49ea8bc..6d90473f2367 100644 --- a/Doc/library/operator.rst +++ b/Doc/library/operator.rst @@ -327,7 +327,7 @@ expect a function argument. return g The items can be any type accepted by the operand's :meth:`__getitem__` - method. Dictionaries accept any hashable value. Lists, tuples, and + method. Dictionaries accept any :term:`hashable` value. Lists, tuples, and strings accept an index or a slice: >>> itemgetter(1)('ABCDEFG') diff --git a/Doc/library/pathlib.rst b/Doc/library/pathlib.rst index f222745a2c56..c8a734ecad8e 100644 --- a/Doc/library/pathlib.rst +++ b/Doc/library/pathlib.rst @@ -186,7 +186,7 @@ these classes, since they don't provide any operation that does system calls. General properties ^^^^^^^^^^^^^^^^^^ -Paths are immutable and hashable. Paths of a same flavour are comparable +Paths are immutable and :term:`hashable`. Paths of a same flavour are comparable and orderable. 
These properties respect the flavour's case-folding semantics:: diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst index 0ef03035a572..98bda6ded30f 100644 --- a/Doc/library/stdtypes.rst +++ b/Doc/library/stdtypes.rst @@ -3775,7 +3775,7 @@ copying. >>> data bytearray(b'z1spam') - One-dimensional memoryviews of hashable (read-only) types with formats + One-dimensional memoryviews of :term:`hashable` (read-only) types with formats 'B', 'b' or 'c' are also hashable. The hash is defined as ``hash(m) == hash(m.tobytes())``:: @@ -3789,7 +3789,7 @@ copying. .. versionchanged:: 3.3 One-dimensional memoryviews can now be sliced. - One-dimensional memoryviews with formats 'B', 'b' or 'c' are now hashable. + One-dimensional memoryviews with formats 'B', 'b' or 'c' are now :term:`hashable`. .. versionchanged:: 3.4 memoryview is now registered automatically with @@ -4710,7 +4710,7 @@ support membership tests: .. versionadded:: 3.10 -Keys views are set-like since their entries are unique and hashable. If all +Keys views are set-like since their entries are unique and :term:`hashable`. If all values are hashable, so that ``(key, value)`` pairs are unique and hashable, then the items view is also set-like. (Values views are not treated as set-like since the entries are generally not unique.) For set-like views, all of the diff --git a/Doc/library/typing.rst b/Doc/library/typing.rst index 356f919a1897..169f7196a74e 100644 --- a/Doc/library/typing.rst +++ b/Doc/library/typing.rst @@ -439,7 +439,7 @@ are intended primarily for static type checking. A user-defined generic class can have ABCs as base classes without a metaclass conflict. Generic metaclasses are not supported. The outcome of parameterizing -generics is cached, and most types in the typing module are hashable and +generics is cached, and most types in the typing module are :term:`hashable` and comparable for equality. diff --git a/Doc/reference/datamodel.rst b/Doc/reference/datamodel.rst index 1d2ddf3507ae..e4e471e50079 100644 --- a/Doc/reference/datamodel.rst +++ b/Doc/reference/datamodel.rst @@ -1525,7 +1525,7 @@ Basic customization :meth:`__hash__`, its instances will not be usable as items in hashable collections. If a class defines mutable objects and implements an :meth:`__eq__` method, it should not implement :meth:`__hash__`, since the - implementation of hashable collections requires that a key's hash value is + implementation of :term:`hashable` collections requires that a key's hash value is immutable (if the object's hash value changes, it will be in the wrong hash bucket). 
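A side note on the Doc/reference/datamodel.rst hunk above: a class that overrides __eq__ without also defining __hash__ has its __hash__ set to None, so its instances stop being usable as dict keys or set members. A minimal illustrative sketch of that behaviour (the Point class below is invented for this note, not taken from the patch):

    class Point:
        # Defines __eq__ but not __hash__, so Python sets __hash__ to None
        # and instances become unhashable.
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __eq__(self, other):
            return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    p = Point(1, 2)
    try:
        lookup = {p: "origin-adjacent"}   # dict keys must be hashable
    except TypeError as exc:
        print(exc)                        # unhashable type: 'Point'

If such a class is genuinely immutable, defining a __hash__ consistent with __eq__ (for example returning hash((self.x, self.y))) restores hashability.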
From webhook-mailer at python.org Tue Feb 14 05:45:50 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 14 Feb 2023 10:45:50 -0000 Subject: [Python-checkins] GH-101898: Fix missing term references for hashable definition (GH-101899) Message-ID: https://github.com/python/cpython/commit/4aeae286715a3a9fa624429733582917606000c3 commit: 4aeae286715a3a9fa624429733582917606000c3 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-14T02:45:42-08:00 summary: GH-101898: Fix missing term references for hashable definition (GH-101899) Fix missing term references for hashable definition (cherry picked from commit 3690688149dca11589af59b7704541336613199a) Co-authored-by: Furkan Onder files: M Doc/c-api/dict.rst M Doc/c-api/object.rst M Doc/faq/programming.rst M Doc/library/abc.rst M Doc/library/collections.abc.rst M Doc/library/collections.rst M Doc/library/datetime.rst M Doc/library/fractions.rst M Doc/library/functools.rst M Doc/library/graphlib.rst M Doc/library/inspect.rst M Doc/library/operator.rst M Doc/library/pathlib.rst M Doc/library/stdtypes.rst M Doc/library/typing.rst M Doc/reference/datamodel.rst diff --git a/Doc/c-api/dict.rst b/Doc/c-api/dict.rst index 819168d48707..be7b5f135243 100644 --- a/Doc/c-api/dict.rst +++ b/Doc/c-api/dict.rst @@ -80,7 +80,7 @@ Dictionary Objects .. c:function:: int PyDict_DelItem(PyObject *p, PyObject *key) - Remove the entry in dictionary *p* with key *key*. *key* must be hashable; + Remove the entry in dictionary *p* with key *key*. *key* must be :term:`hashable`; if it isn't, :exc:`TypeError` is raised. If *key* is not in the dictionary, :exc:`KeyError` is raised. Return ``0`` on success or ``-1`` on failure. diff --git a/Doc/c-api/object.rst b/Doc/c-api/object.rst index 5a25a2b6c9d3..84c72e7e108b 100644 --- a/Doc/c-api/object.rst +++ b/Doc/c-api/object.rst @@ -281,7 +281,7 @@ Object Protocol .. c:function:: Py_hash_t PyObject_HashNotImplemented(PyObject *o) - Set a :exc:`TypeError` indicating that ``type(o)`` is not hashable and return ``-1``. + Set a :exc:`TypeError` indicating that ``type(o)`` is not :term:`hashable` and return ``-1``. This function receives special treatment when stored in a ``tp_hash`` slot, allowing a type to explicitly indicate to the interpreter that it is not hashable. diff --git a/Doc/faq/programming.rst b/Doc/faq/programming.rst index ba42289f3466..38f9b171618b 100644 --- a/Doc/faq/programming.rst +++ b/Doc/faq/programming.rst @@ -1979,7 +1979,7 @@ method result will be released right away. The disadvantage is that if instances accumulate, so too will the accumulated method results. They can grow without bound. -The *lru_cache* approach works with methods that have hashable +The *lru_cache* approach works with methods that have :term:`hashable` arguments. It creates a reference to the instance unless special efforts are made to pass in weak references. diff --git a/Doc/library/abc.rst b/Doc/library/abc.rst index 3b74622e7ff4..274b2d69f0ab 100644 --- a/Doc/library/abc.rst +++ b/Doc/library/abc.rst @@ -21,7 +21,7 @@ The :mod:`collections` module has some concrete classes that derive from ABCs; these can, of course, be further derived. In addition, the :mod:`collections.abc` submodule has some ABCs that can be used to test whether a class or instance provides a particular interface, for example, if it is -hashable or if it is a mapping. +:term:`hashable` or if it is a mapping. 
This module provides the metaclass :class:`ABCMeta` for defining ABCs and diff --git a/Doc/library/collections.abc.rst b/Doc/library/collections.abc.rst index 132b0ce7192a..1ada0d352a0c 100644 --- a/Doc/library/collections.abc.rst +++ b/Doc/library/collections.abc.rst @@ -22,7 +22,7 @@ This module provides :term:`abstract base classes ` that can be used to test whether a class provides a particular interface; for -example, whether it is hashable or whether it is a mapping. +example, whether it is :term:`hashable` or whether it is a mapping. An :func:`issubclass` or :func:`isinstance` test for an interface works in one of three ways. @@ -406,7 +406,7 @@ Notes on using :class:`Set` and :class:`MutableSet` as a mixin: (3) The :class:`Set` mixin provides a :meth:`_hash` method to compute a hash value for the set; however, :meth:`__hash__` is not defined because not all sets - are hashable or immutable. To add set hashability using mixins, + are :term:`hashable` or immutable. To add set hashability using mixins, inherit from both :meth:`Set` and :meth:`Hashable`, then define ``__hash__ = Set._hash``. diff --git a/Doc/library/collections.rst b/Doc/library/collections.rst index 20863837fa1b..285d36210411 100644 --- a/Doc/library/collections.rst +++ b/Doc/library/collections.rst @@ -25,7 +25,7 @@ Python's general purpose built-in containers, :class:`dict`, :class:`list`, :func:`namedtuple` factory function for creating tuple subclasses with named fields :class:`deque` list-like container with fast appends and pops on either end :class:`ChainMap` dict-like class for creating a single view of multiple mappings -:class:`Counter` dict subclass for counting hashable objects +:class:`Counter` dict subclass for counting :term:`hashable` objects :class:`OrderedDict` dict subclass that remembers the order entries were added :class:`defaultdict` dict subclass that calls a factory function to supply missing values :class:`UserDict` wrapper around dictionary objects for easier dict subclassing @@ -241,7 +241,7 @@ For example:: .. class:: Counter([iterable-or-mapping]) - A :class:`Counter` is a :class:`dict` subclass for counting hashable objects. + A :class:`Counter` is a :class:`dict` subclass for counting :term:`hashable` objects. It is a collection where elements are stored as dictionary keys and their counts are stored as dictionary values. Counts are allowed to be any integer value including zero or negative counts. The :class:`Counter` diff --git a/Doc/library/datetime.rst b/Doc/library/datetime.rst index c7b96671eff7..443d80cf006b 100644 --- a/Doc/library/datetime.rst +++ b/Doc/library/datetime.rst @@ -160,7 +160,7 @@ The :class:`date`, :class:`.datetime`, :class:`.time`, and :class:`timezone` typ share these common features: - Objects of these types are immutable. -- Objects of these types are hashable, meaning that they can be used as +- Objects of these types are :term:`hashable`, meaning that they can be used as dictionary keys. - Objects of these types support efficient pickling via the :mod:`pickle` module. diff --git a/Doc/library/fractions.rst b/Doc/library/fractions.rst index 5f0ecf1f135e..85bec73eafc9 100644 --- a/Doc/library/fractions.rst +++ b/Doc/library/fractions.rst @@ -77,7 +77,7 @@ another rational number, or from a string. The :class:`Fraction` class inherits from the abstract base class :class:`numbers.Rational`, and implements all of the methods and - operations from that class. :class:`Fraction` instances are hashable, + operations from that class. 
:class:`Fraction` instances are :term:`hashable`, and should be treated as immutable. In addition, :class:`Fraction` has the following properties and methods: diff --git a/Doc/library/functools.rst b/Doc/library/functools.rst index 2f0a9bd8be88..80a405e87d8d 100644 --- a/Doc/library/functools.rst +++ b/Doc/library/functools.rst @@ -147,7 +147,7 @@ The :mod:`functools` module defines the following functions: threads. Since a dictionary is used to cache results, the positional and keyword - arguments to the function must be hashable. + arguments to the function must be :term:`hashable`. Distinct argument patterns may be considered to be distinct calls with separate cache entries. For example, ``f(a=1, b=2)`` and ``f(b=2, a=1)`` diff --git a/Doc/library/graphlib.rst b/Doc/library/graphlib.rst index 2bc80da4ead2..fe7932e7a61c 100644 --- a/Doc/library/graphlib.rst +++ b/Doc/library/graphlib.rst @@ -17,7 +17,7 @@ .. class:: TopologicalSorter(graph=None) - Provides functionality to topologically sort a graph of hashable nodes. + Provides functionality to topologically sort a graph of :term:`hashable` nodes. A topological order is a linear ordering of the vertices in a graph such that for every directed edge u -> v from vertex u to vertex v, vertex u comes @@ -85,7 +85,7 @@ .. method:: add(node, *predecessors) Add a new node and its predecessors to the graph. Both the *node* and all - elements in *predecessors* must be hashable. + elements in *predecessors* must be :term:`hashable`. If called multiple times with the same node argument, the set of dependencies will be the union of all dependencies passed in. diff --git a/Doc/library/inspect.rst b/Doc/library/inspect.rst index 9cb7a6f94e49..6a696c977bf1 100644 --- a/Doc/library/inspect.rst +++ b/Doc/library/inspect.rst @@ -668,7 +668,7 @@ function. modified copy. .. versionchanged:: 3.5 - Signature objects are picklable and hashable. + Signature objects are picklable and :term:`hashable`. .. attribute:: Signature.empty @@ -746,7 +746,7 @@ function. you can use :meth:`Parameter.replace` to create a modified copy. .. versionchanged:: 3.5 - Parameter objects are picklable and hashable. + Parameter objects are picklable and :term:`hashable`. .. attribute:: Parameter.empty diff --git a/Doc/library/operator.rst b/Doc/library/operator.rst index 35e9b49ea8bc..6d90473f2367 100644 --- a/Doc/library/operator.rst +++ b/Doc/library/operator.rst @@ -327,7 +327,7 @@ expect a function argument. return g The items can be any type accepted by the operand's :meth:`__getitem__` - method. Dictionaries accept any hashable value. Lists, tuples, and + method. Dictionaries accept any :term:`hashable` value. Lists, tuples, and strings accept an index or a slice: >>> itemgetter(1)('ABCDEFG') diff --git a/Doc/library/pathlib.rst b/Doc/library/pathlib.rst index e8bb00fc5743..abdeea248a31 100644 --- a/Doc/library/pathlib.rst +++ b/Doc/library/pathlib.rst @@ -186,7 +186,7 @@ these classes, since they don't provide any operation that does system calls. General properties ^^^^^^^^^^^^^^^^^^ -Paths are immutable and hashable. Paths of a same flavour are comparable +Paths are immutable and :term:`hashable`. Paths of a same flavour are comparable and orderable. These properties respect the flavour's case-folding semantics:: diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst index 2bc86e978387..1563c4dff138 100644 --- a/Doc/library/stdtypes.rst +++ b/Doc/library/stdtypes.rst @@ -3765,7 +3765,7 @@ copying. 
>>> data bytearray(b'z1spam') - One-dimensional memoryviews of hashable (read-only) types with formats + One-dimensional memoryviews of :term:`hashable` (read-only) types with formats 'B', 'b' or 'c' are also hashable. The hash is defined as ``hash(m) == hash(m.tobytes())``:: @@ -3779,7 +3779,7 @@ copying. .. versionchanged:: 3.3 One-dimensional memoryviews can now be sliced. - One-dimensional memoryviews with formats 'B', 'b' or 'c' are now hashable. + One-dimensional memoryviews with formats 'B', 'b' or 'c' are now :term:`hashable`. .. versionchanged:: 3.4 memoryview is now registered automatically with @@ -4699,7 +4699,7 @@ support membership tests: .. versionadded:: 3.10 -Keys views are set-like since their entries are unique and hashable. If all +Keys views are set-like since their entries are unique and :term:`hashable`. If all values are hashable, so that ``(key, value)`` pairs are unique and hashable, then the items view is also set-like. (Values views are not treated as set-like since the entries are generally not unique.) For set-like views, all of the diff --git a/Doc/library/typing.rst b/Doc/library/typing.rst index 7fc0aa3a8e62..b08977ef56be 100644 --- a/Doc/library/typing.rst +++ b/Doc/library/typing.rst @@ -439,7 +439,7 @@ are intended primarily for static type checking. A user-defined generic class can have ABCs as base classes without a metaclass conflict. Generic metaclasses are not supported. The outcome of parameterizing -generics is cached, and most types in the typing module are hashable and +generics is cached, and most types in the typing module are :term:`hashable` and comparable for equality. diff --git a/Doc/reference/datamodel.rst b/Doc/reference/datamodel.rst index afd4a5477ffb..dce48a466301 100644 --- a/Doc/reference/datamodel.rst +++ b/Doc/reference/datamodel.rst @@ -1525,7 +1525,7 @@ Basic customization :meth:`__hash__`, its instances will not be usable as items in hashable collections. If a class defines mutable objects and implements an :meth:`__eq__` method, it should not implement :meth:`__hash__`, since the - implementation of hashable collections requires that a key's hash value is + implementation of :term:`hashable` collections requires that a key's hash value is immutable (if the object's hash value changes, it will be in the wrong hash bucket). 
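The Doc/library/collections.abc.rst hunk in this backport keeps the note about adding set hashability via mixins: inherit from both Set and Hashable, then define __hash__ = Set._hash. A rough sketch of that recipe, using an invented FrozenPair class purely for illustration:

    from collections.abc import Hashable, Set

    class FrozenPair(Set, Hashable):
        # An immutable two-element set built on the Set mixin.
        def __init__(self, a, b):
            self._items = frozenset((a, b))

        def __contains__(self, item):
            return item in self._items

        def __iter__(self):
            return iter(self._items)

        def __len__(self):
            return len(self._items)

        # Opt back in to hashability, exactly as the documented note describes.
        __hash__ = Set._hash

    # Equal pairs hash equally, so the set below collapses to a single entry.
    print(len({FrozenPair(1, 2), FrozenPair(2, 1)}))   # 1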
From webhook-mailer at python.org Tue Feb 14 05:47:56 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 14 Feb 2023 10:47:56 -0000 Subject: [Python-checkins] GH-101898: Fix missing term references for hashable definition (GH-101899) Message-ID: https://github.com/python/cpython/commit/bc3718eb4b6defcdffb23b0c6f5879c5e721609a commit: bc3718eb4b6defcdffb23b0c6f5879c5e721609a branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-14T02:47:49-08:00 summary: GH-101898: Fix missing term references for hashable definition (GH-101899) Fix missing term references for hashable definition (cherry picked from commit 3690688149dca11589af59b7704541336613199a) Co-authored-by: Furkan Onder files: M Doc/c-api/dict.rst M Doc/c-api/object.rst M Doc/faq/programming.rst M Doc/library/abc.rst M Doc/library/collections.abc.rst M Doc/library/collections.rst M Doc/library/datetime.rst M Doc/library/fractions.rst M Doc/library/functools.rst M Doc/library/graphlib.rst M Doc/library/inspect.rst M Doc/library/operator.rst M Doc/library/pathlib.rst M Doc/library/stdtypes.rst M Doc/library/typing.rst M Doc/reference/datamodel.rst diff --git a/Doc/c-api/dict.rst b/Doc/c-api/dict.rst index 819168d48707..be7b5f135243 100644 --- a/Doc/c-api/dict.rst +++ b/Doc/c-api/dict.rst @@ -80,7 +80,7 @@ Dictionary Objects .. c:function:: int PyDict_DelItem(PyObject *p, PyObject *key) - Remove the entry in dictionary *p* with key *key*. *key* must be hashable; + Remove the entry in dictionary *p* with key *key*. *key* must be :term:`hashable`; if it isn't, :exc:`TypeError` is raised. If *key* is not in the dictionary, :exc:`KeyError` is raised. Return ``0`` on success or ``-1`` on failure. diff --git a/Doc/c-api/object.rst b/Doc/c-api/object.rst index eef42ca1326d..feadf503205e 100644 --- a/Doc/c-api/object.rst +++ b/Doc/c-api/object.rst @@ -263,7 +263,7 @@ Object Protocol .. c:function:: Py_hash_t PyObject_HashNotImplemented(PyObject *o) - Set a :exc:`TypeError` indicating that ``type(o)`` is not hashable and return ``-1``. + Set a :exc:`TypeError` indicating that ``type(o)`` is not :term:`hashable` and return ``-1``. This function receives special treatment when stored in a ``tp_hash`` slot, allowing a type to explicitly indicate to the interpreter that it is not hashable. diff --git a/Doc/faq/programming.rst b/Doc/faq/programming.rst index 1bcca4c62d27..00b9008aa1b0 100644 --- a/Doc/faq/programming.rst +++ b/Doc/faq/programming.rst @@ -1965,7 +1965,7 @@ method result will be released right away. The disadvantage is that if instances accumulate, so too will the accumulated method results. They can grow without bound. -The *lru_cache* approach works with methods that have hashable +The *lru_cache* approach works with methods that have :term:`hashable` arguments. It creates a reference to the instance unless special efforts are made to pass in weak references. diff --git a/Doc/library/abc.rst b/Doc/library/abc.rst index 3b74622e7ff4..274b2d69f0ab 100644 --- a/Doc/library/abc.rst +++ b/Doc/library/abc.rst @@ -21,7 +21,7 @@ The :mod:`collections` module has some concrete classes that derive from ABCs; these can, of course, be further derived. In addition, the :mod:`collections.abc` submodule has some ABCs that can be used to test whether a class or instance provides a particular interface, for example, if it is -hashable or if it is a mapping. +:term:`hashable` or if it is a mapping. 
This module provides the metaclass :class:`ABCMeta` for defining ABCs and diff --git a/Doc/library/collections.abc.rst b/Doc/library/collections.abc.rst index 2c941b47748d..6732b04fc1a5 100644 --- a/Doc/library/collections.abc.rst +++ b/Doc/library/collections.abc.rst @@ -22,7 +22,7 @@ This module provides :term:`abstract base classes ` that can be used to test whether a class provides a particular interface; for -example, whether it is hashable or whether it is a mapping. +example, whether it is :term:`hashable` or whether it is a mapping. An :func:`issubclass` or :func:`isinstance` test for an interface works in one of three ways. @@ -406,7 +406,7 @@ Notes on using :class:`Set` and :class:`MutableSet` as a mixin: (3) The :class:`Set` mixin provides a :meth:`_hash` method to compute a hash value for the set; however, :meth:`__hash__` is not defined because not all sets - are hashable or immutable. To add set hashability using mixins, + are :term:`hashable` or immutable. To add set hashability using mixins, inherit from both :meth:`Set` and :meth:`Hashable`, then define ``__hash__ = Set._hash``. diff --git a/Doc/library/collections.rst b/Doc/library/collections.rst index 20863837fa1b..285d36210411 100644 --- a/Doc/library/collections.rst +++ b/Doc/library/collections.rst @@ -25,7 +25,7 @@ Python's general purpose built-in containers, :class:`dict`, :class:`list`, :func:`namedtuple` factory function for creating tuple subclasses with named fields :class:`deque` list-like container with fast appends and pops on either end :class:`ChainMap` dict-like class for creating a single view of multiple mappings -:class:`Counter` dict subclass for counting hashable objects +:class:`Counter` dict subclass for counting :term:`hashable` objects :class:`OrderedDict` dict subclass that remembers the order entries were added :class:`defaultdict` dict subclass that calls a factory function to supply missing values :class:`UserDict` wrapper around dictionary objects for easier dict subclassing @@ -241,7 +241,7 @@ For example:: .. class:: Counter([iterable-or-mapping]) - A :class:`Counter` is a :class:`dict` subclass for counting hashable objects. + A :class:`Counter` is a :class:`dict` subclass for counting :term:`hashable` objects. It is a collection where elements are stored as dictionary keys and their counts are stored as dictionary values. Counts are allowed to be any integer value including zero or negative counts. The :class:`Counter` diff --git a/Doc/library/datetime.rst b/Doc/library/datetime.rst index e23d2a96cc26..9e1cd323ee16 100644 --- a/Doc/library/datetime.rst +++ b/Doc/library/datetime.rst @@ -154,7 +154,7 @@ The :class:`date`, :class:`.datetime`, :class:`.time`, and :class:`timezone` typ share these common features: - Objects of these types are immutable. -- Objects of these types are hashable, meaning that they can be used as +- Objects of these types are :term:`hashable`, meaning that they can be used as dictionary keys. - Objects of these types support efficient pickling via the :mod:`pickle` module. diff --git a/Doc/library/fractions.rst b/Doc/library/fractions.rst index 0f7940ae68be..43cfd554df0b 100644 --- a/Doc/library/fractions.rst +++ b/Doc/library/fractions.rst @@ -76,7 +76,7 @@ another rational number, or from a string. The :class:`Fraction` class inherits from the abstract base class :class:`numbers.Rational`, and implements all of the methods and - operations from that class. :class:`Fraction` instances are hashable, + operations from that class. 
:class:`Fraction` instances are :term:`hashable`, and should be treated as immutable. In addition, :class:`Fraction` has the following properties and methods: diff --git a/Doc/library/functools.rst b/Doc/library/functools.rst index 188443c2964d..4802420681c4 100644 --- a/Doc/library/functools.rst +++ b/Doc/library/functools.rst @@ -141,7 +141,7 @@ The :mod:`functools` module defines the following functions: function is periodically called with the same arguments. Since a dictionary is used to cache results, the positional and keyword - arguments to the function must be hashable. + arguments to the function must be :term:`hashable`. Distinct argument patterns may be considered to be distinct calls with separate cache entries. For example, `f(a=1, b=2)` and `f(b=2, a=1)` diff --git a/Doc/library/graphlib.rst b/Doc/library/graphlib.rst index 2bc80da4ead2..fe7932e7a61c 100644 --- a/Doc/library/graphlib.rst +++ b/Doc/library/graphlib.rst @@ -17,7 +17,7 @@ .. class:: TopologicalSorter(graph=None) - Provides functionality to topologically sort a graph of hashable nodes. + Provides functionality to topologically sort a graph of :term:`hashable` nodes. A topological order is a linear ordering of the vertices in a graph such that for every directed edge u -> v from vertex u to vertex v, vertex u comes @@ -85,7 +85,7 @@ .. method:: add(node, *predecessors) Add a new node and its predecessors to the graph. Both the *node* and all - elements in *predecessors* must be hashable. + elements in *predecessors* must be :term:`hashable`. If called multiple times with the same node argument, the set of dependencies will be the union of all dependencies passed in. diff --git a/Doc/library/inspect.rst b/Doc/library/inspect.rst index 56e6fff921ff..cc9c22fe3d97 100644 --- a/Doc/library/inspect.rst +++ b/Doc/library/inspect.rst @@ -641,7 +641,7 @@ function. modified copy. .. versionchanged:: 3.5 - Signature objects are picklable and hashable. + Signature objects are picklable and :term:`hashable`. .. attribute:: Signature.empty @@ -719,7 +719,7 @@ function. you can use :meth:`Parameter.replace` to create a modified copy. .. versionchanged:: 3.5 - Parameter objects are picklable and hashable. + Parameter objects are picklable and :term:`hashable`. .. attribute:: Parameter.empty diff --git a/Doc/library/operator.rst b/Doc/library/operator.rst index 0cdba68f3770..08497c4254d0 100644 --- a/Doc/library/operator.rst +++ b/Doc/library/operator.rst @@ -316,7 +316,7 @@ expect a function argument. return g The items can be any type accepted by the operand's :meth:`__getitem__` - method. Dictionaries accept any hashable value. Lists, tuples, and + method. Dictionaries accept any :term:`hashable` value. Lists, tuples, and strings accept an index or a slice: >>> itemgetter(1)('ABCDEFG') diff --git a/Doc/library/pathlib.rst b/Doc/library/pathlib.rst index 878b97926bd2..593674cf825f 100644 --- a/Doc/library/pathlib.rst +++ b/Doc/library/pathlib.rst @@ -186,7 +186,7 @@ these classes, since they don't provide any operation that does system calls. General properties ^^^^^^^^^^^^^^^^^^ -Paths are immutable and hashable. Paths of a same flavour are comparable +Paths are immutable and :term:`hashable`. Paths of a same flavour are comparable and orderable. These properties respect the flavour's case-folding semantics:: diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst index 5750c7cb68c8..7cdde5334e24 100644 --- a/Doc/library/stdtypes.rst +++ b/Doc/library/stdtypes.rst @@ -3725,7 +3725,7 @@ copying. 
>>> data bytearray(b'z1spam') - One-dimensional memoryviews of hashable (read-only) types with formats + One-dimensional memoryviews of :term:`hashable` (read-only) types with formats 'B', 'b' or 'c' are also hashable. The hash is defined as ``hash(m) == hash(m.tobytes())``:: @@ -3739,7 +3739,7 @@ copying. .. versionchanged:: 3.3 One-dimensional memoryviews can now be sliced. - One-dimensional memoryviews with formats 'B', 'b' or 'c' are now hashable. + One-dimensional memoryviews with formats 'B', 'b' or 'c' are now :term:`hashable`. .. versionchanged:: 3.4 memoryview is now registered automatically with @@ -4659,7 +4659,7 @@ support membership tests: .. versionadded:: 3.10 -Keys views are set-like since their entries are unique and hashable. If all +Keys views are set-like since their entries are unique and :term:`hashable`. If all values are hashable, so that ``(key, value)`` pairs are unique and hashable, then the items view is also set-like. (Values views are not treated as set-like since the entries are generally not unique.) For set-like views, all of the diff --git a/Doc/library/typing.rst b/Doc/library/typing.rst index d415f149027f..276133653376 100644 --- a/Doc/library/typing.rst +++ b/Doc/library/typing.rst @@ -425,7 +425,7 @@ are intended primarily for static type checking. A user-defined generic class can have ABCs as base classes without a metaclass conflict. Generic metaclasses are not supported. The outcome of parameterizing -generics is cached, and most types in the typing module are hashable and +generics is cached, and most types in the typing module are :term:`hashable` and comparable for equality. diff --git a/Doc/reference/datamodel.rst b/Doc/reference/datamodel.rst index 6aed328a9fdb..e5a1e4d89f74 100644 --- a/Doc/reference/datamodel.rst +++ b/Doc/reference/datamodel.rst @@ -1490,7 +1490,7 @@ Basic customization :meth:`__hash__`, its instances will not be usable as items in hashable collections. If a class defines mutable objects and implements an :meth:`__eq__` method, it should not implement :meth:`__hash__`, since the - implementation of hashable collections requires that a key's hash value is + implementation of :term:`hashable` collections requires that a key's hash value is immutable (if the object's hash value changes, it will be in the wrong hash bucket). 
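One practical consequence of the Doc/library/functools.rst and FAQ wording touched in this backport: lru_cache builds its cache key from the positional and keyword arguments, so every argument must be hashable. A short sketch (the total function is made up for this example):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def total(values):
        return sum(values)

    print(total((1, 2, 3)))        # 6; tuples are hashable, so the result is cached
    try:
        total([1, 2, 3])           # lists are mutable and unhashable
    except TypeError as exc:
        print(exc)                 # unhashable type: 'list'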
From webhook-mailer at python.org Tue Feb 14 06:54:51 2023 From: webhook-mailer at python.org (iritkatriel) Date: Tue, 14 Feb 2023 11:54:51 -0000 Subject: [Python-checkins] gh-101799: implement PREP_RERAISE_STAR as an intrinsic function (#101800) Message-ID: https://github.com/python/cpython/commit/81e3aa835c32363f4547b6566edf1125386f1f6d commit: 81e3aa835c32363f4547b6566edf1125386f1f6d branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-14T11:54:13Z summary: gh-101799: implement PREP_RERAISE_STAR as an intrinsic function (#101800) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-13-18-21-14.gh-issue-101799.wpHbCn.rst M Doc/library/dis.rst M Include/internal/pycore_intrinsics.h M Include/internal/pycore_opcode.h M Include/opcode.h M Lib/importlib/_bootstrap_external.py M Lib/opcode.py M Python/bytecodes.c M Python/compile.c M Python/generated_cases.c.h M Python/intrinsics.c M Python/opcode_metadata.h M Python/opcode_targets.h diff --git a/Doc/library/dis.rst b/Doc/library/dis.rst index b1e61d7e77b2..a5bc5e7fb6ea 100644 --- a/Doc/library/dis.rst +++ b/Doc/library/dis.rst @@ -768,16 +768,6 @@ iterations of the loop. .. versionadded:: 3.11 -.. opcode:: PREP_RERAISE_STAR - - Combines the raised and reraised exceptions list from ``STACK[-1]``, into an - exception group to propagate from a try-except* block. Uses the original exception - group from ``STACK[-2]`` to reconstruct the structure of reraised exceptions. Pops - two items from the stack and pushes the exception to reraise or ``None`` - if there isn't one. - - .. versionadded:: 3.11 - .. opcode:: WITH_EXCEPT_START Calls the function in position 4 on the stack with arguments (type, val, tb) @@ -1515,7 +1505,8 @@ iterations of the loop. .. opcode:: CALL_INTRINSIC_1 Calls an intrinsic function with one argument. Passes ``STACK[-1]`` as the - argument and sets ``STACK[-1]`` to the result. Used to implement functionality that is necessary but not performance critical. + argument and sets ``STACK[-1]`` to the result. Used to implement + functionality that is necessary but not performance critical. The operand determines which intrinsic function is called: @@ -1529,6 +1520,19 @@ iterations of the loop. .. versionadded:: 3.12 +.. opcode:: CALL_INTRINSIC_2 + + Calls an intrinsic function with two arguments. Passes ``STACK[-2]``, ``STACK[-1]`` as the + arguments and sets ``STACK[-1]`` to the result. Used to implement functionality that is + necessary but not performance critical. + + The operand determines which intrinsic function is called: + + * ``0`` Not valid + * ``1`` Calculates the :exc:`ExceptionGroup` to raise from a ``try-except*``. + + .. 
versionadded:: 3.12 + **Pseudo-instructions** diff --git a/Include/internal/pycore_intrinsics.h b/Include/internal/pycore_intrinsics.h index 1da618f2b4a5..deac145fff76 100644 --- a/Include/internal/pycore_intrinsics.h +++ b/Include/internal/pycore_intrinsics.h @@ -1,4 +1,6 @@ +/* Unary Functions: */ + #define INTRINSIC_PRINT 1 #define INTRINSIC_IMPORT_STAR 2 #define INTRINSIC_STOPITERATION_ERROR 3 @@ -8,6 +10,17 @@ #define MAX_INTRINSIC_1 6 + +/* Binary Functions: */ + +#define INTRINSIC_PREP_RERAISE_STAR 1 + +#define MAX_INTRINSIC_2 1 + + typedef PyObject *(*instrinsic_func1)(PyThreadState* tstate, PyObject *value); +typedef PyObject *(*instrinsic_func2)(PyThreadState* tstate, PyObject *value1, PyObject *value2); extern instrinsic_func1 _PyIntrinsics_UnaryFunctions[]; +extern instrinsic_func2 _PyIntrinsics_BinaryFunctions[]; + diff --git a/Include/internal/pycore_opcode.h b/Include/internal/pycore_opcode.h index 5e65adee9e00..f9ab95ca4bb9 100644 --- a/Include/internal/pycore_opcode.h +++ b/Include/internal/pycore_opcode.h @@ -87,6 +87,7 @@ const uint8_t _PyOpcode_Deopt[256] = { [CALL_BUILTIN_FAST_WITH_KEYWORDS] = CALL, [CALL_FUNCTION_EX] = CALL_FUNCTION_EX, [CALL_INTRINSIC_1] = CALL_INTRINSIC_1, + [CALL_INTRINSIC_2] = CALL_INTRINSIC_2, [CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS] = CALL, [CALL_NO_KW_BUILTIN_FAST] = CALL, [CALL_NO_KW_BUILTIN_O] = CALL, @@ -187,7 +188,6 @@ const uint8_t _PyOpcode_Deopt[256] = { [POP_JUMP_IF_NOT_NONE] = POP_JUMP_IF_NOT_NONE, [POP_JUMP_IF_TRUE] = POP_JUMP_IF_TRUE, [POP_TOP] = POP_TOP, - [PREP_RERAISE_STAR] = PREP_RERAISE_STAR, [PUSH_EXC_INFO] = PUSH_EXC_INFO, [PUSH_NULL] = PUSH_NULL, [RAISE_VARARGS] = RAISE_VARARGS, @@ -319,7 +319,7 @@ static const char *const _PyOpcode_OpName[263] = { [SETUP_ANNOTATIONS] = "SETUP_ANNOTATIONS", [STORE_ATTR_INSTANCE_VALUE] = "STORE_ATTR_INSTANCE_VALUE", [STORE_ATTR_SLOT] = "STORE_ATTR_SLOT", - [PREP_RERAISE_STAR] = "PREP_RERAISE_STAR", + [STORE_ATTR_WITH_HINT] = "STORE_ATTR_WITH_HINT", [POP_EXCEPT] = "POP_EXCEPT", [STORE_NAME] = "STORE_NAME", [DELETE_NAME] = "DELETE_NAME", @@ -344,7 +344,7 @@ static const char *const _PyOpcode_OpName[263] = { [JUMP_FORWARD] = "JUMP_FORWARD", [JUMP_IF_FALSE_OR_POP] = "JUMP_IF_FALSE_OR_POP", [JUMP_IF_TRUE_OR_POP] = "JUMP_IF_TRUE_OR_POP", - [STORE_ATTR_WITH_HINT] = "STORE_ATTR_WITH_HINT", + [STORE_FAST__LOAD_FAST] = "STORE_FAST__LOAD_FAST", [POP_JUMP_IF_FALSE] = "POP_JUMP_IF_FALSE", [POP_JUMP_IF_TRUE] = "POP_JUMP_IF_TRUE", [LOAD_GLOBAL] = "LOAD_GLOBAL", @@ -374,7 +374,7 @@ static const char *const _PyOpcode_OpName[263] = { [JUMP_BACKWARD] = "JUMP_BACKWARD", [COMPARE_AND_BRANCH] = "COMPARE_AND_BRANCH", [CALL_FUNCTION_EX] = "CALL_FUNCTION_EX", - [STORE_FAST__LOAD_FAST] = "STORE_FAST__LOAD_FAST", + [STORE_FAST__STORE_FAST] = "STORE_FAST__STORE_FAST", [EXTENDED_ARG] = "EXTENDED_ARG", [LIST_APPEND] = "LIST_APPEND", [SET_ADD] = "SET_ADD", @@ -384,20 +384,20 @@ static const char *const _PyOpcode_OpName[263] = { [YIELD_VALUE] = "YIELD_VALUE", [RESUME] = "RESUME", [MATCH_CLASS] = "MATCH_CLASS", - [STORE_FAST__STORE_FAST] = "STORE_FAST__STORE_FAST", [STORE_SUBSCR_DICT] = "STORE_SUBSCR_DICT", + [STORE_SUBSCR_LIST_INT] = "STORE_SUBSCR_LIST_INT", [FORMAT_VALUE] = "FORMAT_VALUE", [BUILD_CONST_KEY_MAP] = "BUILD_CONST_KEY_MAP", [BUILD_STRING] = "BUILD_STRING", - [STORE_SUBSCR_LIST_INT] = "STORE_SUBSCR_LIST_INT", [UNPACK_SEQUENCE_LIST] = "UNPACK_SEQUENCE_LIST", [UNPACK_SEQUENCE_TUPLE] = "UNPACK_SEQUENCE_TUPLE", [UNPACK_SEQUENCE_TWO_TUPLE] = "UNPACK_SEQUENCE_TWO_TUPLE", + [SEND_GEN] = "SEND_GEN", [LIST_EXTEND] = 
"LIST_EXTEND", [SET_UPDATE] = "SET_UPDATE", [DICT_MERGE] = "DICT_MERGE", [DICT_UPDATE] = "DICT_UPDATE", - [SEND_GEN] = "SEND_GEN", + [166] = "<166>", [167] = "<167>", [168] = "<168>", [169] = "<169>", @@ -405,7 +405,7 @@ static const char *const _PyOpcode_OpName[263] = { [CALL] = "CALL", [KW_NAMES] = "KW_NAMES", [CALL_INTRINSIC_1] = "CALL_INTRINSIC_1", - [174] = "<174>", + [CALL_INTRINSIC_2] = "CALL_INTRINSIC_2", [175] = "<175>", [176] = "<176>", [177] = "<177>", @@ -498,11 +498,11 @@ static const char *const _PyOpcode_OpName[263] = { #endif #define EXTRA_CASES \ + case 166: \ case 167: \ case 168: \ case 169: \ case 170: \ - case 174: \ case 175: \ case 176: \ case 177: \ diff --git a/Include/opcode.h b/Include/opcode.h index d643741c3c3a..760ff945f31f 100644 --- a/Include/opcode.h +++ b/Include/opcode.h @@ -43,7 +43,6 @@ extern "C" { #define RETURN_GENERATOR 75 #define RETURN_VALUE 83 #define SETUP_ANNOTATIONS 85 -#define PREP_RERAISE_STAR 88 #define POP_EXCEPT 89 #define HAVE_ARGUMENT 90 #define STORE_NAME 90 @@ -117,6 +116,7 @@ extern "C" { #define CALL 171 #define KW_NAMES 172 #define CALL_INTRINSIC_1 173 +#define CALL_INTRINSIC_2 174 #define MIN_PSEUDO_OPCODE 256 #define SETUP_FINALLY 256 #define SETUP_CLEANUP 257 @@ -179,15 +179,15 @@ extern "C" { #define LOAD_GLOBAL_MODULE 84 #define STORE_ATTR_INSTANCE_VALUE 86 #define STORE_ATTR_SLOT 87 -#define STORE_ATTR_WITH_HINT 113 -#define STORE_FAST__LOAD_FAST 143 -#define STORE_FAST__STORE_FAST 153 -#define STORE_SUBSCR_DICT 154 -#define STORE_SUBSCR_LIST_INT 158 -#define UNPACK_SEQUENCE_LIST 159 -#define UNPACK_SEQUENCE_TUPLE 160 -#define UNPACK_SEQUENCE_TWO_TUPLE 161 -#define SEND_GEN 166 +#define STORE_ATTR_WITH_HINT 88 +#define STORE_FAST__LOAD_FAST 113 +#define STORE_FAST__STORE_FAST 143 +#define STORE_SUBSCR_DICT 153 +#define STORE_SUBSCR_LIST_INT 154 +#define UNPACK_SEQUENCE_LIST 158 +#define UNPACK_SEQUENCE_TUPLE 159 +#define UNPACK_SEQUENCE_TWO_TUPLE 160 +#define SEND_GEN 161 #define DO_TRACING 255 #define HAS_ARG(op) ((((op) >= HAVE_ARGUMENT) && (!IS_PSEUDO_OPCODE(op)))\ diff --git a/Lib/importlib/_bootstrap_external.py b/Lib/importlib/_bootstrap_external.py index 38d4a384c2cc..954401cfa85e 100644 --- a/Lib/importlib/_bootstrap_external.py +++ b/Lib/importlib/_bootstrap_external.py @@ -433,6 +433,7 @@ def _write_atomic(path, data, mode=0o666): # Python 3.12a5 3517 (Change YIELD_VALUE oparg to exception block depth) # Python 3.12a5 3518 (Add RETURN_CONST instruction) # Python 3.12a5 3519 (Modify SEND instruction) +# Python 3.12a5 3520 (Remove PREP_RERAISE_STAR, add CALL_INTRINSIC_2) # Python 3.13 will start with 3550 @@ -445,7 +446,7 @@ def _write_atomic(path, data, mode=0o666): # Whenever MAGIC_NUMBER is changed, the ranges in the magic_values array # in PC/launcher.c must also be updated. 
-MAGIC_NUMBER = (3519).to_bytes(2, 'little') + b'\r\n' +MAGIC_NUMBER = (3520).to_bytes(2, 'little') + b'\r\n' _RAW_MAGIC_NUMBER = int.from_bytes(MAGIC_NUMBER, 'little') # For import.c diff --git a/Lib/opcode.py b/Lib/opcode.py index b69cd1bbdd61..809d24e51676 100644 --- a/Lib/opcode.py +++ b/Lib/opcode.py @@ -127,7 +127,6 @@ def pseudo_op(name, op, real_ops): def_op('SETUP_ANNOTATIONS', 85) -def_op('PREP_RERAISE_STAR', 88) def_op('POP_EXCEPT', 89) HAVE_ARGUMENT = 90 # real opcodes from here have an argument: @@ -224,6 +223,7 @@ def pseudo_op(name, op, real_ops): def_op('KW_NAMES', 172) hasconst.append(172) def_op('CALL_INTRINSIC_1', 173) +def_op('CALL_INTRINSIC_2', 174) hasarg.extend([op for op in opmap.values() if op >= HAVE_ARGUMENT]) diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-13-18-21-14.gh-issue-101799.wpHbCn.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-13-18-21-14.gh-issue-101799.wpHbCn.rst new file mode 100644 index 000000000000..3233a573be7a --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-13-18-21-14.gh-issue-101799.wpHbCn.rst @@ -0,0 +1,2 @@ +Add :opcode:`CALL_INTRINSIC_2` and use it instead of +:opcode:`PREP_RERAISE_STAR`. diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 429cd7fdafa1..be54e5f6f589 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -501,7 +501,14 @@ dummy_func( inst(CALL_INTRINSIC_1, (value -- res)) { assert(oparg <= MAX_INTRINSIC_1); res = _PyIntrinsics_UnaryFunctions[oparg](tstate, value); - Py_DECREF(value); + DECREF_INPUTS(); + ERROR_IF(res == NULL, error); + } + + inst(CALL_INTRINSIC_2, (value2, value1 -- res)) { + assert(oparg <= MAX_INTRINSIC_2); + res = _PyIntrinsics_BinaryFunctions[oparg](tstate, value2, value1); + DECREF_INPUTS(); ERROR_IF(res == NULL, error); } @@ -788,15 +795,6 @@ dummy_func( goto exception_unwind; } - inst(PREP_RERAISE_STAR, (orig, excs -- val)) { - assert(PyList_Check(excs)); - - val = _PyExc_PrepReraiseStar(orig, excs); - DECREF_INPUTS(); - - ERROR_IF(val == NULL, error); - } - inst(END_ASYNC_FOR, (awaitable, exc -- )) { assert(exc && PyExceptionInstance_Check(exc)); if (PyErr_GivenExceptionMatches(exc, PyExc_StopAsyncIteration)) { @@ -2383,7 +2381,7 @@ dummy_func( } // Cache layout: counter/1, func_version/2, min_args/1 - // Neither CALL_INTRINSIC_1 nor CALL_FUNCTION_EX are members! + // Neither CALL_INTRINSIC_1/2 nor CALL_FUNCTION_EX are members! 
family(call, INLINE_CACHE_ENTRIES_CALL) = { CALL, CALL_BOUND_METHOD_EXACT_ARGS, diff --git a/Python/compile.c b/Python/compile.c index b49eda314eee..0534b536e3d1 100644 --- a/Python/compile.c +++ b/Python/compile.c @@ -3431,7 +3431,7 @@ compiler_try_except(struct compiler *c, stmt_ty s) [orig, res, rest] Ln+1: LIST_APPEND 1 ) add unhandled exc to res (could be None) - [orig, res] PREP_RERAISE_STAR + [orig, res] CALL_INTRINSIC_2 PREP_RERAISE_STAR [exc] COPY 1 [exc, exc] POP_JUMP_IF_NOT_NONE RER [exc] POP_TOP @@ -3580,7 +3580,7 @@ compiler_try_star_except(struct compiler *c, stmt_ty s) NEW_JUMP_TARGET_LABEL(c, reraise); USE_LABEL(c, reraise_star); - ADDOP(c, NO_LOCATION, PREP_RERAISE_STAR); + ADDOP_I(c, NO_LOCATION, CALL_INTRINSIC_2, INTRINSIC_PREP_RERAISE_STAR); ADDOP_I(c, NO_LOCATION, COPY, 1); ADDOP_JUMP(c, NO_LOCATION, POP_JUMP_IF_NOT_NONE, reraise); diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 093ebff026b5..beb797cbd233 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -689,6 +689,20 @@ DISPATCH(); } + TARGET(CALL_INTRINSIC_2) { + PyObject *value1 = PEEK(1); + PyObject *value2 = PEEK(2); + PyObject *res; + assert(oparg <= MAX_INTRINSIC_2); + res = _PyIntrinsics_BinaryFunctions[oparg](tstate, value2, value1); + Py_DECREF(value2); + Py_DECREF(value1); + if (res == NULL) goto pop_2_error; + STACK_SHRINK(1); + POKE(1, res); + DISPATCH(); + } + TARGET(RAISE_VARARGS) { PyObject **args = &PEEK(oparg); PyObject *cause = NULL, *exc = NULL; @@ -999,22 +1013,6 @@ goto exception_unwind; } - TARGET(PREP_RERAISE_STAR) { - PyObject *excs = PEEK(1); - PyObject *orig = PEEK(2); - PyObject *val; - assert(PyList_Check(excs)); - - val = _PyExc_PrepReraiseStar(orig, excs); - Py_DECREF(orig); - Py_DECREF(excs); - - if (val == NULL) goto pop_2_error; - STACK_SHRINK(1); - POKE(1, val); - DISPATCH(); - } - TARGET(END_ASYNC_FOR) { PyObject *exc = PEEK(1); PyObject *awaitable = PEEK(2); diff --git a/Python/intrinsics.c b/Python/intrinsics.c index ae1775862d94..9e90ef32130f 100644 --- a/Python/intrinsics.c +++ b/Python/intrinsics.c @@ -9,6 +9,7 @@ #include "pycore_pyerrors.h" +/******** Unary functions ********/ static PyObject * no_intrinsic(PyThreadState* tstate, PyObject *unused) @@ -208,3 +209,20 @@ _PyIntrinsics_UnaryFunctions[] = { [INTRINSIC_UNARY_POSITIVE] = unary_pos, [INTRINSIC_LIST_TO_TUPLE] = list_to_tuple, }; + + +/******** Binary functions ********/ + + +static PyObject * +prep_reraise_star(PyThreadState* unused, PyObject *orig, PyObject *excs) +{ + assert(PyList_Check(excs)); + return _PyExc_PrepReraiseStar(orig, excs); +} + +instrinsic_func2 +_PyIntrinsics_BinaryFunctions[] = { + [INTRINSIC_PREP_RERAISE_STAR] = prep_reraise_star, +}; + diff --git a/Python/opcode_metadata.h b/Python/opcode_metadata.h index d622eb12c8cb..f27906a3e349 100644 --- a/Python/opcode_metadata.h +++ b/Python/opcode_metadata.h @@ -88,6 +88,8 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { return 2; case CALL_INTRINSIC_1: return 1; + case CALL_INTRINSIC_2: + return 2; case RAISE_VARARGS: return oparg; case INTERPRETER_EXIT: @@ -112,8 +114,6 @@ _PyOpcode_num_popped(int opcode, int oparg, bool jump) { return 1; case RERAISE: return oparg + 1; - case PREP_RERAISE_STAR: - return 2; case END_ASYNC_FOR: return 2; case CLEANUP_THROW: @@ -440,6 +440,8 @@ _PyOpcode_num_pushed(int opcode, int oparg, bool jump) { return 0; case CALL_INTRINSIC_1: return 1; + case CALL_INTRINSIC_2: + return 1; case RAISE_VARARGS: return 0; case INTERPRETER_EXIT: @@ -464,8 +466,6 @@ 
_PyOpcode_num_pushed(int opcode, int oparg, bool jump) { return 0; case RERAISE: return oparg; - case PREP_RERAISE_STAR: - return 1; case END_ASYNC_FOR: return 0; case CLEANUP_THROW: @@ -760,6 +760,7 @@ const struct opcode_metadata _PyOpcode_opcode_metadata[256] = { [STORE_SUBSCR_DICT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IXC }, [DELETE_SUBSCR] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [CALL_INTRINSIC_1] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, + [CALL_INTRINSIC_2] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [RAISE_VARARGS] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, [INTERPRETER_EXIT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [RETURN_VALUE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, @@ -772,7 +773,6 @@ const struct opcode_metadata _PyOpcode_opcode_metadata[256] = { [YIELD_VALUE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [POP_EXCEPT] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [RERAISE] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IB }, - [PREP_RERAISE_STAR] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [END_ASYNC_FOR] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [CLEANUP_THROW] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, [LOAD_ASSERTION_ERROR] = { DIR_NONE, DIR_NONE, DIR_NONE, true, INSTR_FMT_IX }, diff --git a/Python/opcode_targets.h b/Python/opcode_targets.h index 301ec6e005da..bc64bd582fd5 100644 --- a/Python/opcode_targets.h +++ b/Python/opcode_targets.h @@ -87,7 +87,7 @@ static void *opcode_targets[256] = { &&TARGET_SETUP_ANNOTATIONS, &&TARGET_STORE_ATTR_INSTANCE_VALUE, &&TARGET_STORE_ATTR_SLOT, - &&TARGET_PREP_RERAISE_STAR, + &&TARGET_STORE_ATTR_WITH_HINT, &&TARGET_POP_EXCEPT, &&TARGET_STORE_NAME, &&TARGET_DELETE_NAME, @@ -112,7 +112,7 @@ static void *opcode_targets[256] = { &&TARGET_JUMP_FORWARD, &&TARGET_JUMP_IF_FALSE_OR_POP, &&TARGET_JUMP_IF_TRUE_OR_POP, - &&TARGET_STORE_ATTR_WITH_HINT, + &&TARGET_STORE_FAST__LOAD_FAST, &&TARGET_POP_JUMP_IF_FALSE, &&TARGET_POP_JUMP_IF_TRUE, &&TARGET_LOAD_GLOBAL, @@ -142,7 +142,7 @@ static void *opcode_targets[256] = { &&TARGET_JUMP_BACKWARD, &&TARGET_COMPARE_AND_BRANCH, &&TARGET_CALL_FUNCTION_EX, - &&TARGET_STORE_FAST__LOAD_FAST, + &&TARGET_STORE_FAST__STORE_FAST, &&TARGET_EXTENDED_ARG, &&TARGET_LIST_APPEND, &&TARGET_SET_ADD, @@ -152,20 +152,20 @@ static void *opcode_targets[256] = { &&TARGET_YIELD_VALUE, &&TARGET_RESUME, &&TARGET_MATCH_CLASS, - &&TARGET_STORE_FAST__STORE_FAST, &&TARGET_STORE_SUBSCR_DICT, + &&TARGET_STORE_SUBSCR_LIST_INT, &&TARGET_FORMAT_VALUE, &&TARGET_BUILD_CONST_KEY_MAP, &&TARGET_BUILD_STRING, - &&TARGET_STORE_SUBSCR_LIST_INT, &&TARGET_UNPACK_SEQUENCE_LIST, &&TARGET_UNPACK_SEQUENCE_TUPLE, &&TARGET_UNPACK_SEQUENCE_TWO_TUPLE, + &&TARGET_SEND_GEN, &&TARGET_LIST_EXTEND, &&TARGET_SET_UPDATE, &&TARGET_DICT_MERGE, &&TARGET_DICT_UPDATE, - &&TARGET_SEND_GEN, + &&_unknown_opcode, &&_unknown_opcode, &&_unknown_opcode, &&_unknown_opcode, @@ -173,7 +173,7 @@ static void *opcode_targets[256] = { &&TARGET_CALL, &&TARGET_KW_NAMES, &&TARGET_CALL_INTRINSIC_1, - &&_unknown_opcode, + &&TARGET_CALL_INTRINSIC_2, &&_unknown_opcode, &&_unknown_opcode, &&_unknown_opcode, From webhook-mailer at python.org Tue Feb 14 16:26:13 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Tue, 14 Feb 2023 21:26:13 -0000 Subject: [Python-checkins] gh-101758: Add a Test For Single-Phase Init Module Variants (gh-101891) Message-ID: 
https://github.com/python/cpython/commit/096d0097a09e439a4564531b297a998e5d74c9b5 commit: 096d0097a09e439a4564531b297a998e5d74c9b5 branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-14T14:26:03-07:00 summary: gh-101758: Add a Test For Single-Phase Init Module Variants (gh-101891) The new test exercises the most important variants for single-phase init extension modules. We also add some explanation about those variants to import.c. https://github.com/python/cpython/issues/101758 files: M Lib/test/test_imp.py M Modules/_testsinglephase.c M Python/import.c diff --git a/Lib/test/test_imp.py b/Lib/test/test_imp.py index 80abc720c325..31dce21587e2 100644 --- a/Lib/test/test_imp.py +++ b/Lib/test/test_imp.py @@ -251,6 +251,205 @@ def test_issue16421_multiple_modules_in_one_dll(self): with self.assertRaises(ImportError): imp.load_dynamic('nonexistent', pathname) + @requires_load_dynamic + def test_singlephase_variants(self): + '''Exercise the most meaningful variants described in Python/import.c.''' + self.maxDiff = None + + basename = '_testsinglephase' + fileobj, pathname, _ = imp.find_module(basename) + fileobj.close() + + modules = {} + def load(name): + assert name not in modules + module = imp.load_dynamic(name, pathname) + self.assertNotIn(module, modules.values()) + modules[name] = module + return module + + def re_load(name, module): + assert sys.modules[name] is module + before = type(module)(module.__name__) + before.__dict__.update(vars(module)) + + reloaded = imp.load_dynamic(name, pathname) + + return before, reloaded + + def check_common(name, module): + summed = module.sum(1, 2) + lookedup = module.look_up_self() + initialized = module.initialized() + cached = sys.modules[name] + + # module.__name__ might not match, but the spec will. + self.assertEqual(module.__spec__.name, name) + if initialized is not None: + self.assertIsInstance(initialized, float) + self.assertGreater(initialized, 0) + self.assertEqual(summed, 3) + self.assertTrue(issubclass(module.error, Exception)) + self.assertEqual(module.int_const, 1969) + self.assertEqual(module.str_const, 'something different') + self.assertIs(cached, module) + + return lookedup, initialized, cached + + def check_direct(name, module, lookedup): + # The module has its own PyModuleDef, with a matching name. + self.assertEqual(module.__name__, name) + self.assertIs(lookedup, module) + + def check_indirect(name, module, lookedup, orig): + # The module re-uses another's PyModuleDef, with a different name. + assert orig is not module + assert orig.__name__ != name + self.assertNotEqual(module.__name__, name) + self.assertIs(lookedup, module) + + def check_basic(module, initialized): + init_count = module.initialized_count() + + self.assertIsNot(initialized, None) + self.assertIsInstance(init_count, int) + self.assertGreater(init_count, 0) + + return init_count + + def check_common_reloaded(name, module, cached, before, reloaded): + recached = sys.modules[name] + + self.assertEqual(reloaded.__spec__.name, name) + self.assertEqual(reloaded.__name__, before.__name__) + self.assertEqual(before.__dict__, module.__dict__) + self.assertIs(recached, reloaded) + + def check_basic_reloaded(module, lookedup, initialized, init_count, + before, reloaded): + relookedup = reloaded.look_up_self() + reinitialized = reloaded.initialized() + reinit_count = reloaded.initialized_count() + + self.assertIs(reloaded, module) + self.assertIs(reloaded.__dict__, module.__dict__) + # It only happens to be the same but that's good enough here. 
+ # We really just want to verify that the re-loaded attrs + # didn't change. + self.assertIs(relookedup, lookedup) + self.assertEqual(reinitialized, initialized) + self.assertEqual(reinit_count, init_count) + + def check_with_reinit_reloaded(module, lookedup, initialized, + before, reloaded): + relookedup = reloaded.look_up_self() + reinitialized = reloaded.initialized() + + self.assertIsNot(reloaded, module) + self.assertIsNot(reloaded, module) + self.assertNotEqual(reloaded.__dict__, module.__dict__) + self.assertIs(relookedup, reloaded) + if initialized is None: + self.assertIs(reinitialized, None) + else: + self.assertGreater(reinitialized, initialized) + + # Check the "basic" module. + + name = basename + expected_init_count = 1 + with self.subTest(name): + mod = load(name) + lookedup, initialized, cached = check_common(name, mod) + check_direct(name, mod, lookedup) + init_count = check_basic(mod, initialized) + self.assertEqual(init_count, expected_init_count) + + before, reloaded = re_load(name, mod) + check_common_reloaded(name, mod, cached, before, reloaded) + check_basic_reloaded(mod, lookedup, initialized, init_count, + before, reloaded) + basic = mod + + # Check its indirect variants. + + name = f'{basename}_basic_wrapper' + expected_init_count += 1 + with self.subTest(name): + mod = load(name) + lookedup, initialized, cached = check_common(name, mod) + check_indirect(name, mod, lookedup, basic) + init_count = check_basic(mod, initialized) + self.assertEqual(init_count, expected_init_count) + + before, reloaded = re_load(name, mod) + check_common_reloaded(name, mod, cached, before, reloaded) + check_basic_reloaded(mod, lookedup, initialized, init_count, + before, reloaded) + + # Currently _PyState_AddModule() always replaces the cached module. + self.assertIs(basic.look_up_self(), mod) + self.assertEqual(basic.initialized_count(), expected_init_count) + + # The cached module shouldn't be changed after this point. + basic_lookedup = mod + + # Check its direct variant. + + name = f'{basename}_basic_copy' + expected_init_count += 1 + with self.subTest(name): + mod = load(name) + lookedup, initialized, cached = check_common(name, mod) + check_direct(name, mod, lookedup) + init_count = check_basic(mod, initialized) + self.assertEqual(init_count, expected_init_count) + + before, reloaded = re_load(name, mod) + check_common_reloaded(name, mod, cached, before, reloaded) + check_basic_reloaded(mod, lookedup, initialized, init_count, + before, reloaded) + + # This should change the cached module for _testsinglephase. + self.assertIs(basic.look_up_self(), basic_lookedup) + self.assertEqual(basic.initialized_count(), expected_init_count) + + # Check the non-basic variant that has no state. + + name = f'{basename}_with_reinit' + with self.subTest(name): + mod = load(name) + lookedup, initialized, cached = check_common(name, mod) + self.assertIs(initialized, None) + check_direct(name, mod, lookedup) + + before, reloaded = re_load(name, mod) + check_common_reloaded(name, mod, cached, before, reloaded) + check_with_reinit_reloaded(mod, lookedup, initialized, + before, reloaded) + + # This should change the cached module for _testsinglephase. + self.assertIs(basic.look_up_self(), basic_lookedup) + self.assertEqual(basic.initialized_count(), expected_init_count) + + # Check the basic variant that has state. 
+ + name = f'{basename}_with_state' + with self.subTest(name): + mod = load(name) + lookedup, initialized, cached = check_common(name, mod) + self.assertIsNot(initialized, None) + check_direct(name, mod, lookedup) + + before, reloaded = re_load(name, mod) + check_common_reloaded(name, mod, cached, before, reloaded) + check_with_reinit_reloaded(mod, lookedup, initialized, + before, reloaded) + + # This should change the cached module for _testsinglephase. + self.assertIs(basic.look_up_self(), basic_lookedup) + self.assertEqual(basic.initialized_count(), expected_init_count) + @requires_load_dynamic def test_load_dynamic_ImportError_path(self): # Issue #1559549 added `name` and `path` attributes to ImportError diff --git a/Modules/_testsinglephase.c b/Modules/_testsinglephase.c index 3bfe159e54fe..9e8dd647ee76 100644 --- a/Modules/_testsinglephase.c +++ b/Modules/_testsinglephase.c @@ -5,74 +5,412 @@ # define Py_BUILD_CORE_MODULE 1 #endif +//#include #include "Python.h" #include "pycore_namespace.h" // _PyNamespace_New() +typedef struct { + _PyTime_t initialized; + PyObject *error; + PyObject *int_const; + PyObject *str_const; +} module_state; + +/* Process-global state is only used by _testsinglephase + since it's the only one that does not support re-init. */ +static struct { + int initialized_count; + module_state module; +} global_state = { .initialized_count = -1 }; + +static inline module_state * +get_module_state(PyObject *module) +{ + PyModuleDef *def = PyModule_GetDef(module); + if (def->m_size == -1) { + return &global_state.module; + } + else if (def->m_size == 0) { + return NULL; + } + else { + module_state *state = (module_state*)PyModule_GetState(module); + assert(state != NULL); + return state; + } +} + +static void +clear_state(module_state *state) +{ + state->initialized = 0; + Py_CLEAR(state->error); + Py_CLEAR(state->int_const); + Py_CLEAR(state->str_const); +} + +static int +_set_initialized(_PyTime_t *initialized) +{ + /* We go strictly monotonic to ensure each time is unique. */ + _PyTime_t prev; + if (_PyTime_GetMonotonicClockWithInfo(&prev, NULL) != 0) { + return -1; + } + /* We do a busy sleep since the interval should be super short. 
*/ + _PyTime_t t; + do { + if (_PyTime_GetMonotonicClockWithInfo(&t, NULL) != 0) { + return -1; + } + } while (t == prev); + + *initialized = t; + return 0; +} + +static int +init_state(module_state *state) +{ + assert(state->initialized == 0 && + state->error == NULL && + state->int_const == NULL && + state->str_const == NULL); + + if (_set_initialized(&state->initialized) != 0) { + goto error; + } + assert(state->initialized > 0); + + /* Add an exception type */ + state->error = PyErr_NewException("_testsinglephase.error", NULL, NULL); + if (state->error == NULL) { + goto error; + } + + state->int_const = PyLong_FromLong(1969); + if (state->int_const == NULL) { + goto error; + } + + state->str_const = PyUnicode_FromString("something different"); + if (state->str_const == NULL) { + goto error; + } + + return 0; + +error: + clear_state(state); + return -1; +} + +static int +init_module(PyObject *module, module_state *state) +{ + if (PyModule_AddObjectRef(module, "error", state->error) != 0) { + return -1; + } + if (PyModule_AddObjectRef(module, "int_const", state->int_const) != 0) { + return -1; + } + if (PyModule_AddObjectRef(module, "str_const", state->str_const) != 0) { + return -1; + } + return 0; +} + + +PyDoc_STRVAR(common_initialized_doc, +"initialized()\n\ +\n\ +Return the seconds-since-epoch when the module was initialized."); + +static PyObject * +common_initialized(PyObject *self, PyObject *Py_UNUSED(ignored)) +{ + module_state *state = get_module_state(self); + if (state == NULL) { + Py_RETURN_NONE; + } + double d = _PyTime_AsSecondsDouble(state->initialized); + return PyFloat_FromDouble(d); +} + +#define INITIALIZED_METHODDEF \ + {"initialized", common_initialized, METH_NOARGS, \ + common_initialized_doc} + + +PyDoc_STRVAR(common_look_up_self_doc, +"look_up_self()\n\ +\n\ +Return the module associated with this module's def.m_base.m_index."); + +static PyObject * +common_look_up_self(PyObject *self, PyObject *Py_UNUSED(ignored)) +{ + PyModuleDef *def = PyModule_GetDef(self); + if (def == NULL) { + return NULL; + } + return Py_NewRef( + PyState_FindModule(def)); +} + +#define LOOK_UP_SELF_METHODDEF \ + {"look_up_self", common_look_up_self, METH_NOARGS, common_look_up_self_doc} + + /* Function of two integers returning integer */ -PyDoc_STRVAR(testexport_foo_doc, -"foo(i,j)\n\ +PyDoc_STRVAR(common_sum_doc, +"sum(i,j)\n\ \n\ Return the sum of i and j."); static PyObject * -testexport_foo(PyObject *self, PyObject *args) +common_sum(PyObject *self, PyObject *args) { long i, j; long res; - if (!PyArg_ParseTuple(args, "ll:foo", &i, &j)) + if (!PyArg_ParseTuple(args, "ll:sum", &i, &j)) return NULL; res = i + j; return PyLong_FromLong(res); } +#define SUM_METHODDEF \ + {"sum", common_sum, METH_VARARGS, common_sum_doc} -static PyMethodDef TestMethods[] = { - {"foo", testexport_foo, METH_VARARGS, - testexport_foo_doc}, - {NULL, NULL} /* sentinel */ -}; +PyDoc_STRVAR(basic_initialized_count_doc, +"initialized_count()\n\ +\n\ +Return how many times the module has been initialized."); + +static PyObject * +basic_initialized_count(PyObject *self, PyObject *Py_UNUSED(ignored)) +{ + assert(PyModule_GetDef(self)->m_size == -1); + return PyLong_FromLong(global_state.initialized_count); +} + +#define INITIALIZED_COUNT_METHODDEF \ + {"initialized_count", basic_initialized_count, METH_VARARGS, \ + basic_initialized_count_doc} + + +/*********************************************/ +/* the _testsinglephase module (and aliases) */ +/*********************************************/ + +/* This ia more 
typical of legacy extensions in the wild: + - single-phase init + - no module state + - does not support repeated initialization + (so m_copy is used) + - the module def is cached in _PyRuntime.extensions + (by name/filename) + + Also note that, because the module has single-phase init, + it is cached in interp->module_by_index (using mod->md_def->m_base.m_index). + */ -static struct PyModuleDef _testsinglephase = { +static PyMethodDef TestMethods_Basic[] = { + LOOK_UP_SELF_METHODDEF, + SUM_METHODDEF, + INITIALIZED_METHODDEF, + INITIALIZED_COUNT_METHODDEF, + {NULL, NULL} /* sentinel */ +}; + +static struct PyModuleDef _testsinglephase_basic = { PyModuleDef_HEAD_INIT, .m_name = "_testsinglephase", - .m_doc = PyDoc_STR("Test module _testsinglephase (main)"), + .m_doc = PyDoc_STR("Test module _testsinglephase"), .m_size = -1, // no module state - .m_methods = TestMethods, + .m_methods = TestMethods_Basic, }; +static PyObject * +init__testsinglephase_basic(PyModuleDef *def) +{ + if (global_state.initialized_count == -1) { + global_state.initialized_count = 0; + } + + PyObject *module = PyModule_Create(def); + if (module == NULL) { + return NULL; + } + + module_state *state = &global_state.module; + // It may have been set by a previous run or under a different name. + clear_state(state); + if (init_state(state) < 0) { + Py_CLEAR(module); + return NULL; + } + + if (init_module(module, state) < 0) { + Py_CLEAR(module); + goto finally; + } + + global_state.initialized_count++; + +finally: + return module; +} PyMODINIT_FUNC PyInit__testsinglephase(void) { - PyObject *module = PyModule_Create(&_testsinglephase); + return init__testsinglephase_basic(&_testsinglephase_basic); +} + + +PyMODINIT_FUNC +PyInit__testsinglephase_basic_wrapper(void) +{ + return PyInit__testsinglephase(); +} + + +PyMODINIT_FUNC +PyInit__testsinglephase_basic_copy(void) +{ + static struct PyModuleDef def = { + PyModuleDef_HEAD_INIT, + .m_name = "_testsinglephase_basic_copy", + .m_doc = PyDoc_STR("Test module _testsinglephase_basic_copy"), + .m_size = -1, // no module state + .m_methods = TestMethods_Basic, + }; + return init__testsinglephase_basic(&def); +} + + +/*******************************************/ +/* the _testsinglephase_with_reinit module */ +/*******************************************/ + +/* This ia less typical of legacy extensions in the wild: + - single-phase init (same as _testsinglephase above) + - no module state + - supports repeated initialization + (so m_copy is not used) + - the module def is not cached in _PyRuntime.extensions + + At this point most modules would reach for multi-phase init (PEP 489). + However, module state has been around a while (PEP 3121), + and most extensions predate multi-phase init. + + (This module is basically the same as _testsinglephase, + but supports repeated initialization.) + */ + +static PyMethodDef TestMethods_Reinit[] = { + LOOK_UP_SELF_METHODDEF, + SUM_METHODDEF, + INITIALIZED_METHODDEF, + {NULL, NULL} /* sentinel */ +}; + +static struct PyModuleDef _testsinglephase_with_reinit = { + PyModuleDef_HEAD_INIT, + .m_name = "_testsinglephase_with_reinit", + .m_doc = PyDoc_STR("Test module _testsinglephase_with_reinit"), + .m_size = 0, + .m_methods = TestMethods_Reinit, +}; + +PyMODINIT_FUNC +PyInit__testsinglephase_with_reinit(void) +{ + /* We purposefully do not try PyState_FindModule() first here + since we want to check the behavior of re-loading the module. 
*/ + PyObject *module = PyModule_Create(&_testsinglephase_with_reinit); if (module == NULL) { return NULL; } - /* Add an exception type */ - PyObject *temp = PyErr_NewException("_testsinglephase.error", NULL, NULL); - if (temp == NULL) { - goto error; + assert(get_module_state(module) == NULL); + + module_state state = {0}; + if (init_state(&state) < 0) { + Py_CLEAR(module); + return NULL; } - if (PyModule_AddObject(module, "error", temp) != 0) { - Py_DECREF(temp); - goto error; + + if (init_module(module, &state) < 0) { + Py_CLEAR(module); + goto finally; } - if (PyModule_AddIntConstant(module, "int_const", 1969) != 0) { - goto error; +finally: + /* We only needed the module state for setting the module attrs. */ + clear_state(&state); + return module; +} + + +/******************************************/ +/* the _testsinglephase_with_state module */ +/******************************************/ + +/* This ia less typical of legacy extensions in the wild: + - single-phase init (same as _testsinglephase above) + - has some module state + - supports repeated initialization + (so m_copy is not used) + - the module def is not cached in _PyRuntime.extensions + + At this point most modules would reach for multi-phase init (PEP 489). + However, module state has been around a while (PEP 3121), + and most extensions predate multi-phase init. + */ + +static PyMethodDef TestMethods_WithState[] = { + LOOK_UP_SELF_METHODDEF, + SUM_METHODDEF, + INITIALIZED_METHODDEF, + {NULL, NULL} /* sentinel */ +}; + +static struct PyModuleDef _testsinglephase_with_state = { + PyModuleDef_HEAD_INIT, + .m_name = "_testsinglephase_with_state", + .m_doc = PyDoc_STR("Test module _testsinglephase_with_state"), + .m_size = sizeof(module_state), + .m_methods = TestMethods_WithState, +}; + +PyMODINIT_FUNC +PyInit__testsinglephase_with_state(void) +{ + /* We purposefully do not try PyState_FindModule() first here + since we want to check the behavior of re-loading the module. 
*/ + PyObject *module = PyModule_Create(&_testsinglephase_with_state); + if (module == NULL) { + return NULL; } - if (PyModule_AddStringConstant(module, "str_const", "something different") != 0) { - goto error; + module_state *state = get_module_state(module); + assert(state != NULL); + if (init_state(state) < 0) { + Py_CLEAR(module); + return NULL; } - return module; + if (init_module(module, state) < 0) { + clear_state(state); + Py_CLEAR(module); + goto finally; + } -error: - Py_DECREF(module); - return NULL; +finally: + return module; } diff --git a/Python/import.c b/Python/import.c index 302255d76edc..63ed2443657b 100644 --- a/Python/import.c +++ b/Python/import.c @@ -428,6 +428,71 @@ PyImport_GetMagicTag(void) } +/* +We support a number of kinds of single-phase init builtin/extension modules: + +* "basic" + * no module state (PyModuleDef.m_size == -1) + * does not support repeated init (we use PyModuleDef.m_base.m_copy) + * may have process-global state + * the module's def is cached in _PyRuntime.imports.extensions, + by (name, filename) +* "reinit" + * no module state (PyModuleDef.m_size == 0) + * supports repeated init (m_copy is never used) + * should not have any process-global state + * its def is never cached in _PyRuntime.imports.extensions + (except, currently, under the main interpreter, for some reason) +* "with state" (almost the same as reinit) + * has module state (PyModuleDef.m_size > 0) + * supports repeated init (m_copy is never used) + * should not have any process-global state + * its def is never cached in _PyRuntime.imports.extensions + (except, currently, under the main interpreter, for some reason) + +There are also variants within those classes: + +* two or more modules share a PyModuleDef + * a module's init func uses another module's PyModuleDef + * a module's init func calls another's module's init func + * a module's init "func" is actually a variable statically initialized + to another module's init func +* two or modules share "methods" + * a module's init func copies another module's PyModuleDef + (with a different name) +* (basic-only) two or modules share process-global state + +In the first case, where modules share a PyModuleDef, the following +notable weirdness happens: + +* the module's __name__ matches the def, not the requested name +* the last module (with the same def) to be imported for the first time wins + * returned by PyState_Find_Module() (via interp->modules_by_index) + * (non-basic-only) its init func is used when re-loading any of them + (via the def's m_init) + * (basic-only) the copy of its __dict__ is used when re-loading any of them + (via the def's m_copy) + +However, the following happens as expected: + +* a new module object (with its own __dict__) is created for each request +* the module's __spec__ has the requested name +* the loaded module is cached in sys.modules under the requested name +* the m_index field of the shared def is not changed, + so at least PyState_FindModule() will always look in the same place + +For "basic" modules there are other quirks: + +* (whether sharing a def or not) when loaded the first time, + m_copy is set before _init_module_attrs() is called + in importlib._bootstrap.module_from_spec(), + so when the module is re-loaded, the previous value + for __wpec__ (and others) is reset, possibly unexpectedly. + +Generally, when multiple interpreters are involved, some of the above +gets even messier. +*/ + /* Magic for extension modules (built-in as well as dynamically loaded). 
To prevent initializing an extension module more than once, we keep a static dictionary 'extensions' keyed by the tuple @@ -489,9 +554,8 @@ _extensions_cache_clear(void) Py_CLEAR(_PyRuntime.imports.extensions); } -int -_PyImport_FixupExtensionObject(PyObject *mod, PyObject *name, - PyObject *filename, PyObject *modules) +static int +fix_up_extension(PyObject *mod, PyObject *name, PyObject *filename) { if (mod == NULL || !PyModule_Check(mod)) { PyErr_BadInternalCall(); @@ -505,16 +569,13 @@ _PyImport_FixupExtensionObject(PyObject *mod, PyObject *name, } PyThreadState *tstate = _PyThreadState_GET(); - if (PyObject_SetItem(modules, name, mod) < 0) { - return -1; - } if (_PyState_AddModule(tstate, mod, def) < 0) { - PyMapping_DelItem(modules, name); return -1; } // bpo-44050: Extensions and def->m_base.m_copy can be updated // when the extension module doesn't support sub-interpreters. + // XXX Why special-case the main interpreter? if (_Py_IsMainInterpreter(tstate->interp) || def->m_size == -1) { if (def->m_size == -1) { if (def->m_base.m_copy) { @@ -541,15 +602,39 @@ _PyImport_FixupExtensionObject(PyObject *mod, PyObject *name, return 0; } +int +_PyImport_FixupExtensionObject(PyObject *mod, PyObject *name, + PyObject *filename, PyObject *modules) +{ + if (PyObject_SetItem(modules, name, mod) < 0) { + return -1; + } + if (fix_up_extension(mod, name, filename) < 0) { + PyMapping_DelItem(modules, name); + return -1; + } + return 0; +} + int _PyImport_FixupBuiltin(PyObject *mod, const char *name, PyObject *modules) { - int res; + int res = -1; PyObject *nameobj; nameobj = PyUnicode_InternFromString(name); - if (nameobj == NULL) + if (nameobj == NULL) { return -1; - res = _PyImport_FixupExtensionObject(mod, nameobj, nameobj, modules); + } + if (PyObject_SetItem(modules, nameobj, mod) < 0) { + goto finally; + } + if (fix_up_extension(mod, nameobj, nameobj) < 0) { + PyMapping_DelItem(modules, nameobj); + goto finally; + } + res = 0; + +finally: Py_DECREF(nameobj); return res; } From webhook-mailer at python.org Tue Feb 14 18:57:08 2023 From: webhook-mailer at python.org (gpshead) Date: Tue, 14 Feb 2023 23:57:08 -0000 Subject: [Python-checkins] gh-99108: Build the hashlib HACL* code as a static library. (#101917) Message-ID: https://github.com/python/cpython/commit/d777790bab878b8d1a218a1a60894b2823485cca commit: d777790bab878b8d1a218a1a60894b2823485cca branch: main author: Gregory P. Smith committer: gpshead date: 2023-02-14T15:57:01-08:00 summary: gh-99108: Build the hashlib HACL* code as a static library. (#101917) This builds HACL* as a library in one place. A followup to #101707 which broke some WASM builds. This fixes 2/4 of them, but the enscripten toolchain in the others don't deduplicate linker arguments and error out. A follow-on PR will address those. files: A Modules/_hacl/python_hacl_namespaces.h D Modules/_hacl/include/python_hacl_namespaces.h M Makefile.pre.in M Modules/Setup.stdlib.in diff --git a/Makefile.pre.in b/Makefile.pre.in index d42d4d8a3c1c..ce3fed3d6485 100644 --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -207,6 +207,7 @@ ENSUREPIP= @ENSUREPIP@ # Internal static libraries LIBMPDEC_A= Modules/_decimal/libmpdec/libmpdec.a LIBEXPAT_A= Modules/expat/libexpat.a +LIBHACL_A= Modules/_hacl/libHacl_Streaming_SHA2.a # Module state, compiler flags and linker flags # Empty CFLAGS and LDFLAGS are omitted. 
@@ -571,6 +572,23 @@ LIBEXPAT_HEADERS= \ Modules/expat/xmltok.h \ Modules/expat/xmltok_impl.h +########################################################################## +# hashlib's HACL* library + +LIBHACL_OBJS= \ + Modules/_hacl/Hacl_Streaming_SHA2.o + +LIBHACL_HEADERS= \ + Modules/_hacl/Hacl_Streaming_SHA2.h \ + Modules/_hacl/include/krml/FStar_UInt128_Verified.h \ + Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h \ + Modules/_hacl/include/krml/fstar_uint128_struct_endianness.h \ + Modules/_hacl/include/krml/internal/target.h \ + Modules/_hacl/include/krml/lowstar_endianness.h \ + Modules/_hacl/include/krml/types.h \ + Modules/_hacl/internal/Hacl_SHA2_Generic.h \ + Modules/_hacl/python_hacl_namespaces.h + ######################################################################### # Rules @@ -890,6 +908,17 @@ $(LIBEXPAT_A): $(LIBEXPAT_OBJS) -rm -f $@ $(AR) $(ARFLAGS) $@ $(LIBEXPAT_OBJS) +########################################################################## +# Build HACL* static libraries for hashlib: libHacl_Streaming_SHA2.a +LIBHACL_CFLAGS=-I$(srcdir)/Modules/_hacl/include -D_BSD_SOURCE -D_DEFAULT_SOURCE $(PY_STDMODULE_CFLAGS) $(CCSHARED) + +Modules/_hacl/Hacl_Streaming_SHA2.o: $(srcdir)/Modules/_hacl/Hacl_Streaming_SHA2.c $(LIBHACL_HEADERS) + $(CC) -c $(LIBHACL_CFLAGS) -o $@ $(srcdir)/Modules/_hacl/Hacl_Streaming_SHA2.c + +$(LIBHACL_A): $(LIBHACL_OBJS) + -rm -f $@ + $(AR) $(ARFLAGS) $@ $(LIBHACL_OBJS) + # create relative links from build/lib.platform/egg.so to Modules/egg.so # pybuilddir.txt is created too late. We cannot use it in Makefile # targets. ln --relative is not portable. @@ -2606,9 +2635,9 @@ MODULE__HASHLIB_DEPS=$(srcdir)/Modules/hashlib.h MODULE__IO_DEPS=$(srcdir)/Modules/_io/_iomodule.h MODULE__MD5_DEPS=$(srcdir)/Modules/hashlib.h MODULE__SHA1_DEPS=$(srcdir)/Modules/hashlib.h -MODULE__SHA256_DEPS=$(srcdir)/Modules/hashlib.h $(srcdir)/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h $(srcdir)/Modules/_hacl/include/krml/lowstar_endianness.h $(srcdir)/Modules/_hacl/include/krml/internal/target.h $(srcdir)/Modules/_hacl/Hacl_Streaming_SHA2.h +MODULE__SHA256_DEPS=$(srcdir)/Modules/hashlib.h $(LIBHACL_HEADERS) $(LIBHACL_A) MODULE__SHA3_DEPS=$(srcdir)/Modules/_sha3/sha3.c $(srcdir)/Modules/_sha3/sha3.h $(srcdir)/Modules/hashlib.h -MODULE__SHA512_DEPS=$(srcdir)/Modules/hashlib.h $(srcdir)/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h $(srcdir)/Modules/_hacl/include/krml/lowstar_endianness.h $(srcdir)/Modules/_hacl/include/krml/internal/target.h $(srcdir)/Modules/_hacl/Hacl_Streaming_SHA2.h +MODULE__SHA512_DEPS=$(srcdir)/Modules/hashlib.h $(LIBHACL_HEADERS) $(LIBHACL_A) MODULE__SOCKET_DEPS=$(srcdir)/Modules/socketmodule.h $(srcdir)/Modules/addrinfo.h $(srcdir)/Modules/getaddrinfo.c $(srcdir)/Modules/getnameinfo.c MODULE__SSL_DEPS=$(srcdir)/Modules/_ssl.h $(srcdir)/Modules/_ssl/cert.c $(srcdir)/Modules/_ssl/debughelpers.c $(srcdir)/Modules/_ssl/misc.c $(srcdir)/Modules/_ssl_data.h $(srcdir)/Modules/_ssl_data_111.h $(srcdir)/Modules/_ssl_data_300.h $(srcdir)/Modules/socketmodule.h MODULE__TESTCAPI_DEPS=$(srcdir)/Modules/_testcapi/testcapi_long.h $(srcdir)/Modules/_testcapi/parts.h diff --git a/Modules/Setup.stdlib.in b/Modules/Setup.stdlib.in index b6d13e04d3fa..22bcb423db23 100644 --- a/Modules/Setup.stdlib.in +++ b/Modules/Setup.stdlib.in @@ -79,8 +79,8 @@ # hashing builtins, can be disabled with --without-builtin-hashlib-hashes @MODULE__MD5_TRUE at _md5 md5module.c @MODULE__SHA1_TRUE at _sha1 sha1module.c - at MODULE__SHA256_TRUE@_sha256 sha256module.c 
_hacl/Hacl_Streaming_SHA2.c - at MODULE__SHA512_TRUE@_sha512 sha512module.c _hacl/Hacl_Streaming_SHA2.c + at MODULE__SHA256_TRUE@_sha256 sha256module.c -I$(srcdir)/Modules/_hacl/include Modules/_hacl/libHacl_Streaming_SHA2.a + at MODULE__SHA512_TRUE@_sha512 sha512module.c -I$(srcdir)/Modules/_hacl/include Modules/_hacl/libHacl_Streaming_SHA2.a @MODULE__SHA3_TRUE at _sha3 _sha3/sha3module.c @MODULE__BLAKE2_TRUE at _blake2 _blake2/blake2module.c _blake2/blake2b_impl.c _blake2/blake2s_impl.c diff --git a/Modules/_hacl/include/python_hacl_namespaces.h b/Modules/_hacl/python_hacl_namespaces.h similarity index 97% rename from Modules/_hacl/include/python_hacl_namespaces.h rename to Modules/_hacl/python_hacl_namespaces.h index 65608d1fd283..ac12f386257b 100644 --- a/Modules/_hacl/include/python_hacl_namespaces.h +++ b/Modules/_hacl/python_hacl_namespaces.h @@ -25,6 +25,7 @@ #define Hacl_Streaming_SHA2_init_224 python_hashlib_Hacl_Streaming_SHA2_init_224 #define Hacl_Streaming_SHA2_init_512 python_hashlib_Hacl_Streaming_SHA2_init_512 #define Hacl_Streaming_SHA2_init_384 python_hashlib_Hacl_Streaming_SHA2_init_384 +#define Hacl_SHA2_Scalar32_sha512_init python_hashlib_Hacl_SHA2_Scalar32_sha512_init #define Hacl_Streaming_SHA2_update_256 python_hashlib_Hacl_Streaming_SHA2_update_256 #define Hacl_Streaming_SHA2_update_224 python_hashlib_Hacl_Streaming_SHA2_update_224 #define Hacl_Streaming_SHA2_update_512 python_hashlib_Hacl_Streaming_SHA2_update_512 From webhook-mailer at python.org Wed Feb 15 00:27:25 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Wed, 15 Feb 2023 05:27:25 -0000 Subject: [Python-checkins] gh-101693: In sqlite3, deprecate using named placeholders with parameters supplied as a sequence (#101698) Message-ID: https://github.com/python/cpython/commit/8a2b7ee64d1bde762438b458ea7fe88f054a3a88 commit: 8a2b7ee64d1bde762438b458ea7fe88f054a3a88 branch: main author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-15T06:27:16+01:00 summary: gh-101693: In sqlite3, deprecate using named placeholders with parameters supplied as a sequence (#101698) files: A Misc/NEWS.d/next/Library/2023-02-08-18-20-58.gh-issue-101693.4_LPXj.rst M Doc/library/sqlite3.rst M Doc/whatsnew/3.12.rst M Lib/test/test_sqlite3/test_dbapi.py M Modules/_sqlite/cursor.c diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst index 8ffc0aad9199..18d0a5e630f6 100644 --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -1442,6 +1442,14 @@ Cursor objects and there is no open transaction, a transaction is implicitly opened before executing *sql*. + .. deprecated-removed:: 3.12 3.14 + + :exc:`DeprecationWarning` is emitted if + :ref:`named placeholders ` are used + and *parameters* is a sequence instead of a :class:`dict`. + Starting with Python 3.14, :exc:`ProgrammingError` will + be raised instead. + Use :meth:`executescript` to execute multiple SQL statements. .. method:: executemany(sql, parameters, /) @@ -1476,6 +1484,15 @@ Cursor objects # cur is an sqlite3.Cursor object cur.executemany("INSERT INTO data VALUES(?)", rows) + .. deprecated-removed:: 3.12 3.14 + + :exc:`DeprecationWarning` is emitted if + :ref:`named placeholders ` are used + and the items in *parameters* are sequences + instead of :class:`dict`\s. + Starting with Python 3.14, :exc:`ProgrammingError` will + be raised instead. + .. method:: executescript(sql_script, /) Execute the SQL statements in *sql_script*. @@ -1971,7 +1988,7 @@ question marks (qmark style) or named placeholders (named style). 
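As a point of reference for the doc change below, a minimal sketch of the two placeholder styles (an illustrative snippet, not part of the patch; the table and column names are invented):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE lang(name, first_appeared)")

    # qmark style: parameters supplied as a sequence
    con.execute("INSERT INTO lang VALUES(?, ?)", ("C", 1972))

    # named style: parameters supplied as a dict;
    # supplying a sequence here is the pattern this change deprecates
    con.execute("INSERT INTO lang VALUES(:name, :year)",
                {"name": "Python", "year": 1991})
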
For the qmark style, *parameters* must be a :term:`sequence` whose length must match the number of placeholders, or a :exc:`ProgrammingError` is raised. -For the named style, *parameters* should be +For the named style, *parameters* must be an instance of a :class:`dict` (or a subclass), which must contain keys for all named parameters; any extra items are ignored. diff --git a/Doc/whatsnew/3.12.rst b/Doc/whatsnew/3.12.rst index 45a5e5062d55..c62f462a19a2 100644 --- a/Doc/whatsnew/3.12.rst +++ b/Doc/whatsnew/3.12.rst @@ -415,6 +415,13 @@ Deprecated and tailor them to your needs. (Contributed by Erlend E. Aasland in :gh:`90016`.) +* In :meth:`~sqlite3.Cursor.execute`, :exc:`DeprecationWarning` is now emitted + when :ref:`named placeholders ` are used together with + parameters supplied as a :term:`sequence` instead of as a :class:`dict`. + Starting from Python 3.14, using named placeholders with parameters supplied + as a sequence will raise a :exc:`~sqlite3.ProgrammingError`. + (Contributed by Erlend E. Aasland in :gh:`101698`.) + * The 3-arg signatures (type, value, traceback) of :meth:`~coroutine.throw`, :meth:`~generator.throw` and :meth:`~agen.athrow` are deprecated and may be removed in a future version of Python. Use the single-arg versions diff --git a/Lib/test/test_sqlite3/test_dbapi.py b/Lib/test/test_sqlite3/test_dbapi.py index 363a308f3e5f..695e213cdc7b 100644 --- a/Lib/test/test_sqlite3/test_dbapi.py +++ b/Lib/test/test_sqlite3/test_dbapi.py @@ -861,6 +861,21 @@ def __getitem__(slf, x): with self.assertRaises(ZeroDivisionError): self.cu.execute("select name from test where name=?", L()) + def test_execute_named_param_and_sequence(self): + dataset = ( + ("select :a", (1,)), + ("select :a, ?, ?", (1, 2, 3)), + ("select ?, :b, ?", (1, 2, 3)), + ("select ?, ?, :c", (1, 2, 3)), + ("select :a, :b, ?", (1, 2, 3)), + ) + msg = "Binding.*is a named parameter" + for query, params in dataset: + with self.subTest(query=query, params=params): + with self.assertWarnsRegex(DeprecationWarning, msg) as cm: + self.cu.execute(query, params) + self.assertEqual(cm.filename, __file__) + def test_execute_too_many_params(self): category = sqlite.SQLITE_LIMIT_VARIABLE_NUMBER msg = "too many SQL variables" diff --git a/Misc/NEWS.d/next/Library/2023-02-08-18-20-58.gh-issue-101693.4_LPXj.rst b/Misc/NEWS.d/next/Library/2023-02-08-18-20-58.gh-issue-101693.4_LPXj.rst new file mode 100644 index 000000000000..e436054b15b6 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-08-18-20-58.gh-issue-101693.4_LPXj.rst @@ -0,0 +1,6 @@ +In :meth:`sqlite3.Cursor.execute`, :exc:`DeprecationWarning` is now emitted +when :ref:`named placeholders ` are used together with +parameters supplied as a :term:`sequence` instead of as a :class:`dict`. +Starting from Python 3.14, using named placeholders with parameters supplied +as a sequence will raise a :exc:`~sqlite3.ProgrammingError`. +Patch by Erlend E. Aasland. diff --git a/Modules/_sqlite/cursor.c b/Modules/_sqlite/cursor.c index a4e22bb4a2b5..6f7970cf8197 100644 --- a/Modules/_sqlite/cursor.c +++ b/Modules/_sqlite/cursor.c @@ -662,6 +662,19 @@ bind_parameters(pysqlite_state *state, pysqlite_Statement *self, return; } for (i = 0; i < num_params; i++) { + const char *name = sqlite3_bind_parameter_name(self->st, i+1); + if (name != NULL) { + int ret = PyErr_WarnFormat(PyExc_DeprecationWarning, 1, + "Binding %d ('%s') is a named parameter, but you " + "supplied a sequence which requires nameless (qmark) " + "placeholders. 
Starting with Python 3.14 an " + "sqlite3.ProgrammingError will be raised.", + i+1, name); + if (ret < 0) { + return; + } + } + if (PyTuple_CheckExact(parameters)) { PyObject *item = PyTuple_GET_ITEM(parameters, i); current_param = Py_NewRef(item); From webhook-mailer at python.org Wed Feb 15 05:18:43 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Wed, 15 Feb 2023 10:18:43 -0000 Subject: [Python-checkins] gh-101819: Remove _testcapi dependencies on specific _io symbols (#101918) Message-ID: https://github.com/python/cpython/commit/e8b6aaad2faf11fe315410138a5c5943d610d8d8 commit: e8b6aaad2faf11fe315410138a5c5943d610d8d8 branch: main author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-15T11:18:27+01:00 summary: gh-101819: Remove _testcapi dependencies on specific _io symbols (#101918) files: M Modules/_io/_iomodule.c M Modules/_testcapimodule.c diff --git a/Modules/_io/_iomodule.c b/Modules/_io/_iomodule.c index 175fa97479d2..811b1d221a01 100644 --- a/Modules/_io/_iomodule.c +++ b/Modules/_io/_iomodule.c @@ -720,16 +720,8 @@ PyInit__io(void) // Add types for (size_t i=0; i < Py_ARRAY_LENGTH(static_types); i++) { PyTypeObject *type = static_types[i]; - // Private type not exposed in the _io module - if (type == &_PyBytesIOBuffer_Type) { - if (PyType_Ready(type) < 0) { - goto fail; - } - } - else { - if (PyModule_AddType(m, type) < 0) { - goto fail; - } + if (PyModule_AddType(m, type) < 0) { + goto fail; } } diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index 3c411fa0d763..5610a7689136 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -1448,12 +1448,10 @@ test_from_contiguous(PyObject* self, PyObject *Py_UNUSED(ignored)) } #if (defined(__linux__) || defined(__FreeBSD__)) && defined(__GNUC__) -extern PyTypeObject _PyBytesIOBuffer_Type; static PyObject * test_pep3118_obsolete_write_locks(PyObject* self, PyObject *Py_UNUSED(ignored)) { - PyTypeObject *type = &_PyBytesIOBuffer_Type; PyObject *b; char *dummy[1]; int ret, match; @@ -1466,7 +1464,13 @@ test_pep3118_obsolete_write_locks(PyObject* self, PyObject *Py_UNUSED(ignored)) goto error; /* bytesiobuf_getbuffer() */ + PyTypeObject *type = (PyTypeObject *)_PyImport_GetModuleAttrString( + "_io", "_BytesIOBuffer"); + if (type == NULL) { + return NULL; + } b = type->tp_alloc(type, 0); + Py_DECREF(type); if (b == NULL) { return NULL; } From webhook-mailer at python.org Wed Feb 15 07:22:20 2023 From: webhook-mailer at python.org (markshannon) Date: Wed, 15 Feb 2023 12:22:20 -0000 Subject: [Python-checkins] GH-87849: Fix refleak in SEND instruction. (GH-101908) Message-ID: https://github.com/python/cpython/commit/c7766245c14fa03b8afd3aff9be30b13d0069f95 commit: c7766245c14fa03b8afd3aff9be30b13d0069f95 branch: main author: Mark Shannon committer: markshannon date: 2023-02-15T12:21:40Z summary: GH-87849: Fix refleak in SEND instruction. (GH-101908) Fix refleak in SEND instruction. 
files: M Python/bytecodes.c M Python/generated_cases.c.h diff --git a/Python/bytecodes.c b/Python/bytecodes.c index be54e5f6f589..d5d5034cbfbf 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -727,6 +727,7 @@ dummy_func( else { assert(retval != NULL); } + Py_DECREF(v); } inst(SEND_GEN, (unused/1, receiver, v -- receiver)) { diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index beb797cbd233..8b8a7161ad89 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -934,6 +934,7 @@ else { assert(retval != NULL); } + Py_DECREF(v); POKE(1, retval); JUMPBY(1); DISPATCH(); From webhook-mailer at python.org Wed Feb 15 08:08:25 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 15 Feb 2023 13:08:25 -0000 Subject: [Python-checkins] gh-101819: Remove _PyWindowsConsoleIO_Type from the Windows DLL (GH-101904) Message-ID: https://github.com/python/cpython/commit/eb0c485b6c836abb71932537a5058344d11d7bc8 commit: eb0c485b6c836abb71932537a5058344d11d7bc8 branch: main author: Erlend E. Aasland committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-15T05:07:59-08:00 summary: gh-101819: Remove _PyWindowsConsoleIO_Type from the Windows DLL (GH-101904) Automerge-Triggered-By: GH:erlend-aasland files: M Include/internal/pycore_global_objects_fini_generated.h M Include/internal/pycore_global_strings.h M Include/internal/pycore_runtime_init_generated.h M Include/internal/pycore_unicodeobject_generated.h M Modules/_io/_iomodule.h M Modules/_io/winconsoleio.c M PC/_testconsole.c M Python/pylifecycle.c diff --git a/Include/internal/pycore_global_objects_fini_generated.h b/Include/internal/pycore_global_objects_fini_generated.h index 8c210111b589..dc5cd58d8535 100644 --- a/Include/internal/pycore_global_objects_fini_generated.h +++ b/Include/internal/pycore_global_objects_fini_generated.h @@ -577,6 +577,7 @@ _PyStaticObjects_CheckRefcnt(PyInterpreterState *interp) { _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(True)); _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(WarningMessage)); _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(_)); + _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(_WindowsConsoleIO)); _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(__IOBase_closed)); _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(__abc_tpflags__)); _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(__abs__)); @@ -752,6 +753,7 @@ _PyStaticObjects_CheckRefcnt(PyInterpreterState *interp) { _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(_get_sourcefile)); _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(_handle_fromlist)); _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(_initializing)); + _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(_io)); _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(_is_text_encoding)); _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(_length_)); _PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(_limbo)); diff --git a/Include/internal/pycore_global_strings.h b/Include/internal/pycore_global_strings.h index 6b1c8424424d..8b23aa154793 100644 --- a/Include/internal/pycore_global_strings.h +++ b/Include/internal/pycore_global_strings.h @@ -63,6 +63,7 @@ struct _Py_global_strings { STRUCT_FOR_ID(True) STRUCT_FOR_ID(WarningMessage) STRUCT_FOR_ID(_) + STRUCT_FOR_ID(_WindowsConsoleIO) STRUCT_FOR_ID(__IOBase_closed) STRUCT_FOR_ID(__abc_tpflags__) STRUCT_FOR_ID(__abs__) @@ -238,6 +239,7 @@ struct _Py_global_strings { STRUCT_FOR_ID(_get_sourcefile) STRUCT_FOR_ID(_handle_fromlist) STRUCT_FOR_ID(_initializing) + 
STRUCT_FOR_ID(_io) STRUCT_FOR_ID(_is_text_encoding) STRUCT_FOR_ID(_length_) STRUCT_FOR_ID(_limbo) diff --git a/Include/internal/pycore_runtime_init_generated.h b/Include/internal/pycore_runtime_init_generated.h index fcb613083ffe..471efadb13bb 100644 --- a/Include/internal/pycore_runtime_init_generated.h +++ b/Include/internal/pycore_runtime_init_generated.h @@ -569,6 +569,7 @@ extern "C" { INIT_ID(True), \ INIT_ID(WarningMessage), \ INIT_ID(_), \ + INIT_ID(_WindowsConsoleIO), \ INIT_ID(__IOBase_closed), \ INIT_ID(__abc_tpflags__), \ INIT_ID(__abs__), \ @@ -744,6 +745,7 @@ extern "C" { INIT_ID(_get_sourcefile), \ INIT_ID(_handle_fromlist), \ INIT_ID(_initializing), \ + INIT_ID(_io), \ INIT_ID(_is_text_encoding), \ INIT_ID(_length_), \ INIT_ID(_limbo), \ diff --git a/Include/internal/pycore_unicodeobject_generated.h b/Include/internal/pycore_unicodeobject_generated.h index 301aee5210e7..b47d240e492f 100644 --- a/Include/internal/pycore_unicodeobject_generated.h +++ b/Include/internal/pycore_unicodeobject_generated.h @@ -32,6 +32,8 @@ _PyUnicode_InitStaticStrings(void) { PyUnicode_InternInPlace(&string); string = &_Py_ID(_); PyUnicode_InternInPlace(&string); + string = &_Py_ID(_WindowsConsoleIO); + PyUnicode_InternInPlace(&string); string = &_Py_ID(__IOBase_closed); PyUnicode_InternInPlace(&string); string = &_Py_ID(__abc_tpflags__); @@ -382,6 +384,8 @@ _PyUnicode_InitStaticStrings(void) { PyUnicode_InternInPlace(&string); string = &_Py_ID(_initializing); PyUnicode_InternInPlace(&string); + string = &_Py_ID(_io); + PyUnicode_InternInPlace(&string); string = &_Py_ID(_is_text_encoding); PyUnicode_InternInPlace(&string); string = &_Py_ID(_length_); diff --git a/Modules/_io/_iomodule.h b/Modules/_io/_iomodule.h index c260080f0e34..7617cb8fb70e 100644 --- a/Modules/_io/_iomodule.h +++ b/Modules/_io/_iomodule.h @@ -21,13 +21,9 @@ extern PyTypeObject PyBufferedRandom_Type; extern PyTypeObject PyTextIOWrapper_Type; extern PyTypeObject PyIncrementalNewlineDecoder_Type; -#ifndef Py_LIMITED_API #ifdef MS_WINDOWS extern PyTypeObject PyWindowsConsoleIO_Type; -PyAPI_DATA(PyObject *) _PyWindowsConsoleIO_Type; -#define PyWindowsConsoleIO_Check(op) (PyObject_TypeCheck((op), (PyTypeObject*)_PyWindowsConsoleIO_Type)) #endif /* MS_WINDOWS */ -#endif /* Py_LIMITED_API */ /* These functions are used as METH_NOARGS methods, are normally called * with args=NULL, and return a new reference. diff --git a/Modules/_io/winconsoleio.c b/Modules/_io/winconsoleio.c index 4f41ab965e2e..e913d8318746 100644 --- a/Modules/_io/winconsoleio.c +++ b/Modules/_io/winconsoleio.c @@ -260,7 +260,7 @@ _io__WindowsConsoleIO___init___impl(winconsoleio *self, PyObject *nameobj, int fd_is_own = 0; HANDLE handle = NULL; - assert(PyWindowsConsoleIO_Check(self)); + assert(PyObject_TypeCheck(self, (PyTypeObject *)&PyWindowsConsoleIO_Type)); if (self->fd >= 0) { if (self->closefd) { /* Have to close the existing file first. 
*/ @@ -1174,6 +1174,4 @@ PyTypeObject PyWindowsConsoleIO_Type = { 0, /* tp_finalize */ }; -PyObject * _PyWindowsConsoleIO_Type = (PyObject*)&PyWindowsConsoleIO_Type; - #endif /* MS_WINDOWS */ diff --git a/PC/_testconsole.c b/PC/_testconsole.c index a8308835d8f8..f14a2d45b1be 100644 --- a/PC/_testconsole.c +++ b/PC/_testconsole.c @@ -10,7 +10,7 @@ #ifdef MS_WINDOWS #include "pycore_fileutils.h" // _Py_get_osfhandle() -#include "..\modules\_io\_iomodule.h" +#include "pycore_runtime.h" // _Py_ID() #define WIN32_LEAN_AND_MEAN #include @@ -51,7 +51,14 @@ _testconsole_write_input_impl(PyObject *module, PyObject *file, { INPUT_RECORD *rec = NULL; - if (!PyWindowsConsoleIO_Check(file)) { + PyTypeObject *winconsoleio_type = (PyTypeObject *)_PyImport_GetModuleAttr( + &_Py_ID(_io), &_Py_ID(_WindowsConsoleIO)); + if (winconsoleio_type == NULL) { + return NULL; + } + int is_subclass = PyObject_TypeCheck(file, winconsoleio_type); + Py_DECREF(winconsoleio_type); + if (!is_subclass) { PyErr_SetString(PyExc_TypeError, "expected raw console object"); return NULL; } diff --git a/Python/pylifecycle.c b/Python/pylifecycle.c index a8a8e7f3d84f..045a2996e898 100644 --- a/Python/pylifecycle.c +++ b/Python/pylifecycle.c @@ -54,10 +54,6 @@ extern void _PyIO_Fini(void); #ifdef MS_WINDOWS # undef BYTE - - extern PyTypeObject PyWindowsConsoleIO_Type; -# define PyWindowsConsoleIO_Check(op) \ - (PyObject_TypeCheck((op), &PyWindowsConsoleIO_Type)) #endif #define PUTS(fd, str) _Py_write_noraise(fd, str, (int)strlen(str)) @@ -2358,8 +2354,16 @@ create_stdio(const PyConfig *config, PyObject* io, #ifdef MS_WINDOWS /* Windows console IO is always UTF-8 encoded */ - if (PyWindowsConsoleIO_Check(raw)) + PyTypeObject *winconsoleio_type = (PyTypeObject *)_PyImport_GetModuleAttr( + &_Py_ID(_io), &_Py_ID(_WindowsConsoleIO)); + if (winconsoleio_type == NULL) { + goto error; + } + int is_subclass = PyObject_TypeCheck(raw, winconsoleio_type); + Py_DECREF(winconsoleio_type); + if (is_subclass) { encoding = L"utf-8"; + } #endif text = PyUnicode_FromString(name); From webhook-mailer at python.org Wed Feb 15 16:58:57 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Wed, 15 Feb 2023 21:58:57 -0000 Subject: [Python-checkins] gh-99138: Isolate _zoneinfo (#99218) Message-ID: https://github.com/python/cpython/commit/c1ce0d178fe57b50f37578b285a343d77485ac02 commit: c1ce0d178fe57b50f37578b285a343d77485ac02 branch: main author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-15T22:58:48+01:00 summary: gh-99138: Isolate _zoneinfo (#99218) * Convert zone info type to heap type and add it to module state * Add global variables to module state files: A Misc/NEWS.d/next/Library/2023-01-02-22-41-44.gh-issue-99138.17hp9U.rst M Doc/library/zoneinfo.rst M Lib/test/test_zoneinfo/test_zoneinfo.py M Modules/_zoneinfo.c M Modules/clinic/_zoneinfo.c.h M Tools/c-analyzer/cpython/globals-to-fix.tsv diff --git a/Doc/library/zoneinfo.rst b/Doc/library/zoneinfo.rst index d2e5619e7e47..f8624da6e51d 100644 --- a/Doc/library/zoneinfo.rst +++ b/Doc/library/zoneinfo.rst @@ -241,7 +241,7 @@ The following class methods are also available: .. warning:: Invoking this function may change the semantics of datetimes using - ``ZoneInfo`` in surprising ways; this modifies process-wide global state + ``ZoneInfo`` in surprising ways; this modifies module state and thus may have wide-ranging effects. Only use it if you know that you need to. 
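The gh-99138 change above follows the PEP 687 recipe: the static ZoneInfo type becomes a heap type created from a spec, and the former C globals (the timedelta cache, the weak and strong zone caches, NO_TTINFO) move into per-module state that methods reach through their defining class. A minimal sketch of that pattern, using a hypothetical "demo" module rather than the real _zoneinfo sources:

/* demo.c -- hypothetical multi-phase module with per-module state;
 * illustrates the pattern only, not the actual _zoneinfo code. */
#include <Python.h>
#include <assert.h>

typedef struct {
    PyObject *cache;                  /* replaces a file-level static cache */
} demo_state;

static inline demo_state *
get_demo_state(PyObject *mod)
{
    void *state = PyModule_GetState(mod);
    assert(state != NULL);
    return (demo_state *)state;
}

static int
demo_exec(PyObject *mod)
{
    demo_state *state = get_demo_state(mod);
    state->cache = PyDict_New();      /* owned by this module object */
    return (state->cache == NULL) ? -1 : 0;
}

static int
demo_traverse(PyObject *mod, visitproc visit, void *arg)
{
    Py_VISIT(get_demo_state(mod)->cache);
    return 0;
}

static int
demo_clear(PyObject *mod)
{
    Py_CLEAR(get_demo_state(mod)->cache);
    return 0;
}

static PyModuleDef_Slot demo_slots[] = {
    {Py_mod_exec, demo_exec},
    {0, NULL},
};

static struct PyModuleDef demomodule = {
    .m_base = PyModuleDef_HEAD_INIT,
    .m_name = "demo",
    .m_size = sizeof(demo_state),     /* non-zero size requests module state */
    .m_slots = demo_slots,
    .m_traverse = demo_traverse,
    .m_clear = demo_clear,
};

PyMODINIT_FUNC
PyInit_demo(void)
{
    return PyModuleDef_Init(&demomodule);
}

Heap types created in the exec slot with PyType_FromModuleAndSpec() plug into the same scheme: their methods can use the METH_METHOD calling convention and recover the state with PyType_GetModuleState() on the defining class, which is what the clinic-generated utcoffset/dst/tzname wrappers in the diff below do.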
diff --git a/Lib/test/test_zoneinfo/test_zoneinfo.py b/Lib/test/test_zoneinfo/test_zoneinfo.py index fd0e3bc032ec..82041a2b4883 100644 --- a/Lib/test/test_zoneinfo/test_zoneinfo.py +++ b/Lib/test/test_zoneinfo/test_zoneinfo.py @@ -1797,12 +1797,10 @@ def test_cache_location(self): self.assertTrue(hasattr(py_zoneinfo.ZoneInfo, "_weak_cache")) def test_gc_tracked(self): - # The pure Python version is tracked by the GC but (for now) the C - # version is not. import gc self.assertTrue(gc.is_tracked(py_zoneinfo.ZoneInfo)) - self.assertFalse(gc.is_tracked(c_zoneinfo.ZoneInfo)) + self.assertTrue(gc.is_tracked(c_zoneinfo.ZoneInfo)) @dataclasses.dataclass(frozen=True) diff --git a/Misc/NEWS.d/next/Library/2023-01-02-22-41-44.gh-issue-99138.17hp9U.rst b/Misc/NEWS.d/next/Library/2023-01-02-22-41-44.gh-issue-99138.17hp9U.rst new file mode 100644 index 000000000000..3dd4646f40e1 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-01-02-22-41-44.gh-issue-99138.17hp9U.rst @@ -0,0 +1 @@ +Apply :pep:`687` to :mod:`zoneinfo`. Patch by Erlend E. Aasland. diff --git a/Modules/_zoneinfo.c b/Modules/_zoneinfo.c index 9d38589ea3d1..9f423559f51a 100644 --- a/Modules/_zoneinfo.c +++ b/Modules/_zoneinfo.c @@ -19,10 +19,6 @@ class zoneinfo.ZoneInfo "PyObject *" "PyTypeObject *" [clinic start generated code]*/ /*[clinic end generated code: output=da39a3ee5e6b4b0d input=d12c73c0eef36df8]*/ -// Imports -static PyObject *io_open = NULL; -static PyObject *_tzpath_find_tzfile = NULL; -static PyObject *_common_mod = NULL; typedef struct TransitionRuleType TransitionRuleType; typedef struct StrongCacheNode StrongCacheNode; @@ -90,15 +86,21 @@ struct StrongCacheNode { PyObject *zone; }; -static PyTypeObject PyZoneInfo_ZoneInfoType; +typedef struct { + PyTypeObject *ZoneInfoType; + + // Imports + PyObject *io_open; + PyObject *_tzpath_find_tzfile; + PyObject *_common_mod; -// Globals -static PyObject *TIMEDELTA_CACHE = NULL; -static PyObject *ZONEINFO_WEAK_CACHE = NULL; -static StrongCacheNode *ZONEINFO_STRONG_CACHE = NULL; -static size_t ZONEINFO_STRONG_CACHE_MAX_SIZE = 8; + // Caches + PyObject *TIMEDELTA_CACHE; + PyObject *ZONEINFO_WEAK_CACHE; + StrongCacheNode *ZONEINFO_STRONG_CACHE; -static _ttinfo NO_TTINFO = {NULL, NULL, NULL, 0}; + _ttinfo NO_TTINFO; +} zoneinfo_state; // Constants static const int EPOCHORDINAL = 719163; @@ -114,9 +116,12 @@ static const int SOURCE_NOCACHE = 0; static const int SOURCE_CACHE = 1; static const int SOURCE_FILE = 2; +static const size_t ZONEINFO_STRONG_CACHE_MAX_SIZE = 8; + // Forward declarations static int -load_data(PyZoneInfo_ZoneInfo *self, PyObject *file_obj); +load_data(zoneinfo_state *state, PyZoneInfo_ZoneInfo *self, + PyObject *file_obj); static void utcoff_to_dstoff(size_t *trans_idx, long *utcoffs, long *dstoffs, unsigned char *isdsts, size_t num_transitions, @@ -127,7 +132,7 @@ ts_to_local(size_t *trans_idx, int64_t *trans_utc, long *utcoff, size_t num_transitions); static int -parse_tz_str(PyObject *tz_str_obj, _tzrule *out); +parse_tz_str(zoneinfo_state *state, PyObject *tz_str_obj, _tzrule *out); static Py_ssize_t parse_abbr(const char *const p, PyObject **abbr); @@ -146,26 +151,27 @@ find_tzrule_ttinfo_fromutc(_tzrule *rule, int64_t ts, int year, unsigned char *fold); static int -build_ttinfo(long utcoffset, long dstoffset, PyObject *tzname, _ttinfo *out); +build_ttinfo(zoneinfo_state *state, long utcoffset, long dstoffset, + PyObject *tzname, _ttinfo *out); static void xdecref_ttinfo(_ttinfo *ttinfo); static int ttinfo_eq(const _ttinfo *const tti0, const _ttinfo *const 
tti1); static int -build_tzrule(PyObject *std_abbr, PyObject *dst_abbr, long std_offset, - long dst_offset, TransitionRuleType *start, +build_tzrule(zoneinfo_state *state, PyObject *std_abbr, PyObject *dst_abbr, + long std_offset, long dst_offset, TransitionRuleType *start, TransitionRuleType *end, _tzrule *out); static void free_tzrule(_tzrule *tzrule); static PyObject * -load_timedelta(long seconds); +load_timedelta(zoneinfo_state *state, long seconds); static int get_local_timestamp(PyObject *dt, int64_t *local_ts); static _ttinfo * -find_ttinfo(PyZoneInfo_ZoneInfo *self, PyObject *dt); +find_ttinfo(zoneinfo_state *state, PyZoneInfo_ZoneInfo *self, PyObject *dt); static int ymd_to_ord(int y, int m, int d); @@ -176,27 +182,57 @@ static size_t _bisect(const int64_t value, const int64_t *arr, size_t size); static int -eject_from_strong_cache(const PyTypeObject *const type, PyObject *key); +eject_from_strong_cache(zoneinfo_state *state, const PyTypeObject *const type, + PyObject *key); static void -clear_strong_cache(const PyTypeObject *const type); +clear_strong_cache(zoneinfo_state *state, const PyTypeObject *const type); static void -update_strong_cache(const PyTypeObject *const type, PyObject *key, - PyObject *zone); +update_strong_cache(zoneinfo_state *state, const PyTypeObject *const type, + PyObject *key, PyObject *zone); static PyObject * -zone_from_strong_cache(const PyTypeObject *const type, PyObject *const key); +zone_from_strong_cache(zoneinfo_state *state, const PyTypeObject *const type, + PyObject *const key); + +static inline zoneinfo_state * +zoneinfo_get_state(PyObject *mod) +{ + zoneinfo_state *state = (zoneinfo_state *)PyModule_GetState(mod); + assert(state != NULL); + return state; +} + +static inline zoneinfo_state * +zoneinfo_get_state_by_cls(PyTypeObject *cls) +{ + zoneinfo_state *state = (zoneinfo_state *)PyType_GetModuleState(cls); + assert(state != NULL); + return state; +} + +static struct PyModuleDef zoneinfomodule; + +static inline zoneinfo_state * +zoneinfo_get_state_by_self(PyTypeObject *self) +{ + PyObject *mod = PyType_GetModuleByDef(self, &zoneinfomodule); + assert(mod != NULL); + return zoneinfo_get_state(mod); +} static PyObject * -zoneinfo_new_instance(PyTypeObject *type, PyObject *key) +zoneinfo_new_instance(zoneinfo_state *state, PyTypeObject *type, PyObject *key) { PyObject *file_obj = NULL; PyObject *file_path = NULL; - file_path = PyObject_CallFunctionObjArgs(_tzpath_find_tzfile, key, NULL); + file_path = PyObject_CallFunctionObjArgs(state->_tzpath_find_tzfile, + key, NULL); if (file_path == NULL) { return NULL; } else if (file_path == Py_None) { - file_obj = PyObject_CallMethod(_common_mod, "load_tzdata", "O", key); + PyObject *meth = state->_common_mod; + file_obj = PyObject_CallMethod(meth, "load_tzdata", "O", key); if (file_obj == NULL) { Py_DECREF(file_path); return NULL; @@ -209,13 +245,14 @@ zoneinfo_new_instance(PyTypeObject *type, PyObject *key) } if (file_obj == NULL) { - file_obj = PyObject_CallFunction(io_open, "Os", file_path, "rb"); + PyObject *func = state->io_open; + file_obj = PyObject_CallFunction(func, "Os", file_path, "rb"); if (file_obj == NULL) { goto error; } } - if (load_data((PyZoneInfo_ZoneInfo *)self, file_obj)) { + if (load_data(state, (PyZoneInfo_ZoneInfo *)self, file_obj)) { goto error; } @@ -248,10 +285,10 @@ zoneinfo_new_instance(PyTypeObject *type, PyObject *key) } static PyObject * -get_weak_cache(PyTypeObject *type) +get_weak_cache(zoneinfo_state *state, PyTypeObject *type) { - if (type == &PyZoneInfo_ZoneInfoType) 
{ - return ZONEINFO_WEAK_CACHE; + if (type == state->ZoneInfoType) { + return state->ZONEINFO_WEAK_CACHE; } else { PyObject *cache = @@ -273,12 +310,13 @@ zoneinfo_new(PyTypeObject *type, PyObject *args, PyObject *kw) return NULL; } - PyObject *instance = zone_from_strong_cache(type, key); + zoneinfo_state *state = zoneinfo_get_state_by_self(type); + PyObject *instance = zone_from_strong_cache(state, type, key); if (instance != NULL || PyErr_Occurred()) { return instance; } - PyObject *weak_cache = get_weak_cache(type); + PyObject *weak_cache = get_weak_cache(state, type); instance = PyObject_CallMethod(weak_cache, "get", "O", key, Py_None); if (instance == NULL) { return NULL; @@ -286,7 +324,7 @@ zoneinfo_new(PyTypeObject *type, PyObject *args, PyObject *kw) if (instance == Py_None) { Py_DECREF(instance); - PyObject *tmp = zoneinfo_new_instance(type, key); + PyObject *tmp = zoneinfo_new_instance(state, type, key); if (tmp == NULL) { return NULL; } @@ -300,14 +338,32 @@ zoneinfo_new(PyTypeObject *type, PyObject *args, PyObject *kw) ((PyZoneInfo_ZoneInfo *)instance)->source = SOURCE_CACHE; } - update_strong_cache(type, key, instance); + update_strong_cache(state, type, key, instance); return instance; } +static int +zoneinfo_traverse(PyZoneInfo_ZoneInfo *self, visitproc visit, void *arg) +{ + Py_VISIT(Py_TYPE(self)); + Py_VISIT(self->key); + return 0; +} + +static int +zoneinfo_clear(PyZoneInfo_ZoneInfo *self) +{ + Py_CLEAR(self->key); + Py_CLEAR(self->file_repr); + return 0; +} + static void zoneinfo_dealloc(PyObject *obj_self) { PyZoneInfo_ZoneInfo *self = (PyZoneInfo_ZoneInfo *)obj_self; + PyTypeObject *tp = Py_TYPE(self); + PyObject_GC_UnTrack(self); if (self->weakreflist != NULL) { PyObject_ClearWeakRefs(obj_self); @@ -336,16 +392,16 @@ zoneinfo_dealloc(PyObject *obj_self) free_tzrule(&(self->tzrule_after)); - Py_XDECREF(self->key); - Py_XDECREF(self->file_repr); - - Py_TYPE(self)->tp_free((PyObject *)self); + zoneinfo_clear(self); + tp->tp_free(obj_self); + Py_DECREF(tp); } /*[clinic input] @classmethod zoneinfo.ZoneInfo.from_file + cls: defining_class file_obj: object / key: object = None @@ -354,9 +410,9 @@ Create a ZoneInfo file from a file object. [clinic start generated code]*/ static PyObject * -zoneinfo_ZoneInfo_from_file_impl(PyTypeObject *type, PyObject *file_obj, - PyObject *key) -/*[clinic end generated code: output=68ed2022404ae5be input=ccfe73708133d2e4]*/ +zoneinfo_ZoneInfo_from_file_impl(PyTypeObject *type, PyTypeObject *cls, + PyObject *file_obj, PyObject *key) +/*[clinic end generated code: output=77887d1d56a48324 input=d26111f29eed6863]*/ { PyObject *file_repr = NULL; PyZoneInfo_ZoneInfo *self = NULL; @@ -372,7 +428,8 @@ zoneinfo_ZoneInfo_from_file_impl(PyTypeObject *type, PyObject *file_obj, goto error; } - if (load_data(self, file_obj)) { + zoneinfo_state *state = zoneinfo_get_state_by_cls(cls); + if (load_data(state, self, file_obj)) { goto error; } @@ -391,16 +448,20 @@ zoneinfo_ZoneInfo_from_file_impl(PyTypeObject *type, PyObject *file_obj, @classmethod zoneinfo.ZoneInfo.no_cache + cls: defining_class + / key: object Get a new instance of ZoneInfo, bypassing the cache. 
[clinic start generated code]*/ static PyObject * -zoneinfo_ZoneInfo_no_cache_impl(PyTypeObject *type, PyObject *key) -/*[clinic end generated code: output=751c6894ad66f91b input=bb24afd84a80ba46]*/ +zoneinfo_ZoneInfo_no_cache_impl(PyTypeObject *type, PyTypeObject *cls, + PyObject *key) +/*[clinic end generated code: output=b0b09b3344c171b7 input=0238f3d56b1ea3f1]*/ { - PyObject *out = zoneinfo_new_instance(type, key); + zoneinfo_state *state = zoneinfo_get_state_by_cls(cls); + PyObject *out = zoneinfo_new_instance(state, type, key); if (out != NULL) { ((PyZoneInfo_ZoneInfo *)out)->source = SOURCE_NOCACHE; } @@ -412,6 +473,8 @@ zoneinfo_ZoneInfo_no_cache_impl(PyTypeObject *type, PyObject *key) @classmethod zoneinfo.ZoneInfo.clear_cache + cls: defining_class + / * only_keys: object = None @@ -419,10 +482,12 @@ Clear the ZoneInfo cache. [clinic start generated code]*/ static PyObject * -zoneinfo_ZoneInfo_clear_cache_impl(PyTypeObject *type, PyObject *only_keys) -/*[clinic end generated code: output=eec0a3276f07bd90 input=8cff0182a95f295b]*/ +zoneinfo_ZoneInfo_clear_cache_impl(PyTypeObject *type, PyTypeObject *cls, + PyObject *only_keys) +/*[clinic end generated code: output=114d9b7c8a22e660 input=e32ca3bb396788ba]*/ { - PyObject *weak_cache = get_weak_cache(type); + zoneinfo_state *state = zoneinfo_get_state_by_cls(cls); + PyObject *weak_cache = get_weak_cache(state, type); if (only_keys == NULL || only_keys == Py_None) { PyObject *rv = PyObject_CallMethod(weak_cache, "clear", NULL); @@ -430,7 +495,7 @@ zoneinfo_ZoneInfo_clear_cache_impl(PyTypeObject *type, PyObject *only_keys) Py_DECREF(rv); } - clear_strong_cache(type); + clear_strong_cache(state, type); } else { PyObject *item = NULL; @@ -447,7 +512,7 @@ zoneinfo_ZoneInfo_clear_cache_impl(PyTypeObject *type, PyObject *only_keys) while ((item = PyIter_Next(iter))) { // Remove from strong cache - if (eject_from_strong_cache(type, item) < 0) { + if (eject_from_strong_cache(state, type, item) < 0) { Py_DECREF(item); break; } @@ -473,30 +538,68 @@ zoneinfo_ZoneInfo_clear_cache_impl(PyTypeObject *type, PyObject *only_keys) Py_RETURN_NONE; } +/*[clinic input] +zoneinfo.ZoneInfo.utcoffset + + cls: defining_class + dt: object + / + +Retrieve a timedelta representing the UTC offset in a zone at the given datetime. +[clinic start generated code]*/ + static PyObject * -zoneinfo_utcoffset(PyObject *self, PyObject *dt) +zoneinfo_ZoneInfo_utcoffset_impl(PyObject *self, PyTypeObject *cls, + PyObject *dt) +/*[clinic end generated code: output=b71016c319ba1f91 input=2bb6c5364938f19c]*/ { - _ttinfo *tti = find_ttinfo((PyZoneInfo_ZoneInfo *)self, dt); + zoneinfo_state *state = zoneinfo_get_state_by_cls(cls); + _ttinfo *tti = find_ttinfo(state, (PyZoneInfo_ZoneInfo *)self, dt); if (tti == NULL) { return NULL; } return Py_NewRef(tti->utcoff); } +/*[clinic input] +zoneinfo.ZoneInfo.dst + + cls: defining_class + dt: object + / + +Retrieve a timedelta representing the amount of DST applied in a zone at the given datetime. 
+[clinic start generated code]*/ + static PyObject * -zoneinfo_dst(PyObject *self, PyObject *dt) +zoneinfo_ZoneInfo_dst_impl(PyObject *self, PyTypeObject *cls, PyObject *dt) +/*[clinic end generated code: output=cb6168d7723a6ae6 input=2167fb80cf8645c6]*/ { - _ttinfo *tti = find_ttinfo((PyZoneInfo_ZoneInfo *)self, dt); + zoneinfo_state *state = zoneinfo_get_state_by_cls(cls); + _ttinfo *tti = find_ttinfo(state, (PyZoneInfo_ZoneInfo *)self, dt); if (tti == NULL) { return NULL; } return Py_NewRef(tti->dstoff); } +/*[clinic input] +zoneinfo.ZoneInfo.tzname + + cls: defining_class + dt: object + / + +Retrieve a string containing the abbreviation for the time zone that applies in a zone at a given datetime. +[clinic start generated code]*/ + static PyObject * -zoneinfo_tzname(PyObject *self, PyObject *dt) +zoneinfo_ZoneInfo_tzname_impl(PyObject *self, PyTypeObject *cls, + PyObject *dt) +/*[clinic end generated code: output=3b6ae6c3053ea75a input=15a59a4f92ed1f1f]*/ { - _ttinfo *tti = find_ttinfo((PyZoneInfo_ZoneInfo *)self, dt); + zoneinfo_state *state = zoneinfo_get_state_by_cls(cls); + _ttinfo *tti = find_ttinfo(state, (PyZoneInfo_ZoneInfo *)self, dt); if (tti == NULL) { return NULL; } @@ -693,28 +796,37 @@ zoneinfo_reduce(PyObject *obj_self, PyObject *unused) return rv; } +/*[clinic input] + at classmethod +zoneinfo.ZoneInfo._unpickle + + cls: defining_class + key: object + from_cache: unsigned_char(bitwise=True) + / + +Private method used in unpickling. +[clinic start generated code]*/ + static PyObject * -zoneinfo__unpickle(PyTypeObject *cls, PyObject *args) +zoneinfo_ZoneInfo__unpickle_impl(PyTypeObject *type, PyTypeObject *cls, + PyObject *key, unsigned char from_cache) +/*[clinic end generated code: output=556712fc709deecb input=6ac8c73eed3de316]*/ { - PyObject *key; - unsigned char from_cache; - if (!PyArg_ParseTuple(args, "OB", &key, &from_cache)) { - return NULL; - } - if (from_cache) { PyObject *val_args = Py_BuildValue("(O)", key); if (val_args == NULL) { return NULL; } - PyObject *rv = zoneinfo_new(cls, val_args, NULL); + PyObject *rv = zoneinfo_new(type, val_args, NULL); Py_DECREF(val_args); return rv; } else { - return zoneinfo_new_instance(cls, key); + zoneinfo_state *state = zoneinfo_get_state_by_cls(cls); + return zoneinfo_new_instance(state, type, key); } } @@ -732,14 +844,14 @@ zoneinfo__unpickle(PyTypeObject *cls, PyObject *args) * This returns a new reference to the timedelta. */ static PyObject * -load_timedelta(long seconds) +load_timedelta(zoneinfo_state *state, long seconds) { PyObject *rv; PyObject *pyoffset = PyLong_FromLong(seconds); if (pyoffset == NULL) { return NULL; } - rv = PyDict_GetItemWithError(TIMEDELTA_CACHE, pyoffset); + rv = PyDict_GetItemWithError(state->TIMEDELTA_CACHE, pyoffset); if (rv == NULL) { if (PyErr_Occurred()) { goto error; @@ -751,7 +863,7 @@ load_timedelta(long seconds) goto error; } - rv = PyDict_SetDefault(TIMEDELTA_CACHE, pyoffset, tmp); + rv = PyDict_SetDefault(state->TIMEDELTA_CACHE, pyoffset, tmp); Py_DECREF(tmp); } @@ -768,19 +880,20 @@ load_timedelta(long seconds) * initialized _ttinfo objects. 
*/ static int -build_ttinfo(long utcoffset, long dstoffset, PyObject *tzname, _ttinfo *out) +build_ttinfo(zoneinfo_state *state, long utcoffset, long dstoffset, + PyObject *tzname, _ttinfo *out) { out->utcoff = NULL; out->dstoff = NULL; out->tzname = NULL; out->utcoff_seconds = utcoffset; - out->utcoff = load_timedelta(utcoffset); + out->utcoff = load_timedelta(state, utcoffset); if (out->utcoff == NULL) { return -1; } - out->dstoff = load_timedelta(dstoffset); + out->dstoff = load_timedelta(state, dstoffset); if (out->dstoff == NULL) { return -1; } @@ -836,7 +949,7 @@ ttinfo_eq(const _ttinfo *const tti0, const _ttinfo *const tti1) * the object only needs to be freed / deallocated if this succeeds. */ static int -load_data(PyZoneInfo_ZoneInfo *self, PyObject *file_obj) +load_data(zoneinfo_state *state, PyZoneInfo_ZoneInfo *self, PyObject *file_obj) { PyObject *data_tuple = NULL; @@ -854,7 +967,8 @@ load_data(PyZoneInfo_ZoneInfo *self, PyObject *file_obj) size_t ttinfos_allocated = 0; - data_tuple = PyObject_CallMethod(_common_mod, "load_data", "O", file_obj); + data_tuple = PyObject_CallMethod(state->_common_mod, "load_data", "O", + file_obj); if (data_tuple == NULL) { goto error; @@ -1012,7 +1126,9 @@ load_data(PyZoneInfo_ZoneInfo *self, PyObject *file_obj) } ttinfos_allocated++; - if (build_ttinfo(utcoff[i], dstoff[i], tzname, &(self->_ttinfos[i]))) { + int rc = build_ttinfo(state, utcoff[i], dstoff[i], tzname, + &(self->_ttinfos[i])); + if (rc) { goto error; } } @@ -1044,7 +1160,7 @@ load_data(PyZoneInfo_ZoneInfo *self, PyObject *file_obj) } if (tz_str != Py_None && PyObject_IsTrue(tz_str)) { - if (parse_tz_str(tz_str, &(self->tzrule_after))) { + if (parse_tz_str(state, tz_str, &(self->tzrule_after))) { goto error; } } @@ -1063,8 +1179,8 @@ load_data(PyZoneInfo_ZoneInfo *self, PyObject *file_obj) } _ttinfo *tti = &(self->_ttinfos[idx]); - build_tzrule(tti->tzname, NULL, tti->utcoff_seconds, 0, NULL, NULL, - &(self->tzrule_after)); + build_tzrule(state, tti->tzname, NULL, tti->utcoff_seconds, 0, NULL, + NULL, &(self->tzrule_after)); // We've abused the build_tzrule constructor to construct an STD-only // rule mimicking whatever ttinfo we've picked up, but it's possible @@ -1463,7 +1579,7 @@ find_tzrule_ttinfo_fromutc(_tzrule *rule, int64_t ts, int year, * https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html */ static int -parse_tz_str(PyObject *tz_str_obj, _tzrule *out) +parse_tz_str(zoneinfo_state *state, PyObject *tz_str_obj, _tzrule *out) { PyObject *std_abbr = NULL; PyObject *dst_abbr = NULL; @@ -1555,7 +1671,8 @@ parse_tz_str(PyObject *tz_str_obj, _tzrule *out) } complete: - build_tzrule(std_abbr, dst_abbr, std_offset, dst_offset, start, end, out); + build_tzrule(state, std_abbr, dst_abbr, std_offset, dst_offset, + start, end, out); Py_DECREF(std_abbr); Py_XDECREF(dst_abbr); @@ -1913,8 +2030,8 @@ parse_transition_time(const char *const p, int8_t *hour, int8_t *minute, * Returns 0 on success. 
*/ static int -build_tzrule(PyObject *std_abbr, PyObject *dst_abbr, long std_offset, - long dst_offset, TransitionRuleType *start, +build_tzrule(zoneinfo_state *state, PyObject *std_abbr, PyObject *dst_abbr, + long std_offset, long dst_offset, TransitionRuleType *start, TransitionRuleType *end, _tzrule *out) { _tzrule rv = {{0}}; @@ -1922,13 +2039,13 @@ build_tzrule(PyObject *std_abbr, PyObject *dst_abbr, long std_offset, rv.start = start; rv.end = end; - if (build_ttinfo(std_offset, 0, std_abbr, &rv.std)) { + if (build_ttinfo(state, std_offset, 0, std_abbr, &rv.std)) { goto error; } if (dst_abbr != NULL) { rv.dst_diff = dst_offset - std_offset; - if (build_ttinfo(dst_offset, rv.dst_diff, dst_abbr, &rv.dst)) { + if (build_ttinfo(state, dst_offset, rv.dst_diff, dst_abbr, &rv.dst)) { goto error; } } @@ -2132,7 +2249,7 @@ _bisect(const int64_t value, const int64_t *arr, size_t size) /* Find the ttinfo rules that apply at a given local datetime. */ static _ttinfo * -find_ttinfo(PyZoneInfo_ZoneInfo *self, PyObject *dt) +find_ttinfo(zoneinfo_state *state, PyZoneInfo_ZoneInfo *self, PyObject *dt) { // datetime.time has a .tzinfo attribute that passes None as the dt // argument; it only really has meaning for fixed-offset zones. @@ -2141,7 +2258,7 @@ find_ttinfo(PyZoneInfo_ZoneInfo *self, PyObject *dt) return &(self->tzrule_after.std); } else { - return &NO_TTINFO; + return &(state->NO_TTINFO); } } @@ -2317,10 +2434,10 @@ strong_cache_free(StrongCacheNode *root) * the front of the cache. */ static void -remove_from_strong_cache(StrongCacheNode *node) +remove_from_strong_cache(zoneinfo_state *state, StrongCacheNode *node) { - if (ZONEINFO_STRONG_CACHE == node) { - ZONEINFO_STRONG_CACHE = node->next; + if (state->ZONEINFO_STRONG_CACHE == node) { + state->ZONEINFO_STRONG_CACHE = node->next; } if (node->prev != NULL) { @@ -2366,15 +2483,17 @@ find_in_strong_cache(const StrongCacheNode *const root, PyObject *const key) * This function is used to enable the per-key functionality in clear_cache. */ static int -eject_from_strong_cache(const PyTypeObject *const type, PyObject *key) +eject_from_strong_cache(zoneinfo_state *state, const PyTypeObject *const type, + PyObject *key) { - if (type != &PyZoneInfo_ZoneInfoType) { + if (type != state->ZoneInfoType) { return 0; } - StrongCacheNode *node = find_in_strong_cache(ZONEINFO_STRONG_CACHE, key); + StrongCacheNode *cache = state->ZONEINFO_STRONG_CACHE; + StrongCacheNode *node = find_in_strong_cache(cache, key); if (node != NULL) { - remove_from_strong_cache(node); + remove_from_strong_cache(state, node); strong_cache_node_free(node); } @@ -2390,14 +2509,15 @@ eject_from_strong_cache(const PyTypeObject *const type, PyObject *key) * it is not at the front of the cache, it needs to be moved there. */ static void -move_strong_cache_node_to_front(StrongCacheNode **root, StrongCacheNode *node) +move_strong_cache_node_to_front(zoneinfo_state *state, StrongCacheNode **root, + StrongCacheNode *node) { StrongCacheNode *root_p = *root; if (root_p == node) { return; } - remove_from_strong_cache(node); + remove_from_strong_cache(state, node); node->prev = NULL; node->next = root_p; @@ -2419,16 +2539,19 @@ move_strong_cache_node_to_front(StrongCacheNode **root, StrongCacheNode *node) * always returns a cache miss for subclasses. 
*/ static PyObject * -zone_from_strong_cache(const PyTypeObject *const type, PyObject *const key) +zone_from_strong_cache(zoneinfo_state *state, const PyTypeObject *const type, + PyObject *const key) { - if (type != &PyZoneInfo_ZoneInfoType) { + if (type != state->ZoneInfoType) { return NULL; // Strong cache currently only implemented for base class } - StrongCacheNode *node = find_in_strong_cache(ZONEINFO_STRONG_CACHE, key); + StrongCacheNode *cache = state->ZONEINFO_STRONG_CACHE; + StrongCacheNode *node = find_in_strong_cache(cache, key); if (node != NULL) { - move_strong_cache_node_to_front(&ZONEINFO_STRONG_CACHE, node); + StrongCacheNode **root = &(state->ZONEINFO_STRONG_CACHE); + move_strong_cache_node_to_front(state, root, node); return Py_NewRef(node->zone); } @@ -2442,16 +2565,16 @@ zone_from_strong_cache(const PyTypeObject *const type, PyObject *const key) * the cache to at most ZONEINFO_STRONG_CACHE_MAX_SIZE). */ static void -update_strong_cache(const PyTypeObject *const type, PyObject *key, - PyObject *zone) +update_strong_cache(zoneinfo_state *state, const PyTypeObject *const type, + PyObject *key, PyObject *zone) { - if (type != &PyZoneInfo_ZoneInfoType) { + if (type != state->ZoneInfoType) { return; } StrongCacheNode *new_node = strong_cache_node_new(key, zone); - - move_strong_cache_node_to_front(&ZONEINFO_STRONG_CACHE, new_node); + StrongCacheNode **root = &(state->ZONEINFO_STRONG_CACHE); + move_strong_cache_node_to_front(state, root, new_node); StrongCacheNode *node = new_node->next; for (size_t i = 1; i < ZONEINFO_STRONG_CACHE_MAX_SIZE; ++i) { @@ -2476,14 +2599,14 @@ update_strong_cache(const PyTypeObject *const type, PyObject *key, * for everything except the base class. */ void -clear_strong_cache(const PyTypeObject *const type) +clear_strong_cache(zoneinfo_state *state, const PyTypeObject *const type) { - if (type != &PyZoneInfo_ZoneInfoType) { + if (type != state->ZoneInfoType) { return; } - strong_cache_free(ZONEINFO_STRONG_CACHE); - ZONEINFO_STRONG_CACHE = NULL; + strong_cache_free(state->ZONEINFO_STRONG_CACHE); + state->ZONEINFO_STRONG_CACHE = NULL; } static PyObject * @@ -2499,29 +2622,17 @@ new_weak_cache(void) return weak_cache; } +// This function is not idempotent and must be called on a new module object. static int -initialize_caches(void) +initialize_caches(zoneinfo_state *state) { - // TODO: Move to a PyModule_GetState / PEP 573 based caching system. 
- if (TIMEDELTA_CACHE == NULL) { - TIMEDELTA_CACHE = PyDict_New(); - } - else { - Py_INCREF(TIMEDELTA_CACHE); - } - - if (TIMEDELTA_CACHE == NULL) { + state->TIMEDELTA_CACHE = PyDict_New(); + if (state->TIMEDELTA_CACHE == NULL) { return -1; } - if (ZONEINFO_WEAK_CACHE == NULL) { - ZONEINFO_WEAK_CACHE = new_weak_cache(); - } - else { - Py_INCREF(ZONEINFO_WEAK_CACHE); - } - - if (ZONEINFO_WEAK_CACHE == NULL) { + state->ZONEINFO_WEAK_CACHE = new_weak_cache(); + if (state->ZONEINFO_WEAK_CACHE == NULL) { return -1; } @@ -2551,22 +2662,15 @@ static PyMethodDef zoneinfo_methods[] = { ZONEINFO_ZONEINFO_CLEAR_CACHE_METHODDEF ZONEINFO_ZONEINFO_NO_CACHE_METHODDEF ZONEINFO_ZONEINFO_FROM_FILE_METHODDEF - {"utcoffset", (PyCFunction)zoneinfo_utcoffset, METH_O, - PyDoc_STR("Retrieve a timedelta representing the UTC offset in a zone at " - "the given datetime.")}, - {"dst", (PyCFunction)zoneinfo_dst, METH_O, - PyDoc_STR("Retrieve a timedelta representing the amount of DST applied " - "in a zone at the given datetime.")}, - {"tzname", (PyCFunction)zoneinfo_tzname, METH_O, - PyDoc_STR("Retrieve a string containing the abbreviation for the time " - "zone that applies in a zone at a given datetime.")}, + ZONEINFO_ZONEINFO_UTCOFFSET_METHODDEF + ZONEINFO_ZONEINFO_DST_METHODDEF + ZONEINFO_ZONEINFO_TZNAME_METHODDEF {"fromutc", (PyCFunction)zoneinfo_fromutc, METH_O, PyDoc_STR("Given a datetime with local time in UTC, retrieve an adjusted " "datetime in local time.")}, {"__reduce__", (PyCFunction)zoneinfo_reduce, METH_NOARGS, PyDoc_STR("Function for serialization with the pickle protocol.")}, - {"_unpickle", (PyCFunction)zoneinfo__unpickle, METH_VARARGS | METH_CLASS, - PyDoc_STR("Private method used in unpickling.")}, + ZONEINFO_ZONEINFO__UNPICKLE_METHODDEF {"__init_subclass__", (PyCFunction)(void (*)(void))zoneinfo_init_subclass, METH_VARARGS | METH_KEYWORDS | METH_CLASS, PyDoc_STR("Function to initialize subclasses.")}, @@ -2579,50 +2683,88 @@ static PyMemberDef zoneinfo_members[] = { .type = T_OBJECT_EX, .flags = READONLY, .doc = NULL}, + {.name = "__weaklistoffset__", + .offset = offsetof(PyZoneInfo_ZoneInfo, weakreflist), + .type = T_PYSSIZET, + .flags = READONLY}, {NULL}, /* Sentinel */ }; -static PyTypeObject PyZoneInfo_ZoneInfoType = { - PyVarObject_HEAD_INIT(NULL, 0) // - .tp_name = "zoneinfo.ZoneInfo", - .tp_basicsize = sizeof(PyZoneInfo_ZoneInfo), - .tp_weaklistoffset = offsetof(PyZoneInfo_ZoneInfo, weakreflist), - .tp_repr = (reprfunc)zoneinfo_repr, - .tp_str = (reprfunc)zoneinfo_str, - .tp_getattro = PyObject_GenericGetAttr, - .tp_flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE), - /* .tp_doc = zoneinfo_doc, */ - .tp_methods = zoneinfo_methods, - .tp_members = zoneinfo_members, - .tp_new = zoneinfo_new, - .tp_dealloc = zoneinfo_dealloc, +static PyType_Slot zoneinfo_slots[] = { + {Py_tp_repr, zoneinfo_repr}, + {Py_tp_str, zoneinfo_str}, + {Py_tp_getattro, PyObject_GenericGetAttr}, + {Py_tp_methods, zoneinfo_methods}, + {Py_tp_members, zoneinfo_members}, + {Py_tp_new, zoneinfo_new}, + {Py_tp_dealloc, zoneinfo_dealloc}, + {Py_tp_traverse, zoneinfo_traverse}, + {Py_tp_clear, zoneinfo_clear}, + {0, NULL}, +}; + +static PyType_Spec zoneinfo_spec = { + .name = "zoneinfo.ZoneInfo", + .basicsize = sizeof(PyZoneInfo_ZoneInfo), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | + Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_IMMUTABLETYPE), + .slots = zoneinfo_slots, }; ///// // Specify the _zoneinfo module static PyMethodDef module_methods[] = {{NULL, NULL}}; -static void -module_free(void *m) + +static int 
+module_traverse(PyObject *mod, visitproc visit, void *arg) { - Py_CLEAR(_tzpath_find_tzfile); - Py_CLEAR(_common_mod); - Py_CLEAR(io_open); + zoneinfo_state *state = zoneinfo_get_state(mod); - xdecref_ttinfo(&NO_TTINFO); + Py_VISIT(state->ZoneInfoType); + Py_VISIT(state->io_open); + Py_VISIT(state->_tzpath_find_tzfile); + Py_VISIT(state->_common_mod); + Py_VISIT(state->TIMEDELTA_CACHE); + Py_VISIT(state->ZONEINFO_WEAK_CACHE); - if (TIMEDELTA_CACHE != NULL && Py_REFCNT(TIMEDELTA_CACHE) > 1) { - Py_DECREF(TIMEDELTA_CACHE); - } else { - Py_CLEAR(TIMEDELTA_CACHE); + StrongCacheNode *node = state->ZONEINFO_STRONG_CACHE; + while (node != NULL) { + StrongCacheNode *next = node->next; + Py_VISIT(node->key); + Py_VISIT(node->zone); + node = next; } - if (ZONEINFO_WEAK_CACHE != NULL && Py_REFCNT(ZONEINFO_WEAK_CACHE) > 1) { - Py_DECREF(ZONEINFO_WEAK_CACHE); - } else { - Py_CLEAR(ZONEINFO_WEAK_CACHE); - } + Py_VISIT(state->NO_TTINFO.utcoff); + Py_VISIT(state->NO_TTINFO.dstoff); + Py_VISIT(state->NO_TTINFO.tzname); - clear_strong_cache(&PyZoneInfo_ZoneInfoType); + return 0; +} + +static int +module_clear(PyObject *mod) +{ + zoneinfo_state *state = zoneinfo_get_state(mod); + + Py_CLEAR(state->ZoneInfoType); + Py_CLEAR(state->io_open); + Py_CLEAR(state->_tzpath_find_tzfile); + Py_CLEAR(state->_common_mod); + Py_CLEAR(state->TIMEDELTA_CACHE); + Py_CLEAR(state->ZONEINFO_WEAK_CACHE); + clear_strong_cache(state, state->ZoneInfoType); + Py_CLEAR(state->NO_TTINFO.utcoff); + Py_CLEAR(state->NO_TTINFO.dstoff); + Py_CLEAR(state->NO_TTINFO.tzname); + + return 0; +} + +static void +module_free(void *mod) +{ + (void)module_clear((PyObject *)mod); } static int @@ -2632,39 +2774,45 @@ zoneinfomodule_exec(PyObject *m) if (PyDateTimeAPI == NULL) { goto error; } - PyZoneInfo_ZoneInfoType.tp_base = PyDateTimeAPI->TZInfoType; - if (PyType_Ready(&PyZoneInfo_ZoneInfoType) < 0) { + + zoneinfo_state *state = zoneinfo_get_state(m); + PyObject *base = (PyObject *)PyDateTimeAPI->TZInfoType; + state->ZoneInfoType = (PyTypeObject *)PyType_FromModuleAndSpec(m, + &zoneinfo_spec, base); + if (state->ZoneInfoType == NULL) { goto error; } - if (PyModule_AddObjectRef(m, "ZoneInfo", (PyObject *)&PyZoneInfo_ZoneInfoType) < 0) { + int rc = PyModule_AddObjectRef(m, "ZoneInfo", + (PyObject *)state->ZoneInfoType); + if (rc < 0) { goto error; } /* Populate imports */ - _tzpath_find_tzfile = + state->_tzpath_find_tzfile = _PyImport_GetModuleAttrString("zoneinfo._tzpath", "find_tzfile"); - if (_tzpath_find_tzfile == NULL) { + if (state->_tzpath_find_tzfile == NULL) { goto error; } - io_open = _PyImport_GetModuleAttrString("io", "open"); - if (io_open == NULL) { + state->io_open = _PyImport_GetModuleAttrString("io", "open"); + if (state->io_open == NULL) { goto error; } - _common_mod = PyImport_ImportModule("zoneinfo._common"); - if (_common_mod == NULL) { + state->_common_mod = PyImport_ImportModule("zoneinfo._common"); + if (state->_common_mod == NULL) { goto error; } - if (NO_TTINFO.utcoff == NULL) { - NO_TTINFO.utcoff = Py_NewRef(Py_None); - NO_TTINFO.dstoff = Py_NewRef(Py_None); - NO_TTINFO.tzname = Py_NewRef(Py_None); + if (state->NO_TTINFO.utcoff == NULL) { + state->NO_TTINFO.utcoff = Py_NewRef(Py_None); + state->NO_TTINFO.dstoff = Py_NewRef(Py_None); + state->NO_TTINFO.tzname = Py_NewRef(Py_None); } - if (initialize_caches()) { + if (initialize_caches(state)) { goto error; } @@ -2678,13 +2826,16 @@ static PyModuleDef_Slot zoneinfomodule_slots[] = { {Py_mod_exec, zoneinfomodule_exec}, {0, NULL}}; static struct PyModuleDef zoneinfomodule = 
{ - PyModuleDef_HEAD_INIT, + .m_base = PyModuleDef_HEAD_INIT, .m_name = "_zoneinfo", .m_doc = "C implementation of the zoneinfo module", - .m_size = 0, + .m_size = sizeof(zoneinfo_state), .m_methods = module_methods, .m_slots = zoneinfomodule_slots, - .m_free = (freefunc)module_free}; + .m_traverse = module_traverse, + .m_clear = module_clear, + .m_free = module_free, +}; PyMODINIT_FUNC PyInit__zoneinfo(void) diff --git a/Modules/clinic/_zoneinfo.c.h b/Modules/clinic/_zoneinfo.c.h index 78fcbfa9411b..ae62865e0f67 100644 --- a/Modules/clinic/_zoneinfo.c.h +++ b/Modules/clinic/_zoneinfo.c.h @@ -15,14 +15,14 @@ PyDoc_STRVAR(zoneinfo_ZoneInfo_from_file__doc__, "Create a ZoneInfo file from a file object."); #define ZONEINFO_ZONEINFO_FROM_FILE_METHODDEF \ - {"from_file", _PyCFunction_CAST(zoneinfo_ZoneInfo_from_file), METH_FASTCALL|METH_KEYWORDS|METH_CLASS, zoneinfo_ZoneInfo_from_file__doc__}, + {"from_file", _PyCFunction_CAST(zoneinfo_ZoneInfo_from_file), METH_METHOD|METH_FASTCALL|METH_KEYWORDS|METH_CLASS, zoneinfo_ZoneInfo_from_file__doc__}, static PyObject * -zoneinfo_ZoneInfo_from_file_impl(PyTypeObject *type, PyObject *file_obj, - PyObject *key); +zoneinfo_ZoneInfo_from_file_impl(PyTypeObject *type, PyTypeObject *cls, + PyObject *file_obj, PyObject *key); static PyObject * -zoneinfo_ZoneInfo_from_file(PyTypeObject *type, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +zoneinfo_ZoneInfo_from_file(PyTypeObject *type, PyTypeObject *cls, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) { PyObject *return_value = NULL; #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) @@ -65,7 +65,7 @@ zoneinfo_ZoneInfo_from_file(PyTypeObject *type, PyObject *const *args, Py_ssize_ } key = args[1]; skip_optional_pos: - return_value = zoneinfo_ZoneInfo_from_file_impl(type, file_obj, key); + return_value = zoneinfo_ZoneInfo_from_file_impl(type, cls, file_obj, key); exit: return return_value; @@ -78,13 +78,14 @@ PyDoc_STRVAR(zoneinfo_ZoneInfo_no_cache__doc__, "Get a new instance of ZoneInfo, bypassing the cache."); #define ZONEINFO_ZONEINFO_NO_CACHE_METHODDEF \ - {"no_cache", _PyCFunction_CAST(zoneinfo_ZoneInfo_no_cache), METH_FASTCALL|METH_KEYWORDS|METH_CLASS, zoneinfo_ZoneInfo_no_cache__doc__}, + {"no_cache", _PyCFunction_CAST(zoneinfo_ZoneInfo_no_cache), METH_METHOD|METH_FASTCALL|METH_KEYWORDS|METH_CLASS, zoneinfo_ZoneInfo_no_cache__doc__}, static PyObject * -zoneinfo_ZoneInfo_no_cache_impl(PyTypeObject *type, PyObject *key); +zoneinfo_ZoneInfo_no_cache_impl(PyTypeObject *type, PyTypeObject *cls, + PyObject *key); static PyObject * -zoneinfo_ZoneInfo_no_cache(PyTypeObject *type, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +zoneinfo_ZoneInfo_no_cache(PyTypeObject *type, PyTypeObject *cls, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) { PyObject *return_value = NULL; #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) @@ -120,7 +121,7 @@ zoneinfo_ZoneInfo_no_cache(PyTypeObject *type, PyObject *const *args, Py_ssize_t goto exit; } key = args[0]; - return_value = zoneinfo_ZoneInfo_no_cache_impl(type, key); + return_value = zoneinfo_ZoneInfo_no_cache_impl(type, cls, key); exit: return return_value; @@ -133,13 +134,14 @@ PyDoc_STRVAR(zoneinfo_ZoneInfo_clear_cache__doc__, "Clear the ZoneInfo cache."); #define ZONEINFO_ZONEINFO_CLEAR_CACHE_METHODDEF \ - {"clear_cache", _PyCFunction_CAST(zoneinfo_ZoneInfo_clear_cache), METH_FASTCALL|METH_KEYWORDS|METH_CLASS, zoneinfo_ZoneInfo_clear_cache__doc__}, + {"clear_cache", 
_PyCFunction_CAST(zoneinfo_ZoneInfo_clear_cache), METH_METHOD|METH_FASTCALL|METH_KEYWORDS|METH_CLASS, zoneinfo_ZoneInfo_clear_cache__doc__}, static PyObject * -zoneinfo_ZoneInfo_clear_cache_impl(PyTypeObject *type, PyObject *only_keys); +zoneinfo_ZoneInfo_clear_cache_impl(PyTypeObject *type, PyTypeObject *cls, + PyObject *only_keys); static PyObject * -zoneinfo_ZoneInfo_clear_cache(PyTypeObject *type, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +zoneinfo_ZoneInfo_clear_cache(PyTypeObject *type, PyTypeObject *cls, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) { PyObject *return_value = NULL; #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) @@ -180,9 +182,194 @@ zoneinfo_ZoneInfo_clear_cache(PyTypeObject *type, PyObject *const *args, Py_ssiz } only_keys = args[0]; skip_optional_kwonly: - return_value = zoneinfo_ZoneInfo_clear_cache_impl(type, only_keys); + return_value = zoneinfo_ZoneInfo_clear_cache_impl(type, cls, only_keys); exit: return return_value; } -/*[clinic end generated code: output=d2da73ef66146b83 input=a9049054013a1b77]*/ + +PyDoc_STRVAR(zoneinfo_ZoneInfo_utcoffset__doc__, +"utcoffset($self, dt, /)\n" +"--\n" +"\n" +"Retrieve a timedelta representing the UTC offset in a zone at the given datetime."); + +#define ZONEINFO_ZONEINFO_UTCOFFSET_METHODDEF \ + {"utcoffset", _PyCFunction_CAST(zoneinfo_ZoneInfo_utcoffset), METH_METHOD|METH_FASTCALL|METH_KEYWORDS, zoneinfo_ZoneInfo_utcoffset__doc__}, + +static PyObject * +zoneinfo_ZoneInfo_utcoffset_impl(PyObject *self, PyTypeObject *cls, + PyObject *dt); + +static PyObject * +zoneinfo_ZoneInfo_utcoffset(PyObject *self, PyTypeObject *cls, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + # define KWTUPLE (PyObject *)&_Py_SINGLETON(tuple_empty) + #else + # define KWTUPLE NULL + #endif + + static const char * const _keywords[] = {"", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "utcoffset", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[1]; + PyObject *dt; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 1, 1, 0, argsbuf); + if (!args) { + goto exit; + } + dt = args[0]; + return_value = zoneinfo_ZoneInfo_utcoffset_impl(self, cls, dt); + +exit: + return return_value; +} + +PyDoc_STRVAR(zoneinfo_ZoneInfo_dst__doc__, +"dst($self, dt, /)\n" +"--\n" +"\n" +"Retrieve a timedelta representing the amount of DST applied in a zone at the given datetime."); + +#define ZONEINFO_ZONEINFO_DST_METHODDEF \ + {"dst", _PyCFunction_CAST(zoneinfo_ZoneInfo_dst), METH_METHOD|METH_FASTCALL|METH_KEYWORDS, zoneinfo_ZoneInfo_dst__doc__}, + +static PyObject * +zoneinfo_ZoneInfo_dst_impl(PyObject *self, PyTypeObject *cls, PyObject *dt); + +static PyObject * +zoneinfo_ZoneInfo_dst(PyObject *self, PyTypeObject *cls, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + # define KWTUPLE (PyObject *)&_Py_SINGLETON(tuple_empty) + #else + # define KWTUPLE NULL + #endif + + static const char * const _keywords[] = {"", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "dst", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[1]; + PyObject *dt; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 1, 1, 0, argsbuf); + if (!args) { + goto exit; + } + dt = args[0]; + return_value = 
zoneinfo_ZoneInfo_dst_impl(self, cls, dt); + +exit: + return return_value; +} + +PyDoc_STRVAR(zoneinfo_ZoneInfo_tzname__doc__, +"tzname($self, dt, /)\n" +"--\n" +"\n" +"Retrieve a string containing the abbreviation for the time zone that applies in a zone at a given datetime."); + +#define ZONEINFO_ZONEINFO_TZNAME_METHODDEF \ + {"tzname", _PyCFunction_CAST(zoneinfo_ZoneInfo_tzname), METH_METHOD|METH_FASTCALL|METH_KEYWORDS, zoneinfo_ZoneInfo_tzname__doc__}, + +static PyObject * +zoneinfo_ZoneInfo_tzname_impl(PyObject *self, PyTypeObject *cls, + PyObject *dt); + +static PyObject * +zoneinfo_ZoneInfo_tzname(PyObject *self, PyTypeObject *cls, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + # define KWTUPLE (PyObject *)&_Py_SINGLETON(tuple_empty) + #else + # define KWTUPLE NULL + #endif + + static const char * const _keywords[] = {"", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "tzname", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[1]; + PyObject *dt; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 1, 1, 0, argsbuf); + if (!args) { + goto exit; + } + dt = args[0]; + return_value = zoneinfo_ZoneInfo_tzname_impl(self, cls, dt); + +exit: + return return_value; +} + +PyDoc_STRVAR(zoneinfo_ZoneInfo__unpickle__doc__, +"_unpickle($type, key, from_cache, /)\n" +"--\n" +"\n" +"Private method used in unpickling."); + +#define ZONEINFO_ZONEINFO__UNPICKLE_METHODDEF \ + {"_unpickle", _PyCFunction_CAST(zoneinfo_ZoneInfo__unpickle), METH_METHOD|METH_FASTCALL|METH_KEYWORDS|METH_CLASS, zoneinfo_ZoneInfo__unpickle__doc__}, + +static PyObject * +zoneinfo_ZoneInfo__unpickle_impl(PyTypeObject *type, PyTypeObject *cls, + PyObject *key, unsigned char from_cache); + +static PyObject * +zoneinfo_ZoneInfo__unpickle(PyTypeObject *type, PyTypeObject *cls, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + # define KWTUPLE (PyObject *)&_Py_SINGLETON(tuple_empty) + #else + # define KWTUPLE NULL + #endif + + static const char * const _keywords[] = {"", "", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "_unpickle", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[2]; + PyObject *key; + unsigned char from_cache; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 2, 2, 0, argsbuf); + if (!args) { + goto exit; + } + key = args[0]; + { + unsigned long ival = PyLong_AsUnsignedLongMask(args[1]); + if (ival == (unsigned long)-1 && PyErr_Occurred()) { + goto exit; + } + else { + from_cache = (unsigned char) ival; + } + } + return_value = zoneinfo_ZoneInfo__unpickle_impl(type, cls, key, from_cache); + +exit: + return return_value; +} +/*[clinic end generated code: output=54051388dfc408af input=a9049054013a1b77]*/ diff --git a/Tools/c-analyzer/cpython/globals-to-fix.tsv b/Tools/c-analyzer/cpython/globals-to-fix.tsv index 52ea0b4901d4..6011b1604508 100644 --- a/Tools/c-analyzer/cpython/globals-to-fix.tsv +++ b/Tools/c-analyzer/cpython/globals-to-fix.tsv @@ -403,7 +403,6 @@ Modules/_pickle.c - PicklerMemoProxyType - Modules/_pickle.c - Pickler_Type - Modules/_pickle.c - UnpicklerMemoProxyType - Modules/_pickle.c - Unpickler_Type - -Modules/_zoneinfo.c - PyZoneInfo_ZoneInfoType - Modules/ossaudiodev.c - OSSAudioType - Modules/ossaudiodev.c - OSSMixerType - Modules/socketmodule.c 
- sock_type - @@ -442,11 +441,6 @@ Modules/xxmodule.c - ErrorObject - Modules/_ctypes/callproc.c _ctypes_get_errobj error_object_name - Modules/_ctypes/_ctypes.c CreateSwappedType suffix - -## other - during module init -Modules/_zoneinfo.c - io_open - -Modules/_zoneinfo.c - _tzpath_find_tzfile - -Modules/_zoneinfo.c - _common_mod - - ##----------------------- ## other @@ -481,8 +475,6 @@ Modules/_tkinter.c - tcl_lock - Modules/_tkinter.c - excInCmd - Modules/_tkinter.c - valInCmd - Modules/_tkinter.c - trbInCmd - -Modules/_zoneinfo.c - TIMEDELTA_CACHE - -Modules/_zoneinfo.c - ZONEINFO_WEAK_CACHE - ################################## @@ -556,9 +548,6 @@ Modules/_tkinter.c - HeadFHCD - Modules/_tkinter.c - stdin_ready - Modules/_tkinter.c - event_tstate - Modules/_xxsubinterpretersmodule.c - _globals - -Modules/_zoneinfo.c - ZONEINFO_STRONG_CACHE - -Modules/_zoneinfo.c - ZONEINFO_STRONG_CACHE_MAX_SIZE - -Modules/_zoneinfo.c - NO_TTINFO - Modules/readline.c - completer_word_break_characters - Modules/readline.c - _history_length - Modules/readline.c - should_auto_add_history - From webhook-mailer at python.org Wed Feb 15 17:33:13 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Wed, 15 Feb 2023 22:33:13 -0000 Subject: [Python-checkins] gh-101758: Clean Up Uses of Import State (gh-101919) Message-ID: https://github.com/python/cpython/commit/b2fc5492789623d656953d458f3eeaac03c1ef56 commit: b2fc5492789623d656953d458f3eeaac03c1ef56 branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-15T15:32:31-07:00 summary: gh-101758: Clean Up Uses of Import State (gh-101919) This change is almost entirely moving code around and hiding import state behind internal API. We introduce no changes to behavior, nor to non-internal API. (Since there was already going to be a lot of churn, I took this as an opportunity to re-organize import.c into topically-grouped sections of code.) The motivation is to simplify a number of upcoming changes. Specific changes: * move existing import-related code to import.c, wherever possible * add internal API for interacting with import state (both global and per-interpreter) * use only API outside of import.c (to limit churn there when changing the location, etc.) 
* consolidate the import-related state of PyInterpreterState into a single struct field (this changes layout slightly) * add macros for import state in import.c (to simplify changing the location) * group code in import.c into sections *remove _PyState_AddModule() https://github.com/python/cpython/issues/101758 files: M Include/internal/pycore_import.h M Include/internal/pycore_interp.h M Include/internal/pycore_pylifecycle.h M Include/internal/pycore_pystate.h M Include/internal/pycore_runtime_init.h M Include/internal/pycore_sysmodule.h M Lib/test/test_imp.py M Lib/test/test_stable_abi_ctypes.py M Misc/stable_abi.toml M Objects/moduleobject.c M PC/python3dll.c M Python/_warnings.c M Python/ceval.c M Python/dynload_shlib.c M Python/import.c M Python/importdl.c M Python/pylifecycle.c M Python/pystate.c M Python/pythonrun.c M Python/sysmodule.c diff --git a/Include/internal/pycore_import.h b/Include/internal/pycore_import.h index 9036dff67253..da766253ef6b 100644 --- a/Include/internal/pycore_import.h +++ b/Include/internal/pycore_import.h @@ -36,11 +36,112 @@ struct _import_runtime_state { const char * pkgcontext; }; +struct _import_state { + /* cached sys.modules dictionary */ + PyObject *modules; + /* This is the list of module objects for all legacy (single-phase init) + extension modules ever loaded in this process (i.e. imported + in this interpreter or in any other). Py_None stands in for + modules that haven't actually been imported in this interpreter. + + A module's index (PyModuleDef.m_base.m_index) is used to look up + the corresponding module object for this interpreter, if any. + (See PyState_FindModule().) When any extension module + is initialized during import, its moduledef gets initialized by + PyModuleDef_Init(), and the first time that happens for each + PyModuleDef, its index gets set to the current value of + a global counter (see _PyRuntimeState.imports.last_module_index). + The entry for that index in this interpreter remains unset until + the module is actually imported here. (Py_None is used as + a placeholder.) Note that multi-phase init modules always get + an index for which there will never be a module set. + + This is initialized lazily in PyState_AddModule(), which is also + where modules get added. 
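The gh-101758 description boils down to one pattern, visible in the pycore_import.h and pycore_interp.h hunks above: loose per-interpreter fields are gathered into a single struct _import_state member (interp->imports), and code outside import.c goes through small internal accessors such as _PyImport_GetModules(). A simplified, self-contained sketch of that consolidation pattern (the struct, macro, and accessor names below are illustrative stand-ins, not CPython's actual definitions):

/* state_demo.c -- stand-alone illustration of the consolidation pattern:
 * scattered per-interpreter fields become one state struct, and callers
 * use accessors so the storage location can change without churn.
 * All names here are stand-ins, not CPython's real definitions. */
#include <stdio.h>

struct import_state {
    void *modules;                 /* was a bare field on the interpreter */
    void *modules_by_index;
    void *importlib;
    int override_frozen_modules;
};

struct interp {
    /* ... other interpreter state ... */
    struct import_state imports;   /* single consolidated member */
};

/* A macro private to the "import" translation unit keeps call sites
 * stable if the location of the state moves again later. */
#define MODULES(interp) ((interp)->imports.modules)

static void *
import_get_modules(struct interp *interp)
{
    return MODULES(interp);
}

int
main(void)
{
    struct interp interp = {0};
    printf("modules: %p\n", import_get_modules(&interp));
    return 0;
}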
*/ + PyObject *modules_by_index; + /* importlib module._bootstrap */ + PyObject *importlib; + /* override for config->use_frozen_modules (for tests) + (-1: "off", 1: "on", 0: no override) */ + int override_frozen_modules; +#ifdef HAVE_DLOPEN + int dlopenflags; +#endif + PyObject *import_func; +}; + +#ifdef HAVE_DLOPEN +# include +# if HAVE_DECL_RTLD_NOW +# define _Py_DLOPEN_FLAGS RTLD_NOW +# else +# define _Py_DLOPEN_FLAGS RTLD_LAZY +# endif +# define DLOPENFLAGS_INIT .dlopenflags = _Py_DLOPEN_FLAGS, +#else +# define _Py_DLOPEN_FLAGS 0 +# define DLOPENFLAGS_INIT +#endif + +#define IMPORTS_INIT \ + { \ + .override_frozen_modules = 0, \ + DLOPENFLAGS_INIT \ + } + +extern void _PyImport_ClearCore(PyInterpreterState *interp); + +extern Py_ssize_t _PyImport_GetNextModuleIndex(void); +extern const char * _PyImport_ResolveNameWithPackageContext(const char *name); +extern const char * _PyImport_SwapPackageContext(const char *newcontext); + +extern int _PyImport_GetDLOpenFlags(PyInterpreterState *interp); +extern void _PyImport_SetDLOpenFlags(PyInterpreterState *interp, int new_val); + +extern PyObject * _PyImport_InitModules(PyInterpreterState *interp); +extern PyObject * _PyImport_GetModules(PyInterpreterState *interp); +extern void _PyImport_ClearModules(PyInterpreterState *interp); + +extern void _PyImport_ClearModulesByIndex(PyInterpreterState *interp); + +extern int _PyImport_InitDefaultImportFunc(PyInterpreterState *interp); +extern int _PyImport_IsDefaultImportFunc( + PyInterpreterState *interp, + PyObject *func); + +extern PyObject * _PyImport_GetImportlibLoader( + PyInterpreterState *interp, + const char *loader_name); +extern PyObject * _PyImport_GetImportlibExternalLoader( + PyInterpreterState *interp, + const char *loader_name); +extern PyObject * _PyImport_BlessMyLoader( + PyInterpreterState *interp, + PyObject *module_globals); +extern PyObject * _PyImport_ImportlibModuleRepr( + PyInterpreterState *interp, + PyObject *module); + + +extern PyStatus _PyImport_Init(void); +extern void _PyImport_Fini(void); +extern void _PyImport_Fini2(void); + +extern PyStatus _PyImport_InitCore( + PyThreadState *tstate, + PyObject *sysmod, + int importlib); +extern PyStatus _PyImport_InitExternal(PyThreadState *tstate); +extern void _PyImport_FiniCore(PyInterpreterState *interp); +extern void _PyImport_FiniExternal(PyInterpreterState *interp); + #ifdef HAVE_FORK extern PyStatus _PyImport_ReInitLock(void); #endif -extern PyObject* _PyImport_BootstrapImp(PyThreadState *tstate); + + +extern PyObject* _PyImport_GetBuiltinModuleNames(void); struct _module_alias { const char *name; /* ASCII encoded string */ diff --git a/Include/internal/pycore_interp.h b/Include/internal/pycore_interp.h index 0e3d46852f2e..60de31b336f6 100644 --- a/Include/internal/pycore_interp.h +++ b/Include/internal/pycore_interp.h @@ -21,6 +21,7 @@ extern "C" { #include "pycore_function.h" // FUNC_MAX_WATCHERS #include "pycore_genobject.h" // struct _Py_async_gen_state #include "pycore_gc.h" // struct _gc_runtime_state +#include "pycore_import.h" // struct _import_state #include "pycore_list.h" // struct _Py_list_state #include "pycore_global_objects.h" // struct _Py_interp_static_objects #include "pycore_tuple.h" // struct _Py_tuple_state @@ -92,37 +93,12 @@ struct _is { struct _ceval_state ceval; struct _gc_runtime_state gc; - // sys.modules dictionary - PyObject *modules; - /* This is the list of module objects for all legacy (single-phase init) - extension modules ever loaded in this process (i.e. 
imported - in this interpreter or in any other). Py_None stands in for - modules that haven't actually been imported in this interpreter. - - A module's index (PyModuleDef.m_base.m_index) is used to look up - the corresponding module object for this interpreter, if any. - (See PyState_FindModule().) When any extension module - is initialized during import, its moduledef gets initialized by - PyModuleDef_Init(), and the first time that happens for each - PyModuleDef, its index gets set to the current value of - a global counter (see _PyRuntimeState.imports.last_module_index). - The entry for that index in this interpreter remains unset until - the module is actually imported here. (Py_None is used as - a placeholder.) Note that multi-phase init modules always get - an index for which there will never be a module set. - - This is initialized lazily in _PyState_AddModule(), which is also - where modules get added. */ - PyObject *modules_by_index; + struct _import_state imports; + // Dictionary of the sys module PyObject *sysdict; // Dictionary of the builtins module PyObject *builtins; - // importlib module - PyObject *importlib; - // override for config->use_frozen_modules (for tests) - // (-1: "off", 1: "on", 0: no override) - int override_frozen_modules; PyObject *codec_search_path; PyObject *codec_search_cache; @@ -130,15 +106,11 @@ struct _is { int codecs_initialized; PyConfig config; -#ifdef HAVE_DLOPEN - int dlopenflags; -#endif unsigned long feature_flags; PyObject *dict; /* Stores per-interpreter state */ PyObject *builtins_copy; - PyObject *import_func; // Initialized to _PyEval_EvalFrameDefault(). _PyFrameEvalFunction eval_frame; @@ -205,7 +177,6 @@ struct _is { /* other API */ -extern void _PyInterpreterState_ClearModules(PyInterpreterState *interp); extern void _PyInterpreterState_Clear(PyThreadState *tstate); diff --git a/Include/internal/pycore_pylifecycle.h b/Include/internal/pycore_pylifecycle.h index 2d431befd74f..e7a318072052 100644 --- a/Include/internal/pycore_pylifecycle.h +++ b/Include/internal/pycore_pylifecycle.h @@ -30,7 +30,6 @@ PyAPI_FUNC(int) _Py_IsLocaleCoercionTarget(const char *ctype_loc); /* Various one-time initializers */ extern void _Py_InitVersion(void); -extern PyStatus _PyImport_Init(void); extern PyStatus _PyFaulthandler_Init(int enable); extern int _PyTraceMalloc_Init(int enable); extern PyObject * _PyBuiltin_Init(PyInterpreterState *interp); @@ -45,7 +44,6 @@ extern int _PyBuiltins_AddExceptions(PyObject * bltinmod); extern PyStatus _Py_HashRandomization_Init(const PyConfig *); extern PyStatus _PyTime_Init(void); -extern PyStatus _PyImportZip_Init(PyThreadState *tstate); extern PyStatus _PyGC_Init(PyInterpreterState *interp); extern PyStatus _PyAtExit_Init(PyInterpreterState *interp); extern int _Py_Deepfreeze_Init(void); @@ -55,8 +53,6 @@ extern int _Py_Deepfreeze_Init(void); extern int _PySignal_Init(int install_signal_handlers); extern void _PySignal_Fini(void); -extern void _PyImport_Fini(void); -extern void _PyImport_Fini2(void); extern void _PyGC_Fini(PyInterpreterState *interp); extern void _Py_HashRandomization_Fini(void); extern void _PyFaulthandler_Fini(void); diff --git a/Include/internal/pycore_pystate.h b/Include/internal/pycore_pystate.h index 7046ec8d9ada..638b86253879 100644 --- a/Include/internal/pycore_pystate.h +++ b/Include/internal/pycore_pystate.h @@ -152,12 +152,6 @@ extern void _PySignal_AfterFork(void); #endif -PyAPI_FUNC(int) _PyState_AddModule( - PyThreadState *tstate, - PyObject* module, - PyModuleDef* def); - - 
PyAPI_FUNC(int) _PyOS_InterruptOccurred(PyThreadState *tstate); #define HEAD_LOCK(runtime) \ diff --git a/Include/internal/pycore_runtime_init.h b/Include/internal/pycore_runtime_init.h index c6a27d076eae..a8d5953ff98b 100644 --- a/Include/internal/pycore_runtime_init.h +++ b/Include/internal/pycore_runtime_init.h @@ -97,23 +97,10 @@ extern "C" { ._main_interpreter = _PyInterpreterState_INIT, \ } -#ifdef HAVE_DLOPEN -# include -# if HAVE_DECL_RTLD_NOW -# define _Py_DLOPEN_FLAGS RTLD_NOW -# else -# define _Py_DLOPEN_FLAGS RTLD_LAZY -# endif -# define DLOPENFLAGS_INIT .dlopenflags = _Py_DLOPEN_FLAGS, -#else -# define _Py_DLOPEN_FLAGS 0 -# define DLOPENFLAGS_INIT -#endif - #define _PyInterpreterState_INIT \ { \ .id_refcount = -1, \ - DLOPENFLAGS_INIT \ + .imports = IMPORTS_INIT, \ .ceval = { \ .recursion_limit = Py_DEFAULT_RECURSION_LIMIT, \ }, \ diff --git a/Include/internal/pycore_sysmodule.h b/Include/internal/pycore_sysmodule.h index 10d092cdc30a..b4b1febafa44 100644 --- a/Include/internal/pycore_sysmodule.h +++ b/Include/internal/pycore_sysmodule.h @@ -20,6 +20,9 @@ extern void _PySys_ClearAuditHooks(PyThreadState *tstate); PyAPI_FUNC(int) _PySys_SetAttr(PyObject *, PyObject *); +extern int _PySys_ClearAttrString(PyInterpreterState *interp, + const char *name, int verbose); + #ifdef __cplusplus } #endif diff --git a/Lib/test/test_imp.py b/Lib/test/test_imp.py index 31dce21587e2..c85ab92307de 100644 --- a/Lib/test/test_imp.py +++ b/Lib/test/test_imp.py @@ -387,7 +387,7 @@ def check_with_reinit_reloaded(module, lookedup, initialized, check_basic_reloaded(mod, lookedup, initialized, init_count, before, reloaded) - # Currently _PyState_AddModule() always replaces the cached module. + # Currently PyState_AddModule() always replaces the cached module. self.assertIs(basic.look_up_self(), mod) self.assertEqual(basic.initialized_count(), expected_init_count) diff --git a/Lib/test/test_stable_abi_ctypes.py b/Lib/test/test_stable_abi_ctypes.py index e77c1c840988..7e50fbda2c07 100644 --- a/Lib/test/test_stable_abi_ctypes.py +++ b/Lib/test/test_stable_abi_ctypes.py @@ -864,7 +864,6 @@ def test_windows_feature_macros(self): "_PyObject_GC_Resize", "_PyObject_New", "_PyObject_NewVar", - "_PyState_AddModule", "_PyThreadState_Init", "_PyThreadState_Prealloc", "_PyWeakref_CallableProxyType", diff --git a/Misc/stable_abi.toml b/Misc/stable_abi.toml index 21ff96161334..c04a3a228caf 100644 --- a/Misc/stable_abi.toml +++ b/Misc/stable_abi.toml @@ -1684,9 +1684,6 @@ [function._PyObject_NewVar] added = '3.2' abi_only = true -[function._PyState_AddModule] - added = '3.2' - abi_only = true [function._PyThreadState_Init] added = '3.2' abi_only = true diff --git a/Objects/moduleobject.c b/Objects/moduleobject.c index 24190e320ee6..a0be19a3ca8a 100644 --- a/Objects/moduleobject.c +++ b/Objects/moduleobject.c @@ -42,10 +42,9 @@ PyModuleDef_Init(PyModuleDef* def) { assert(PyModuleDef_Type.tp_flags & Py_TPFLAGS_READY); if (def->m_base.m_index == 0) { - _PyRuntime.imports.last_module_index++; Py_SET_REFCNT(def, 1); Py_SET_TYPE(def, &PyModuleDef_Type); - def->m_base.m_index = _PyRuntime.imports.last_module_index; + def->m_base.m_index = _PyImport_GetNextModuleIndex(); } return (PyObject*)def; } @@ -209,24 +208,7 @@ _PyModule_CreateInitialized(PyModuleDef* module, int module_api_version) "module %s: PyModule_Create is incompatible with m_slots", name); return NULL; } - /* Make sure name is fully qualified. 
- - This is a bit of a hack: when the shared library is loaded, - the module name is "package.module", but the module calls - PyModule_Create*() with just "module" for the name. The shared - library loader squirrels away the true name of the module in - _Py_PackageContext, and PyModule_Create*() will substitute this - (if the name actually matches). - */ -#define _Py_PackageContext (_PyRuntime.imports.pkgcontext) - if (_Py_PackageContext != NULL) { - const char *p = strrchr(_Py_PackageContext, '.'); - if (p != NULL && strcmp(module->m_name, p+1) == 0) { - name = _Py_PackageContext; - _Py_PackageContext = NULL; - } - } -#undef _Py_PackageContext + name = _PyImport_ResolveNameWithPackageContext(name); if ((m = (PyModuleObject*)PyModule_New(name)) == NULL) return NULL; @@ -710,8 +692,7 @@ static PyObject * module_repr(PyModuleObject *m) { PyInterpreterState *interp = _PyInterpreterState_GET(); - - return PyObject_CallMethod(interp->importlib, "_module_repr", "O", m); + return _PyImport_ImportlibModuleRepr(interp, (PyObject *)m); } /* Check if the "_initializing" attribute of the module spec is set to true. diff --git a/PC/python3dll.c b/PC/python3dll.c index e30081936575..79f09037282f 100755 --- a/PC/python3dll.c +++ b/PC/python3dll.c @@ -34,7 +34,6 @@ EXPORT_FUNC(_PyObject_GC_NewVar) EXPORT_FUNC(_PyObject_GC_Resize) EXPORT_FUNC(_PyObject_New) EXPORT_FUNC(_PyObject_NewVar) -EXPORT_FUNC(_PyState_AddModule) EXPORT_FUNC(_PyThreadState_Init) EXPORT_FUNC(_PyThreadState_Prealloc) EXPORT_FUNC(Py_AddPendingCall) diff --git a/Python/_warnings.c b/Python/_warnings.c index e78f21644f37..d510381c365b 100644 --- a/Python/_warnings.c +++ b/Python/_warnings.c @@ -214,7 +214,7 @@ get_warnings_attr(PyInterpreterState *interp, PyObject *attr, int try_import) gone, then we can't even use PyImport_GetModule without triggering an interpreter abort. */ - if (!interp->modules) { + if (!_PyImport_GetModules(interp)) { return NULL; } warnings_module = PyImport_GetModule(&_Py_ID(warnings)); @@ -1050,7 +1050,6 @@ warnings_warn_impl(PyObject *module, PyObject *message, PyObject *category, static PyObject * get_source_line(PyInterpreterState *interp, PyObject *module_globals, int lineno) { - PyObject *external; PyObject *loader; PyObject *module_name; PyObject *get_source; @@ -1059,13 +1058,7 @@ get_source_line(PyInterpreterState *interp, PyObject *module_globals, int lineno PyObject *source_line; /* stolen from import.c */ - external = PyObject_GetAttrString(interp->importlib, "_bootstrap_external"); - if (external == NULL) { - return NULL; - } - - loader = PyObject_CallMethod(external, "_bless_my_loader", "O", module_globals, NULL); - Py_DECREF(external); + loader = _PyImport_BlessMyLoader(interp, module_globals); if (loader == NULL) { return NULL; } diff --git a/Python/ceval.c b/Python/ceval.c index 611d62b0eba9..09fd2f29266c 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -2688,7 +2688,7 @@ import_name(PyThreadState *tstate, _PyInterpreterFrame *frame, } PyObject *locals = frame->f_locals; /* Fast path for not overloaded __import__. 
*/ - if (import_func == tstate->interp->import_func) { + if (_PyImport_IsDefaultImportFunc(tstate->interp, import_func)) { int ilevel = _PyLong_AsInt(level); if (ilevel == -1 && _PyErr_Occurred(tstate)) { return NULL; diff --git a/Python/dynload_shlib.c b/Python/dynload_shlib.c index 3c5fd83df584..6761bba45798 100644 --- a/Python/dynload_shlib.c +++ b/Python/dynload_shlib.c @@ -75,7 +75,7 @@ _PyImport_FindSharedFuncptr(const char *prefix, return NULL; } - dlopenflags = _PyInterpreterState_GET()->dlopenflags; + dlopenflags = _PyImport_GetDLOpenFlags(_PyInterpreterState_GET()); handle = dlopen(pathname, dlopenflags); diff --git a/Python/import.c b/Python/import.c index 63ed2443657b..ae27aaf56848 100644 --- a/Python/import.c +++ b/Python/import.c @@ -4,7 +4,7 @@ #include "pycore_import.h" // _PyImport_BootstrapImp() #include "pycore_initconfig.h" // _PyStatus_OK() -#include "pycore_interp.h" // _PyInterpreterState_ClearModules() +#include "pycore_interp.h" // struct _import_runtime_state #include "pycore_namespace.h" // _PyNamespace_Type #include "pycore_pyerrors.h" // _PyErr_SetString() #include "pycore_pyhash.h" // _Py_KeyedHash() @@ -24,8 +24,18 @@ extern "C" { #endif -/* Forward references */ -static PyObject *import_add_module(PyThreadState *tstate, PyObject *name); + +/*[clinic input] +module _imp +[clinic start generated code]*/ +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=9c332475d8686284]*/ + +#include "clinic/import.c.h" + + +/*******************************/ +/* process-global import state */ +/*******************************/ /* This table is defined in config.c: */ extern struct _inittab _PyImport_Inittab[]; @@ -37,67 +47,51 @@ struct _inittab *PyImport_Inittab = _PyImport_Inittab; // we track the pointer here so we can deallocate it during finalization. 
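
Since this table has to be final before the interpreter starts, a minimal embedding sketch of the public inittab API may be useful here; the module name "emb" and its single function are illustrative only. PyImport_AppendInittab() must be called before Py_Initialize(), after which _PyImport_Init() snapshots the (possibly extended) PyImport_Inittab into the runtime state:

    #include <Python.h>

    static PyObject *
    emb_ping(PyObject *self, PyObject *args)
    {
        return PyUnicode_FromString("pong");
    }

    static PyMethodDef emb_methods[] = {
        {"ping", emb_ping, METH_NOARGS, "Return 'pong'."},
        {NULL, NULL, 0, NULL}
    };

    static struct PyModuleDef emb_def = {
        PyModuleDef_HEAD_INIT, "emb", NULL, -1, emb_methods
    };

    static PyObject *
    PyInit_emb(void)
    {
        return PyModule_Create(&emb_def);
    }

    int
    main(void)
    {
        /* Must happen before Py_Initialize(); the runtime later copies the
           (possibly extended) PyImport_Inittab for its own bookkeeping. */
        if (PyImport_AppendInittab("emb", &PyInit_emb) < 0) {
            return 1;
        }
        Py_Initialize();
        PyRun_SimpleString("import emb; print(emb.ping())");
        return Py_FinalizeEx() < 0 ? 1 : 0;
    }
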
static struct _inittab *inittab_copy = NULL; -/*[clinic input] -module _imp -[clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=9c332475d8686284]*/ - -#include "clinic/import.c.h" - -/* Initialize things */ -PyStatus -_PyImportZip_Init(PyThreadState *tstate) -{ - PyObject *path_hooks; - int err = 0; +/*******************************/ +/* runtime-global import state */ +/*******************************/ - path_hooks = PySys_GetObject("path_hooks"); - if (path_hooks == NULL) { - _PyErr_SetString(tstate, PyExc_RuntimeError, - "unable to get sys.path_hooks"); - goto error; - } +#define INITTAB _PyRuntime.imports.inittab +#define LAST_MODULE_INDEX _PyRuntime.imports.last_module_index +#define EXTENSIONS _PyRuntime.imports.extensions - int verbose = _PyInterpreterState_GetConfig(tstate->interp)->verbose; - if (verbose) { - PySys_WriteStderr("# installing zipimport hook\n"); - } +#define import_lock _PyRuntime.imports.lock.mutex +#define import_lock_thread _PyRuntime.imports.lock.thread +#define import_lock_level _PyRuntime.imports.lock.level - PyObject *zipimporter = _PyImport_GetModuleAttrString("zipimport", "zipimporter"); - if (zipimporter == NULL) { - _PyErr_Clear(tstate); /* No zipimporter object -- okay */ - if (verbose) { - PySys_WriteStderr("# can't import zipimport.zipimporter\n"); - } - } - else { - /* sys.path_hooks.insert(0, zipimporter) */ - err = PyList_Insert(path_hooks, 0, zipimporter); - Py_DECREF(zipimporter); - if (err < 0) { - goto error; - } - if (verbose) { - PySys_WriteStderr("# installed zipimport hook\n"); - } - } +#define FIND_AND_LOAD _PyRuntime.imports.find_and_load +#define PKGCONTEXT (_PyRuntime.imports.pkgcontext) + + +/*******************************/ +/* interpreter import state */ +/*******************************/ + +#define MODULES(interp) \ + (interp)->imports.modules +#define MODULES_BY_INDEX(interp) \ + (interp)->imports.modules_by_index +#define IMPORTLIB(interp) \ + (interp)->imports.importlib +#define OVERRIDE_FROZEN_MODULES(interp) \ + (interp)->imports.override_frozen_modules +#ifdef HAVE_DLOPEN +# define DLOPENFLAGS(interp) \ + (interp)->imports.dlopenflags +#endif +#define IMPORT_FUNC(interp) \ + (interp)->imports.import_func - return _PyStatus_OK(); - error: - PyErr_Print(); - return _PyStatus_ERR("initializing zipimport failed"); -} +/*******************/ +/* the import lock */ +/*******************/ /* Locking primitives to prevent parallel imports of the same module in different threads to return with a partially loaded module. These calls are serialized by the global interpreter lock. */ -#define import_lock _PyRuntime.imports.lock.mutex -#define import_lock_thread _PyRuntime.imports.lock.thread -#define import_lock_level _PyRuntime.imports.lock.level - void _PyImport_AcquireLock(void) { @@ -170,154 +164,45 @@ _PyImport_ReInitLock(void) } #endif -/*[clinic input] -_imp.lock_held - -Return True if the import lock is currently held, else False. - -On platforms without threads, return False. -[clinic start generated code]*/ - -static PyObject * -_imp_lock_held_impl(PyObject *module) -/*[clinic end generated code: output=8b89384b5e1963fc input=9b088f9b217d9bdf]*/ -{ - return PyBool_FromLong(import_lock_thread != PYTHREAD_INVALID_THREAD_ID); -} - -/*[clinic input] -_imp.acquire_lock - -Acquires the interpreter's import lock for the current thread. - -This lock should be used by import hooks to ensure thread-safety when importing -modules. On platforms without threads, this function does nothing. 
-[clinic start generated code]*/ - -static PyObject * -_imp_acquire_lock_impl(PyObject *module) -/*[clinic end generated code: output=1aff58cb0ee1b026 input=4a2d4381866d5fdc]*/ -{ - _PyImport_AcquireLock(); - Py_RETURN_NONE; -} - -/*[clinic input] -_imp.release_lock - -Release the interpreter's import lock. -On platforms without threads, this function does nothing. -[clinic start generated code]*/ +/***************/ +/* sys.modules */ +/***************/ -static PyObject * -_imp_release_lock_impl(PyObject *module) -/*[clinic end generated code: output=7faab6d0be178b0a input=934fb11516dd778b]*/ +PyObject * +_PyImport_InitModules(PyInterpreterState *interp) { - if (_PyImport_ReleaseLock() < 0) { - PyErr_SetString(PyExc_RuntimeError, - "not holding the import lock"); + assert(MODULES(interp) == NULL); + MODULES(interp) = PyDict_New(); + if (MODULES(interp) == NULL) { return NULL; } - Py_RETURN_NONE; -} - -PyStatus -_PyImport_Init(void) -{ - if (_PyRuntime.imports.inittab != NULL) { - return _PyStatus_ERR("global import state already initialized"); - } - PyStatus status = _PyStatus_OK(); - - size_t size; - for (size = 0; PyImport_Inittab[size].name != NULL; size++) - ; - size++; - - /* Force default raw memory allocator to get a known allocator to be able - to release the memory in _PyImport_Fini() */ - PyMemAllocatorEx old_alloc; - _PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc); - - /* Make the copy. */ - struct _inittab *copied = PyMem_RawMalloc(size * sizeof(struct _inittab)); - if (copied == NULL) { - status = PyStatus_NoMemory(); - goto done; - } - memcpy(copied, PyImport_Inittab, size * sizeof(struct _inittab)); - _PyRuntime.imports.inittab = copied; - -done: - PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc); - return status; + return MODULES(interp); } -static inline void _extensions_cache_clear(void); - -void -_PyImport_Fini(void) +PyObject * +_PyImport_GetModules(PyInterpreterState *interp) { - _extensions_cache_clear(); - if (import_lock != NULL) { - PyThread_free_lock(import_lock); - import_lock = NULL; - } - - /* Use the same memory allocator as _PyImport_Init(). */ - PyMemAllocatorEx old_alloc; - _PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc); - - /* Free memory allocated by _PyImport_Init() */ - struct _inittab *inittab = _PyRuntime.imports.inittab; - _PyRuntime.imports.inittab = NULL; - PyMem_RawFree(inittab); - - PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc); + return MODULES(interp); } void -_PyImport_Fini2(void) +_PyImport_ClearModules(PyInterpreterState *interp) { - /* Use the same memory allocator than PyImport_ExtendInittab(). */ - PyMemAllocatorEx old_alloc; - _PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc); - - // Reset PyImport_Inittab - PyImport_Inittab = _PyImport_Inittab; - - /* Free memory allocated by PyImport_ExtendInittab() */ - PyMem_RawFree(inittab_copy); - inittab_copy = NULL; - - PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc); + Py_SETREF(MODULES(interp), NULL); } -/* Helper for sys */ - PyObject * PyImport_GetModuleDict(void) { PyInterpreterState *interp = _PyInterpreterState_GET(); - if (interp->modules == NULL) { + if (MODULES(interp) == NULL) { Py_FatalError("interpreter has no modules dictionary"); } - return interp->modules; -} - -/* In some corner cases it is important to be sure that the import - machinery has been initialized (or not cleaned up yet). For - example, see issue #4236 and PyModule_Create2(). 
*/ - -int -_PyImport_IsInitialized(PyInterpreterState *interp) -{ - if (interp->modules == NULL) - return 0; - return 1; + return MODULES(interp); } +// This is only kept around for extensions that use _Py_IDENTIFIER. PyObject * _PyImport_GetModuleId(_Py_Identifier *nameid) { @@ -332,7 +217,7 @@ int _PyImport_SetModule(PyObject *name, PyObject *m) { PyInterpreterState *interp = _PyInterpreterState_GET(); - PyObject *modules = interp->modules; + PyObject *modules = MODULES(interp); return PyObject_SetItem(modules, name, m); } @@ -340,14 +225,14 @@ int _PyImport_SetModuleString(const char *name, PyObject *m) { PyInterpreterState *interp = _PyInterpreterState_GET(); - PyObject *modules = interp->modules; + PyObject *modules = MODULES(interp); return PyMapping_SetItemString(modules, name, m); } static PyObject * import_get_module(PyThreadState *tstate, PyObject *name) { - PyObject *modules = tstate->interp->modules; + PyObject *modules = MODULES(tstate->interp); if (modules == NULL) { _PyErr_SetString(tstate, PyExc_RuntimeError, "unable to get sys.modules"); @@ -370,7 +255,6 @@ import_get_module(PyThreadState *tstate, PyObject *name) return m; } - static int import_ensure_initialized(PyInterpreterState *interp, PyObject *mod, PyObject *name) { @@ -387,7 +271,7 @@ import_ensure_initialized(PyInterpreterState *interp, PyObject *mod, PyObject *n if (busy) { /* Wait until module is done importing. */ PyObject *value = _PyObject_CallMethodOneArg( - interp->importlib, &_Py_ID(_lock_unlock_module), name); + IMPORTLIB(interp), &_Py_ID(_lock_unlock_module), name); if (value == NULL) { return -1; } @@ -396,47 +280,381 @@ import_ensure_initialized(PyInterpreterState *interp, PyObject *mod, PyObject *n return 0; } +static void remove_importlib_frames(PyThreadState *tstate); -/* Helper for pythonrun.c -- return magic number and tag. */ - -long -PyImport_GetMagicNumber(void) +PyObject * +PyImport_GetModule(PyObject *name) { - long res; - PyInterpreterState *interp = _PyInterpreterState_GET(); - PyObject *external, *pyc_magic; + PyThreadState *tstate = _PyThreadState_GET(); + PyObject *mod; - external = PyObject_GetAttrString(interp->importlib, "_bootstrap_external"); - if (external == NULL) - return -1; - pyc_magic = PyObject_GetAttrString(external, "_RAW_MAGIC_NUMBER"); - Py_DECREF(external); - if (pyc_magic == NULL) - return -1; - res = PyLong_AsLong(pyc_magic); - Py_DECREF(pyc_magic); - return res; + mod = import_get_module(tstate, name); + if (mod != NULL && mod != Py_None) { + if (import_ensure_initialized(tstate->interp, mod, name) < 0) { + Py_DECREF(mod); + remove_importlib_frames(tstate); + return NULL; + } + } + return mod; } +/* Get the module object corresponding to a module name. + First check the modules dictionary if there's one there, + if not, create a new one and insert it in the modules dictionary. */ -extern const char * _PySys_ImplCacheTag; - -const char * -PyImport_GetMagicTag(void) +static PyObject * +import_add_module(PyThreadState *tstate, PyObject *name) { - return _PySys_ImplCacheTag; -} + PyObject *modules = MODULES(tstate->interp); + if (modules == NULL) { + _PyErr_SetString(tstate, PyExc_RuntimeError, + "no import module dictionary"); + return NULL; + } + PyObject *m; + if (PyDict_CheckExact(modules)) { + m = Py_XNewRef(PyDict_GetItemWithError(modules, name)); + } + else { + m = PyObject_GetItem(modules, name); + // For backward-compatibility we copy the behavior + // of PyDict_GetItemWithError(). 
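
On the calling side, the usual way to consult sys.modules from C is the public PyImport_GetModule(), which mirrors the dict/non-dict handling in this file: it returns a new reference, or NULL with no exception set when the module simply is not cached. A small sketch; the fallback to a real import is an illustrative choice, not something this change requires:

    #include <Python.h>

    /* Fetch a module from sys.modules if it is already imported, otherwise
       fall back to a real import.  Returns a new reference or NULL on error. */
    static PyObject *
    get_or_import(const char *name)
    {
        PyObject *nameobj = PyUnicode_FromString(name);
        if (nameobj == NULL) {
            return NULL;
        }
        /* New reference; NULL without an exception means "not in sys.modules". */
        PyObject *mod = PyImport_GetModule(nameobj);
        if (mod == NULL && !PyErr_Occurred()) {
            mod = PyImport_Import(nameobj);
        }
        Py_DECREF(nameobj);
        return mod;
    }
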
+ if (_PyErr_ExceptionMatches(tstate, PyExc_KeyError)) { + _PyErr_Clear(tstate); + } + } + if (_PyErr_Occurred(tstate)) { + return NULL; + } + if (m != NULL && PyModule_Check(m)) { + return m; + } + Py_XDECREF(m); + m = PyModule_NewObject(name); + if (m == NULL) + return NULL; + if (PyObject_SetItem(modules, name, m) != 0) { + Py_DECREF(m); + return NULL; + } -/* -We support a number of kinds of single-phase init builtin/extension modules: + return m; +} -* "basic" - * no module state (PyModuleDef.m_size == -1) - * does not support repeated init (we use PyModuleDef.m_base.m_copy) - * may have process-global state - * the module's def is cached in _PyRuntime.imports.extensions, - by (name, filename) +PyObject * +PyImport_AddModuleObject(PyObject *name) +{ + PyThreadState *tstate = _PyThreadState_GET(); + PyObject *mod = import_add_module(tstate, name); + if (mod) { + PyObject *ref = PyWeakref_NewRef(mod, NULL); + Py_DECREF(mod); + if (ref == NULL) { + return NULL; + } + mod = PyWeakref_GetObject(ref); + Py_DECREF(ref); + } + return mod; /* borrowed reference */ +} + + +PyObject * +PyImport_AddModule(const char *name) +{ + PyObject *nameobj = PyUnicode_FromString(name); + if (nameobj == NULL) { + return NULL; + } + PyObject *module = PyImport_AddModuleObject(nameobj); + Py_DECREF(nameobj); + return module; +} + + +/* Remove name from sys.modules, if it's there. + * Can be called with an exception raised. + * If fail to remove name a new exception will be chained with the old + * exception, otherwise the old exception is preserved. + */ +static void +remove_module(PyThreadState *tstate, PyObject *name) +{ + PyObject *type, *value, *traceback; + _PyErr_Fetch(tstate, &type, &value, &traceback); + + PyObject *modules = MODULES(tstate->interp); + if (PyDict_CheckExact(modules)) { + PyObject *mod = _PyDict_Pop(modules, name, Py_None); + Py_XDECREF(mod); + } + else if (PyMapping_DelItem(modules, name) < 0) { + if (_PyErr_ExceptionMatches(tstate, PyExc_KeyError)) { + _PyErr_Clear(tstate); + } + } + + _PyErr_ChainExceptions(type, value, traceback); +} + + +/************************************/ +/* per-interpreter modules-by-index */ +/************************************/ + +Py_ssize_t +_PyImport_GetNextModuleIndex(void) +{ + LAST_MODULE_INDEX++; + return LAST_MODULE_INDEX; +} + +static const char * +_modules_by_index_check(PyInterpreterState *interp, Py_ssize_t index) +{ + if (index == 0) { + return "invalid module index"; + } + if (MODULES_BY_INDEX(interp) == NULL) { + return "Interpreters module-list not accessible."; + } + if (index > PyList_GET_SIZE(MODULES_BY_INDEX(interp))) { + return "Module index out of bounds."; + } + return NULL; +} + +static PyObject * +_modules_by_index_get(PyInterpreterState *interp, PyModuleDef *def) +{ + Py_ssize_t index = def->m_base.m_index; + if (_modules_by_index_check(interp, index) != NULL) { + return NULL; + } + PyObject *res = PyList_GET_ITEM(MODULES_BY_INDEX(interp), index); + return res==Py_None ? 
NULL : res; +} + +static int +_modules_by_index_set(PyInterpreterState *interp, + PyModuleDef *def, PyObject *module) +{ + assert(def != NULL); + assert(def->m_slots == NULL); + assert(def->m_base.m_index > 0); + + if (MODULES_BY_INDEX(interp) == NULL) { + MODULES_BY_INDEX(interp) = PyList_New(0); + if (MODULES_BY_INDEX(interp) == NULL) { + return -1; + } + } + + Py_ssize_t index = def->m_base.m_index; + while (PyList_GET_SIZE(MODULES_BY_INDEX(interp)) <= index) { + if (PyList_Append(MODULES_BY_INDEX(interp), Py_None) < 0) { + return -1; + } + } + + return PyList_SetItem(MODULES_BY_INDEX(interp), index, Py_NewRef(module)); +} + +static int +_modules_by_index_clear(PyInterpreterState *interp, PyModuleDef *def) +{ + Py_ssize_t index = def->m_base.m_index; + const char *err = _modules_by_index_check(interp, index); + if (err != NULL) { + Py_FatalError(err); + return -1; + } + return PyList_SetItem(MODULES_BY_INDEX(interp), index, Py_NewRef(Py_None)); +} + + +PyObject* +PyState_FindModule(PyModuleDef* module) +{ + PyInterpreterState *interp = _PyInterpreterState_GET(); + if (module->m_slots) { + return NULL; + } + return _modules_by_index_get(interp, module); +} + +int +PyState_AddModule(PyObject* module, PyModuleDef* def) +{ + if (!def) { + Py_FatalError("module definition is NULL"); + return -1; + } + + PyThreadState *tstate = _PyThreadState_GET(); + if (def->m_slots) { + _PyErr_SetString(tstate, + PyExc_SystemError, + "PyState_AddModule called on module with slots"); + return -1; + } + + PyInterpreterState *interp = tstate->interp; + Py_ssize_t index = def->m_base.m_index; + if (MODULES_BY_INDEX(interp) && + index < PyList_GET_SIZE(MODULES_BY_INDEX(interp)) && + module == PyList_GET_ITEM(MODULES_BY_INDEX(interp), index)) + { + _Py_FatalErrorFormat(__func__, "module %p already added", module); + return -1; + } + + return _modules_by_index_set(interp, def, module); +} + +int +PyState_RemoveModule(PyModuleDef* def) +{ + PyThreadState *tstate = _PyThreadState_GET(); + if (def->m_slots) { + _PyErr_SetString(tstate, + PyExc_SystemError, + "PyState_RemoveModule called on module with slots"); + return -1; + } + return _modules_by_index_clear(tstate->interp, def); +} + + +// Used by finalize_modules() +void +_PyImport_ClearModulesByIndex(PyInterpreterState *interp) +{ + if (!MODULES_BY_INDEX(interp)) { + return; + } + + Py_ssize_t i; + for (i = 0; i < PyList_GET_SIZE(MODULES_BY_INDEX(interp)); i++) { + PyObject *m = PyList_GET_ITEM(MODULES_BY_INDEX(interp), i); + if (PyModule_Check(m)) { + /* cleanup the saved copy of module dicts */ + PyModuleDef *md = PyModule_GetDef(m); + if (md) { + Py_CLEAR(md->m_base.m_copy); + } + } + } + + /* Setting modules_by_index to NULL could be dangerous, so we + clear the list instead. */ + if (PyList_SetSlice(MODULES_BY_INDEX(interp), + 0, PyList_GET_SIZE(MODULES_BY_INDEX(interp)), + NULL)) { + PyErr_WriteUnraisable(MODULES_BY_INDEX(interp)); + } +} + + +/*********************/ +/* extension modules */ +/*********************/ + +/* Make sure name is fully qualified. + + This is a bit of a hack: when the shared library is loaded, + the module name is "package.module", but the module calls + PyModule_Create*() with just "module" for the name. The shared + library loader squirrels away the true name of the module in + _PyRuntime.imports.pkgcontext, and PyModule_Create*() will + substitute this (if the name actually matches). 
+*/ +const char * +_PyImport_ResolveNameWithPackageContext(const char *name) +{ + if (PKGCONTEXT != NULL) { + const char *p = strrchr(PKGCONTEXT, '.'); + if (p != NULL && strcmp(name, p+1) == 0) { + name = PKGCONTEXT; + PKGCONTEXT = NULL; + } + } + return name; +} + +const char * +_PyImport_SwapPackageContext(const char *newcontext) +{ + const char *oldcontext = PKGCONTEXT; + PKGCONTEXT = newcontext; + return oldcontext; +} + +#ifdef HAVE_DLOPEN +int +_PyImport_GetDLOpenFlags(PyInterpreterState *interp) +{ + return DLOPENFLAGS(interp); +} + +void +_PyImport_SetDLOpenFlags(PyInterpreterState *interp, int new_val) +{ + DLOPENFLAGS(interp) = new_val; +} +#endif // HAVE_DLOPEN + + +/* Common implementation for _imp.exec_dynamic and _imp.exec_builtin */ +static int +exec_builtin_or_dynamic(PyObject *mod) { + PyModuleDef *def; + void *state; + + if (!PyModule_Check(mod)) { + return 0; + } + + def = PyModule_GetDef(mod); + if (def == NULL) { + return 0; + } + + state = PyModule_GetState(mod); + if (state) { + /* Already initialized; skip reload */ + return 0; + } + + return PyModule_ExecDef(mod, def); +} + + +/*******************/ + +#if defined(__EMSCRIPTEN__) && defined(PY_CALL_TRAMPOLINE) +#include +EM_JS(PyObject*, _PyImport_InitFunc_TrampolineCall, (PyModInitFunction func), { + return wasmTable.get(func)(); +}); +#endif // __EMSCRIPTEN__ && PY_CALL_TRAMPOLINE + + +/*****************************/ +/* single-phase init modules */ +/*****************************/ + +/* +We support a number of kinds of single-phase init builtin/extension modules: + +* "basic" + * no module state (PyModuleDef.m_size == -1) + * does not support repeated init (we use PyModuleDef.m_base.m_copy) + * may have process-global state + * the module's def is cached in _PyRuntime.imports.extensions, + by (name, filename) * "reinit" * no module state (PyModuleDef.m_size == 0) * supports repeated init (m_copy is never used) @@ -512,7 +730,7 @@ gets even messier. 
static PyModuleDef * _extensions_cache_get(PyObject *filename, PyObject *name) { - PyObject *extensions = _PyRuntime.imports.extensions; + PyObject *extensions = EXTENSIONS; if (extensions == NULL) { return NULL; } @@ -528,13 +746,13 @@ _extensions_cache_get(PyObject *filename, PyObject *name) static int _extensions_cache_set(PyObject *filename, PyObject *name, PyModuleDef *def) { - PyObject *extensions = _PyRuntime.imports.extensions; + PyObject *extensions = EXTENSIONS; if (extensions == NULL) { extensions = PyDict_New(); if (extensions == NULL) { return -1; } - _PyRuntime.imports.extensions = extensions; + EXTENSIONS = extensions; } PyObject *key = PyTuple_Pack(2, filename, name); if (key == NULL) { @@ -551,7 +769,7 @@ _extensions_cache_set(PyObject *filename, PyObject *name, PyModuleDef *def) static void _extensions_cache_clear(void) { - Py_CLEAR(_PyRuntime.imports.extensions); + Py_CLEAR(EXTENSIONS); } static int @@ -569,7 +787,7 @@ fix_up_extension(PyObject *mod, PyObject *name, PyObject *filename) } PyThreadState *tstate = _PyThreadState_GET(); - if (_PyState_AddModule(tstate, mod, def) < 0) { + if (_modules_by_index_set(tstate->interp, def, mod) < 0) { return -1; } @@ -616,40 +834,19 @@ _PyImport_FixupExtensionObject(PyObject *mod, PyObject *name, return 0; } -int -_PyImport_FixupBuiltin(PyObject *mod, const char *name, PyObject *modules) -{ - int res = -1; - PyObject *nameobj; - nameobj = PyUnicode_InternFromString(name); - if (nameobj == NULL) { - return -1; - } - if (PyObject_SetItem(modules, nameobj, mod) < 0) { - goto finally; - } - if (fix_up_extension(mod, nameobj, nameobj) < 0) { - PyMapping_DelItem(modules, nameobj); - goto finally; - } - res = 0; - -finally: - Py_DECREF(nameobj); - return res; -} static PyObject * import_find_extension(PyThreadState *tstate, PyObject *name, PyObject *filename) { + /* Only single-phase init modules will be in the cache. */ PyModuleDef *def = _extensions_cache_get(filename, name); if (def == NULL) { return NULL; } PyObject *mod, *mdict; - PyObject *modules = tstate->interp->modules; + PyObject *modules = MODULES(tstate->interp); if (def->m_size == -1) { /* Module does not support repeated initialization */ @@ -679,7 +876,7 @@ import_find_extension(PyThreadState *tstate, PyObject *name, return NULL; } } - if (_PyState_AddModule(tstate, mod, def) < 0) { + if (_modules_by_index_set(tstate->interp, def, mod) < 0) { PyMapping_DelItem(modules, name); Py_DECREF(mod); return NULL; @@ -694,107 +891,268 @@ import_find_extension(PyThreadState *tstate, PyObject *name, } -/* Get the module object corresponding to a module name. - First check the modules dictionary if there's one there, - if not, create a new one and insert it in the modules dictionary. 
*/ +/*******************/ +/* builtin modules */ +/*******************/ -static PyObject * -import_add_module(PyThreadState *tstate, PyObject *name) +int +_PyImport_FixupBuiltin(PyObject *mod, const char *name, PyObject *modules) { - PyObject *modules = tstate->interp->modules; - if (modules == NULL) { - _PyErr_SetString(tstate, PyExc_RuntimeError, - "no import module dictionary"); - return NULL; + int res = -1; + PyObject *nameobj; + nameobj = PyUnicode_InternFromString(name); + if (nameobj == NULL) { + return -1; } - - PyObject *m; - if (PyDict_CheckExact(modules)) { - m = Py_XNewRef(PyDict_GetItemWithError(modules, name)); + if (PyObject_SetItem(modules, nameobj, mod) < 0) { + goto finally; } - else { - m = PyObject_GetItem(modules, name); - // For backward-compatibility we copy the behavior - // of PyDict_GetItemWithError(). - if (_PyErr_ExceptionMatches(tstate, PyExc_KeyError)) { - _PyErr_Clear(tstate); - } + if (fix_up_extension(mod, nameobj, nameobj) < 0) { + PyMapping_DelItem(modules, nameobj); + goto finally; } - if (_PyErr_Occurred(tstate)) { - return NULL; + res = 0; + +finally: + Py_DECREF(nameobj); + return res; +} + +/* Helper to test for built-in module */ + +static int +is_builtin(PyObject *name) +{ + int i; + struct _inittab *inittab = INITTAB; + for (i = 0; inittab[i].name != NULL; i++) { + if (_PyUnicode_EqualToASCIIString(name, inittab[i].name)) { + if (inittab[i].initfunc == NULL) + return -1; + else + return 1; + } } - if (m != NULL && PyModule_Check(m)) { - return m; + return 0; +} + +static PyObject* +create_builtin(PyThreadState *tstate, PyObject *name, PyObject *spec) +{ + PyObject *mod = import_find_extension(tstate, name, name); + if (mod || _PyErr_Occurred(tstate)) { + return mod; } - Py_XDECREF(m); - m = PyModule_NewObject(name); - if (m == NULL) - return NULL; - if (PyObject_SetItem(modules, name, m) != 0) { - Py_DECREF(m); - return NULL; + + PyObject *modules = MODULES(tstate->interp); + for (struct _inittab *p = INITTAB; p->name != NULL; p++) { + if (_PyUnicode_EqualToASCIIString(name, p->name)) { + if (p->initfunc == NULL) { + /* Cannot re-init internal module ("sys" or "builtins") */ + mod = PyImport_AddModuleObject(name); + return Py_XNewRef(mod); + } + mod = _PyImport_InitFunc_TrampolineCall(*p->initfunc); + if (mod == NULL) { + return NULL; + } + + if (PyObject_TypeCheck(mod, &PyModuleDef_Type)) { + return PyModule_FromDefAndSpec((PyModuleDef*)mod, spec); + } + else { + /* Remember pointer to module init function. */ + PyModuleDef *def = PyModule_GetDef(mod); + if (def == NULL) { + return NULL; + } + + def->m_base.m_init = p->initfunc; + if (_PyImport_FixupExtensionObject(mod, name, name, + modules) < 0) { + return NULL; + } + return mod; + } + } } - return m; + // not found + Py_RETURN_NONE; } -PyObject * -PyImport_AddModuleObject(PyObject *name) + +/*****************************/ +/* the builtin modules table */ +/*****************************/ + +/* API for embedding applications that want to add their own entries + to the table of built-in modules. This should normally be called + *before* Py_Initialize(). When the table resize fails, -1 is + returned and the existing table is unchanged. + + After a similar function by Just van Rossum. 
*/ + +int +PyImport_ExtendInittab(struct _inittab *newtab) { - PyThreadState *tstate = _PyThreadState_GET(); - PyObject *mod = import_add_module(tstate, name); - if (mod) { - PyObject *ref = PyWeakref_NewRef(mod, NULL); - Py_DECREF(mod); - if (ref == NULL) { - return NULL; - } - mod = PyWeakref_GetObject(ref); - Py_DECREF(ref); + struct _inittab *p; + size_t i, n; + int res = 0; + + if (INITTAB != NULL) { + Py_FatalError("PyImport_ExtendInittab() may not be called after Py_Initialize()"); } - return mod; /* borrowed reference */ + + /* Count the number of entries in both tables */ + for (n = 0; newtab[n].name != NULL; n++) + ; + if (n == 0) + return 0; /* Nothing to do */ + for (i = 0; PyImport_Inittab[i].name != NULL; i++) + ; + + /* Force default raw memory allocator to get a known allocator to be able + to release the memory in _PyImport_Fini2() */ + PyMemAllocatorEx old_alloc; + _PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc); + + /* Allocate new memory for the combined table */ + p = NULL; + if (i + n <= SIZE_MAX / sizeof(struct _inittab) - 1) { + size_t size = sizeof(struct _inittab) * (i + n + 1); + p = PyMem_RawRealloc(inittab_copy, size); + } + if (p == NULL) { + res = -1; + goto done; + } + + /* Copy the tables into the new memory at the first call + to PyImport_ExtendInittab(). */ + if (inittab_copy != PyImport_Inittab) { + memcpy(p, PyImport_Inittab, (i+1) * sizeof(struct _inittab)); + } + memcpy(p + i, newtab, (n + 1) * sizeof(struct _inittab)); + PyImport_Inittab = inittab_copy = p; + +done: + PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc); + return res; } +/* Shorthand to add a single entry given a name and a function */ -PyObject * -PyImport_AddModule(const char *name) +int +PyImport_AppendInittab(const char *name, PyObject* (*initfunc)(void)) { - PyObject *nameobj = PyUnicode_FromString(name); - if (nameobj == NULL) { - return NULL; + struct _inittab newtab[2]; + + if (INITTAB != NULL) { + Py_FatalError("PyImport_AppendInittab() may not be called after Py_Initialize()"); } - PyObject *module = PyImport_AddModuleObject(nameobj); - Py_DECREF(nameobj); - return module; + + memset(newtab, '\0', sizeof newtab); + + newtab[0].name = name; + newtab[0].initfunc = initfunc; + + return PyImport_ExtendInittab(newtab); } -/* Remove name from sys.modules, if it's there. - * Can be called with an exception raised. - * If fail to remove name a new exception will be chained with the old - * exception, otherwise the old exception is preserved. - */ +/* the internal table */ + +static int +init_builtin_modules_table(void) +{ + size_t size; + for (size = 0; PyImport_Inittab[size].name != NULL; size++) + ; + size++; + + /* Make the copy. 
*/ + struct _inittab *copied = PyMem_RawMalloc(size * sizeof(struct _inittab)); + if (copied == NULL) { + return -1; + } + memcpy(copied, PyImport_Inittab, size * sizeof(struct _inittab)); + INITTAB = copied; + return 0; +} + static void -remove_module(PyThreadState *tstate, PyObject *name) +fini_builtin_modules_table(void) { - PyObject *type, *value, *traceback; - _PyErr_Fetch(tstate, &type, &value, &traceback); + struct _inittab *inittab = INITTAB; + INITTAB = NULL; + PyMem_RawFree(inittab); +} - PyObject *modules = tstate->interp->modules; - if (PyDict_CheckExact(modules)) { - PyObject *mod = _PyDict_Pop(modules, name, Py_None); - Py_XDECREF(mod); +PyObject * +_PyImport_GetBuiltinModuleNames(void) +{ + PyObject *list = PyList_New(0); + if (list == NULL) { + return NULL; } - else if (PyMapping_DelItem(modules, name) < 0) { - if (_PyErr_ExceptionMatches(tstate, PyExc_KeyError)) { - _PyErr_Clear(tstate); + struct _inittab *inittab = INITTAB; + for (Py_ssize_t i = 0; inittab[i].name != NULL; i++) { + PyObject *name = PyUnicode_FromString(inittab[i].name); + if (name == NULL) { + Py_DECREF(list); + return NULL; } + if (PyList_Append(list, name) < 0) { + Py_DECREF(name); + Py_DECREF(list); + return NULL; + } + Py_DECREF(name); } + return list; +} - _PyErr_ChainExceptions(type, value, traceback); + +/********************/ +/* the magic number */ +/********************/ + +/* Helper for pythonrun.c -- return magic number and tag. */ + +long +PyImport_GetMagicNumber(void) +{ + long res; + PyInterpreterState *interp = _PyInterpreterState_GET(); + PyObject *external, *pyc_magic; + + external = PyObject_GetAttrString(IMPORTLIB(interp), "_bootstrap_external"); + if (external == NULL) + return -1; + pyc_magic = PyObject_GetAttrString(external, "_RAW_MAGIC_NUMBER"); + Py_DECREF(external); + if (pyc_magic == NULL) + return -1; + res = PyLong_AsLong(pyc_magic); + Py_DECREF(pyc_magic); + return res; +} + + +extern const char * _PySys_ImplCacheTag; + +const char * +PyImport_GetMagicTag(void) +{ + return _PySys_ImplCacheTag; } +/*********************************/ +/* a Python module's code object */ +/*********************************/ + /* Execute a code object in a module and return the module object * WITH INCREMENTED REFERENCE COUNT. If an error occurs, name is * removed from sys.modules, to avoid leaving damaged module objects @@ -851,7 +1209,7 @@ PyImport_ExecCodeModuleWithPathnames(const char *name, PyObject *co, Py_FatalError("no current interpreter"); } - external= PyObject_GetAttrString(interp->importlib, + external= PyObject_GetAttrString(IMPORTLIB(interp), "_bootstrap_external"); if (external != NULL) { pathobj = _PyObject_CallMethodOneArg( @@ -936,7 +1294,7 @@ PyImport_ExecCodeModuleObject(PyObject *name, PyObject *co, PyObject *pathname, if (pathname == NULL) { pathname = ((PyCodeObject *)co)->co_filename; } - external = PyObject_GetAttrString(tstate->interp->importlib, + external = PyObject_GetAttrString(IMPORTLIB(tstate->interp), "_bootstrap_external"); if (external == NULL) { Py_DECREF(d); @@ -989,204 +1347,10 @@ update_compiled_module(PyCodeObject *co, PyObject *newname) Py_DECREF(oldname); } -/*[clinic input] -_imp._fix_co_filename - - code: object(type="PyCodeObject *", subclass_of="&PyCode_Type") - Code object to change. - - path: unicode - File path to use. - / - -Changes code.co_filename to specify the passed-in file path. 
-[clinic start generated code]*/ - -static PyObject * -_imp__fix_co_filename_impl(PyObject *module, PyCodeObject *code, - PyObject *path) -/*[clinic end generated code: output=1d002f100235587d input=895ba50e78b82f05]*/ - -{ - update_compiled_module(code, path); - - Py_RETURN_NONE; -} - - -/* Helper to test for built-in module */ - -static int -is_builtin(PyObject *name) -{ - int i; - struct _inittab *inittab = _PyRuntime.imports.inittab; - for (i = 0; inittab[i].name != NULL; i++) { - if (_PyUnicode_EqualToASCIIString(name, inittab[i].name)) { - if (inittab[i].initfunc == NULL) - return -1; - else - return 1; - } - } - return 0; -} - - -/* Return a finder object for a sys.path/pkg.__path__ item 'p', - possibly by fetching it from the path_importer_cache dict. If it - wasn't yet cached, traverse path_hooks until a hook is found - that can handle the path item. Return None if no hook could; - this tells our caller that the path based finder could not find - a finder for this path item. Cache the result in - path_importer_cache. */ - -static PyObject * -get_path_importer(PyThreadState *tstate, PyObject *path_importer_cache, - PyObject *path_hooks, PyObject *p) -{ - PyObject *importer; - Py_ssize_t j, nhooks; - - /* These conditions are the caller's responsibility: */ - assert(PyList_Check(path_hooks)); - assert(PyDict_Check(path_importer_cache)); - - nhooks = PyList_Size(path_hooks); - if (nhooks < 0) - return NULL; /* Shouldn't happen */ - - importer = PyDict_GetItemWithError(path_importer_cache, p); - if (importer != NULL || _PyErr_Occurred(tstate)) { - return Py_XNewRef(importer); - } - - /* set path_importer_cache[p] to None to avoid recursion */ - if (PyDict_SetItem(path_importer_cache, p, Py_None) != 0) - return NULL; - - for (j = 0; j < nhooks; j++) { - PyObject *hook = PyList_GetItem(path_hooks, j); - if (hook == NULL) - return NULL; - importer = PyObject_CallOneArg(hook, p); - if (importer != NULL) - break; - - if (!_PyErr_ExceptionMatches(tstate, PyExc_ImportError)) { - return NULL; - } - _PyErr_Clear(tstate); - } - if (importer == NULL) { - Py_RETURN_NONE; - } - if (PyDict_SetItem(path_importer_cache, p, importer) < 0) { - Py_DECREF(importer); - return NULL; - } - return importer; -} - -PyObject * -PyImport_GetImporter(PyObject *path) -{ - PyThreadState *tstate = _PyThreadState_GET(); - PyObject *path_importer_cache = PySys_GetObject("path_importer_cache"); - PyObject *path_hooks = PySys_GetObject("path_hooks"); - if (path_importer_cache == NULL || path_hooks == NULL) { - return NULL; - } - return get_path_importer(tstate, path_importer_cache, path_hooks, path); -} - -#if defined(__EMSCRIPTEN__) && defined(PY_CALL_TRAMPOLINE) -#include -EM_JS(PyObject*, _PyImport_InitFunc_TrampolineCall, (PyModInitFunction func), { - return wasmTable.get(func)(); -}); -#endif // __EMSCRIPTEN__ && PY_CALL_TRAMPOLINE - -static PyObject* -create_builtin(PyThreadState *tstate, PyObject *name, PyObject *spec) -{ - PyObject *mod = import_find_extension(tstate, name, name); - if (mod || _PyErr_Occurred(tstate)) { - return mod; - } - - PyObject *modules = tstate->interp->modules; - for (struct _inittab *p = _PyRuntime.imports.inittab; p->name != NULL; p++) { - if (_PyUnicode_EqualToASCIIString(name, p->name)) { - if (p->initfunc == NULL) { - /* Cannot re-init internal module ("sys" or "builtins") */ - mod = PyImport_AddModuleObject(name); - return Py_XNewRef(mod); - } - mod = _PyImport_InitFunc_TrampolineCall(*p->initfunc); - if (mod == NULL) { - return NULL; - } - - if (PyObject_TypeCheck(mod, 
&PyModuleDef_Type)) { - return PyModule_FromDefAndSpec((PyModuleDef*)mod, spec); - } - else { - /* Remember pointer to module init function. */ - PyModuleDef *def = PyModule_GetDef(mod); - if (def == NULL) { - return NULL; - } - - def->m_base.m_init = p->initfunc; - if (_PyImport_FixupExtensionObject(mod, name, name, - modules) < 0) { - return NULL; - } - return mod; - } - } - } - - // not found - Py_RETURN_NONE; -} - - - -/*[clinic input] -_imp.create_builtin - - spec: object - / - -Create an extension module. -[clinic start generated code]*/ - -static PyObject * -_imp_create_builtin(PyObject *module, PyObject *spec) -/*[clinic end generated code: output=ace7ff22271e6f39 input=37f966f890384e47]*/ -{ - PyThreadState *tstate = _PyThreadState_GET(); - - PyObject *name = PyObject_GetAttrString(spec, "name"); - if (name == NULL) { - return NULL; - } - - if (!PyUnicode_Check(name)) { - PyErr_Format(PyExc_TypeError, - "name must be string, not %.200s", - Py_TYPE(name)->tp_name); - Py_DECREF(name); - return NULL; - } - - PyObject *mod = create_builtin(tstate, name, spec); - Py_DECREF(name); - return mod; -} +/******************/ +/* frozen modules */ +/******************/ /* Return true if the name is an alias. In that case, "alias" is set to the original module name. If it is an alias but the original @@ -1210,14 +1374,11 @@ resolve_module_alias(const char *name, const struct _module_alias *aliases, } } - -/* Frozen modules */ - static bool use_frozen(void) { PyInterpreterState *interp = _PyInterpreterState_GET(); - int override = interp->override_frozen_modules; + int override = OVERRIDE_FROZEN_MODULES(interp); if (override > 0) { return true; } @@ -1587,54 +1748,315 @@ PyImport_ImportFrozenModule(const char *name) } -/* Import a module, either built-in, frozen, or external, and return - its module object WITH INCREMENTED REFERENCE COUNT */ +/*************/ +/* importlib */ +/*************/ -PyObject * -PyImport_ImportModule(const char *name) +/* Import the _imp extension by calling manually _imp.create_builtin() and + _imp.exec_builtin() since importlib is not initialized yet. Initializing + importlib requires the _imp module: this function fix the bootstrap issue. + */ +static PyObject* +bootstrap_imp(PyThreadState *tstate) { - PyObject *pname; - PyObject *result; - - pname = PyUnicode_FromString(name); - if (pname == NULL) + PyObject *name = PyUnicode_FromString("_imp"); + if (name == NULL) { return NULL; - result = PyImport_Import(pname); - Py_DECREF(pname); - return result; -} + } + // Mock a ModuleSpec object just good enough for PyModule_FromDefAndSpec(): + // an object with just a name attribute. + // + // _imp.__spec__ is overridden by importlib._bootstrap._instal() anyway. + PyObject *attrs = Py_BuildValue("{sO}", "name", name); + if (attrs == NULL) { + goto error; + } + PyObject *spec = _PyNamespace_New(attrs); + Py_DECREF(attrs); + if (spec == NULL) { + goto error; + } -/* Import a module without blocking - * - * At first it tries to fetch the module from sys.modules. If the module was - * never loaded before it loads it with PyImport_ImportModule() unless another - * thread holds the import lock. In the latter case the function raises an - * ImportError instead of blocking. - * - * Returns the module object with incremented ref count. - */ -PyObject * -PyImport_ImportModuleNoBlock(const char *name) -{ - return PyImport_ImportModule(name); + // Create the _imp module from its definition. 
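
For contrast with the single-phase modules that get cached above: a multi-phase (PEP 489) module's init function returns its PyModuleDef via PyModuleDef_Init(), which is what create_builtin() detects with PyObject_TypeCheck(mod, &PyModuleDef_Type) before deferring to PyModule_FromDefAndSpec(); exec_builtin_or_dynamic() later runs the Py_mod_exec slot through PyModule_ExecDef(). A minimal sketch of such a module (the name "demo" and its constant are illustrative):

    #include <Python.h>

    static int
    demo_exec(PyObject *mod)
    {
        /* Runs when the module is executed (PyModule_ExecDef). */
        return PyModule_AddIntConstant(mod, "answer", 42);
    }

    static PyModuleDef_Slot demo_slots[] = {
        {Py_mod_exec, demo_exec},
        {0, NULL}
    };

    static struct PyModuleDef demo_def = {
        PyModuleDef_HEAD_INIT,
        "demo",       /* m_name */
        NULL,         /* m_doc */
        0,            /* m_size */
        NULL,         /* m_methods */
        demo_slots,   /* m_slots */
    };

    PyMODINIT_FUNC
    PyInit_demo(void)
    {
        /* Multi-phase init: return the def, not a module object. */
        return PyModuleDef_Init(&demo_def);
    }
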
+ PyObject *mod = create_builtin(tstate, name, spec); + Py_CLEAR(name); + Py_DECREF(spec); + if (mod == NULL) { + goto error; + } + assert(mod != Py_None); // not found + + // Execute the _imp module: call imp_module_exec(). + if (exec_builtin_or_dynamic(mod) < 0) { + Py_DECREF(mod); + goto error; + } + return mod; + +error: + Py_XDECREF(name); + return NULL; } +/* Global initializations. Can be undone by Py_FinalizeEx(). Don't + call this twice without an intervening Py_FinalizeEx() call. When + initializations fail, a fatal error is issued and the function does + not return. On return, the first thread and interpreter state have + been created. -/* Remove importlib frames from the traceback, - * except in Verbose mode. */ -static void -remove_importlib_frames(PyThreadState *tstate) + Locking: you must hold the interpreter lock while calling this. + (If the lock has not yet been initialized, that's equivalent to + having the lock, but you cannot use multiple threads.) + +*/ +static int +init_importlib(PyThreadState *tstate, PyObject *sysmod) { - const char *importlib_filename = ""; - const char *external_filename = ""; - const char *remove_frames = "_call_with_frames_removed"; - int always_trim = 0; - int in_importlib = 0; - PyObject *exception, *value, *base_tb, *tb; - PyObject **prev_link, **outer_link = NULL; + assert(!_PyErr_Occurred(tstate)); - /* Synopsis: if it's an ImportError, we trim all importlib chunks + PyInterpreterState *interp = tstate->interp; + int verbose = _PyInterpreterState_GetConfig(interp)->verbose; + + // Import _importlib through its frozen version, _frozen_importlib. + if (verbose) { + PySys_FormatStderr("import _frozen_importlib # frozen\n"); + } + if (PyImport_ImportFrozenModule("_frozen_importlib") <= 0) { + return -1; + } + PyObject *importlib = PyImport_AddModule("_frozen_importlib"); // borrowed + if (importlib == NULL) { + return -1; + } + IMPORTLIB(interp) = Py_NewRef(importlib); + + // Import the _imp module + if (verbose) { + PySys_FormatStderr("import _imp # builtin\n"); + } + PyObject *imp_mod = bootstrap_imp(tstate); + if (imp_mod == NULL) { + return -1; + } + if (_PyImport_SetModuleString("_imp", imp_mod) < 0) { + Py_DECREF(imp_mod); + return -1; + } + + // Install importlib as the implementation of import + PyObject *value = PyObject_CallMethod(importlib, "_install", + "OO", sysmod, imp_mod); + Py_DECREF(imp_mod); + if (value == NULL) { + return -1; + } + Py_DECREF(value); + + assert(!_PyErr_Occurred(tstate)); + return 0; +} + + +static int +init_importlib_external(PyInterpreterState *interp) +{ + PyObject *value; + value = PyObject_CallMethod(IMPORTLIB(interp), + "_install_external_importers", ""); + if (value == NULL) { + return -1; + } + Py_DECREF(value); + return 0; +} + +PyObject * +_PyImport_GetImportlibLoader(PyInterpreterState *interp, + const char *loader_name) +{ + return PyObject_GetAttrString(IMPORTLIB(interp), loader_name); +} + +PyObject * +_PyImport_GetImportlibExternalLoader(PyInterpreterState *interp, + const char *loader_name) +{ + PyObject *bootstrap = PyObject_GetAttrString(IMPORTLIB(interp), + "_bootstrap_external"); + if (bootstrap == NULL) { + return NULL; + } + + PyObject *loader_type = PyObject_GetAttrString(bootstrap, loader_name); + Py_DECREF(bootstrap); + return loader_type; +} + +PyObject * +_PyImport_BlessMyLoader(PyInterpreterState *interp, PyObject *module_globals) +{ + PyObject *external = PyObject_GetAttrString(IMPORTLIB(interp), + "_bootstrap_external"); + if (external == NULL) { + return NULL; + } + + PyObject 
*loader = PyObject_CallMethod(external, "_bless_my_loader", + "O", module_globals, NULL); + Py_DECREF(external); + return loader; +} + +PyObject * +_PyImport_ImportlibModuleRepr(PyInterpreterState *interp, PyObject *m) +{ + return PyObject_CallMethod(IMPORTLIB(interp), "_module_repr", "O", m); +} + + +/*******************/ + +/* Return a finder object for a sys.path/pkg.__path__ item 'p', + possibly by fetching it from the path_importer_cache dict. If it + wasn't yet cached, traverse path_hooks until a hook is found + that can handle the path item. Return None if no hook could; + this tells our caller that the path based finder could not find + a finder for this path item. Cache the result in + path_importer_cache. */ + +static PyObject * +get_path_importer(PyThreadState *tstate, PyObject *path_importer_cache, + PyObject *path_hooks, PyObject *p) +{ + PyObject *importer; + Py_ssize_t j, nhooks; + + /* These conditions are the caller's responsibility: */ + assert(PyList_Check(path_hooks)); + assert(PyDict_Check(path_importer_cache)); + + nhooks = PyList_Size(path_hooks); + if (nhooks < 0) + return NULL; /* Shouldn't happen */ + + importer = PyDict_GetItemWithError(path_importer_cache, p); + if (importer != NULL || _PyErr_Occurred(tstate)) { + return Py_XNewRef(importer); + } + + /* set path_importer_cache[p] to None to avoid recursion */ + if (PyDict_SetItem(path_importer_cache, p, Py_None) != 0) + return NULL; + + for (j = 0; j < nhooks; j++) { + PyObject *hook = PyList_GetItem(path_hooks, j); + if (hook == NULL) + return NULL; + importer = PyObject_CallOneArg(hook, p); + if (importer != NULL) + break; + + if (!_PyErr_ExceptionMatches(tstate, PyExc_ImportError)) { + return NULL; + } + _PyErr_Clear(tstate); + } + if (importer == NULL) { + Py_RETURN_NONE; + } + if (PyDict_SetItem(path_importer_cache, p, importer) < 0) { + Py_DECREF(importer); + return NULL; + } + return importer; +} + +PyObject * +PyImport_GetImporter(PyObject *path) +{ + PyThreadState *tstate = _PyThreadState_GET(); + PyObject *path_importer_cache = PySys_GetObject("path_importer_cache"); + PyObject *path_hooks = PySys_GetObject("path_hooks"); + if (path_importer_cache == NULL || path_hooks == NULL) { + return NULL; + } + return get_path_importer(tstate, path_importer_cache, path_hooks, path); +} + + +/*********************/ +/* importing modules */ +/*********************/ + +int +_PyImport_InitDefaultImportFunc(PyInterpreterState *interp) +{ + // Get the __import__ function + PyObject *import_func = _PyDict_GetItemStringWithError(interp->builtins, + "__import__"); + if (import_func == NULL) { + return -1; + } + IMPORT_FUNC(interp) = Py_NewRef(import_func); + return 0; +} + +int +_PyImport_IsDefaultImportFunc(PyInterpreterState *interp, PyObject *func) +{ + return func == IMPORT_FUNC(interp); +} + + +/* Import a module, either built-in, frozen, or external, and return + its module object WITH INCREMENTED REFERENCE COUNT */ + +PyObject * +PyImport_ImportModule(const char *name) +{ + PyObject *pname; + PyObject *result; + + pname = PyUnicode_FromString(name); + if (pname == NULL) + return NULL; + result = PyImport_Import(pname); + Py_DECREF(pname); + return result; +} + + +/* Import a module without blocking + * + * At first it tries to fetch the module from sys.modules. If the module was + * never loaded before it loads it with PyImport_ImportModule() unless another + * thread holds the import lock. In the latter case the function raises an + * ImportError instead of blocking. 
+ * + * Returns the module object with incremented ref count. + */ +PyObject * +PyImport_ImportModuleNoBlock(const char *name) +{ + return PyImport_ImportModule(name); +} + + +/* Remove importlib frames from the traceback, + * except in Verbose mode. */ +static void +remove_importlib_frames(PyThreadState *tstate) +{ + const char *importlib_filename = ""; + const char *external_filename = ""; + const char *remove_frames = "_call_with_frames_removed"; + int always_trim = 0; + int in_importlib = 0; + PyObject *exception, *value, *base_tb, *tb; + PyObject **prev_link, **outer_link = NULL; + + /* Synopsis: if it's an ImportError, we trim all importlib chunks from the traceback. We always trim chunks which end with a call to "_call_with_frames_removed". */ @@ -1851,8 +2273,8 @@ import_find_and_load(PyThreadState *tstate, PyObject *abs_name) PyObject *mod = NULL; PyInterpreterState *interp = tstate->interp; int import_time = _PyInterpreterState_GetConfig(interp)->import_time; -#define import_level _PyRuntime.imports.find_and_load.import_level -#define accumulated _PyRuntime.imports.find_and_load.accumulated +#define import_level FIND_AND_LOAD.import_level +#define accumulated FIND_AND_LOAD.accumulated _PyTime_t t1 = 0, accumulated_copy = accumulated; @@ -1873,7 +2295,7 @@ import_find_and_load(PyThreadState *tstate, PyObject *abs_name) * _PyDict_GetItemIdWithError(). */ if (import_time) { -#define header _PyRuntime.imports.find_and_load.header +#define header FIND_AND_LOAD.header if (header) { fputs("import time: self [us] | cumulative | imported package\n", stderr); @@ -1889,8 +2311,8 @@ import_find_and_load(PyThreadState *tstate, PyObject *abs_name) if (PyDTrace_IMPORT_FIND_LOAD_START_ENABLED()) PyDTrace_IMPORT_FIND_LOAD_START(PyUnicode_AsUTF8(abs_name)); - mod = PyObject_CallMethodObjArgs(interp->importlib, &_Py_ID(_find_and_load), - abs_name, interp->import_func, NULL); + mod = PyObject_CallMethodObjArgs(IMPORTLIB(interp), &_Py_ID(_find_and_load), + abs_name, IMPORT_FUNC(interp), NULL); if (PyDTrace_IMPORT_FIND_LOAD_DONE_ENABLED()) PyDTrace_IMPORT_FIND_LOAD_DONE(PyUnicode_AsUTF8(abs_name), @@ -1913,23 +2335,6 @@ import_find_and_load(PyThreadState *tstate, PyObject *abs_name) #undef accumulated } -PyObject * -PyImport_GetModule(PyObject *name) -{ - PyThreadState *tstate = _PyThreadState_GET(); - PyObject *mod; - - mod = import_get_module(tstate, name); - if (mod != NULL && mod != Py_None) { - if (import_ensure_initialized(tstate->interp, mod, name) < 0) { - Py_DECREF(mod); - remove_importlib_frames(tstate); - return NULL; - } - } - return mod; -} - PyObject * PyImport_ImportModuleLevelObject(PyObject *name, PyObject *globals, PyObject *locals, PyObject *fromlist, @@ -2059,8 +2464,8 @@ PyImport_ImportModuleLevelObject(PyObject *name, PyObject *globals, if (path) { Py_DECREF(path); final_mod = PyObject_CallMethodObjArgs( - interp->importlib, &_Py_ID(_handle_fromlist), - mod, fromlist, interp->import_func, NULL); + IMPORTLIB(interp), &_Py_ID(_handle_fromlist), + mod, fromlist, IMPORT_FUNC(interp), NULL); } else { final_mod = Py_NewRef(mod); @@ -2129,72 +2534,429 @@ PyImport_ReloadModule(PyObject *m) PyObject * PyImport_Import(PyObject *module_name) { - PyThreadState *tstate = _PyThreadState_GET(); - PyObject *globals = NULL; - PyObject *import = NULL; - PyObject *builtins = NULL; - PyObject *r = NULL; + PyThreadState *tstate = _PyThreadState_GET(); + PyObject *globals = NULL; + PyObject *import = NULL; + PyObject *builtins = NULL; + PyObject *r = NULL; + + PyObject *from_list = PyList_New(0); + if 
(from_list == NULL) { + goto err; + } + + /* Get the builtins from current globals */ + globals = PyEval_GetGlobals(); + if (globals != NULL) { + Py_INCREF(globals); + builtins = PyObject_GetItem(globals, &_Py_ID(__builtins__)); + if (builtins == NULL) + goto err; + } + else { + /* No globals -- use standard builtins, and fake globals */ + builtins = PyImport_ImportModuleLevel("builtins", + NULL, NULL, NULL, 0); + if (builtins == NULL) { + goto err; + } + globals = Py_BuildValue("{OO}", &_Py_ID(__builtins__), builtins); + if (globals == NULL) + goto err; + } + + /* Get the __import__ function from the builtins */ + if (PyDict_Check(builtins)) { + import = PyObject_GetItem(builtins, &_Py_ID(__import__)); + if (import == NULL) { + _PyErr_SetObject(tstate, PyExc_KeyError, &_Py_ID(__import__)); + } + } + else + import = PyObject_GetAttr(builtins, &_Py_ID(__import__)); + if (import == NULL) + goto err; + + /* Call the __import__ function with the proper argument list + Always use absolute import here. + Calling for side-effect of import. */ + r = PyObject_CallFunction(import, "OOOOi", module_name, globals, + globals, from_list, 0, NULL); + if (r == NULL) + goto err; + Py_DECREF(r); + + r = import_get_module(tstate, module_name); + if (r == NULL && !_PyErr_Occurred(tstate)) { + _PyErr_SetObject(tstate, PyExc_KeyError, module_name); + } + + err: + Py_XDECREF(globals); + Py_XDECREF(builtins); + Py_XDECREF(import); + Py_XDECREF(from_list); + + return r; +} + + +/*********************/ +/* runtime lifecycle */ +/*********************/ + +PyStatus +_PyImport_Init(void) +{ + if (INITTAB != NULL) { + return _PyStatus_ERR("global import state already initialized"); + } + + PyStatus status = _PyStatus_OK(); + + /* Force default raw memory allocator to get a known allocator to be able + to release the memory in _PyImport_Fini() */ + PyMemAllocatorEx old_alloc; + _PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc); + + if (init_builtin_modules_table() != 0) { + status = PyStatus_NoMemory(); + goto done; + } + +done: + PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc); + return status; +} + +void +_PyImport_Fini(void) +{ + /* Destroy the database used by _PyImport_{Fixup,Find}Extension */ + _extensions_cache_clear(); + if (import_lock != NULL) { + PyThread_free_lock(import_lock); + import_lock = NULL; + } + + /* Use the same memory allocator as _PyImport_Init(). */ + PyMemAllocatorEx old_alloc; + _PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc); + + /* Free memory allocated by _PyImport_Init() */ + fini_builtin_modules_table(); + + PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc); +} + +void +_PyImport_Fini2(void) +{ + /* Use the same memory allocator than PyImport_ExtendInittab(). */ + PyMemAllocatorEx old_alloc; + _PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc); + + // Reset PyImport_Inittab + PyImport_Inittab = _PyImport_Inittab; + + /* Free memory allocated by PyImport_ExtendInittab() */ + PyMem_RawFree(inittab_copy); + inittab_copy = NULL; + + PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc); +} + + +/*************************/ +/* interpreter lifecycle */ +/*************************/ + +PyStatus +_PyImport_InitCore(PyThreadState *tstate, PyObject *sysmod, int importlib) +{ + // XXX Initialize here: interp->modules and interp->import_func. + // XXX Initialize here: sys.modules and sys.meta_path. 
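
From the caller's perspective, PyImport_Import() resolves __import__ from the current globals/builtins (so import hooks and an overridden __import__ are honoured), performs the import for its side effect, and then returns the sys.modules entry as a new reference. A small usage sketch; the helper name and the "json"/"dumps" example are illustrative only:

    #include <Python.h>

    /* Import a module by name through whatever __import__ the caller's
       builtins provide, then fetch one attribute from it.
       Returns a new reference or NULL on error. */
    static PyObject *
    import_and_get(const char *modname, const char *attrname)
    {
        PyObject *name = PyUnicode_FromString(modname);
        if (name == NULL) {
            return NULL;
        }
        PyObject *mod = PyImport_Import(name);   /* new reference */
        Py_DECREF(name);
        if (mod == NULL) {
            return NULL;
        }
        PyObject *attr = PyObject_GetAttrString(mod, attrname);
        Py_DECREF(mod);
        return attr;
    }

    /* e.g. PyObject *dumps = import_and_get("json", "dumps"); */
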
+ + if (importlib) { + /* This call sets up builtin and frozen import support */ + if (init_importlib(tstate, sysmod) < 0) { + return _PyStatus_ERR("failed to initialize importlib"); + } + } + + return _PyStatus_OK(); +} + +/* In some corner cases it is important to be sure that the import + machinery has been initialized (or not cleaned up yet). For + example, see issue #4236 and PyModule_Create2(). */ + +int +_PyImport_IsInitialized(PyInterpreterState *interp) +{ + if (MODULES(interp) == NULL) + return 0; + return 1; +} + +/* Clear the direct per-interpreter import state, if not cleared already. */ +void +_PyImport_ClearCore(PyInterpreterState *interp) +{ + /* interp->modules should have been cleaned up and cleared already + by _PyImport_FiniCore(). */ + Py_CLEAR(MODULES(interp)); + Py_CLEAR(MODULES_BY_INDEX(interp)); + Py_CLEAR(IMPORTLIB(interp)); + Py_CLEAR(IMPORT_FUNC(interp)); +} + +void +_PyImport_FiniCore(PyInterpreterState *interp) +{ + int verbose = _PyInterpreterState_GetConfig(interp)->verbose; + + if (_PySys_ClearAttrString(interp, "meta_path", verbose) < 0) { + PyErr_WriteUnraisable(NULL); + } + + // XXX Pull in most of finalize_modules() in pylifecycle.c. + + if (_PySys_ClearAttrString(interp, "modules", verbose) < 0) { + PyErr_WriteUnraisable(NULL); + } + + _PyImport_ClearCore(interp); +} + +// XXX Add something like _PyImport_Disable() for use early in interp fini? + + +/* "external" imports */ + +static int +init_zipimport(PyThreadState *tstate, int verbose) +{ + PyObject *path_hooks = PySys_GetObject("path_hooks"); + if (path_hooks == NULL) { + _PyErr_SetString(tstate, PyExc_RuntimeError, + "unable to get sys.path_hooks"); + return -1; + } + + if (verbose) { + PySys_WriteStderr("# installing zipimport hook\n"); + } + + PyObject *zipimporter = _PyImport_GetModuleAttrString("zipimport", "zipimporter"); + if (zipimporter == NULL) { + _PyErr_Clear(tstate); /* No zipimporter object -- okay */ + if (verbose) { + PySys_WriteStderr("# can't import zipimport.zipimporter\n"); + } + } + else { + /* sys.path_hooks.insert(0, zipimporter) */ + int err = PyList_Insert(path_hooks, 0, zipimporter); + Py_DECREF(zipimporter); + if (err < 0) { + return -1; + } + if (verbose) { + PySys_WriteStderr("# installed zipimport hook\n"); + } + } + + return 0; +} + +PyStatus +_PyImport_InitExternal(PyThreadState *tstate) +{ + int verbose = _PyInterpreterState_GetConfig(tstate->interp)->verbose; + + // XXX Initialize here: sys.path_hooks and sys.path_importer_cache. + + if (init_importlib_external(tstate->interp) != 0) { + _PyErr_Print(tstate); + return _PyStatus_ERR("external importer setup failed"); + } + + if (init_zipimport(tstate, verbose) != 0) { + PyErr_Print(); + return _PyStatus_ERR("initializing zipimport failed"); + } + + return _PyStatus_OK(); +} + +void +_PyImport_FiniExternal(PyInterpreterState *interp) +{ + int verbose = _PyInterpreterState_GetConfig(interp)->verbose; + + // XXX Uninstall importlib metapath importers here? 
+ + if (_PySys_ClearAttrString(interp, "path_importer_cache", verbose) < 0) { + PyErr_WriteUnraisable(NULL); + } + if (_PySys_ClearAttrString(interp, "path_hooks", verbose) < 0) { + PyErr_WriteUnraisable(NULL); + } +} + + +/******************/ +/* module helpers */ +/******************/ + +PyObject * +_PyImport_GetModuleAttr(PyObject *modname, PyObject *attrname) +{ + PyObject *mod = PyImport_Import(modname); + if (mod == NULL) { + return NULL; + } + PyObject *result = PyObject_GetAttr(mod, attrname); + Py_DECREF(mod); + return result; +} + +PyObject * +_PyImport_GetModuleAttrString(const char *modname, const char *attrname) +{ + PyObject *pmodname = PyUnicode_FromString(modname); + if (pmodname == NULL) { + return NULL; + } + PyObject *pattrname = PyUnicode_FromString(attrname); + if (pattrname == NULL) { + Py_DECREF(pmodname); + return NULL; + } + PyObject *result = _PyImport_GetModuleAttr(pmodname, pattrname); + Py_DECREF(pattrname); + Py_DECREF(pmodname); + return result; +} + + +/**************/ +/* the module */ +/**************/ + +/*[clinic input] +_imp.lock_held + +Return True if the import lock is currently held, else False. + +On platforms without threads, return False. +[clinic start generated code]*/ + +static PyObject * +_imp_lock_held_impl(PyObject *module) +/*[clinic end generated code: output=8b89384b5e1963fc input=9b088f9b217d9bdf]*/ +{ + return PyBool_FromLong(import_lock_thread != PYTHREAD_INVALID_THREAD_ID); +} + +/*[clinic input] +_imp.acquire_lock + +Acquires the interpreter's import lock for the current thread. + +This lock should be used by import hooks to ensure thread-safety when importing +modules. On platforms without threads, this function does nothing. +[clinic start generated code]*/ + +static PyObject * +_imp_acquire_lock_impl(PyObject *module) +/*[clinic end generated code: output=1aff58cb0ee1b026 input=4a2d4381866d5fdc]*/ +{ + _PyImport_AcquireLock(); + Py_RETURN_NONE; +} + +/*[clinic input] +_imp.release_lock + +Release the interpreter's import lock. + +On platforms without threads, this function does nothing. +[clinic start generated code]*/ + +static PyObject * +_imp_release_lock_impl(PyObject *module) +/*[clinic end generated code: output=7faab6d0be178b0a input=934fb11516dd778b]*/ +{ + if (_PyImport_ReleaseLock() < 0) { + PyErr_SetString(PyExc_RuntimeError, + "not holding the import lock"); + return NULL; + } + Py_RETURN_NONE; +} + + +/*[clinic input] +_imp._fix_co_filename + + code: object(type="PyCodeObject *", subclass_of="&PyCode_Type") + Code object to change. + + path: unicode + File path to use. + / + +Changes code.co_filename to specify the passed-in file path. 
+[clinic start generated code]*/ + +static PyObject * +_imp__fix_co_filename_impl(PyObject *module, PyCodeObject *code, + PyObject *path) +/*[clinic end generated code: output=1d002f100235587d input=895ba50e78b82f05]*/ + +{ + update_compiled_module(code, path); - PyObject *from_list = PyList_New(0); - if (from_list == NULL) { - goto err; - } + Py_RETURN_NONE; +} - /* Get the builtins from current globals */ - globals = PyEval_GetGlobals(); - if (globals != NULL) { - Py_INCREF(globals); - builtins = PyObject_GetItem(globals, &_Py_ID(__builtins__)); - if (builtins == NULL) - goto err; - } - else { - /* No globals -- use standard builtins, and fake globals */ - builtins = PyImport_ImportModuleLevel("builtins", - NULL, NULL, NULL, 0); - if (builtins == NULL) { - goto err; - } - globals = Py_BuildValue("{OO}", &_Py_ID(__builtins__), builtins); - if (globals == NULL) - goto err; - } - /* Get the __import__ function from the builtins */ - if (PyDict_Check(builtins)) { - import = PyObject_GetItem(builtins, &_Py_ID(__import__)); - if (import == NULL) { - _PyErr_SetObject(tstate, PyExc_KeyError, &_Py_ID(__import__)); - } - } - else - import = PyObject_GetAttr(builtins, &_Py_ID(__import__)); - if (import == NULL) - goto err; +/*[clinic input] +_imp.create_builtin - /* Call the __import__ function with the proper argument list - Always use absolute import here. - Calling for side-effect of import. */ - r = PyObject_CallFunction(import, "OOOOi", module_name, globals, - globals, from_list, 0, NULL); - if (r == NULL) - goto err; - Py_DECREF(r); + spec: object + / - r = import_get_module(tstate, module_name); - if (r == NULL && !_PyErr_Occurred(tstate)) { - _PyErr_SetObject(tstate, PyExc_KeyError, module_name); +Create an extension module. +[clinic start generated code]*/ + +static PyObject * +_imp_create_builtin(PyObject *module, PyObject *spec) +/*[clinic end generated code: output=ace7ff22271e6f39 input=37f966f890384e47]*/ +{ + PyThreadState *tstate = _PyThreadState_GET(); + + PyObject *name = PyObject_GetAttrString(spec, "name"); + if (name == NULL) { + return NULL; } - err: - Py_XDECREF(globals); - Py_XDECREF(builtins); - Py_XDECREF(import); - Py_XDECREF(from_list); + if (!PyUnicode_Check(name)) { + PyErr_Format(PyExc_TypeError, + "name must be string, not %.200s", + Py_TYPE(name)->tp_name); + Py_DECREF(name); + return NULL; + } - return r; + PyObject *mod = create_builtin(tstate, name, spec); + Py_DECREF(name); + return mod; } + /*[clinic input] _imp.extension_suffixes @@ -2459,34 +3221,10 @@ _imp__override_frozen_modules_for_tests_impl(PyObject *module, int override) /*[clinic end generated code: output=36d5cb1594160811 input=8f1f95a3ef21aec3]*/ { PyInterpreterState *interp = _PyInterpreterState_GET(); - interp->override_frozen_modules = override; + OVERRIDE_FROZEN_MODULES(interp) = override; Py_RETURN_NONE; } -/* Common implementation for _imp.exec_dynamic and _imp.exec_builtin */ -static int -exec_builtin_or_dynamic(PyObject *mod) { - PyModuleDef *def; - void *state; - - if (!PyModule_Check(mod)) { - return 0; - } - - def = PyModule_GetDef(mod); - if (def == NULL) { - return 0; - } - - state = PyModule_GetState(mod); - if (state) { - /* Already initialized; skip reload */ - return 0; - } - - return PyModule_ExecDef(mod, def); -} - #ifdef HAVE_DYNAMIC_LOADING /*[clinic input] @@ -2674,158 +3412,6 @@ PyInit__imp(void) } -// Import the _imp extension by calling manually _imp.create_builtin() and -// _imp.exec_builtin() since importlib is not initialized yet. 
Initializing -// importlib requires the _imp module: this function fix the bootstrap issue. -PyObject* -_PyImport_BootstrapImp(PyThreadState *tstate) -{ - PyObject *name = PyUnicode_FromString("_imp"); - if (name == NULL) { - return NULL; - } - - // Mock a ModuleSpec object just good enough for PyModule_FromDefAndSpec(): - // an object with just a name attribute. - // - // _imp.__spec__ is overridden by importlib._bootstrap._instal() anyway. - PyObject *attrs = Py_BuildValue("{sO}", "name", name); - if (attrs == NULL) { - goto error; - } - PyObject *spec = _PyNamespace_New(attrs); - Py_DECREF(attrs); - if (spec == NULL) { - goto error; - } - - // Create the _imp module from its definition. - PyObject *mod = create_builtin(tstate, name, spec); - Py_CLEAR(name); - Py_DECREF(spec); - if (mod == NULL) { - goto error; - } - assert(mod != Py_None); // not found - - // Execute the _imp module: call imp_module_exec(). - if (exec_builtin_or_dynamic(mod) < 0) { - Py_DECREF(mod); - goto error; - } - return mod; - -error: - Py_XDECREF(name); - return NULL; -} - - -/* API for embedding applications that want to add their own entries - to the table of built-in modules. This should normally be called - *before* Py_Initialize(). When the table resize fails, -1 is - returned and the existing table is unchanged. - - After a similar function by Just van Rossum. */ - -int -PyImport_ExtendInittab(struct _inittab *newtab) -{ - struct _inittab *p; - size_t i, n; - int res = 0; - - if (_PyRuntime.imports.inittab != NULL) { - Py_FatalError("PyImport_ExtendInittab() may not be called after Py_Initialize()"); - } - - /* Count the number of entries in both tables */ - for (n = 0; newtab[n].name != NULL; n++) - ; - if (n == 0) - return 0; /* Nothing to do */ - for (i = 0; PyImport_Inittab[i].name != NULL; i++) - ; - - /* Force default raw memory allocator to get a known allocator to be able - to release the memory in _PyImport_Fini2() */ - PyMemAllocatorEx old_alloc; - _PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc); - - /* Allocate new memory for the combined table */ - p = NULL; - if (i + n <= SIZE_MAX / sizeof(struct _inittab) - 1) { - size_t size = sizeof(struct _inittab) * (i + n + 1); - p = PyMem_RawRealloc(inittab_copy, size); - } - if (p == NULL) { - res = -1; - goto done; - } - - /* Copy the tables into the new memory at the first call - to PyImport_ExtendInittab(). 
*/ - if (inittab_copy != PyImport_Inittab) { - memcpy(p, PyImport_Inittab, (i+1) * sizeof(struct _inittab)); - } - memcpy(p + i, newtab, (n + 1) * sizeof(struct _inittab)); - PyImport_Inittab = inittab_copy = p; - -done: - PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc); - return res; -} - -/* Shorthand to add a single entry given a name and a function */ - -int -PyImport_AppendInittab(const char *name, PyObject* (*initfunc)(void)) -{ - struct _inittab newtab[2]; - - if (_PyRuntime.imports.inittab != NULL) { - Py_FatalError("PyImport_AppendInittab() may not be called after Py_Initialize()"); - } - - memset(newtab, '\0', sizeof newtab); - - newtab[0].name = name; - newtab[0].initfunc = initfunc; - - return PyImport_ExtendInittab(newtab); -} - - -PyObject * -_PyImport_GetModuleAttr(PyObject *modname, PyObject *attrname) -{ - PyObject *mod = PyImport_Import(modname); - if (mod == NULL) { - return NULL; - } - PyObject *result = PyObject_GetAttr(mod, attrname); - Py_DECREF(mod); - return result; -} - -PyObject * -_PyImport_GetModuleAttrString(const char *modname, const char *attrname) -{ - PyObject *pmodname = PyUnicode_FromString(modname); - if (pmodname == NULL) { - return NULL; - } - PyObject *pattrname = PyUnicode_FromString(attrname); - if (pattrname == NULL) { - Py_DECREF(pmodname); - return NULL; - } - PyObject *result = _PyImport_GetModuleAttr(pmodname, pattrname); - Py_DECREF(pattrname); - Py_DECREF(pmodname); - return result; -} - #ifdef __cplusplus } #endif diff --git a/Python/importdl.c b/Python/importdl.c index 91fa06f49c28..6dafb4541486 100644 --- a/Python/importdl.c +++ b/Python/importdl.c @@ -99,7 +99,7 @@ _PyImport_LoadDynamicModuleWithSpec(PyObject *spec, FILE *fp) #endif PyObject *name_unicode = NULL, *name = NULL, *path = NULL, *m = NULL; const char *name_buf, *hook_prefix; - const char *oldcontext; + const char *oldcontext, *newcontext; dl_funcptr exportfunc; PyModuleDef *def; PyModInitFunction p0; @@ -113,6 +113,10 @@ _PyImport_LoadDynamicModuleWithSpec(PyObject *spec, FILE *fp) "spec.name must be a string"); goto error; } + newcontext = PyUnicode_AsUTF8(name_unicode); + if (newcontext == NULL) { + goto error; + } name = get_encoded_name(name_unicode, &hook_prefix); if (name == NULL) { @@ -160,16 +164,9 @@ _PyImport_LoadDynamicModuleWithSpec(PyObject *spec, FILE *fp) p0 = (PyModInitFunction)exportfunc; /* Package context is needed for single-phase init */ -#define _Py_PackageContext (_PyRuntime.imports.pkgcontext) - oldcontext = _Py_PackageContext; - _Py_PackageContext = PyUnicode_AsUTF8(name_unicode); - if (_Py_PackageContext == NULL) { - _Py_PackageContext = oldcontext; - goto error; - } + oldcontext = _PyImport_SwapPackageContext(newcontext); m = _PyImport_InitFunc_TrampolineCall(p0); - _Py_PackageContext = oldcontext; -#undef _Py_PackageContext + _PyImport_SwapPackageContext(oldcontext); if (m == NULL) { if (!PyErr_Occurred()) { diff --git a/Python/pylifecycle.c b/Python/pylifecycle.c index 045a2996e898..281035dafa95 100644 --- a/Python/pylifecycle.c +++ b/Python/pylifecycle.c @@ -156,79 +156,6 @@ Py_IsInitialized(void) } -/* Global initializations. Can be undone by Py_FinalizeEx(). Don't - call this twice without an intervening Py_FinalizeEx() call. When - initializations fail, a fatal error is issued and the function does - not return. On return, the first thread and interpreter state have - been created. - - Locking: you must hold the interpreter lock while calling this. 
- (If the lock has not yet been initialized, that's equivalent to - having the lock, but you cannot use multiple threads.) - -*/ -static int -init_importlib(PyThreadState *tstate, PyObject *sysmod) -{ - assert(!_PyErr_Occurred(tstate)); - - PyInterpreterState *interp = tstate->interp; - int verbose = _PyInterpreterState_GetConfig(interp)->verbose; - - // Import _importlib through its frozen version, _frozen_importlib. - if (verbose) { - PySys_FormatStderr("import _frozen_importlib # frozen\n"); - } - if (PyImport_ImportFrozenModule("_frozen_importlib") <= 0) { - return -1; - } - PyObject *importlib = PyImport_AddModule("_frozen_importlib"); // borrowed - if (importlib == NULL) { - return -1; - } - interp->importlib = Py_NewRef(importlib); - - // Import the _imp module - if (verbose) { - PySys_FormatStderr("import _imp # builtin\n"); - } - PyObject *imp_mod = _PyImport_BootstrapImp(tstate); - if (imp_mod == NULL) { - return -1; - } - if (_PyImport_SetModuleString("_imp", imp_mod) < 0) { - Py_DECREF(imp_mod); - return -1; - } - - // Install importlib as the implementation of import - PyObject *value = PyObject_CallMethod(importlib, "_install", - "OO", sysmod, imp_mod); - Py_DECREF(imp_mod); - if (value == NULL) { - return -1; - } - Py_DECREF(value); - - assert(!_PyErr_Occurred(tstate)); - return 0; -} - - -static PyStatus -init_importlib_external(PyThreadState *tstate) -{ - PyObject *value; - value = PyObject_CallMethod(tstate->interp->importlib, - "_install_external_importers", ""); - if (value == NULL) { - _PyErr_Print(tstate); - return _PyStatus_ERR("external importer setup failed"); - } - Py_DECREF(value); - return _PyImportZip_Init(tstate); -} - /* Helper functions to better handle the legacy C locale * * The legacy C locale assumes ASCII as the default text encoding, which @@ -814,7 +741,8 @@ pycore_init_builtins(PyThreadState *tstate) goto error; } - if (_PyImport_FixupBuiltin(bimod, "builtins", interp->modules) < 0) { + PyObject *modules = _PyImport_GetModules(interp); + if (_PyImport_FixupBuiltin(bimod, "builtins", modules) < 0) { goto error; } @@ -850,13 +778,9 @@ pycore_init_builtins(PyThreadState *tstate) } Py_DECREF(bimod); - // Get the __import__ function - PyObject *import_func = _PyDict_GetItemStringWithError(interp->builtins, - "__import__"); - if (import_func == NULL) { + if (_PyImport_InitDefaultImportFunc(interp) < 0) { goto error; } - interp->import_func = Py_NewRef(import_func); assert(!_PyErr_Occurred(tstate)); return _PyStatus_OK(); @@ -918,11 +842,10 @@ pycore_interp_init(PyThreadState *tstate) } const PyConfig *config = _PyInterpreterState_GetConfig(interp); - if (config->_install_importlib) { - /* This call sets up builtin and frozen import support */ - if (init_importlib(tstate, sysmod) < 0) { - return _PyStatus_ERR("failed to initialize importlib"); - } + + status = _PyImport_InitCore(tstate, sysmod, config->_install_importlib); + if (_PyStatus_EXCEPTION(status)) { + goto done; } done: @@ -1172,7 +1095,7 @@ init_interp_main(PyThreadState *tstate) return _PyStatus_ERR("failed to update the Python config"); } - status = init_importlib_external(tstate); + status = _PyImport_InitExternal(tstate); if (_PyStatus_EXCEPTION(status)) { return status; } @@ -1379,8 +1302,11 @@ finalize_modules_delete_special(PyThreadState *tstate, int verbose) static const char * const sys_deletes[] = { "path", "argv", "ps1", "ps2", "last_type", "last_value", "last_traceback", - "path_hooks", "path_importer_cache", "meta_path", "__interactivehook__", + // path_hooks and path_importer_cache 
are cleared + // by _PyImport_FiniExternal(). + // XXX Clear meta_path in _PyImport_FiniCore(). + "meta_path", NULL }; @@ -1401,10 +1327,7 @@ finalize_modules_delete_special(PyThreadState *tstate, int verbose) const char * const *p; for (p = sys_deletes; *p != NULL; p++) { - if (verbose) { - PySys_WriteStderr("# clear sys.%s\n", *p); - } - if (PyDict_SetItemString(interp->sysdict, *p, Py_None) < 0) { + if (_PySys_ClearAttrString(interp, *p, verbose) < 0) { PyErr_WriteUnraisable(NULL); } } @@ -1576,11 +1499,12 @@ finalize_clear_sys_builtins_dict(PyInterpreterState *interp, int verbose) /* Clear modules, as good as we can */ +// XXX Move most of this to import.c. static void finalize_modules(PyThreadState *tstate) { PyInterpreterState *interp = tstate->interp; - PyObject *modules = interp->modules; + PyObject *modules = _PyImport_GetModules(interp); if (modules == NULL) { // Already done return; @@ -1645,12 +1569,12 @@ finalize_modules(PyThreadState *tstate) // clear PyInterpreterState.modules_by_index and // clear PyModuleDef.m_base.m_copy (of extensions not using the multi-phase // initialization API) - _PyInterpreterState_ClearModules(interp); + _PyImport_ClearModulesByIndex(interp); // Clear and delete the modules directory. Actual modules will // still be there only if imported during the execution of some // destructor. - Py_SETREF(interp->modules, NULL); + _PyImport_ClearModules(interp); // Collect garbage once more _PyGC_CollectNoFail(tstate); @@ -1861,6 +1785,8 @@ Py_FinalizeEx(void) runtime->initialized = 0; runtime->core_initialized = 0; + // XXX Call something like _PyImport_Disable() here? + /* Destroy the state of all threads of the interpreter, except of the current thread. In practice, only daemon threads should still be alive, except if wait_for_thread_shutdown() has been cancelled by CTRL+C. @@ -1910,6 +1836,7 @@ Py_FinalizeEx(void) PyGC_Collect(); /* Destroy all modules */ + _PyImport_FiniExternal(tstate->interp); finalize_modules(tstate); /* Print debug stats if any */ @@ -1943,7 +1870,9 @@ Py_FinalizeEx(void) so it is possible to use tracemalloc in objects destructor. */ _PyTraceMalloc_Fini(); - /* Destroy the database used by _PyImport_{Fixup,Find}Extension */ + /* Finalize any remaining import state */ + // XXX Move these up to where finalize_modules() is currently. + _PyImport_FiniCore(tstate->interp); _PyImport_Fini(); /* unload faulthandler module */ @@ -2183,7 +2112,11 @@ Py_EndInterpreter(PyThreadState *tstate) Py_FatalError("not the last thread"); } + // XXX Call something like _PyImport_Disable() here? 
+ + _PyImport_FiniExternal(tstate->interp); finalize_modules(tstate); + _PyImport_FiniCore(tstate->interp); finalize_interp_clear(tstate); finalize_interp_delete(tstate->interp); @@ -2232,8 +2165,8 @@ add_main_module(PyInterpreterState *interp) if (PyErr_Occurred()) { return _PyStatus_ERR("Failed to test __main__.__loader__"); } - PyObject *loader = PyObject_GetAttrString(interp->importlib, - "BuiltinImporter"); + PyObject *loader = _PyImport_GetImportlibLoader(interp, + "BuiltinImporter"); if (loader == NULL) { return _PyStatus_ERR("Failed to retrieve BuiltinImporter"); } @@ -2739,7 +2672,7 @@ _Py_DumpExtensionModules(int fd, PyInterpreterState *interp) if (interp == NULL) { return; } - PyObject *modules = interp->modules; + PyObject *modules = _PyImport_GetModules(interp); if (modules == NULL || !PyDict_Check(modules)) { return; } diff --git a/Python/pystate.c b/Python/pystate.c index 1261092d1435..4770caaed0a3 100644 --- a/Python/pystate.c +++ b/Python/pystate.c @@ -772,11 +772,13 @@ interpreter_clear(PyInterpreterState *interp, PyThreadState *tstate) Py_CLEAR(interp->codec_search_path); Py_CLEAR(interp->codec_search_cache); Py_CLEAR(interp->codec_error_registry); - Py_CLEAR(interp->modules); - Py_CLEAR(interp->modules_by_index); + + assert(interp->imports.modules == NULL); + assert(interp->imports.modules_by_index == NULL); + assert(interp->imports.importlib == NULL); + assert(interp->imports.import_func == NULL); + Py_CLEAR(interp->builtins_copy); - Py_CLEAR(interp->importlib); - Py_CLEAR(interp->import_func); Py_CLEAR(interp->dict); #ifdef HAVE_FORK Py_CLEAR(interp->before_forkers); @@ -836,6 +838,7 @@ PyInterpreterState_Clear(PyInterpreterState *interp) // garbage. It can be different than the current Python thread state // of 'interp'. PyThreadState *current_tstate = current_fast_get(interp->runtime); + _PyImport_ClearCore(interp); interpreter_clear(interp, current_tstate); } @@ -843,6 +846,7 @@ PyInterpreterState_Clear(PyInterpreterState *interp) void _PyInterpreterState_Clear(PyThreadState *tstate) { + _PyImport_ClearCore(tstate->interp); interpreter_clear(tstate->interp, tstate); } @@ -945,36 +949,6 @@ _PyInterpreterState_DeleteExceptMain(_PyRuntimeState *runtime) #endif -// Used by finalize_modules() -void -_PyInterpreterState_ClearModules(PyInterpreterState *interp) -{ - if (!interp->modules_by_index) { - return; - } - - Py_ssize_t i; - for (i = 0; i < PyList_GET_SIZE(interp->modules_by_index); i++) { - PyObject *m = PyList_GET_ITEM(interp->modules_by_index, i); - if (PyModule_Check(m)) { - /* cleanup the saved copy of module dicts */ - PyModuleDef *md = PyModule_GetDef(m); - if (md) { - Py_CLEAR(md->m_base.m_copy); - } - } - } - - /* Setting modules_by_index to NULL could be dangerous, so we - clear the list instead. 
*/ - if (PyList_SetSlice(interp->modules_by_index, - 0, PyList_GET_SIZE(interp->modules_by_index), - NULL)) { - PyErr_WriteUnraisable(interp->modules_by_index); - } -} - - //---------- // accessors //---------- @@ -1058,11 +1032,12 @@ _PyInterpreterState_RequireIDRef(PyInterpreterState *interp, int required) PyObject * _PyInterpreterState_GetMainModule(PyInterpreterState *interp) { - if (interp->modules == NULL) { + PyObject *modules = _PyImport_GetModules(interp); + if (modules == NULL) { PyErr_SetString(PyExc_RuntimeError, "interpreter not initialized"); return NULL; } - return PyMapping_GetItemString(interp->modules, "__main__"); + return PyMapping_GetItemString(modules, "__main__"); } PyObject * @@ -1922,110 +1897,6 @@ _PyThread_CurrentExceptions(void) } -/****************/ -/* module state */ -/****************/ - -PyObject* -PyState_FindModule(PyModuleDef* module) -{ - Py_ssize_t index = module->m_base.m_index; - PyInterpreterState *state = _PyInterpreterState_GET(); - PyObject *res; - if (module->m_slots) { - return NULL; - } - if (index == 0) - return NULL; - if (state->modules_by_index == NULL) - return NULL; - if (index >= PyList_GET_SIZE(state->modules_by_index)) - return NULL; - res = PyList_GET_ITEM(state->modules_by_index, index); - return res==Py_None ? NULL : res; -} - -int -_PyState_AddModule(PyThreadState *tstate, PyObject* module, PyModuleDef* def) -{ - if (!def) { - assert(_PyErr_Occurred(tstate)); - return -1; - } - if (def->m_slots) { - _PyErr_SetString(tstate, - PyExc_SystemError, - "PyState_AddModule called on module with slots"); - return -1; - } - - PyInterpreterState *interp = tstate->interp; - if (!interp->modules_by_index) { - interp->modules_by_index = PyList_New(0); - if (!interp->modules_by_index) { - return -1; - } - } - - while (PyList_GET_SIZE(interp->modules_by_index) <= def->m_base.m_index) { - if (PyList_Append(interp->modules_by_index, Py_None) < 0) { - return -1; - } - } - - return PyList_SetItem(interp->modules_by_index, - def->m_base.m_index, Py_NewRef(module)); -} - -int -PyState_AddModule(PyObject* module, PyModuleDef* def) -{ - if (!def) { - Py_FatalError("module definition is NULL"); - return -1; - } - - PyThreadState *tstate = current_fast_get(&_PyRuntime); - PyInterpreterState *interp = tstate->interp; - Py_ssize_t index = def->m_base.m_index; - if (interp->modules_by_index && - index < PyList_GET_SIZE(interp->modules_by_index) && - module == PyList_GET_ITEM(interp->modules_by_index, index)) - { - _Py_FatalErrorFormat(__func__, "module %p already added", module); - return -1; - } - return _PyState_AddModule(tstate, module, def); -} - -int -PyState_RemoveModule(PyModuleDef* def) -{ - PyThreadState *tstate = current_fast_get(&_PyRuntime); - PyInterpreterState *interp = tstate->interp; - - if (def->m_slots) { - _PyErr_SetString(tstate, - PyExc_SystemError, - "PyState_RemoveModule called on module with slots"); - return -1; - } - - Py_ssize_t index = def->m_base.m_index; - if (index == 0) { - Py_FatalError("invalid module index"); - } - if (interp->modules_by_index == NULL) { - Py_FatalError("Interpreters module-list not accessible."); - } - if (index > PyList_GET_SIZE(interp->modules_by_index)) { - Py_FatalError("Module index out of bounds."); - } - - return PyList_SetItem(interp->modules_by_index, index, Py_NewRef(Py_None)); -} - - /***********************************/ /* Python "auto thread state" API. 
*/ /***********************************/ diff --git a/Python/pythonrun.c b/Python/pythonrun.c index 6a4d59376869..ce993ea8796c 100644 --- a/Python/pythonrun.c +++ b/Python/pythonrun.c @@ -350,14 +350,8 @@ static int set_main_loader(PyObject *d, PyObject *filename, const char *loader_name) { PyInterpreterState *interp = _PyInterpreterState_GET(); - PyObject *bootstrap = PyObject_GetAttrString(interp->importlib, - "_bootstrap_external"); - if (bootstrap == NULL) { - return -1; - } - - PyObject *loader_type = PyObject_GetAttrString(bootstrap, loader_name); - Py_DECREF(bootstrap); + PyObject *loader_type = _PyImport_GetImportlibExternalLoader(interp, + loader_name); if (loader_type == NULL) { return -1; } diff --git a/Python/sysmodule.c b/Python/sysmodule.c index 6e81ef92b67f..b69b80356092 100644 --- a/Python/sysmodule.c +++ b/Python/sysmodule.c @@ -142,6 +142,20 @@ PySys_SetObject(const char *name, PyObject *v) return sys_set_object_str(interp, name, v); } +int +_PySys_ClearAttrString(PyInterpreterState *interp, + const char *name, int verbose) +{ + if (verbose) { + PySys_WriteStderr("# clear sys.%s\n", name); + } + /* To play it safe, we set the attr to None instead of deleting it. */ + if (PyDict_SetItemString(interp->sysdict, name, Py_None) < 0) { + return -1; + } + return 0; +} + static int should_audit(PyInterpreterState *interp) @@ -1650,7 +1664,7 @@ sys_setdlopenflags_impl(PyObject *module, int new_val) /*[clinic end generated code: output=ec918b7fe0a37281 input=4c838211e857a77f]*/ { PyInterpreterState *interp = _PyInterpreterState_GET(); - interp->dlopenflags = new_val; + _PyImport_SetDLOpenFlags(interp, new_val); Py_RETURN_NONE; } @@ -1668,7 +1682,8 @@ sys_getdlopenflags_impl(PyObject *module) /*[clinic end generated code: output=e92cd1bc5005da6e input=dc4ea0899c53b4b6]*/ { PyInterpreterState *interp = _PyInterpreterState_GET(); - return PyLong_FromLong(interp->dlopenflags); + return PyLong_FromLong( + _PyImport_GetDLOpenFlags(interp)); } #endif /* HAVE_DLOPEN */ @@ -2279,22 +2294,10 @@ static PyMethodDef sys_methods[] = { static PyObject * list_builtin_module_names(void) { - PyObject *list = PyList_New(0); + PyObject *list = _PyImport_GetBuiltinModuleNames(); if (list == NULL) { return NULL; } - struct _inittab *inittab = _PyRuntime.imports.inittab; - for (Py_ssize_t i = 0; inittab[i].name != NULL; i++) { - PyObject *name = PyUnicode_FromString(inittab[i].name); - if (name == NULL) { - goto error; - } - if (PyList_Append(list, name) < 0) { - Py_DECREF(name); - goto error; - } - Py_DECREF(name); - } if (PyList_Sort(list) != 0) { goto error; } @@ -3411,11 +3414,10 @@ _PySys_Create(PyThreadState *tstate, PyObject **sysmod_p) PyInterpreterState *interp = tstate->interp; - PyObject *modules = PyDict_New(); + PyObject *modules = _PyImport_InitModules(interp); if (modules == NULL) { goto error; } - interp->modules = modules; PyObject *sysmod = _PyModule_CreateInitialized(&sysmodule, PYTHON_API_VERSION); if (sysmod == NULL) { @@ -3428,7 +3430,7 @@ _PySys_Create(PyThreadState *tstate, PyObject **sysmod_p) } interp->sysdict = Py_NewRef(sysdict); - if (PyDict_SetItemString(sysdict, "modules", interp->modules) < 0) { + if (PyDict_SetItemString(sysdict, "modules", modules) < 0) { goto error; } @@ -3442,7 +3444,7 @@ _PySys_Create(PyThreadState *tstate, PyObject **sysmod_p) return status; } - if (_PyImport_FixupBuiltin(sysmod, "sys", interp->modules) < 0) { + if (_PyImport_FixupBuiltin(sysmod, "sys", modules) < 0) { goto error; } From webhook-mailer at python.org Wed Feb 15 18:05:14 2023 From: 
webhook-mailer at python.org (ericsnowcurrently) Date: Wed, 15 Feb 2023 23:05:14 -0000 Subject: [Python-checkins] gh-101758: Add a Test For Single-Phase Init Modules in Multiple Interpreters (gh-101920) Message-ID: https://github.com/python/cpython/commit/b365d88465d9228ce4e9e0be20b88e9e4056ad88 commit: b365d88465d9228ce4e9e0be20b88e9e4056ad88 branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-15T16:05:07-07:00 summary: gh-101758: Add a Test For Single-Phase Init Modules in Multiple Interpreters (gh-101920) The test verifies the behavior of single-phase init modules when loaded in multiple interpreters. https://github.com/python/cpython/issues/101758 files: M Include/internal/pycore_import.h M Lib/test/test_imp.py M Modules/_testinternalcapi.c M Modules/_testsinglephase.c M Python/import.c diff --git a/Include/internal/pycore_import.h b/Include/internal/pycore_import.h index da766253ef6b..6ee7356b41c0 100644 --- a/Include/internal/pycore_import.h +++ b/Include/internal/pycore_import.h @@ -153,6 +153,9 @@ PyAPI_DATA(const struct _frozen *) _PyImport_FrozenStdlib; PyAPI_DATA(const struct _frozen *) _PyImport_FrozenTest; extern const struct _module_alias * _PyImport_FrozenAliases; +// for testing +PyAPI_FUNC(int) _PyImport_ClearExtension(PyObject *name, PyObject *filename); + #ifdef __cplusplus } #endif diff --git a/Lib/test/test_imp.py b/Lib/test/test_imp.py index c85ab92307de..e81eb6f0a86f 100644 --- a/Lib/test/test_imp.py +++ b/Lib/test/test_imp.py @@ -10,10 +10,13 @@ from test.support import os_helper from test.support import script_helper from test.support import warnings_helper +import textwrap import unittest import warnings imp = warnings_helper.import_deprecated('imp') import _imp +import _testinternalcapi +import _xxsubinterpreters as _interpreters OS_PATH_NAME = os.path.__name__ @@ -251,6 +254,71 @@ def test_issue16421_multiple_modules_in_one_dll(self): with self.assertRaises(ImportError): imp.load_dynamic('nonexistent', pathname) + @requires_load_dynamic + def test_singlephase_multiple_interpreters(self): + # Currently, for every single-phrase init module loaded + # in multiple interpreters, those interpreters share a + # PyModuleDef for that object, which can be a problem. + + # This single-phase module has global state, which is shared + # by the interpreters. + import _testsinglephase + name = _testsinglephase.__name__ + filename = _testsinglephase.__file__ + + del sys.modules[name] + _testsinglephase._clear_globals() + _testinternalcapi.clear_extension(name, filename) + init_count = _testsinglephase.initialized_count() + assert init_count == -1, (init_count,) + + def clean_up(): + _testsinglephase._clear_globals() + _testinternalcapi.clear_extension(name, filename) + self.addCleanup(clean_up) + + interp1 = _interpreters.create(isolated=False) + self.addCleanup(_interpreters.destroy, interp1) + interp2 = _interpreters.create(isolated=False) + self.addCleanup(_interpreters.destroy, interp2) + + script = textwrap.dedent(f''' + import _testsinglephase + + expected = %d + init_count = _testsinglephase.initialized_count() + if init_count != expected: + raise Exception(init_count) + + lookedup = _testsinglephase.look_up_self() + if lookedup is not _testsinglephase: + raise Exception((_testsinglephase, lookedup)) + + # Attrs set in the module init func are in m_copy. 
+ _initialized = _testsinglephase._initialized + initialized = _testsinglephase.initialized() + if _initialized != initialized: + raise Exception((_initialized, initialized)) + + # Attrs set after loading are not in m_copy. + if hasattr(_testsinglephase, 'spam'): + raise Exception(_testsinglephase.spam) + _testsinglephase.spam = expected + ''') + + # Use an interpreter that gets destroyed right away. + ret = support.run_in_subinterp(script % 1) + self.assertEqual(ret, 0) + + # The module's init func gets run again. + # The module's globals did not get destroyed. + _interpreters.run_string(interp1, script % 2) + + # The module's init func is not run again. + # The second interpreter copies the module's m_copy. + # However, globals are still shared. + _interpreters.run_string(interp2, script % 2) + @requires_load_dynamic def test_singlephase_variants(self): '''Exercise the most meaningful variants described in Python/import.c.''' @@ -260,6 +328,11 @@ def test_singlephase_variants(self): fileobj, pathname, _ = imp.find_module(basename) fileobj.close() + def clean_up(): + import _testsinglephase + _testsinglephase._clear_globals() + self.addCleanup(clean_up) + modules = {} def load(name): assert name not in modules diff --git a/Modules/_testinternalcapi.c b/Modules/_testinternalcapi.c index ba57719d9209..632fac2de0c4 100644 --- a/Modules/_testinternalcapi.c +++ b/Modules/_testinternalcapi.c @@ -671,6 +671,20 @@ get_interp_settings(PyObject *self, PyObject *args) } +static PyObject * +clear_extension(PyObject *self, PyObject *args) +{ + PyObject *name = NULL, *filename = NULL; + if (!PyArg_ParseTuple(args, "OO:clear_extension", &name, &filename)) { + return NULL; + } + if (_PyImport_ClearExtension(name, filename) < 0) { + return NULL; + } + Py_RETURN_NONE; +} + + static PyMethodDef module_functions[] = { {"get_configs", get_configs, METH_NOARGS}, {"get_recursion_depth", get_recursion_depth, METH_NOARGS}, @@ -692,6 +706,7 @@ static PyMethodDef module_functions[] = { _TESTINTERNALCAPI_COMPILER_CODEGEN_METHODDEF _TESTINTERNALCAPI_OPTIMIZE_CFG_METHODDEF {"get_interp_settings", get_interp_settings, METH_VARARGS, NULL}, + {"clear_extension", clear_extension, METH_VARARGS, NULL}, {NULL, NULL} /* sentinel */ }; diff --git a/Modules/_testsinglephase.c b/Modules/_testsinglephase.c index 9e8dd647ee76..565221c887e5 100644 --- a/Modules/_testsinglephase.c +++ b/Modules/_testsinglephase.c @@ -17,12 +17,27 @@ typedef struct { PyObject *str_const; } module_state; + /* Process-global state is only used by _testsinglephase since it's the only one that does not support re-init. 
*/ static struct { int initialized_count; module_state module; -} global_state = { .initialized_count = -1 }; +} global_state = { + +#define NOT_INITIALIZED -1 + .initialized_count = NOT_INITIALIZED, +}; + +static void clear_state(module_state *state); + +static void +clear_global_state(void) +{ + clear_state(&global_state.module); + global_state.initialized_count = NOT_INITIALIZED; +} + static inline module_state * get_module_state(PyObject *module) @@ -106,6 +121,7 @@ init_state(module_state *state) return -1; } + static int init_module(PyObject *module, module_state *state) { @@ -118,6 +134,16 @@ init_module(PyObject *module, module_state *state) if (PyModule_AddObjectRef(module, "str_const", state->str_const) != 0) { return -1; } + + double d = _PyTime_AsSecondsDouble(state->initialized); + PyObject *initialized = PyFloat_FromDouble(d); + if (initialized == NULL) { + return -1; + } + if (PyModule_AddObjectRef(module, "_initialized", initialized) != 0) { + return -1; + } + return 0; } @@ -198,10 +224,28 @@ basic_initialized_count(PyObject *self, PyObject *Py_UNUSED(ignored)) } #define INITIALIZED_COUNT_METHODDEF \ - {"initialized_count", basic_initialized_count, METH_VARARGS, \ + {"initialized_count", basic_initialized_count, METH_NOARGS, \ basic_initialized_count_doc} +PyDoc_STRVAR(basic__clear_globals_doc, +"_clear_globals()\n\ +\n\ +Free all global state and set it to uninitialized."); + +static PyObject * +basic__clear_globals(PyObject *self, PyObject *Py_UNUSED(ignored)) +{ + assert(PyModule_GetDef(self)->m_size == -1); + clear_global_state(); + Py_RETURN_NONE; +} + +#define _CLEAR_GLOBALS_METHODDEF \ + {"_clear_globals", basic__clear_globals, METH_NOARGS, \ + basic__clear_globals_doc} + + /*********************************************/ /* the _testsinglephase module (and aliases) */ /*********************************************/ @@ -223,6 +267,7 @@ static PyMethodDef TestMethods_Basic[] = { SUM_METHODDEF, INITIALIZED_METHODDEF, INITIALIZED_COUNT_METHODDEF, + _CLEAR_GLOBALS_METHODDEF, {NULL, NULL} /* sentinel */ }; diff --git a/Python/import.c b/Python/import.c index ae27aaf56848..87981668a305 100644 --- a/Python/import.c +++ b/Python/import.c @@ -632,6 +632,28 @@ exec_builtin_or_dynamic(PyObject *mod) { } +static int clear_singlephase_extension(PyInterpreterState *interp, + PyObject *name, PyObject *filename); + +// Currently, this is only used for testing. +// (See _testinternalcapi.clear_extension().) +int +_PyImport_ClearExtension(PyObject *name, PyObject *filename) +{ + PyInterpreterState *interp = _PyInterpreterState_GET(); + + /* Clearing a module's C globals is up to the module. */ + if (clear_singlephase_extension(interp, name, filename) < 0) { + return -1; + } + + // In the future we'll probably also make sure the extension's + // file handle (and DL handle) is closed (requires saving it). 
+ + return 0; +} + + /*******************/ #if defined(__EMSCRIPTEN__) && defined(PY_CALL_TRAMPOLINE) @@ -766,8 +788,30 @@ _extensions_cache_set(PyObject *filename, PyObject *name, PyModuleDef *def) return 0; } +static int +_extensions_cache_delete(PyObject *filename, PyObject *name) +{ + PyObject *extensions = EXTENSIONS; + if (extensions == NULL) { + return 0; + } + PyObject *key = PyTuple_Pack(2, filename, name); + if (key == NULL) { + return -1; + } + if (PyDict_DelItem(extensions, key) < 0) { + if (!PyErr_ExceptionMatches(PyExc_KeyError)) { + Py_DECREF(key); + return -1; + } + PyErr_Clear(); + } + Py_DECREF(key); + return 0; +} + static void -_extensions_cache_clear(void) +_extensions_cache_clear_all(void) { Py_CLEAR(EXTENSIONS); } @@ -890,6 +934,34 @@ import_find_extension(PyThreadState *tstate, PyObject *name, return mod; } +static int +clear_singlephase_extension(PyInterpreterState *interp, + PyObject *name, PyObject *filename) +{ + PyModuleDef *def = _extensions_cache_get(filename, name); + if (def == NULL) { + if (PyErr_Occurred()) { + return -1; + } + return 0; + } + + /* Clear data set when the module was initially loaded. */ + def->m_base.m_init = NULL; + Py_CLEAR(def->m_base.m_copy); + // We leave m_index alone since there's no reason to reset it. + + /* Clear the PyState_*Module() cache entry. */ + if (_modules_by_index_check(interp, def->m_base.m_index) == NULL) { + if (_modules_by_index_clear(interp, def) < 0) { + return -1; + } + } + + /* Clear the cached module def. */ + return _extensions_cache_delete(filename, name); +} + /*******************/ /* builtin modules */ @@ -2633,7 +2705,7 @@ void _PyImport_Fini(void) { /* Destroy the database used by _PyImport_{Fixup,Find}Extension */ - _extensions_cache_clear(); + _extensions_cache_clear_all(); if (import_lock != NULL) { PyThread_free_lock(import_lock); import_lock = NULL; From webhook-mailer at python.org Wed Feb 15 19:54:12 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Thu, 16 Feb 2023 00:54:12 -0000 Subject: [Python-checkins] gh-101758: Fix the wasm Buildbots (gh-101943) Message-ID: https://github.com/python/cpython/commit/3dea4ba6c1b9237893d23574f931f33c940b74e8 commit: 3dea4ba6c1b9237893d23574f931f33c940b74e8 branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-15T17:54:05-07:00 summary: gh-101758: Fix the wasm Buildbots (gh-101943) They were broken by gh-101920. 
https://github.com/python/cpython/issues/101758 files: M Lib/test/test_imp.py M Python/pystate.c diff --git a/Lib/test/test_imp.py b/Lib/test/test_imp.py index e81eb6f0a86f..5997ffad8e12 100644 --- a/Lib/test/test_imp.py +++ b/Lib/test/test_imp.py @@ -16,12 +16,21 @@ imp = warnings_helper.import_deprecated('imp') import _imp import _testinternalcapi -import _xxsubinterpreters as _interpreters +try: + import _xxsubinterpreters as _interpreters +except ModuleNotFoundError: + _interpreters = None OS_PATH_NAME = os.path.__name__ +def requires_subinterpreters(meth): + """Decorator to skip a test if subinterpreters are not supported.""" + return unittest.skipIf(_interpreters is None, + 'subinterpreters required')(meth) + + def requires_load_dynamic(meth): """Decorator to skip a test if not running under CPython or lacking imp.load_dynamic().""" @@ -254,6 +263,7 @@ def test_issue16421_multiple_modules_in_one_dll(self): with self.assertRaises(ImportError): imp.load_dynamic('nonexistent', pathname) + @requires_subinterpreters @requires_load_dynamic def test_singlephase_multiple_interpreters(self): # Currently, for every single-phrase init module loaded diff --git a/Python/pystate.c b/Python/pystate.c index 4770caaed0a3..32b17fd19e34 100644 --- a/Python/pystate.c +++ b/Python/pystate.c @@ -197,6 +197,7 @@ gilstate_tss_clear(_PyRuntimeState *runtime) } +#ifndef NDEBUG static inline int tstate_is_alive(PyThreadState *tstate); static inline int @@ -204,6 +205,7 @@ tstate_is_bound(PyThreadState *tstate) { return tstate->_status.bound && !tstate->_status.unbound; } +#endif // !NDEBUG static void bind_gilstate_tstate(PyThreadState *); static void unbind_gilstate_tstate(PyThreadState *); @@ -1119,6 +1121,7 @@ _PyInterpreterState_LookUpID(int64_t requested_id) /* the per-thread runtime state */ /********************************/ +#ifndef NDEBUG static inline int tstate_is_alive(PyThreadState *tstate) { @@ -1127,6 +1130,7 @@ tstate_is_alive(PyThreadState *tstate) !tstate->_status.cleared && !tstate->_status.finalizing); } +#endif //---------- From webhook-mailer at python.org Wed Feb 15 20:16:18 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Thu, 16 Feb 2023 01:16:18 -0000 Subject: [Python-checkins] gh-98627: Add an Optional Check for Extension Module Subinterpreter Compatibility (gh-99040) Message-ID: https://github.com/python/cpython/commit/89ac665891dec1988bedec2ce9b2c4d016502a49 commit: 89ac665891dec1988bedec2ce9b2c4d016502a49 branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-15T18:16:00-07:00 summary: gh-98627: Add an Optional Check for Extension Module Subinterpreter Compatibility (gh-99040) Enforcing (optionally) the restriction set by PEP 489 makes sense. Furthermore, this sets the stage for a potential restriction related to a per-interpreter GIL. 
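For illustration, a minimal sketch of how the new setting can be exercised from a test, assuming the helpers that appear in this change (support.run_in_subinterp_with_config() and the _testsinglephase module used elsewhere in the test suite); the module name is only a stand-in for any single-phase init extension:

    import textwrap
    from test import support

    # The script runs in a subinterpreter created in this process.
    # With check_multi_interp_extensions=True, importing a single-phase
    # (non-PEP 489) extension there is expected to raise ImportError.
    script = textwrap.dedent('''
        try:
            import _testsinglephase
        except ImportError as exc:
            print('ImportError:', exc)
        else:
            print('okay')
        ''')
    ret = support.run_in_subinterp_with_config(
        script,
        allow_fork=False,
        allow_exec=False,
        allow_threads=True,
        allow_daemon_threads=False,
        check_multi_interp_extensions=True,
    )
    assert ret == 0
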
This change includes the following: * add tests for extension module subinterpreter compatibility * add _PyInterpreterConfig.check_multi_interp_extensions * add Py_RTFLAGS_MULTI_INTERP_EXTENSIONS * add _PyImport_CheckSubinterpIncompatibleExtensionAllowed() * fail iff the module does not implement multi-phase init and the current interpreter is configured to check https://github.com/python/cpython/issues/98627 files: A Lib/test/test_capi/check_config.py A Misc/NEWS.d/next/Core and Builtins/2022-11-02-20-23-47.gh-issue-98627.VJkdRM.rst M Include/cpython/initconfig.h M Include/cpython/pystate.h M Include/internal/pycore_import.h M Lib/test/support/import_helper.py M Lib/test/test_capi/test_misc.py M Lib/test/test_embed.py M Lib/test/test_import/__init__.py M Lib/test/test_threading.py M Modules/_testcapimodule.c M Python/clinic/import.c.h M Python/import.c M Python/importdl.c M Python/pylifecycle.c diff --git a/Include/cpython/initconfig.h b/Include/cpython/initconfig.h index 6ce42b4c0950..a070fa9ff3a0 100644 --- a/Include/cpython/initconfig.h +++ b/Include/cpython/initconfig.h @@ -248,6 +248,7 @@ typedef struct { int allow_exec; int allow_threads; int allow_daemon_threads; + int check_multi_interp_extensions; } _PyInterpreterConfig; #define _PyInterpreterConfig_INIT \ @@ -256,6 +257,7 @@ typedef struct { .allow_exec = 0, \ .allow_threads = 1, \ .allow_daemon_threads = 0, \ + .check_multi_interp_extensions = 1, \ } #define _PyInterpreterConfig_LEGACY_INIT \ @@ -264,6 +266,7 @@ typedef struct { .allow_exec = 1, \ .allow_threads = 1, \ .allow_daemon_threads = 1, \ + .check_multi_interp_extensions = 0, \ } /* --- Helper functions --------------------------------------- */ diff --git a/Include/cpython/pystate.h b/Include/cpython/pystate.h index be1fcb61fa22..3efb241e8237 100644 --- a/Include/cpython/pystate.h +++ b/Include/cpython/pystate.h @@ -11,6 +11,9 @@ is available in a given context. For example, forking the process might not be allowed in the current interpreter (i.e. os.fork() would fail). */ +/* Set if import should check a module for subinterpreter support. */ +#define Py_RTFLAGS_MULTI_INTERP_EXTENSIONS (1UL << 8) + /* Set if threads are allowed. */ #define Py_RTFLAGS_THREADS (1UL << 10) diff --git a/Include/internal/pycore_import.h b/Include/internal/pycore_import.h index 6ee7356b41c0..b7ffe01c0c0e 100644 --- a/Include/internal/pycore_import.h +++ b/Include/internal/pycore_import.h @@ -64,6 +64,7 @@ struct _import_state { /* override for config->use_frozen_modules (for tests) (-1: "off", 1: "on", 0: no override) */ int override_frozen_modules; + int override_multi_interp_extensions_check; #ifdef HAVE_DLOPEN int dlopenflags; #endif @@ -153,6 +154,10 @@ PyAPI_DATA(const struct _frozen *) _PyImport_FrozenStdlib; PyAPI_DATA(const struct _frozen *) _PyImport_FrozenTest; extern const struct _module_alias * _PyImport_FrozenAliases; +PyAPI_FUNC(int) _PyImport_CheckSubinterpIncompatibleExtensionAllowed( + const char *name); + + // for testing PyAPI_FUNC(int) _PyImport_ClearExtension(PyObject *name, PyObject *filename); diff --git a/Lib/test/support/import_helper.py b/Lib/test/support/import_helper.py index 63a8a7952db7..772c0987c2eb 100644 --- a/Lib/test/support/import_helper.py +++ b/Lib/test/support/import_helper.py @@ -105,6 +105,24 @@ def frozen_modules(enabled=True): _imp._override_frozen_modules_for_tests(0) + at contextlib.contextmanager +def multi_interp_extensions_check(enabled=True): + """Force legacy modules to be allowed in subinterpreters (or not). 
+ + ("legacy" == single-phase init) + + This only applies to modules that haven't been imported yet. + It overrides the PyInterpreterConfig.check_multi_interp_extensions + setting (see support.run_in_subinterp_with_config() and + _xxsubinterpreters.create()). + """ + old = _imp._override_multi_interp_extensions_check(1 if enabled else -1) + try: + yield + finally: + _imp._override_multi_interp_extensions_check(old) + + def import_fresh_module(name, fresh=(), blocked=(), *, deprecated=False, usefrozen=False, diff --git a/Lib/test/test_capi/check_config.py b/Lib/test/test_capi/check_config.py new file mode 100644 index 000000000000..aaedd82f39af --- /dev/null +++ b/Lib/test/test_capi/check_config.py @@ -0,0 +1,77 @@ +# This script is used by test_misc. + +import _imp +import _testinternalcapi +import json +import os +import sys + + +def import_singlephase(): + assert '_testsinglephase' not in sys.modules + try: + import _testsinglephase + except ImportError: + sys.modules.pop('_testsinglephase') + return False + else: + del sys.modules['_testsinglephase'] + return True + + +def check_singlephase(override): + # Check using the default setting. + settings_initial = _testinternalcapi.get_interp_settings() + allowed_initial = import_singlephase() + assert(_testinternalcapi.get_interp_settings() == settings_initial) + + # Apply the override and check. + override_initial = _imp._override_multi_interp_extensions_check(override) + settings_after = _testinternalcapi.get_interp_settings() + allowed_after = import_singlephase() + + # Apply the override again and check. + noop = {} + override_after = _imp._override_multi_interp_extensions_check(override) + settings_noop = _testinternalcapi.get_interp_settings() + if settings_noop != settings_after: + noop['settings_noop'] = settings_noop + allowed_noop = import_singlephase() + if allowed_noop != allowed_after: + noop['allowed_noop'] = allowed_noop + + # Restore the original setting and check. + override_noop = _imp._override_multi_interp_extensions_check(override_initial) + if override_noop != override_after: + noop['override_noop'] = override_noop + settings_restored = _testinternalcapi.get_interp_settings() + allowed_restored = import_singlephase() + + # Restore the original setting again. + override_restored = _imp._override_multi_interp_extensions_check(override_initial) + assert(_testinternalcapi.get_interp_settings() == settings_restored) + + return dict({ + 'requested': override, + 'override__initial': override_initial, + 'override_after': override_after, + 'override_restored': override_restored, + 'settings__initial': settings_initial, + 'settings_after': settings_after, + 'settings_restored': settings_restored, + 'allowed__initial': allowed_initial, + 'allowed_after': allowed_after, + 'allowed_restored': allowed_restored, + }, **noop) + + +def run_singlephase_check(override, outfd): + with os.fdopen(outfd, 'w') as outfile: + sys.stdout = outfile + sys.stderr = outfile + try: + results = check_singlephase(override) + json.dump(results, outfile) + finally: + sys.stdout = sys.__stdout__ + sys.stderr = sys.__stderr__ diff --git a/Lib/test/test_capi/test_misc.py b/Lib/test/test_capi/test_misc.py index 7612cddb1f65..f26b4723d1e6 100644 --- a/Lib/test/test_capi/test_misc.py +++ b/Lib/test/test_capi/test_misc.py @@ -31,6 +31,10 @@ import _testmultiphase except ImportError: _testmultiphase = None +try: + import _testsinglephase +except ImportError: + _testsinglephase = None # Skip this test if the _testcapi module isn't available. 
_testcapi = import_helper.import_module('_testcapi') @@ -1297,17 +1301,20 @@ def test_configured_settings(self): """ import json + EXTENSIONS = 1<<8 THREADS = 1<<10 DAEMON_THREADS = 1<<11 FORK = 1<<15 EXEC = 1<<16 - features = ['fork', 'exec', 'threads', 'daemon_threads'] + features = ['fork', 'exec', 'threads', 'daemon_threads', 'extensions'] kwlist = [f'allow_{n}' for n in features] + kwlist[-1] = 'check_multi_interp_extensions' for config, expected in { - (True, True, True, True): FORK | EXEC | THREADS | DAEMON_THREADS, - (False, False, False, False): 0, - (False, False, True, False): THREADS, + (True, True, True, True, True): + FORK | EXEC | THREADS | DAEMON_THREADS | EXTENSIONS, + (False, False, False, False, False): 0, + (False, False, True, False, True): THREADS | EXTENSIONS, }.items(): kwargs = dict(zip(kwlist, config)) expected = { @@ -1322,12 +1329,93 @@ def test_configured_settings(self): json.dump(settings, stdin) ''') with os.fdopen(r) as stdout: - support.run_in_subinterp_with_config(script, **kwargs) + ret = support.run_in_subinterp_with_config(script, **kwargs) + self.assertEqual(ret, 0) out = stdout.read() settings = json.loads(out) self.assertEqual(settings, expected) + @unittest.skipIf(_testsinglephase is None, "test requires _testsinglephase module") + @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") + def test_overridden_setting_extensions_subinterp_check(self): + """ + PyInterpreterConfig.check_multi_interp_extensions can be overridden + with PyInterpreterState.override_multi_interp_extensions_check. + This verifies that the override works but does not modify + the underlying setting. + """ + import json + + EXTENSIONS = 1<<8 + THREADS = 1<<10 + DAEMON_THREADS = 1<<11 + FORK = 1<<15 + EXEC = 1<<16 + BASE_FLAGS = FORK | EXEC | THREADS | DAEMON_THREADS + base_kwargs = { + 'allow_fork': True, + 'allow_exec': True, + 'allow_threads': True, + 'allow_daemon_threads': True, + } + + def check(enabled, override): + kwargs = dict( + base_kwargs, + check_multi_interp_extensions=enabled, + ) + flags = BASE_FLAGS | EXTENSIONS if enabled else BASE_FLAGS + settings = { + 'feature_flags': flags, + } + + expected = { + 'requested': override, + 'override__initial': 0, + 'override_after': override, + 'override_restored': 0, + # The override should not affect the config or settings. + 'settings__initial': settings, + 'settings_after': settings, + 'settings_restored': settings, + # These are the most likely values to be wrong. 
+ 'allowed__initial': not enabled, + 'allowed_after': not ((override > 0) if override else enabled), + 'allowed_restored': not enabled, + } + + r, w = os.pipe() + script = textwrap.dedent(f''' + from test.test_capi.check_config import run_singlephase_check + run_singlephase_check({override}, {w}) + ''') + with os.fdopen(r) as stdout: + ret = support.run_in_subinterp_with_config(script, **kwargs) + self.assertEqual(ret, 0) + out = stdout.read() + results = json.loads(out) + + self.assertEqual(results, expected) + + self.maxDiff = None + + # setting: check disabled + with self.subTest('config: check disabled; override: disabled'): + check(False, -1) + with self.subTest('config: check disabled; override: use config'): + check(False, 0) + with self.subTest('config: check disabled; override: enabled'): + check(False, 1) + + # setting: check enabled + with self.subTest('config: check enabled; override: disabled'): + check(True, -1) + with self.subTest('config: check enabled; override: use config'): + check(True, 0) + with self.subTest('config: check enabled; override: enabled'): + check(True, 1) + def test_mutate_exception(self): """ Exceptions saved in global module state get shared between diff --git a/Lib/test/test_embed.py b/Lib/test/test_embed.py index 4d422da5b99f..e56d0db8627e 100644 --- a/Lib/test/test_embed.py +++ b/Lib/test/test_embed.py @@ -1656,13 +1656,15 @@ def test_init_use_frozen_modules(self): api=API_PYTHON, env=env) def test_init_main_interpreter_settings(self): + EXTENSIONS = 1<<8 THREADS = 1<<10 DAEMON_THREADS = 1<<11 FORK = 1<<15 EXEC = 1<<16 expected = { # All optional features should be enabled. - 'feature_flags': FORK | EXEC | THREADS | DAEMON_THREADS, + 'feature_flags': + FORK | EXEC | THREADS | DAEMON_THREADS, } out, err = self.run_embedded_interpreter( 'test_init_main_interpreter_settings', diff --git a/Lib/test/test_import/__init__.py b/Lib/test/test_import/__init__.py index 1e4429ed7efe..96815b3f758a 100644 --- a/Lib/test/test_import/__init__.py +++ b/Lib/test/test_import/__init__.py @@ -21,7 +21,7 @@ from test.support import os_helper from test.support import ( STDLIB_DIR, swap_attr, swap_item, cpython_only, is_emscripten, - is_wasi) + is_wasi, run_in_subinterp_with_config) from test.support.import_helper import ( forget, make_legacy_pyc, unlink, unload, DirsOnSysPath, CleanImport) from test.support.os_helper import ( @@ -30,6 +30,14 @@ from test.support import threading_helper from test.test_importlib.util import uncache from types import ModuleType +try: + import _testsinglephase +except ImportError: + _testsinglephase = None +try: + import _testmultiphase +except ImportError: + _testmultiphase = None skip_if_dont_write_bytecode = unittest.skipIf( @@ -1392,6 +1400,216 @@ def test_unwritable_module(self): unwritable.x = 42 +class SubinterpImportTests(unittest.TestCase): + + RUN_KWARGS = dict( + allow_fork=False, + allow_exec=False, + allow_threads=True, + allow_daemon_threads=False, + ) + + @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") + def pipe(self): + r, w = os.pipe() + self.addCleanup(os.close, r) + self.addCleanup(os.close, w) + if hasattr(os, 'set_blocking'): + os.set_blocking(r, False) + return (r, w) + + def import_script(self, name, fd, check_override=None): + override_text = '' + if check_override is not None: + override_text = f''' + import _imp + _imp._override_multi_interp_extensions_check({check_override}) + ''' + return textwrap.dedent(f''' + import os, sys + {override_text} + try: + import {name} + except ImportError as exc: + 
text = 'ImportError: ' + str(exc) + else: + text = 'okay' + os.write({fd}, text.encode('utf-8')) + ''') + + def run_shared(self, name, *, + check_singlephase_setting=False, + check_singlephase_override=None, + ): + """ + Try importing the named module in a subinterpreter. + + The subinterpreter will be in the current process. + The module will have already been imported in the main interpreter. + Thus, for extension/builtin modules, the module definition will + have been loaded already and cached globally. + + "check_singlephase_setting" determines whether or not + the interpreter will be configured to check for modules + that are not compatible with use in multiple interpreters. + + This should always return "okay" for all modules if the + setting is False (with no override). + """ + __import__(name) + + kwargs = dict( + **self.RUN_KWARGS, + check_multi_interp_extensions=check_singlephase_setting, + ) + + r, w = self.pipe() + script = self.import_script(name, w, check_singlephase_override) + + ret = run_in_subinterp_with_config(script, **kwargs) + self.assertEqual(ret, 0) + return os.read(r, 100) + + def check_compatible_shared(self, name, *, strict=False): + # Verify that the named module may be imported in a subinterpreter. + # (See run_shared() for more info.) + out = self.run_shared(name, check_singlephase_setting=strict) + self.assertEqual(out, b'okay') + + def check_incompatible_shared(self, name): + # Differences from check_compatible_shared(): + # * verify that import fails + # * "strict" is always True + out = self.run_shared(name, check_singlephase_setting=True) + self.assertEqual( + out.decode('utf-8'), + f'ImportError: module {name} does not support loading in subinterpreters', + ) + + def check_compatible_isolated(self, name, *, strict=False): + # Differences from check_compatible_shared(): + # * subinterpreter in a new process + # * module has never been imported before in that process + # * this tests importing the module for the first time + _, out, err = script_helper.assert_python_ok('-c', textwrap.dedent(f''' + import _testcapi, sys + assert ( + {name!r} in sys.builtin_module_names or + {name!r} not in sys.modules + ), repr({name!r}) + ret = _testcapi.run_in_subinterp_with_config( + {self.import_script(name, "sys.stdout.fileno()")!r}, + **{self.RUN_KWARGS}, + check_multi_interp_extensions={strict}, + ) + assert ret == 0, ret + ''')) + self.assertEqual(err, b'') + self.assertEqual(out, b'okay') + + def check_incompatible_isolated(self, name): + # Differences from check_compatible_isolated(): + # * verify that import fails + # * "strict" is always True + _, out, err = script_helper.assert_python_ok('-c', textwrap.dedent(f''' + import _testcapi, sys + assert {name!r} not in sys.modules, {name!r} + ret = _testcapi.run_in_subinterp_with_config( + {self.import_script(name, "sys.stdout.fileno()")!r}, + **{self.RUN_KWARGS}, + check_multi_interp_extensions=True, + ) + assert ret == 0, ret + ''')) + self.assertEqual(err, b'') + self.assertEqual( + out.decode('utf-8'), + f'ImportError: module {name} does not support loading in subinterpreters', + ) + + def test_builtin_compat(self): + module = 'sys' + with self.subTest(f'{module}: not strict'): + self.check_compatible_shared(module, strict=False) + with self.subTest(f'{module}: strict, shared'): + self.check_compatible_shared(module, strict=True) + + @cpython_only + def test_frozen_compat(self): + module = '_frozen_importlib' + if __import__(module).__spec__.origin != 'frozen': + raise unittest.SkipTest(f'{module} is unexpectedly 
not frozen') + with self.subTest(f'{module}: not strict'): + self.check_compatible_shared(module, strict=False) + with self.subTest(f'{module}: strict, shared'): + self.check_compatible_shared(module, strict=True) + + @unittest.skipIf(_testsinglephase is None, "test requires _testsinglephase module") + def test_single_init_extension_compat(self): + module = '_testsinglephase' + with self.subTest(f'{module}: not strict'): + self.check_compatible_shared(module, strict=False) + with self.subTest(f'{module}: strict, shared'): + self.check_incompatible_shared(module) + with self.subTest(f'{module}: strict, isolated'): + self.check_incompatible_isolated(module) + + @unittest.skipIf(_testmultiphase is None, "test requires _testmultiphase module") + def test_multi_init_extension_compat(self): + module = '_testmultiphase' + with self.subTest(f'{module}: not strict'): + self.check_compatible_shared(module, strict=False) + with self.subTest(f'{module}: strict, shared'): + self.check_compatible_shared(module, strict=True) + with self.subTest(f'{module}: strict, isolated'): + self.check_compatible_isolated(module, strict=True) + + def test_python_compat(self): + module = 'threading' + if __import__(module).__spec__.origin == 'frozen': + raise unittest.SkipTest(f'{module} is unexpectedly frozen') + with self.subTest(f'{module}: not strict'): + self.check_compatible_shared(module, strict=False) + with self.subTest(f'{module}: strict, shared'): + self.check_compatible_shared(module, strict=True) + with self.subTest(f'{module}: strict, isolated'): + self.check_compatible_isolated(module, strict=True) + + @unittest.skipIf(_testsinglephase is None, "test requires _testsinglephase module") + def test_singlephase_check_with_setting_and_override(self): + module = '_testsinglephase' + + def check_compatible(setting, override): + out = self.run_shared( + module, + check_singlephase_setting=setting, + check_singlephase_override=override, + ) + self.assertEqual(out, b'okay') + + def check_incompatible(setting, override): + out = self.run_shared( + module, + check_singlephase_setting=setting, + check_singlephase_override=override, + ) + self.assertNotEqual(out, b'okay') + + with self.subTest('config: check enabled; override: enabled'): + check_incompatible(True, 1) + with self.subTest('config: check enabled; override: use config'): + check_incompatible(True, 0) + with self.subTest('config: check enabled; override: disabled'): + check_compatible(True, -1) + + with self.subTest('config: check disabled; override: enabled'): + check_incompatible(False, 1) + with self.subTest('config: check disabled; override: use config'): + check_compatible(False, 0) + with self.subTest('config: check disabled; override: disabled'): + check_compatible(False, -1) + + if __name__ == '__main__': # Test needs to be a package, so we can do relative imports. 
unittest.main() diff --git a/Lib/test/test_threading.py b/Lib/test/test_threading.py index 31bf46311a80..7fea2d38673e 100644 --- a/Lib/test/test_threading.py +++ b/Lib/test/test_threading.py @@ -1347,6 +1347,7 @@ def func(): allow_exec=True, allow_threads={allowed}, allow_daemon_threads={daemon_allowed}, + check_multi_interp_extensions=False, ) """) with test.support.SuppressCrashReport(): diff --git a/Misc/NEWS.d/next/Core and Builtins/2022-11-02-20-23-47.gh-issue-98627.VJkdRM.rst b/Misc/NEWS.d/next/Core and Builtins/2022-11-02-20-23-47.gh-issue-98627.VJkdRM.rst new file mode 100644 index 000000000000..3d2d6f6eb0c4 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2022-11-02-20-23-47.gh-issue-98627.VJkdRM.rst @@ -0,0 +1,5 @@ +When an interpreter is configured to check (and only then), importing an +extension module will now fail when the extension does not support multiple +interpreters (i.e. doesn't implement PEP 489 multi-phase init). This does +not apply to the main interpreter, nor to subinterpreters created with +``Py_NewInterpreter()``. diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index 5610a7689136..0d8d1d73fb23 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -1618,6 +1618,7 @@ run_in_subinterp_with_config(PyObject *self, PyObject *args, PyObject *kwargs) int allow_exec = -1; int allow_threads = -1; int allow_daemon_threads = -1; + int check_multi_interp_extensions = -1; int r; PyThreadState *substate, *mainstate; /* only initialise 'cflags.cf_flags' to test backwards compatibility */ @@ -1628,11 +1629,13 @@ run_in_subinterp_with_config(PyObject *self, PyObject *args, PyObject *kwargs) "allow_exec", "allow_threads", "allow_daemon_threads", + "check_multi_interp_extensions", NULL}; if (!PyArg_ParseTupleAndKeywords(args, kwargs, - "s$pppp:run_in_subinterp_with_config", kwlist, + "s$ppppp:run_in_subinterp_with_config", kwlist, &code, &allow_fork, &allow_exec, - &allow_threads, &allow_daemon_threads)) { + &allow_threads, &allow_daemon_threads, + &check_multi_interp_extensions)) { return NULL; } if (allow_fork < 0) { @@ -1651,6 +1654,10 @@ run_in_subinterp_with_config(PyObject *self, PyObject *args, PyObject *kwargs) PyErr_SetString(PyExc_ValueError, "missing allow_daemon_threads"); return NULL; } + if (check_multi_interp_extensions < 0) { + PyErr_SetString(PyExc_ValueError, "missing check_multi_interp_extensions"); + return NULL; + } mainstate = PyThreadState_Get(); @@ -1661,6 +1668,7 @@ run_in_subinterp_with_config(PyObject *self, PyObject *args, PyObject *kwargs) .allow_exec = allow_exec, .allow_threads = allow_threads, .allow_daemon_threads = allow_daemon_threads, + .check_multi_interp_extensions = check_multi_interp_extensions, }; substate = _Py_NewInterpreterFromConfig(&config); if (substate == NULL) { diff --git a/Python/clinic/import.c.h b/Python/clinic/import.c.h index 819fb1c75c15..cb74be6a4221 100644 --- a/Python/clinic/import.c.h +++ b/Python/clinic/import.c.h @@ -442,6 +442,37 @@ _imp__override_frozen_modules_for_tests(PyObject *module, PyObject *arg) return return_value; } +PyDoc_STRVAR(_imp__override_multi_interp_extensions_check__doc__, +"_override_multi_interp_extensions_check($module, override, /)\n" +"--\n" +"\n" +"(internal-only) Override PyInterpreterConfig.check_multi_interp_extensions.\n" +"\n" +"(-1: \"never\", 1: \"always\", 0: no override)"); + +#define _IMP__OVERRIDE_MULTI_INTERP_EXTENSIONS_CHECK_METHODDEF \ + {"_override_multi_interp_extensions_check", 
(PyCFunction)_imp__override_multi_interp_extensions_check, METH_O, _imp__override_multi_interp_extensions_check__doc__}, + +static PyObject * +_imp__override_multi_interp_extensions_check_impl(PyObject *module, + int override); + +static PyObject * +_imp__override_multi_interp_extensions_check(PyObject *module, PyObject *arg) +{ + PyObject *return_value = NULL; + int override; + + override = _PyLong_AsInt(arg); + if (override == -1 && PyErr_Occurred()) { + goto exit; + } + return_value = _imp__override_multi_interp_extensions_check_impl(module, override); + +exit: + return return_value; +} + #if defined(HAVE_DYNAMIC_LOADING) PyDoc_STRVAR(_imp_create_dynamic__doc__, @@ -617,4 +648,4 @@ _imp_source_hash(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyOb #ifndef _IMP_EXEC_DYNAMIC_METHODDEF #define _IMP_EXEC_DYNAMIC_METHODDEF #endif /* !defined(_IMP_EXEC_DYNAMIC_METHODDEF) */ -/*[clinic end generated code: output=806352838c3f7008 input=a9049054013a1b77]*/ +/*[clinic end generated code: output=b18d46e0036eff49 input=a9049054013a1b77]*/ diff --git a/Python/import.c b/Python/import.c index 87981668a305..ec126f28b858 100644 --- a/Python/import.c +++ b/Python/import.c @@ -74,6 +74,8 @@ static struct _inittab *inittab_copy = NULL; (interp)->imports.modules_by_index #define IMPORTLIB(interp) \ (interp)->imports.importlib +#define OVERRIDE_MULTI_INTERP_EXTENSIONS_CHECK(interp) \ + (interp)->imports.override_multi_interp_extensions_check #define OVERRIDE_FROZEN_MODULES(interp) \ (interp)->imports.override_frozen_modules #ifdef HAVE_DLOPEN @@ -816,6 +818,38 @@ _extensions_cache_clear_all(void) Py_CLEAR(EXTENSIONS); } + +static bool +check_multi_interp_extensions(PyInterpreterState *interp) +{ + int override = OVERRIDE_MULTI_INTERP_EXTENSIONS_CHECK(interp); + if (override < 0) { + return false; + } + else if (override > 0) { + return true; + } + else if (_PyInterpreterState_HasFeature( + interp, Py_RTFLAGS_MULTI_INTERP_EXTENSIONS)) { + return true; + } + return false; +} + +int +_PyImport_CheckSubinterpIncompatibleExtensionAllowed(const char *name) +{ + PyInterpreterState *interp = _PyInterpreterState_Get(); + if (check_multi_interp_extensions(interp)) { + assert(!_Py_IsMainInterpreter(interp)); + PyErr_Format(PyExc_ImportError, + "module %s does not support loading in subinterpreters", + name); + return -1; + } + return 0; +} + static int fix_up_extension(PyObject *mod, PyObject *name, PyObject *filename) { @@ -3297,6 +3331,34 @@ _imp__override_frozen_modules_for_tests_impl(PyObject *module, int override) Py_RETURN_NONE; } +/*[clinic input] +_imp._override_multi_interp_extensions_check + + override: int + / + +(internal-only) Override PyInterpreterConfig.check_multi_interp_extensions. 
+ +(-1: "never", 1: "always", 0: no override) +[clinic start generated code]*/ + +static PyObject * +_imp__override_multi_interp_extensions_check_impl(PyObject *module, + int override) +/*[clinic end generated code: output=3ff043af52bbf280 input=e086a2ea181f92ae]*/ +{ + PyInterpreterState *interp = _PyInterpreterState_GET(); + if (_Py_IsMainInterpreter(interp)) { + PyErr_SetString(PyExc_RuntimeError, + "_imp._override_multi_interp_extensions_check() " + "cannot be used in the main interpreter"); + return NULL; + } + int oldvalue = OVERRIDE_MULTI_INTERP_EXTENSIONS_CHECK(interp); + OVERRIDE_MULTI_INTERP_EXTENSIONS_CHECK(interp) = override; + return PyLong_FromLong(oldvalue); +} + #ifdef HAVE_DYNAMIC_LOADING /*[clinic input] @@ -3329,18 +3391,23 @@ _imp_create_dynamic_impl(PyObject *module, PyObject *spec, PyObject *file) PyThreadState *tstate = _PyThreadState_GET(); mod = import_find_extension(tstate, name, path); - if (mod != NULL || PyErr_Occurred()) { - Py_DECREF(name); - Py_DECREF(path); - return mod; + if (mod != NULL) { + const char *name_buf = PyUnicode_AsUTF8(name); + assert(name_buf != NULL); + if (_PyImport_CheckSubinterpIncompatibleExtensionAllowed(name_buf) < 0) { + Py_DECREF(mod); + mod = NULL; + } + goto finally; + } + else if (PyErr_Occurred()) { + goto finally; } if (file != NULL) { fp = _Py_fopen_obj(path, "r"); if (fp == NULL) { - Py_DECREF(name); - Py_DECREF(path); - return NULL; + goto finally; } } else @@ -3348,10 +3415,12 @@ _imp_create_dynamic_impl(PyObject *module, PyObject *spec, PyObject *file) mod = _PyImport_LoadDynamicModuleWithSpec(spec, fp); - Py_DECREF(name); - Py_DECREF(path); if (fp) fclose(fp); + +finally: + Py_DECREF(name); + Py_DECREF(path); return mod; } @@ -3436,6 +3505,7 @@ static PyMethodDef imp_methods[] = { _IMP_IS_FROZEN_METHODDEF _IMP__FROZEN_MODULE_NAMES_METHODDEF _IMP__OVERRIDE_FROZEN_MODULES_FOR_TESTS_METHODDEF + _IMP__OVERRIDE_MULTI_INTERP_EXTENSIONS_CHECK_METHODDEF _IMP_CREATE_DYNAMIC_METHODDEF _IMP_EXEC_DYNAMIC_METHODDEF _IMP_EXEC_BUILTIN_METHODDEF diff --git a/Python/importdl.c b/Python/importdl.c index 6dafb4541486..3a3a30ddbdcd 100644 --- a/Python/importdl.c +++ b/Python/importdl.c @@ -3,6 +3,7 @@ #include "Python.h" #include "pycore_call.h" +#include "pycore_import.h" #include "pycore_pystate.h" #include "pycore_runtime.h" @@ -203,6 +204,10 @@ _PyImport_LoadDynamicModuleWithSpec(PyObject *spec, FILE *fp) /* Fall back to single-phase init mechanism */ + if (_PyImport_CheckSubinterpIncompatibleExtensionAllowed(name_buf) < 0) { + goto error; + } + if (hook_prefix == nonascii_prefix) { /* don't allow legacy init for non-ASCII module names */ PyErr_Format( diff --git a/Python/pylifecycle.c b/Python/pylifecycle.c index 281035dafa95..e80dd30c89df 100644 --- a/Python/pylifecycle.c +++ b/Python/pylifecycle.c @@ -565,6 +565,10 @@ init_interp_settings(PyInterpreterState *interp, const _PyInterpreterConfig *con if (config->allow_daemon_threads) { interp->feature_flags |= Py_RTFLAGS_DAEMON_THREADS; } + + if (config->check_multi_interp_extensions) { + interp->feature_flags |= Py_RTFLAGS_MULTI_INTERP_EXTENSIONS; + } } From webhook-mailer at python.org Thu Feb 16 01:08:27 2023 From: webhook-mailer at python.org (gpshead) Date: Thu, 16 Feb 2023 06:08:27 -0000 Subject: [Python-checkins] gh-99108: Refactor _sha256 & _sha512 into _sha2. (#101924) Message-ID: https://github.com/python/cpython/commit/0b13575e74ff3321364a3389eda6b4e92792afe1 commit: 0b13575e74ff3321364a3389eda6b4e92792afe1 branch: main author: Gregory P. 
Smith committer: gpshead date: 2023-02-15T22:08:20-08:00 summary: gh-99108: Refactor _sha256 & _sha512 into _sha2. (#101924) This merges their code. They're backed by the same single HACL* static library, having them be a single module simplifies maintenance. This should unbreak the wasm enscripten builds that currently fail due to linking in --whole-archive mode and the HACL* library appearing twice. Long unnoticed error fixed: _sha512.SHA384Type was doubly assigned and was actually SHA512Type. Nobody depends on those internal names. Also rename LIBHACL_ make vars to LIBHACL_SHA2_ in preperation for other future HACL things. files: A Misc/NEWS.d/next/Library/2023-02-15-01-54-06.gh-issue-99108.rjTSic.rst A Modules/clinic/sha2module.c.h A Modules/sha2module.c D Modules/clinic/sha256module.c.h D Modules/clinic/sha512module.c.h D Modules/sha256module.c D Modules/sha512module.c M Lib/hashlib.py M Lib/test/test_hashlib.py M Makefile.pre.in M Modules/Setup M Modules/Setup.stdlib.in M PC/config.c M PCbuild/pythoncore.vcxproj M PCbuild/pythoncore.vcxproj.filters M Python/stdlib_module_names.h M configure M configure.ac diff --git a/Lib/hashlib.py b/Lib/hashlib.py index 21b5e912f3c7..1b16441cb60b 100644 --- a/Lib/hashlib.py +++ b/Lib/hashlib.py @@ -92,13 +92,13 @@ def __get_builtin_constructor(name): import _md5 cache['MD5'] = cache['md5'] = _md5.md5 elif name in {'SHA256', 'sha256', 'SHA224', 'sha224'}: - import _sha256 - cache['SHA224'] = cache['sha224'] = _sha256.sha224 - cache['SHA256'] = cache['sha256'] = _sha256.sha256 + import _sha2 + cache['SHA224'] = cache['sha224'] = _sha2.sha224 + cache['SHA256'] = cache['sha256'] = _sha2.sha256 elif name in {'SHA512', 'sha512', 'SHA384', 'sha384'}: - import _sha512 - cache['SHA384'] = cache['sha384'] = _sha512.sha384 - cache['SHA512'] = cache['sha512'] = _sha512.sha512 + import _sha2 + cache['SHA384'] = cache['sha384'] = _sha2.sha384 + cache['SHA512'] = cache['sha512'] = _sha2.sha512 elif name in {'blake2b', 'blake2s'}: import _blake2 cache['blake2b'] = _blake2.blake2b diff --git a/Lib/test/test_hashlib.py b/Lib/test/test_hashlib.py index 9c92b4e9c280..5ead88579435 100644 --- a/Lib/test/test_hashlib.py +++ b/Lib/test/test_hashlib.py @@ -1,6 +1,4 @@ -# Test hashlib module -# -# $Id$ +# Test the hashlib module. # # Copyright (C) 2005-2010 Gregory P. Smith (greg at krypto.org) # Licensed to PSF under a Contributor Agreement. @@ -28,7 +26,6 @@ from http.client import HTTPException -# default builtin hash module default_builtin_hashes = {'md5', 'sha1', 'sha256', 'sha512', 'sha3', 'blake2'} # --with-builtin-hashlib-hashes override builtin_hashes = sysconfig.get_config_var("PY_BUILTIN_HASHLIB_HASHES") @@ -66,6 +63,7 @@ def get_fips_mode(): requires_blake2 = unittest.skipUnless(_blake2, 'requires _blake2') # bpo-46913: Don't test the _sha3 extension on a Python UBSAN build +# TODO(gh-99108): Revisit this after _sha3 uses HACL*. SKIP_SHA3 = support.check_sanitizer(ub=True) requires_sha3 = unittest.skipUnless(not SKIP_SHA3, 'requires _sha3') @@ -107,7 +105,7 @@ class HashLibTestCase(unittest.TestCase): shakes = {'shake_128', 'shake_256'} - # Issue #14693: fallback modules are always compiled under POSIX + # gh-58898: Fallback modules are always compiled under POSIX. 
_warn_on_extension_import = (os.name == 'posix' or support.Py_DEBUG) def _conditional_import_module(self, module_name): @@ -116,7 +114,7 @@ def _conditional_import_module(self, module_name): return importlib.import_module(module_name) except ModuleNotFoundError as error: if self._warn_on_extension_import and module_name in builtin_hashes: - warnings.warn('Did a C extension fail to compile? %s' % error) + warnings.warn(f'Did a C extension fail to compile? {error}') return None def __init__(self, *args, **kwargs): @@ -147,7 +145,7 @@ def _test_algorithm_via_hashlib_new(data=None, _alg=algorithm, **kwargs): _hashlib = self._conditional_import_module('_hashlib') self._hashlib = _hashlib if _hashlib: - # These two algorithms should always be present when this module + # These algorithms should always be present when this module # is compiled. If not, something was compiled wrong. self.assertTrue(hasattr(_hashlib, 'openssl_md5')) self.assertTrue(hasattr(_hashlib, 'openssl_sha1')) @@ -172,12 +170,10 @@ def add_builtin_constructor(name): _sha1 = self._conditional_import_module('_sha1') if _sha1: add_builtin_constructor('sha1') - _sha256 = self._conditional_import_module('_sha256') - if _sha256: + _sha2 = self._conditional_import_module('_sha2') + if _sha2: add_builtin_constructor('sha224') add_builtin_constructor('sha256') - _sha512 = self._conditional_import_module('_sha512') - if _sha512: add_builtin_constructor('sha384') add_builtin_constructor('sha512') if _blake2: @@ -460,9 +456,9 @@ def check_blocksize_name(self, name, block_size=0, digest_size=0, self.assertEqual(len(m.hexdigest()), 2*digest_size) self.assertEqual(m.name, name) # split for sha3_512 / _sha3.sha3 object - self.assertIn(name.split("_")[0], repr(m)) + self.assertIn(name.split("_")[0], repr(m).lower()) - def test_blocksize_name(self): + def test_blocksize_and_name(self): self.check_blocksize_name('md5', 64, 16) self.check_blocksize_name('sha1', 64, 20) self.check_blocksize_name('sha224', 64, 28) diff --git a/Makefile.pre.in b/Makefile.pre.in index ce3fed3d6485..490483a71201 100644 --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -207,7 +207,7 @@ ENSUREPIP= @ENSUREPIP@ # Internal static libraries LIBMPDEC_A= Modules/_decimal/libmpdec/libmpdec.a LIBEXPAT_A= Modules/expat/libexpat.a -LIBHACL_A= Modules/_hacl/libHacl_Streaming_SHA2.a +LIBHACL_SHA2_A= Modules/_hacl/libHacl_Streaming_SHA2.a # Module state, compiler flags and linker flags # Empty CFLAGS and LDFLAGS are omitted. 
@@ -575,10 +575,10 @@ LIBEXPAT_HEADERS= \ ########################################################################## # hashlib's HACL* library -LIBHACL_OBJS= \ +LIBHACL_SHA2_OBJS= \ Modules/_hacl/Hacl_Streaming_SHA2.o -LIBHACL_HEADERS= \ +LIBHACL_SHA2_HEADERS= \ Modules/_hacl/Hacl_Streaming_SHA2.h \ Modules/_hacl/include/krml/FStar_UInt128_Verified.h \ Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h \ @@ -912,12 +912,12 @@ $(LIBEXPAT_A): $(LIBEXPAT_OBJS) # Build HACL* static libraries for hashlib: libHacl_Streaming_SHA2.a LIBHACL_CFLAGS=-I$(srcdir)/Modules/_hacl/include -D_BSD_SOURCE -D_DEFAULT_SOURCE $(PY_STDMODULE_CFLAGS) $(CCSHARED) -Modules/_hacl/Hacl_Streaming_SHA2.o: $(srcdir)/Modules/_hacl/Hacl_Streaming_SHA2.c $(LIBHACL_HEADERS) +Modules/_hacl/Hacl_Streaming_SHA2.o: $(srcdir)/Modules/_hacl/Hacl_Streaming_SHA2.c $(LIBHACL_SHA2_HEADERS) $(CC) -c $(LIBHACL_CFLAGS) -o $@ $(srcdir)/Modules/_hacl/Hacl_Streaming_SHA2.c -$(LIBHACL_A): $(LIBHACL_OBJS) +$(LIBHACL_SHA2_A): $(LIBHACL_SHA2_OBJS) -rm -f $@ - $(AR) $(ARFLAGS) $@ $(LIBHACL_OBJS) + $(AR) $(ARFLAGS) $@ $(LIBHACL_SHA2_OBJS) # create relative links from build/lib.platform/egg.so to Modules/egg.so # pybuilddir.txt is created too late. We cannot use it in Makefile @@ -2635,9 +2635,8 @@ MODULE__HASHLIB_DEPS=$(srcdir)/Modules/hashlib.h MODULE__IO_DEPS=$(srcdir)/Modules/_io/_iomodule.h MODULE__MD5_DEPS=$(srcdir)/Modules/hashlib.h MODULE__SHA1_DEPS=$(srcdir)/Modules/hashlib.h -MODULE__SHA256_DEPS=$(srcdir)/Modules/hashlib.h $(LIBHACL_HEADERS) $(LIBHACL_A) +MODULE__SHA2_DEPS=$(srcdir)/Modules/hashlib.h $(LIBHACL_SHA2_HEADERS) $(LIBHACL_SHA2_A) MODULE__SHA3_DEPS=$(srcdir)/Modules/_sha3/sha3.c $(srcdir)/Modules/_sha3/sha3.h $(srcdir)/Modules/hashlib.h -MODULE__SHA512_DEPS=$(srcdir)/Modules/hashlib.h $(LIBHACL_HEADERS) $(LIBHACL_A) MODULE__SOCKET_DEPS=$(srcdir)/Modules/socketmodule.h $(srcdir)/Modules/addrinfo.h $(srcdir)/Modules/getaddrinfo.c $(srcdir)/Modules/getnameinfo.c MODULE__SSL_DEPS=$(srcdir)/Modules/_ssl.h $(srcdir)/Modules/_ssl/cert.c $(srcdir)/Modules/_ssl/debughelpers.c $(srcdir)/Modules/_ssl/misc.c $(srcdir)/Modules/_ssl_data.h $(srcdir)/Modules/_ssl_data_111.h $(srcdir)/Modules/_ssl_data_300.h $(srcdir)/Modules/socketmodule.h MODULE__TESTCAPI_DEPS=$(srcdir)/Modules/_testcapi/testcapi_long.h $(srcdir)/Modules/_testcapi/parts.h diff --git a/Misc/NEWS.d/next/Library/2023-02-15-01-54-06.gh-issue-99108.rjTSic.rst b/Misc/NEWS.d/next/Library/2023-02-15-01-54-06.gh-issue-99108.rjTSic.rst new file mode 100644 index 000000000000..1612c89c0ea6 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-15-01-54-06.gh-issue-99108.rjTSic.rst @@ -0,0 +1,3 @@ +The built-in extension modules for :mod:`hashlib` SHA2 algorithms, used when +OpenSSL does not provide them, now live in a single internal ``_sha2`` module +instead of separate ``_sha256`` and ``_sha512`` modules. 
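As an aside, here is a minimal sketch (not part of this patch) of how the fallback described in the NEWS entry above can be exercised. It assumes a build where the ``_sha2`` extension is present and uses the private ``hashlib.__get_builtin_constructor`` helper shown in the Lib/hashlib.py hunk earlier; real code should normally just call hashlib.new():

    import hashlib

    # hashlib.new() prefers OpenSSL when it is available; the constructor
    # below is the pure built-in fallback path, now backed by the _sha2
    # module instead of the old _sha256/_sha512 modules.
    get_builtin = getattr(hashlib, '__get_builtin_constructor')
    builtin_sha256 = get_builtin('sha256')

    h1 = hashlib.new('sha256', b'abc')
    h2 = builtin_sha256(b'abc')
    assert h1.hexdigest() == h2.hexdigest()  # same digest on either path
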
diff --git a/Modules/Setup b/Modules/Setup index 428be0a1bf8f..1d5183bc2df1 100644 --- a/Modules/Setup +++ b/Modules/Setup @@ -165,8 +165,7 @@ PYTHONPATH=$(COREPYTHONPATH) #_blake2 _blake2/blake2module.c _blake2/blake2b_impl.c _blake2/blake2s_impl.c #_md5 md5module.c #_sha1 sha1module.c -#_sha256 sha256module.c -#_sha512 sha512module.c +#_sha2 sha2module.c -I$(srcdir)/Modules/_hacl/include Modules/_hacl/libHacl_Streaming_SHA2.a #_sha3 _sha3/sha3module.c # text encodings and unicode diff --git a/Modules/Setup.stdlib.in b/Modules/Setup.stdlib.in index 22bcb423db23..8f5e14a4e80e 100644 --- a/Modules/Setup.stdlib.in +++ b/Modules/Setup.stdlib.in @@ -79,8 +79,7 @@ # hashing builtins, can be disabled with --without-builtin-hashlib-hashes @MODULE__MD5_TRUE at _md5 md5module.c @MODULE__SHA1_TRUE at _sha1 sha1module.c - at MODULE__SHA256_TRUE@_sha256 sha256module.c -I$(srcdir)/Modules/_hacl/include Modules/_hacl/libHacl_Streaming_SHA2.a - at MODULE__SHA512_TRUE@_sha512 sha512module.c -I$(srcdir)/Modules/_hacl/include Modules/_hacl/libHacl_Streaming_SHA2.a + at MODULE__SHA2_TRUE@_sha2 sha2module.c -I$(srcdir)/Modules/_hacl/include Modules/_hacl/libHacl_Streaming_SHA2.a @MODULE__SHA3_TRUE at _sha3 _sha3/sha3module.c @MODULE__BLAKE2_TRUE at _blake2 _blake2/blake2module.c _blake2/blake2b_impl.c _blake2/blake2s_impl.c diff --git a/Modules/clinic/sha256module.c.h b/Modules/clinic/sha256module.c.h deleted file mode 100644 index 10d09fac695f..000000000000 --- a/Modules/clinic/sha256module.c.h +++ /dev/null @@ -1,225 +0,0 @@ -/*[clinic input] -preserve -[clinic start generated code]*/ - -#if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) -# include "pycore_gc.h" // PyGC_Head -# include "pycore_runtime.h" // _Py_ID() -#endif - - -PyDoc_STRVAR(SHA256Type_copy__doc__, -"copy($self, /)\n" -"--\n" -"\n" -"Return a copy of the hash object."); - -#define SHA256TYPE_COPY_METHODDEF \ - {"copy", _PyCFunction_CAST(SHA256Type_copy), METH_METHOD|METH_FASTCALL|METH_KEYWORDS, SHA256Type_copy__doc__}, - -static PyObject * -SHA256Type_copy_impl(SHAobject *self, PyTypeObject *cls); - -static PyObject * -SHA256Type_copy(SHAobject *self, PyTypeObject *cls, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) -{ - if (nargs) { - PyErr_SetString(PyExc_TypeError, "copy() takes no arguments"); - return NULL; - } - return SHA256Type_copy_impl(self, cls); -} - -PyDoc_STRVAR(SHA256Type_digest__doc__, -"digest($self, /)\n" -"--\n" -"\n" -"Return the digest value as a bytes object."); - -#define SHA256TYPE_DIGEST_METHODDEF \ - {"digest", (PyCFunction)SHA256Type_digest, METH_NOARGS, SHA256Type_digest__doc__}, - -static PyObject * -SHA256Type_digest_impl(SHAobject *self); - -static PyObject * -SHA256Type_digest(SHAobject *self, PyObject *Py_UNUSED(ignored)) -{ - return SHA256Type_digest_impl(self); -} - -PyDoc_STRVAR(SHA256Type_hexdigest__doc__, -"hexdigest($self, /)\n" -"--\n" -"\n" -"Return the digest value as a string of hexadecimal digits."); - -#define SHA256TYPE_HEXDIGEST_METHODDEF \ - {"hexdigest", (PyCFunction)SHA256Type_hexdigest, METH_NOARGS, SHA256Type_hexdigest__doc__}, - -static PyObject * -SHA256Type_hexdigest_impl(SHAobject *self); - -static PyObject * -SHA256Type_hexdigest(SHAobject *self, PyObject *Py_UNUSED(ignored)) -{ - return SHA256Type_hexdigest_impl(self); -} - -PyDoc_STRVAR(SHA256Type_update__doc__, -"update($self, obj, /)\n" -"--\n" -"\n" -"Update this hash object\'s state with the provided string."); - -#define SHA256TYPE_UPDATE_METHODDEF \ - {"update", (PyCFunction)SHA256Type_update, METH_O, 
SHA256Type_update__doc__}, - -PyDoc_STRVAR(_sha256_sha256__doc__, -"sha256($module, /, string=b\'\', *, usedforsecurity=True)\n" -"--\n" -"\n" -"Return a new SHA-256 hash object; optionally initialized with a string."); - -#define _SHA256_SHA256_METHODDEF \ - {"sha256", _PyCFunction_CAST(_sha256_sha256), METH_FASTCALL|METH_KEYWORDS, _sha256_sha256__doc__}, - -static PyObject * -_sha256_sha256_impl(PyObject *module, PyObject *string, int usedforsecurity); - -static PyObject * -_sha256_sha256(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) -{ - PyObject *return_value = NULL; - #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) - - #define NUM_KEYWORDS 2 - static struct { - PyGC_Head _this_is_not_used; - PyObject_VAR_HEAD - PyObject *ob_item[NUM_KEYWORDS]; - } _kwtuple = { - .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) - .ob_item = { &_Py_ID(string), &_Py_ID(usedforsecurity), }, - }; - #undef NUM_KEYWORDS - #define KWTUPLE (&_kwtuple.ob_base.ob_base) - - #else // !Py_BUILD_CORE - # define KWTUPLE NULL - #endif // !Py_BUILD_CORE - - static const char * const _keywords[] = {"string", "usedforsecurity", NULL}; - static _PyArg_Parser _parser = { - .keywords = _keywords, - .fname = "sha256", - .kwtuple = KWTUPLE, - }; - #undef KWTUPLE - PyObject *argsbuf[2]; - Py_ssize_t noptargs = nargs + (kwnames ? PyTuple_GET_SIZE(kwnames) : 0) - 0; - PyObject *string = NULL; - int usedforsecurity = 1; - - args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 0, 1, 0, argsbuf); - if (!args) { - goto exit; - } - if (!noptargs) { - goto skip_optional_pos; - } - if (args[0]) { - string = args[0]; - if (!--noptargs) { - goto skip_optional_pos; - } - } -skip_optional_pos: - if (!noptargs) { - goto skip_optional_kwonly; - } - usedforsecurity = PyObject_IsTrue(args[1]); - if (usedforsecurity < 0) { - goto exit; - } -skip_optional_kwonly: - return_value = _sha256_sha256_impl(module, string, usedforsecurity); - -exit: - return return_value; -} - -PyDoc_STRVAR(_sha256_sha224__doc__, -"sha224($module, /, string=b\'\', *, usedforsecurity=True)\n" -"--\n" -"\n" -"Return a new SHA-224 hash object; optionally initialized with a string."); - -#define _SHA256_SHA224_METHODDEF \ - {"sha224", _PyCFunction_CAST(_sha256_sha224), METH_FASTCALL|METH_KEYWORDS, _sha256_sha224__doc__}, - -static PyObject * -_sha256_sha224_impl(PyObject *module, PyObject *string, int usedforsecurity); - -static PyObject * -_sha256_sha224(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) -{ - PyObject *return_value = NULL; - #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) - - #define NUM_KEYWORDS 2 - static struct { - PyGC_Head _this_is_not_used; - PyObject_VAR_HEAD - PyObject *ob_item[NUM_KEYWORDS]; - } _kwtuple = { - .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) - .ob_item = { &_Py_ID(string), &_Py_ID(usedforsecurity), }, - }; - #undef NUM_KEYWORDS - #define KWTUPLE (&_kwtuple.ob_base.ob_base) - - #else // !Py_BUILD_CORE - # define KWTUPLE NULL - #endif // !Py_BUILD_CORE - - static const char * const _keywords[] = {"string", "usedforsecurity", NULL}; - static _PyArg_Parser _parser = { - .keywords = _keywords, - .fname = "sha224", - .kwtuple = KWTUPLE, - }; - #undef KWTUPLE - PyObject *argsbuf[2]; - Py_ssize_t noptargs = nargs + (kwnames ? 
PyTuple_GET_SIZE(kwnames) : 0) - 0; - PyObject *string = NULL; - int usedforsecurity = 1; - - args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 0, 1, 0, argsbuf); - if (!args) { - goto exit; - } - if (!noptargs) { - goto skip_optional_pos; - } - if (args[0]) { - string = args[0]; - if (!--noptargs) { - goto skip_optional_pos; - } - } -skip_optional_pos: - if (!noptargs) { - goto skip_optional_kwonly; - } - usedforsecurity = PyObject_IsTrue(args[1]); - if (usedforsecurity < 0) { - goto exit; - } -skip_optional_kwonly: - return_value = _sha256_sha224_impl(module, string, usedforsecurity); - -exit: - return return_value; -} -/*[clinic end generated code: output=ae926f7ec85e7c97 input=a9049054013a1b77]*/ diff --git a/Modules/clinic/sha2module.c.h b/Modules/clinic/sha2module.c.h new file mode 100644 index 000000000000..8f855ca345e4 --- /dev/null +++ b/Modules/clinic/sha2module.c.h @@ -0,0 +1,440 @@ +/*[clinic input] +preserve +[clinic start generated code]*/ + +#if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) +# include "pycore_gc.h" // PyGC_Head +# include "pycore_runtime.h" // _Py_ID() +#endif + + +PyDoc_STRVAR(SHA256Type_copy__doc__, +"copy($self, /)\n" +"--\n" +"\n" +"Return a copy of the hash object."); + +#define SHA256TYPE_COPY_METHODDEF \ + {"copy", _PyCFunction_CAST(SHA256Type_copy), METH_METHOD|METH_FASTCALL|METH_KEYWORDS, SHA256Type_copy__doc__}, + +static PyObject * +SHA256Type_copy_impl(SHA256object *self, PyTypeObject *cls); + +static PyObject * +SHA256Type_copy(SHA256object *self, PyTypeObject *cls, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + if (nargs) { + PyErr_SetString(PyExc_TypeError, "copy() takes no arguments"); + return NULL; + } + return SHA256Type_copy_impl(self, cls); +} + +PyDoc_STRVAR(SHA512Type_copy__doc__, +"copy($self, /)\n" +"--\n" +"\n" +"Return a copy of the hash object."); + +#define SHA512TYPE_COPY_METHODDEF \ + {"copy", _PyCFunction_CAST(SHA512Type_copy), METH_METHOD|METH_FASTCALL|METH_KEYWORDS, SHA512Type_copy__doc__}, + +static PyObject * +SHA512Type_copy_impl(SHA512object *self, PyTypeObject *cls); + +static PyObject * +SHA512Type_copy(SHA512object *self, PyTypeObject *cls, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + if (nargs) { + PyErr_SetString(PyExc_TypeError, "copy() takes no arguments"); + return NULL; + } + return SHA512Type_copy_impl(self, cls); +} + +PyDoc_STRVAR(SHA256Type_digest__doc__, +"digest($self, /)\n" +"--\n" +"\n" +"Return the digest value as a bytes object."); + +#define SHA256TYPE_DIGEST_METHODDEF \ + {"digest", (PyCFunction)SHA256Type_digest, METH_NOARGS, SHA256Type_digest__doc__}, + +static PyObject * +SHA256Type_digest_impl(SHA256object *self); + +static PyObject * +SHA256Type_digest(SHA256object *self, PyObject *Py_UNUSED(ignored)) +{ + return SHA256Type_digest_impl(self); +} + +PyDoc_STRVAR(SHA512Type_digest__doc__, +"digest($self, /)\n" +"--\n" +"\n" +"Return the digest value as a bytes object."); + +#define SHA512TYPE_DIGEST_METHODDEF \ + {"digest", (PyCFunction)SHA512Type_digest, METH_NOARGS, SHA512Type_digest__doc__}, + +static PyObject * +SHA512Type_digest_impl(SHA512object *self); + +static PyObject * +SHA512Type_digest(SHA512object *self, PyObject *Py_UNUSED(ignored)) +{ + return SHA512Type_digest_impl(self); +} + +PyDoc_STRVAR(SHA256Type_hexdigest__doc__, +"hexdigest($self, /)\n" +"--\n" +"\n" +"Return the digest value as a string of hexadecimal digits."); + +#define SHA256TYPE_HEXDIGEST_METHODDEF \ + {"hexdigest", (PyCFunction)SHA256Type_hexdigest, 
METH_NOARGS, SHA256Type_hexdigest__doc__}, + +static PyObject * +SHA256Type_hexdigest_impl(SHA256object *self); + +static PyObject * +SHA256Type_hexdigest(SHA256object *self, PyObject *Py_UNUSED(ignored)) +{ + return SHA256Type_hexdigest_impl(self); +} + +PyDoc_STRVAR(SHA512Type_hexdigest__doc__, +"hexdigest($self, /)\n" +"--\n" +"\n" +"Return the digest value as a string of hexadecimal digits."); + +#define SHA512TYPE_HEXDIGEST_METHODDEF \ + {"hexdigest", (PyCFunction)SHA512Type_hexdigest, METH_NOARGS, SHA512Type_hexdigest__doc__}, + +static PyObject * +SHA512Type_hexdigest_impl(SHA512object *self); + +static PyObject * +SHA512Type_hexdigest(SHA512object *self, PyObject *Py_UNUSED(ignored)) +{ + return SHA512Type_hexdigest_impl(self); +} + +PyDoc_STRVAR(SHA256Type_update__doc__, +"update($self, obj, /)\n" +"--\n" +"\n" +"Update this hash object\'s state with the provided string."); + +#define SHA256TYPE_UPDATE_METHODDEF \ + {"update", (PyCFunction)SHA256Type_update, METH_O, SHA256Type_update__doc__}, + +PyDoc_STRVAR(SHA512Type_update__doc__, +"update($self, obj, /)\n" +"--\n" +"\n" +"Update this hash object\'s state with the provided string."); + +#define SHA512TYPE_UPDATE_METHODDEF \ + {"update", (PyCFunction)SHA512Type_update, METH_O, SHA512Type_update__doc__}, + +PyDoc_STRVAR(_sha2_sha256__doc__, +"sha256($module, /, string=b\'\', *, usedforsecurity=True)\n" +"--\n" +"\n" +"Return a new SHA-256 hash object; optionally initialized with a string."); + +#define _SHA2_SHA256_METHODDEF \ + {"sha256", _PyCFunction_CAST(_sha2_sha256), METH_FASTCALL|METH_KEYWORDS, _sha2_sha256__doc__}, + +static PyObject * +_sha2_sha256_impl(PyObject *module, PyObject *string, int usedforsecurity); + +static PyObject * +_sha2_sha256(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + + #define NUM_KEYWORDS 2 + static struct { + PyGC_Head _this_is_not_used; + PyObject_VAR_HEAD + PyObject *ob_item[NUM_KEYWORDS]; + } _kwtuple = { + .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) + .ob_item = { &_Py_ID(string), &_Py_ID(usedforsecurity), }, + }; + #undef NUM_KEYWORDS + #define KWTUPLE (&_kwtuple.ob_base.ob_base) + + #else // !Py_BUILD_CORE + # define KWTUPLE NULL + #endif // !Py_BUILD_CORE + + static const char * const _keywords[] = {"string", "usedforsecurity", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "sha256", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[2]; + Py_ssize_t noptargs = nargs + (kwnames ? 
PyTuple_GET_SIZE(kwnames) : 0) - 0; + PyObject *string = NULL; + int usedforsecurity = 1; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 0, 1, 0, argsbuf); + if (!args) { + goto exit; + } + if (!noptargs) { + goto skip_optional_pos; + } + if (args[0]) { + string = args[0]; + if (!--noptargs) { + goto skip_optional_pos; + } + } +skip_optional_pos: + if (!noptargs) { + goto skip_optional_kwonly; + } + usedforsecurity = PyObject_IsTrue(args[1]); + if (usedforsecurity < 0) { + goto exit; + } +skip_optional_kwonly: + return_value = _sha2_sha256_impl(module, string, usedforsecurity); + +exit: + return return_value; +} + +PyDoc_STRVAR(_sha2_sha224__doc__, +"sha224($module, /, string=b\'\', *, usedforsecurity=True)\n" +"--\n" +"\n" +"Return a new SHA-224 hash object; optionally initialized with a string."); + +#define _SHA2_SHA224_METHODDEF \ + {"sha224", _PyCFunction_CAST(_sha2_sha224), METH_FASTCALL|METH_KEYWORDS, _sha2_sha224__doc__}, + +static PyObject * +_sha2_sha224_impl(PyObject *module, PyObject *string, int usedforsecurity); + +static PyObject * +_sha2_sha224(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + + #define NUM_KEYWORDS 2 + static struct { + PyGC_Head _this_is_not_used; + PyObject_VAR_HEAD + PyObject *ob_item[NUM_KEYWORDS]; + } _kwtuple = { + .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) + .ob_item = { &_Py_ID(string), &_Py_ID(usedforsecurity), }, + }; + #undef NUM_KEYWORDS + #define KWTUPLE (&_kwtuple.ob_base.ob_base) + + #else // !Py_BUILD_CORE + # define KWTUPLE NULL + #endif // !Py_BUILD_CORE + + static const char * const _keywords[] = {"string", "usedforsecurity", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "sha224", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[2]; + Py_ssize_t noptargs = nargs + (kwnames ? 
PyTuple_GET_SIZE(kwnames) : 0) - 0; + PyObject *string = NULL; + int usedforsecurity = 1; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 0, 1, 0, argsbuf); + if (!args) { + goto exit; + } + if (!noptargs) { + goto skip_optional_pos; + } + if (args[0]) { + string = args[0]; + if (!--noptargs) { + goto skip_optional_pos; + } + } +skip_optional_pos: + if (!noptargs) { + goto skip_optional_kwonly; + } + usedforsecurity = PyObject_IsTrue(args[1]); + if (usedforsecurity < 0) { + goto exit; + } +skip_optional_kwonly: + return_value = _sha2_sha224_impl(module, string, usedforsecurity); + +exit: + return return_value; +} + +PyDoc_STRVAR(_sha2_sha512__doc__, +"sha512($module, /, string=b\'\', *, usedforsecurity=True)\n" +"--\n" +"\n" +"Return a new SHA-512 hash object; optionally initialized with a string."); + +#define _SHA2_SHA512_METHODDEF \ + {"sha512", _PyCFunction_CAST(_sha2_sha512), METH_FASTCALL|METH_KEYWORDS, _sha2_sha512__doc__}, + +static PyObject * +_sha2_sha512_impl(PyObject *module, PyObject *string, int usedforsecurity); + +static PyObject * +_sha2_sha512(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + + #define NUM_KEYWORDS 2 + static struct { + PyGC_Head _this_is_not_used; + PyObject_VAR_HEAD + PyObject *ob_item[NUM_KEYWORDS]; + } _kwtuple = { + .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) + .ob_item = { &_Py_ID(string), &_Py_ID(usedforsecurity), }, + }; + #undef NUM_KEYWORDS + #define KWTUPLE (&_kwtuple.ob_base.ob_base) + + #else // !Py_BUILD_CORE + # define KWTUPLE NULL + #endif // !Py_BUILD_CORE + + static const char * const _keywords[] = {"string", "usedforsecurity", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "sha512", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[2]; + Py_ssize_t noptargs = nargs + (kwnames ? 
PyTuple_GET_SIZE(kwnames) : 0) - 0; + PyObject *string = NULL; + int usedforsecurity = 1; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 0, 1, 0, argsbuf); + if (!args) { + goto exit; + } + if (!noptargs) { + goto skip_optional_pos; + } + if (args[0]) { + string = args[0]; + if (!--noptargs) { + goto skip_optional_pos; + } + } +skip_optional_pos: + if (!noptargs) { + goto skip_optional_kwonly; + } + usedforsecurity = PyObject_IsTrue(args[1]); + if (usedforsecurity < 0) { + goto exit; + } +skip_optional_kwonly: + return_value = _sha2_sha512_impl(module, string, usedforsecurity); + +exit: + return return_value; +} + +PyDoc_STRVAR(_sha2_sha384__doc__, +"sha384($module, /, string=b\'\', *, usedforsecurity=True)\n" +"--\n" +"\n" +"Return a new SHA-384 hash object; optionally initialized with a string."); + +#define _SHA2_SHA384_METHODDEF \ + {"sha384", _PyCFunction_CAST(_sha2_sha384), METH_FASTCALL|METH_KEYWORDS, _sha2_sha384__doc__}, + +static PyObject * +_sha2_sha384_impl(PyObject *module, PyObject *string, int usedforsecurity); + +static PyObject * +_sha2_sha384(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) +{ + PyObject *return_value = NULL; + #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) + + #define NUM_KEYWORDS 2 + static struct { + PyGC_Head _this_is_not_used; + PyObject_VAR_HEAD + PyObject *ob_item[NUM_KEYWORDS]; + } _kwtuple = { + .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) + .ob_item = { &_Py_ID(string), &_Py_ID(usedforsecurity), }, + }; + #undef NUM_KEYWORDS + #define KWTUPLE (&_kwtuple.ob_base.ob_base) + + #else // !Py_BUILD_CORE + # define KWTUPLE NULL + #endif // !Py_BUILD_CORE + + static const char * const _keywords[] = {"string", "usedforsecurity", NULL}; + static _PyArg_Parser _parser = { + .keywords = _keywords, + .fname = "sha384", + .kwtuple = KWTUPLE, + }; + #undef KWTUPLE + PyObject *argsbuf[2]; + Py_ssize_t noptargs = nargs + (kwnames ? 
PyTuple_GET_SIZE(kwnames) : 0) - 0; + PyObject *string = NULL; + int usedforsecurity = 1; + + args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 0, 1, 0, argsbuf); + if (!args) { + goto exit; + } + if (!noptargs) { + goto skip_optional_pos; + } + if (args[0]) { + string = args[0]; + if (!--noptargs) { + goto skip_optional_pos; + } + } +skip_optional_pos: + if (!noptargs) { + goto skip_optional_kwonly; + } + usedforsecurity = PyObject_IsTrue(args[1]); + if (usedforsecurity < 0) { + goto exit; + } +skip_optional_kwonly: + return_value = _sha2_sha384_impl(module, string, usedforsecurity); + +exit: + return return_value; +} +/*[clinic end generated code: output=f81dacb48f3fee72 input=a9049054013a1b77]*/ diff --git a/Modules/clinic/sha512module.c.h b/Modules/clinic/sha512module.c.h deleted file mode 100644 index f8d326363c39..000000000000 --- a/Modules/clinic/sha512module.c.h +++ /dev/null @@ -1,225 +0,0 @@ -/*[clinic input] -preserve -[clinic start generated code]*/ - -#if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) -# include "pycore_gc.h" // PyGC_Head -# include "pycore_runtime.h" // _Py_ID() -#endif - - -PyDoc_STRVAR(SHA512Type_copy__doc__, -"copy($self, /)\n" -"--\n" -"\n" -"Return a copy of the hash object."); - -#define SHA512TYPE_COPY_METHODDEF \ - {"copy", _PyCFunction_CAST(SHA512Type_copy), METH_METHOD|METH_FASTCALL|METH_KEYWORDS, SHA512Type_copy__doc__}, - -static PyObject * -SHA512Type_copy_impl(SHAobject *self, PyTypeObject *cls); - -static PyObject * -SHA512Type_copy(SHAobject *self, PyTypeObject *cls, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) -{ - if (nargs) { - PyErr_SetString(PyExc_TypeError, "copy() takes no arguments"); - return NULL; - } - return SHA512Type_copy_impl(self, cls); -} - -PyDoc_STRVAR(SHA512Type_digest__doc__, -"digest($self, /)\n" -"--\n" -"\n" -"Return the digest value as a bytes object."); - -#define SHA512TYPE_DIGEST_METHODDEF \ - {"digest", (PyCFunction)SHA512Type_digest, METH_NOARGS, SHA512Type_digest__doc__}, - -static PyObject * -SHA512Type_digest_impl(SHAobject *self); - -static PyObject * -SHA512Type_digest(SHAobject *self, PyObject *Py_UNUSED(ignored)) -{ - return SHA512Type_digest_impl(self); -} - -PyDoc_STRVAR(SHA512Type_hexdigest__doc__, -"hexdigest($self, /)\n" -"--\n" -"\n" -"Return the digest value as a string of hexadecimal digits."); - -#define SHA512TYPE_HEXDIGEST_METHODDEF \ - {"hexdigest", (PyCFunction)SHA512Type_hexdigest, METH_NOARGS, SHA512Type_hexdigest__doc__}, - -static PyObject * -SHA512Type_hexdigest_impl(SHAobject *self); - -static PyObject * -SHA512Type_hexdigest(SHAobject *self, PyObject *Py_UNUSED(ignored)) -{ - return SHA512Type_hexdigest_impl(self); -} - -PyDoc_STRVAR(SHA512Type_update__doc__, -"update($self, obj, /)\n" -"--\n" -"\n" -"Update this hash object\'s state with the provided string."); - -#define SHA512TYPE_UPDATE_METHODDEF \ - {"update", (PyCFunction)SHA512Type_update, METH_O, SHA512Type_update__doc__}, - -PyDoc_STRVAR(_sha512_sha512__doc__, -"sha512($module, /, string=b\'\', *, usedforsecurity=True)\n" -"--\n" -"\n" -"Return a new SHA-512 hash object; optionally initialized with a string."); - -#define _SHA512_SHA512_METHODDEF \ - {"sha512", _PyCFunction_CAST(_sha512_sha512), METH_FASTCALL|METH_KEYWORDS, _sha512_sha512__doc__}, - -static PyObject * -_sha512_sha512_impl(PyObject *module, PyObject *string, int usedforsecurity); - -static PyObject * -_sha512_sha512(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) -{ - PyObject *return_value = 
NULL; - #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) - - #define NUM_KEYWORDS 2 - static struct { - PyGC_Head _this_is_not_used; - PyObject_VAR_HEAD - PyObject *ob_item[NUM_KEYWORDS]; - } _kwtuple = { - .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) - .ob_item = { &_Py_ID(string), &_Py_ID(usedforsecurity), }, - }; - #undef NUM_KEYWORDS - #define KWTUPLE (&_kwtuple.ob_base.ob_base) - - #else // !Py_BUILD_CORE - # define KWTUPLE NULL - #endif // !Py_BUILD_CORE - - static const char * const _keywords[] = {"string", "usedforsecurity", NULL}; - static _PyArg_Parser _parser = { - .keywords = _keywords, - .fname = "sha512", - .kwtuple = KWTUPLE, - }; - #undef KWTUPLE - PyObject *argsbuf[2]; - Py_ssize_t noptargs = nargs + (kwnames ? PyTuple_GET_SIZE(kwnames) : 0) - 0; - PyObject *string = NULL; - int usedforsecurity = 1; - - args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 0, 1, 0, argsbuf); - if (!args) { - goto exit; - } - if (!noptargs) { - goto skip_optional_pos; - } - if (args[0]) { - string = args[0]; - if (!--noptargs) { - goto skip_optional_pos; - } - } -skip_optional_pos: - if (!noptargs) { - goto skip_optional_kwonly; - } - usedforsecurity = PyObject_IsTrue(args[1]); - if (usedforsecurity < 0) { - goto exit; - } -skip_optional_kwonly: - return_value = _sha512_sha512_impl(module, string, usedforsecurity); - -exit: - return return_value; -} - -PyDoc_STRVAR(_sha512_sha384__doc__, -"sha384($module, /, string=b\'\', *, usedforsecurity=True)\n" -"--\n" -"\n" -"Return a new SHA-384 hash object; optionally initialized with a string."); - -#define _SHA512_SHA384_METHODDEF \ - {"sha384", _PyCFunction_CAST(_sha512_sha384), METH_FASTCALL|METH_KEYWORDS, _sha512_sha384__doc__}, - -static PyObject * -_sha512_sha384_impl(PyObject *module, PyObject *string, int usedforsecurity); - -static PyObject * -_sha512_sha384(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames) -{ - PyObject *return_value = NULL; - #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) - - #define NUM_KEYWORDS 2 - static struct { - PyGC_Head _this_is_not_used; - PyObject_VAR_HEAD - PyObject *ob_item[NUM_KEYWORDS]; - } _kwtuple = { - .ob_base = PyVarObject_HEAD_INIT(&PyTuple_Type, NUM_KEYWORDS) - .ob_item = { &_Py_ID(string), &_Py_ID(usedforsecurity), }, - }; - #undef NUM_KEYWORDS - #define KWTUPLE (&_kwtuple.ob_base.ob_base) - - #else // !Py_BUILD_CORE - # define KWTUPLE NULL - #endif // !Py_BUILD_CORE - - static const char * const _keywords[] = {"string", "usedforsecurity", NULL}; - static _PyArg_Parser _parser = { - .keywords = _keywords, - .fname = "sha384", - .kwtuple = KWTUPLE, - }; - #undef KWTUPLE - PyObject *argsbuf[2]; - Py_ssize_t noptargs = nargs + (kwnames ? 
PyTuple_GET_SIZE(kwnames) : 0) - 0; - PyObject *string = NULL; - int usedforsecurity = 1; - - args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 0, 1, 0, argsbuf); - if (!args) { - goto exit; - } - if (!noptargs) { - goto skip_optional_pos; - } - if (args[0]) { - string = args[0]; - if (!--noptargs) { - goto skip_optional_pos; - } - } -skip_optional_pos: - if (!noptargs) { - goto skip_optional_kwonly; - } - usedforsecurity = PyObject_IsTrue(args[1]); - if (usedforsecurity < 0) { - goto exit; - } -skip_optional_kwonly: - return_value = _sha512_sha384_impl(module, string, usedforsecurity); - -exit: - return return_value; -} -/*[clinic end generated code: output=dd168f3f21097afe input=a9049054013a1b77]*/ diff --git a/Modules/sha256module.c b/Modules/sha256module.c deleted file mode 100644 index 301c9837bb67..000000000000 --- a/Modules/sha256module.c +++ /dev/null @@ -1,465 +0,0 @@ -/* SHA256 module */ - -/* This module provides an interface to NIST's SHA-256 and SHA-224 Algorithms */ - -/* See below for information about the original code this module was - based upon. Additional work performed by: - - Andrew Kuchling (amk at amk.ca) - Greg Stein (gstein at lyra.org) - Trevor Perrin (trevp at trevp.net) - Jonathan Protzenko (jonathan at protzenko.fr) - - Copyright (C) 2005-2007 Gregory P. Smith (greg at krypto.org) - Licensed to PSF under a Contributor Agreement. - -*/ - -/* SHA objects */ -#ifndef Py_BUILD_CORE_BUILTIN -# define Py_BUILD_CORE_MODULE 1 -#endif - -#include "Python.h" -#include "pycore_bitutils.h" // _Py_bswap32() -#include "pycore_strhex.h" // _Py_strhex() -#include "structmember.h" // PyMemberDef -#include "hashlib.h" - -/*[clinic input] -module _sha256 -class SHA256Type "SHAobject *" "&PyType_Type" -[clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=71a39174d4f0a744]*/ - - -/* The SHA block size and maximum message digest sizes, in bytes */ - -#define SHA_BLOCKSIZE 64 -#define SHA_DIGESTSIZE 32 - -/* The SHA2-224 and SHA2-256 implementations defer to the HACL* verified - * library. */ - -#include "_hacl/Hacl_Streaming_SHA2.h" - -typedef struct { - PyObject_HEAD - // Even though one could conceivably perform run-type checks to tell apart a - // sha224_type from a sha256_type (and thus deduce the digest size), we must - // keep this field because it's exposed as a member field on the underlying - // python object. - // TODO: could we transform this into a getter and get rid of the redundant - // field? 
- int digestsize; - Hacl_Streaming_SHA2_state_sha2_256 *state; -} SHAobject; - -#include "clinic/sha256module.c.h" - -/* We shall use run-time type information in the remainder of this module to - * tell apart SHA2-224 and SHA2-256 */ -typedef struct { - PyTypeObject* sha224_type; - PyTypeObject* sha256_type; -} _sha256_state; - -static inline _sha256_state* -_sha256_get_state(PyObject *module) -{ - void *state = PyModule_GetState(module); - assert(state != NULL); - return (_sha256_state *)state; -} - -static void SHAcopy(SHAobject *src, SHAobject *dest) -{ - dest->digestsize = src->digestsize; - dest->state = Hacl_Streaming_SHA2_copy_256(src->state); -} - -static SHAobject * -newSHA224object(_sha256_state *state) -{ - SHAobject *sha = (SHAobject *)PyObject_GC_New(SHAobject, - state->sha224_type); - PyObject_GC_Track(sha); - return sha; -} - -static SHAobject * -newSHA256object(_sha256_state *state) -{ - SHAobject *sha = (SHAobject *)PyObject_GC_New(SHAobject, - state->sha256_type); - PyObject_GC_Track(sha); - return sha; -} - -/* Internal methods for a hash object */ -static int -SHA_traverse(PyObject *ptr, visitproc visit, void *arg) -{ - Py_VISIT(Py_TYPE(ptr)); - return 0; -} - -static void -SHA_dealloc(SHAobject *ptr) -{ - Hacl_Streaming_SHA2_free_256(ptr->state); - PyTypeObject *tp = Py_TYPE(ptr); - PyObject_GC_UnTrack(ptr); - PyObject_GC_Del(ptr); - Py_DECREF(tp); -} - -/* HACL* takes a uint32_t for the length of its parameter, but Py_ssize_t can be - * 64 bits. */ -static void update_256(Hacl_Streaming_SHA2_state_sha2_256 *state, uint8_t *buf, Py_ssize_t len) { - /* Note: we explicitly ignore the error code on the basis that it would take > - * 1 billion years to overflow the maximum admissible length for SHA2-256 - * (namely, 2^61-1 bytes). */ - while (len > UINT32_MAX) { - Hacl_Streaming_SHA2_update_256(state, buf, UINT32_MAX); - len -= UINT32_MAX; - buf += UINT32_MAX; - } - /* Cast to uint32_t is safe: upon exiting the loop, len <= UINT32_MAX, and - * therefore fits in a uint32_t */ - Hacl_Streaming_SHA2_update_256(state, buf, (uint32_t) len); -} - - -/* External methods for a hash object */ - -/*[clinic input] -SHA256Type.copy - - cls:defining_class - -Return a copy of the hash object. -[clinic start generated code]*/ - -static PyObject * -SHA256Type_copy_impl(SHAobject *self, PyTypeObject *cls) -/*[clinic end generated code: output=9273f92c382be12f input=3137146fcb88e212]*/ -{ - SHAobject *newobj; - _sha256_state *state = PyType_GetModuleState(cls); - if (Py_IS_TYPE(self, state->sha256_type)) { - if ( (newobj = newSHA256object(state)) == NULL) { - return NULL; - } - } else { - if ( (newobj = newSHA224object(state))==NULL) { - return NULL; - } - } - - SHAcopy(self, newobj); - return (PyObject *)newobj; -} - -/*[clinic input] -SHA256Type.digest - -Return the digest value as a bytes object. -[clinic start generated code]*/ - -static PyObject * -SHA256Type_digest_impl(SHAobject *self) -/*[clinic end generated code: output=46616a5e909fbc3d input=f1f4cfea5cbde35c]*/ -{ - uint8_t digest[SHA_DIGESTSIZE]; - // HACL performs copies under the hood so that self->state remains valid - // after this call. - Hacl_Streaming_SHA2_finish_256(self->state, digest); - return PyBytes_FromStringAndSize((const char *)digest, self->digestsize); -} - -/*[clinic input] -SHA256Type.hexdigest - -Return the digest value as a string of hexadecimal digits. 
-[clinic start generated code]*/ - -static PyObject * -SHA256Type_hexdigest_impl(SHAobject *self) -/*[clinic end generated code: output=725f8a7041ae97f3 input=0cc4c714693010d1]*/ -{ - uint8_t digest[SHA_DIGESTSIZE]; - Hacl_Streaming_SHA2_finish_256(self->state, digest); - return _Py_strhex((const char *)digest, self->digestsize); -} - -/*[clinic input] -SHA256Type.update - - obj: object - / - -Update this hash object's state with the provided string. -[clinic start generated code]*/ - -static PyObject * -SHA256Type_update(SHAobject *self, PyObject *obj) -/*[clinic end generated code: output=0967fb2860c66af7 input=b2d449d5b30f0f5a]*/ -{ - Py_buffer buf; - - GET_BUFFER_VIEW_OR_ERROUT(obj, &buf); - - update_256(self->state, buf.buf, buf.len); - - PyBuffer_Release(&buf); - Py_RETURN_NONE; -} - -static PyMethodDef SHA_methods[] = { - SHA256TYPE_COPY_METHODDEF - SHA256TYPE_DIGEST_METHODDEF - SHA256TYPE_HEXDIGEST_METHODDEF - SHA256TYPE_UPDATE_METHODDEF - {NULL, NULL} /* sentinel */ -}; - -static PyObject * -SHA256_get_block_size(PyObject *self, void *closure) -{ - return PyLong_FromLong(SHA_BLOCKSIZE); -} - -static PyObject * -SHA256_get_name(SHAobject *self, void *closure) -{ - if (self->digestsize == 28) { - return PyUnicode_FromStringAndSize("sha224", 6); - } - return PyUnicode_FromStringAndSize("sha256", 6); -} - -static PyGetSetDef SHA_getseters[] = { - {"block_size", - (getter)SHA256_get_block_size, NULL, - NULL, - NULL}, - {"name", - (getter)SHA256_get_name, NULL, - NULL, - NULL}, - {NULL} /* Sentinel */ -}; - -static PyMemberDef SHA_members[] = { - {"digest_size", T_INT, offsetof(SHAobject, digestsize), READONLY, NULL}, - {NULL} /* Sentinel */ -}; - -static PyType_Slot sha256_types_slots[] = { - {Py_tp_dealloc, SHA_dealloc}, - {Py_tp_methods, SHA_methods}, - {Py_tp_members, SHA_members}, - {Py_tp_getset, SHA_getseters}, - {Py_tp_traverse, SHA_traverse}, - {0,0} -}; - -static PyType_Spec sha224_type_spec = { - .name = "_sha256.sha224", - .basicsize = sizeof(SHAobject), - .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_DISALLOW_INSTANTIATION | - Py_TPFLAGS_IMMUTABLETYPE | Py_TPFLAGS_HAVE_GC), - .slots = sha256_types_slots -}; - -static PyType_Spec sha256_type_spec = { - .name = "_sha256.sha256", - .basicsize = sizeof(SHAobject), - .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_DISALLOW_INSTANTIATION | - Py_TPFLAGS_IMMUTABLETYPE | Py_TPFLAGS_HAVE_GC), - .slots = sha256_types_slots -}; - -/* The single module-level function: new() */ - -/*[clinic input] -_sha256.sha256 - - string: object(c_default="NULL") = b'' - * - usedforsecurity: bool = True - -Return a new SHA-256 hash object; optionally initialized with a string. 
-[clinic start generated code]*/ - -static PyObject * -_sha256_sha256_impl(PyObject *module, PyObject *string, int usedforsecurity) -/*[clinic end generated code: output=a1de327e8e1185cf input=9be86301aeb14ea5]*/ -{ - Py_buffer buf; - - if (string) { - GET_BUFFER_VIEW_OR_ERROUT(string, &buf); - } - - _sha256_state *state = PyModule_GetState(module); - - SHAobject *new; - if ((new = newSHA256object(state)) == NULL) { - if (string) { - PyBuffer_Release(&buf); - } - return NULL; - } - - new->state = Hacl_Streaming_SHA2_create_in_256(); - new->digestsize = 32; - - if (PyErr_Occurred()) { - Py_DECREF(new); - if (string) { - PyBuffer_Release(&buf); - } - return NULL; - } - if (string) { - update_256(new->state, buf.buf, buf.len); - PyBuffer_Release(&buf); - } - - return (PyObject *)new; -} - -/*[clinic input] -_sha256.sha224 - - string: object(c_default="NULL") = b'' - * - usedforsecurity: bool = True - -Return a new SHA-224 hash object; optionally initialized with a string. -[clinic start generated code]*/ - -static PyObject * -_sha256_sha224_impl(PyObject *module, PyObject *string, int usedforsecurity) -/*[clinic end generated code: output=08be6b36569bc69c input=9fcfb46e460860ac]*/ -{ - Py_buffer buf; - if (string) { - GET_BUFFER_VIEW_OR_ERROUT(string, &buf); - } - - _sha256_state *state = PyModule_GetState(module); - SHAobject *new; - if ((new = newSHA224object(state)) == NULL) { - if (string) { - PyBuffer_Release(&buf); - } - return NULL; - } - - new->state = Hacl_Streaming_SHA2_create_in_224(); - new->digestsize = 28; - - if (PyErr_Occurred()) { - Py_DECREF(new); - if (string) { - PyBuffer_Release(&buf); - } - return NULL; - } - if (string) { - update_256(new->state, buf.buf, buf.len); - PyBuffer_Release(&buf); - } - - return (PyObject *)new; -} - - -/* List of functions exported by this module */ - -static struct PyMethodDef SHA_functions[] = { - _SHA256_SHA256_METHODDEF - _SHA256_SHA224_METHODDEF - {NULL, NULL} /* Sentinel */ -}; - -static int -_sha256_traverse(PyObject *module, visitproc visit, void *arg) -{ - _sha256_state *state = _sha256_get_state(module); - Py_VISIT(state->sha224_type); - Py_VISIT(state->sha256_type); - return 0; -} - -static int -_sha256_clear(PyObject *module) -{ - _sha256_state *state = _sha256_get_state(module); - Py_CLEAR(state->sha224_type); - Py_CLEAR(state->sha256_type); - return 0; -} - -static void -_sha256_free(void *module) -{ - _sha256_clear((PyObject *)module); -} - -static int sha256_exec(PyObject *module) -{ - _sha256_state *state = _sha256_get_state(module); - - state->sha224_type = (PyTypeObject *)PyType_FromModuleAndSpec( - module, &sha224_type_spec, NULL); - - if (state->sha224_type == NULL) { - return -1; - } - - state->sha256_type = (PyTypeObject *)PyType_FromModuleAndSpec( - module, &sha256_type_spec, NULL); - - if (state->sha256_type == NULL) { - return -1; - } - - Py_INCREF((PyObject *)state->sha224_type); - if (PyModule_AddObject(module, "SHA224Type", (PyObject *)state->sha224_type) < 0) { - Py_DECREF((PyObject *)state->sha224_type); - return -1; - } - Py_INCREF((PyObject *)state->sha256_type); - if (PyModule_AddObject(module, "SHA256Type", (PyObject *)state->sha256_type) < 0) { - Py_DECREF((PyObject *)state->sha256_type); - return -1; - } - return 0; -} - -static PyModuleDef_Slot _sha256_slots[] = { - {Py_mod_exec, sha256_exec}, - {0, NULL} -}; - -static struct PyModuleDef _sha256module = { - PyModuleDef_HEAD_INIT, - .m_name = "_sha256", - .m_size = sizeof(_sha256_state), - .m_methods = SHA_functions, - .m_slots = _sha256_slots, - 
.m_traverse = _sha256_traverse, - .m_clear = _sha256_clear, - .m_free = _sha256_free -}; - -/* Initialize this module. */ -PyMODINIT_FUNC -PyInit__sha256(void) -{ - return PyModuleDef_Init(&_sha256module); -} diff --git a/Modules/sha2module.c b/Modules/sha2module.c new file mode 100644 index 000000000000..9999f255cd57 --- /dev/null +++ b/Modules/sha2module.c @@ -0,0 +1,805 @@ +/* SHA2 module */ + +/* This provides an interface to NIST's SHA2 224, 256, 384, & 512 Algorithms */ + +/* See below for information about the original code this module was + based upon. Additional work performed by: + + Andrew Kuchling (amk at amk.ca) + Greg Stein (gstein at lyra.org) + Trevor Perrin (trevp at trevp.net) + Jonathan Protzenko (jonathan at protzenko.fr) + + Copyright (C) 2005-2007 Gregory P. Smith (greg at krypto.org) + Licensed to PSF under a Contributor Agreement. + +*/ + +/* SHA objects */ +#ifndef Py_BUILD_CORE_BUILTIN +# define Py_BUILD_CORE_MODULE 1 +#endif + +#include "Python.h" +#include "pycore_bitutils.h" // _Py_bswap32() +#include "pycore_moduleobject.h" // _PyModule_GetState() +#include "pycore_strhex.h" // _Py_strhex() +#include "structmember.h" // PyMemberDef +#include "hashlib.h" + +/*[clinic input] +module _sha2 +class SHA256Type "SHA256object *" "&PyType_Type" +class SHA512Type "SHA512object *" "&PyType_Type" +[clinic start generated code]*/ +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=b5315a7b611c9afc]*/ + + +/* The SHA block sizes and maximum message digest sizes, in bytes */ + +#define SHA256_BLOCKSIZE 64 +#define SHA256_DIGESTSIZE 32 +#define SHA512_BLOCKSIZE 128 +#define SHA512_DIGESTSIZE 64 + +/* Our SHA2 implementations defer to the HACL* verified library. */ + +#include "_hacl/Hacl_Streaming_SHA2.h" + +// TODO: Get rid of int digestsize in favor of Hacl state info? 
+ +typedef struct { + PyObject_HEAD + int digestsize; + Hacl_Streaming_SHA2_state_sha2_256 *state; +} SHA256object; + +typedef struct { + PyObject_HEAD + int digestsize; + Hacl_Streaming_SHA2_state_sha2_512 *state; +} SHA512object; + +#include "clinic/sha2module.c.h" + +/* We shall use run-time type information in the remainder of this module to + * tell apart SHA2-224 and SHA2-256 */ +typedef struct { + PyTypeObject* sha224_type; + PyTypeObject* sha256_type; + PyTypeObject* sha384_type; + PyTypeObject* sha512_type; +} sha2_state; + +static inline sha2_state* +sha2_get_state(PyObject *module) +{ + void *state = _PyModule_GetState(module); + assert(state != NULL); + return (sha2_state *)state; +} + +static void SHA256copy(SHA256object *src, SHA256object *dest) +{ + dest->digestsize = src->digestsize; + dest->state = Hacl_Streaming_SHA2_copy_256(src->state); +} + +static void SHA512copy(SHA512object *src, SHA512object *dest) +{ + dest->digestsize = src->digestsize; + dest->state = Hacl_Streaming_SHA2_copy_512(src->state); +} + +static SHA256object * +newSHA224object(sha2_state *state) +{ + SHA256object *sha = (SHA256object *)PyObject_GC_New( + SHA256object, state->sha224_type); + if (!sha) { + return NULL; + } + PyObject_GC_Track(sha); + return sha; +} + +static SHA256object * +newSHA256object(sha2_state *state) +{ + SHA256object *sha = (SHA256object *)PyObject_GC_New( + SHA256object, state->sha256_type); + if (!sha) { + return NULL; + } + PyObject_GC_Track(sha); + return sha; +} + +static SHA512object * +newSHA384object(sha2_state *state) +{ + SHA512object *sha = (SHA512object *)PyObject_GC_New( + SHA512object, state->sha384_type); + if (!sha) { + return NULL; + } + PyObject_GC_Track(sha); + return sha; +} + +static SHA512object * +newSHA512object(sha2_state *state) +{ + SHA512object *sha = (SHA512object *)PyObject_GC_New( + SHA512object, state->sha512_type); + if (!sha) { + return NULL; + } + PyObject_GC_Track(sha); + return sha; +} + +/* Internal methods for our hash objects. */ + +static int +SHA2_traverse(PyObject *ptr, visitproc visit, void *arg) +{ + Py_VISIT(Py_TYPE(ptr)); + return 0; +} + +static void +SHA256_dealloc(SHA256object *ptr) +{ + Hacl_Streaming_SHA2_free_256(ptr->state); + PyTypeObject *tp = Py_TYPE(ptr); + PyObject_GC_UnTrack(ptr); + PyObject_GC_Del(ptr); + Py_DECREF(tp); +} + +static void +SHA512_dealloc(SHA512object *ptr) +{ + Hacl_Streaming_SHA2_free_512(ptr->state); + PyTypeObject *tp = Py_TYPE(ptr); + PyObject_GC_UnTrack(ptr); + PyObject_GC_Del(ptr); + Py_DECREF(tp); +} + +/* HACL* takes a uint32_t for the length of its parameter, but Py_ssize_t can be + * 64 bits so we loop in <4gig chunks when needed. */ + +static void update_256(Hacl_Streaming_SHA2_state_sha2_256 *state, uint8_t *buf, Py_ssize_t len) { + /* Note: we explicitly ignore the error code on the basis that it would take > + * 1 billion years to overflow the maximum admissible length for SHA2-256 + * (namely, 2^61-1 bytes). */ +#if PY_SSIZE_T_MAX > UINT32_MAX + while (len > UINT32_MAX) { + Hacl_Streaming_SHA2_update_256(state, buf, UINT32_MAX); + len -= UINT32_MAX; + buf += UINT32_MAX; + } +#endif + /* Cast to uint32_t is safe: len <= UINT32_MAX at this point. 
*/ + Hacl_Streaming_SHA2_update_256(state, buf, (uint32_t) len); +} + +static void update_512(Hacl_Streaming_SHA2_state_sha2_512 *state, uint8_t *buf, Py_ssize_t len) { + /* Note: we explicitly ignore the error code on the basis that it would take > + * 1 billion years to overflow the maximum admissible length for this API + * (namely, 2^64-1 bytes). */ +#if PY_SSIZE_T_MAX > UINT32_MAX + while (len > UINT32_MAX) { + Hacl_Streaming_SHA2_update_512(state, buf, UINT32_MAX); + len -= UINT32_MAX; + buf += UINT32_MAX; + } +#endif + /* Cast to uint32_t is safe: len <= UINT32_MAX at this point. */ + Hacl_Streaming_SHA2_update_512(state, buf, (uint32_t) len); +} + + +/* External methods for our hash objects */ + +/*[clinic input] +SHA256Type.copy + + cls:defining_class + +Return a copy of the hash object. +[clinic start generated code]*/ + +static PyObject * +SHA256Type_copy_impl(SHA256object *self, PyTypeObject *cls) +/*[clinic end generated code: output=fabd515577805cd3 input=3137146fcb88e212]*/ +{ + SHA256object *newobj; + sha2_state *state = PyType_GetModuleState(cls); + if (Py_IS_TYPE(self, state->sha256_type)) { + if ((newobj = newSHA256object(state)) == NULL) { + return NULL; + } + } else { + if ((newobj = newSHA224object(state)) == NULL) { + return NULL; + } + } + + SHA256copy(self, newobj); + return (PyObject *)newobj; +} + +/*[clinic input] +SHA512Type.copy + + cls: defining_class + +Return a copy of the hash object. +[clinic start generated code]*/ + +static PyObject * +SHA512Type_copy_impl(SHA512object *self, PyTypeObject *cls) +/*[clinic end generated code: output=66d2a8ef20de8302 input=f673a18f66527c90]*/ +{ + SHA512object *newobj; + sha2_state *state = PyType_GetModuleState(cls); + + if (Py_IS_TYPE((PyObject*)self, state->sha512_type)) { + if ((newobj = newSHA512object(state)) == NULL) { + return NULL; + } + } + else { + if ((newobj = newSHA384object(state)) == NULL) { + return NULL; + } + } + + SHA512copy(self, newobj); + return (PyObject *)newobj; +} + +/*[clinic input] +SHA256Type.digest + +Return the digest value as a bytes object. +[clinic start generated code]*/ + +static PyObject * +SHA256Type_digest_impl(SHA256object *self) +/*[clinic end generated code: output=3a2e3997a98ee792 input=f1f4cfea5cbde35c]*/ +{ + uint8_t digest[SHA256_DIGESTSIZE]; + assert(self->digestsize <= SHA256_DIGESTSIZE); + // HACL* performs copies under the hood so that self->state remains valid + // after this call. + Hacl_Streaming_SHA2_finish_256(self->state, digest); + return PyBytes_FromStringAndSize((const char *)digest, self->digestsize); +} + +/*[clinic input] +SHA512Type.digest + +Return the digest value as a bytes object. +[clinic start generated code]*/ + +static PyObject * +SHA512Type_digest_impl(SHA512object *self) +/*[clinic end generated code: output=dd8c6320070458e0 input=f6470dd359071f4b]*/ +{ + uint8_t digest[SHA512_DIGESTSIZE]; + assert(self->digestsize <= SHA512_DIGESTSIZE); + // HACL* performs copies under the hood so that self->state remains valid + // after this call. + Hacl_Streaming_SHA2_finish_512(self->state, digest); + return PyBytes_FromStringAndSize((const char *)digest, self->digestsize); +} + +/*[clinic input] +SHA256Type.hexdigest + +Return the digest value as a string of hexadecimal digits. 
+[clinic start generated code]*/ + +static PyObject * +SHA256Type_hexdigest_impl(SHA256object *self) +/*[clinic end generated code: output=96cb68996a780ab3 input=0cc4c714693010d1]*/ +{ + uint8_t digest[SHA256_DIGESTSIZE]; + assert(self->digestsize <= SHA256_DIGESTSIZE); + Hacl_Streaming_SHA2_finish_256(self->state, digest); + return _Py_strhex((const char *)digest, self->digestsize); +} + +/*[clinic input] +SHA512Type.hexdigest + +Return the digest value as a string of hexadecimal digits. +[clinic start generated code]*/ + +static PyObject * +SHA512Type_hexdigest_impl(SHA512object *self) +/*[clinic end generated code: output=cbd6f844aba1fe7c input=498b877b25cbe0a2]*/ +{ + uint8_t digest[SHA512_DIGESTSIZE]; + assert(self->digestsize <= SHA512_DIGESTSIZE); + Hacl_Streaming_SHA2_finish_512(self->state, digest); + return _Py_strhex((const char *)digest, self->digestsize); +} + +/*[clinic input] +SHA256Type.update + + obj: object + / + +Update this hash object's state with the provided string. +[clinic start generated code]*/ + +static PyObject * +SHA256Type_update(SHA256object *self, PyObject *obj) +/*[clinic end generated code: output=1b240f965ddbd8c6 input=b2d449d5b30f0f5a]*/ +{ + Py_buffer buf; + + GET_BUFFER_VIEW_OR_ERROUT(obj, &buf); + + update_256(self->state, buf.buf, buf.len); + + PyBuffer_Release(&buf); + Py_RETURN_NONE; +} + +/*[clinic input] +SHA512Type.update + + obj: object + / + +Update this hash object's state with the provided string. +[clinic start generated code]*/ + +static PyObject * +SHA512Type_update(SHA512object *self, PyObject *obj) +/*[clinic end generated code: output=745f51057a985884 input=ded2b46656566283]*/ +{ + Py_buffer buf; + + GET_BUFFER_VIEW_OR_ERROUT(obj, &buf); + + update_512(self->state, buf.buf, buf.len); + + PyBuffer_Release(&buf); + Py_RETURN_NONE; +} + +static PyMethodDef SHA256_methods[] = { + SHA256TYPE_COPY_METHODDEF + SHA256TYPE_DIGEST_METHODDEF + SHA256TYPE_HEXDIGEST_METHODDEF + SHA256TYPE_UPDATE_METHODDEF + {NULL, NULL} /* sentinel */ +}; + +static PyMethodDef SHA512_methods[] = { + SHA512TYPE_COPY_METHODDEF + SHA512TYPE_DIGEST_METHODDEF + SHA512TYPE_HEXDIGEST_METHODDEF + SHA512TYPE_UPDATE_METHODDEF + {NULL, NULL} /* sentinel */ +}; + +static PyObject * +SHA256_get_block_size(PyObject *self, void *closure) +{ + return PyLong_FromLong(SHA256_BLOCKSIZE); +} + +static PyObject * +SHA512_get_block_size(PyObject *self, void *closure) +{ + return PyLong_FromLong(SHA512_BLOCKSIZE); +} + +static PyObject * +SHA256_get_digest_size(SHA256object *self, void *closure) +{ + return PyLong_FromLong(self->digestsize); +} + +static PyObject * +SHA512_get_digest_size(SHA512object *self, void *closure) +{ + return PyLong_FromLong(self->digestsize); +} + +static PyObject * +SHA256_get_name(SHA256object *self, void *closure) +{ + if (self->digestsize == 28) { + return PyUnicode_FromStringAndSize("sha224", 6); + } + return PyUnicode_FromStringAndSize("sha256", 6); +} + +static PyObject * +SHA512_get_name(SHA512object *self, void *closure) +{ + if (self->digestsize == 64) { + return PyUnicode_FromStringAndSize("sha512", 6); + } + return PyUnicode_FromStringAndSize("sha384", 6); +} + +static PyGetSetDef SHA256_getseters[] = { + {"block_size", + (getter)SHA256_get_block_size, NULL, + NULL, + NULL}, + {"name", + (getter)SHA256_get_name, NULL, + NULL, + NULL}, + {"digest_size", + (getter)SHA256_get_digest_size, NULL, + NULL, + NULL}, + {NULL} /* Sentinel */ +}; + +static PyGetSetDef SHA512_getseters[] = { + {"block_size", + (getter)SHA512_get_block_size, NULL, + NULL, + 
NULL}, + {"name", + (getter)SHA512_get_name, NULL, + NULL, + NULL}, + {"digest_size", + (getter)SHA512_get_digest_size, NULL, + NULL, + NULL}, + {NULL} /* Sentinel */ +}; + +static PyType_Slot sha256_types_slots[] = { + {Py_tp_dealloc, SHA256_dealloc}, + {Py_tp_methods, SHA256_methods}, + {Py_tp_getset, SHA256_getseters}, + {Py_tp_traverse, SHA2_traverse}, + {0,0} +}; + +static PyType_Slot sha512_type_slots[] = { + {Py_tp_dealloc, SHA512_dealloc}, + {Py_tp_methods, SHA512_methods}, + {Py_tp_getset, SHA512_getseters}, + {Py_tp_traverse, SHA2_traverse}, + {0,0} +}; + +// Using PyType_GetModuleState() on these types is safe since they +// cannot be subclassed: they don't have the Py_TPFLAGS_BASETYPE flag. +static PyType_Spec sha224_type_spec = { + .name = "_sha2.SHA224Type", + .basicsize = sizeof(SHA256object), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_DISALLOW_INSTANTIATION | + Py_TPFLAGS_IMMUTABLETYPE | Py_TPFLAGS_HAVE_GC), + .slots = sha256_types_slots +}; + +static PyType_Spec sha256_type_spec = { + .name = "_sha2.SHA256Type", + .basicsize = sizeof(SHA256object), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_DISALLOW_INSTANTIATION | + Py_TPFLAGS_IMMUTABLETYPE | Py_TPFLAGS_HAVE_GC), + .slots = sha256_types_slots +}; + +static PyType_Spec sha384_type_spec = { + .name = "_sha2.SHA384Type", + .basicsize = sizeof(SHA512object), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_DISALLOW_INSTANTIATION | + Py_TPFLAGS_IMMUTABLETYPE | Py_TPFLAGS_HAVE_GC), + .slots = sha512_type_slots +}; + +static PyType_Spec sha512_type_spec = { + .name = "_sha2.SHA512Type", + .basicsize = sizeof(SHA512object), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_DISALLOW_INSTANTIATION | + Py_TPFLAGS_IMMUTABLETYPE | Py_TPFLAGS_HAVE_GC), + .slots = sha512_type_slots +}; + +/* The module-level constructors. */ + +/*[clinic input] +_sha2.sha256 + + string: object(c_default="NULL") = b'' + * + usedforsecurity: bool = True + +Return a new SHA-256 hash object; optionally initialized with a string. +[clinic start generated code]*/ + +static PyObject * +_sha2_sha256_impl(PyObject *module, PyObject *string, int usedforsecurity) +/*[clinic end generated code: output=243c9dd289931f87 input=6249da1de607280a]*/ +{ + Py_buffer buf; + + if (string) { + GET_BUFFER_VIEW_OR_ERROUT(string, &buf); + } + + sha2_state *state = sha2_get_state(module); + + SHA256object *new; + if ((new = newSHA256object(state)) == NULL) { + if (string) { + PyBuffer_Release(&buf); + } + return NULL; + } + + new->state = Hacl_Streaming_SHA2_create_in_256(); + new->digestsize = 32; + + if (PyErr_Occurred()) { + Py_DECREF(new); + if (string) { + PyBuffer_Release(&buf); + } + return NULL; + } + if (string) { + update_256(new->state, buf.buf, buf.len); + PyBuffer_Release(&buf); + } + + return (PyObject *)new; +} + +/*[clinic input] +_sha2.sha224 + + string: object(c_default="NULL") = b'' + * + usedforsecurity: bool = True + +Return a new SHA-224 hash object; optionally initialized with a string. 
+[clinic start generated code]*/ + +static PyObject * +_sha2_sha224_impl(PyObject *module, PyObject *string, int usedforsecurity) +/*[clinic end generated code: output=68191f232e4a3843 input=c42bcba47fd7d2b7]*/ +{ + Py_buffer buf; + if (string) { + GET_BUFFER_VIEW_OR_ERROUT(string, &buf); + } + + sha2_state *state = sha2_get_state(module); + SHA256object *new; + if ((new = newSHA224object(state)) == NULL) { + if (string) { + PyBuffer_Release(&buf); + } + return NULL; + } + + new->state = Hacl_Streaming_SHA2_create_in_224(); + new->digestsize = 28; + + if (PyErr_Occurred()) { + Py_DECREF(new); + if (string) { + PyBuffer_Release(&buf); + } + return NULL; + } + if (string) { + update_256(new->state, buf.buf, buf.len); + PyBuffer_Release(&buf); + } + + return (PyObject *)new; +} + +/*[clinic input] +_sha2.sha512 + + string: object(c_default="NULL") = b'' + * + usedforsecurity: bool = True + +Return a new SHA-512 hash object; optionally initialized with a string. +[clinic start generated code]*/ + +static PyObject * +_sha2_sha512_impl(PyObject *module, PyObject *string, int usedforsecurity) +/*[clinic end generated code: output=d55c8996eca214d7 input=0576ae2a6ebfad25]*/ +{ + SHA512object *new; + Py_buffer buf; + + sha2_state *state = sha2_get_state(module); + + if (string) + GET_BUFFER_VIEW_OR_ERROUT(string, &buf); + + if ((new = newSHA512object(state)) == NULL) { + if (string) + PyBuffer_Release(&buf); + return NULL; + } + + new->state = Hacl_Streaming_SHA2_create_in_512(); + new->digestsize = 64; + + if (PyErr_Occurred()) { + Py_DECREF(new); + if (string) + PyBuffer_Release(&buf); + return NULL; + } + if (string) { + update_512(new->state, buf.buf, buf.len); + PyBuffer_Release(&buf); + } + + return (PyObject *)new; +} + +/*[clinic input] +_sha2.sha384 + + string: object(c_default="NULL") = b'' + * + usedforsecurity: bool = True + +Return a new SHA-384 hash object; optionally initialized with a string. 
+[clinic start generated code]*/ + +static PyObject * +_sha2_sha384_impl(PyObject *module, PyObject *string, int usedforsecurity) +/*[clinic end generated code: output=b29a0d81d51d1368 input=4e9199d8de0d2f9b]*/ +{ + SHA512object *new; + Py_buffer buf; + + sha2_state *state = sha2_get_state(module); + + if (string) + GET_BUFFER_VIEW_OR_ERROUT(string, &buf); + + if ((new = newSHA384object(state)) == NULL) { + if (string) + PyBuffer_Release(&buf); + return NULL; + } + + new->state = Hacl_Streaming_SHA2_create_in_384(); + new->digestsize = 48; + + if (PyErr_Occurred()) { + Py_DECREF(new); + if (string) + PyBuffer_Release(&buf); + return NULL; + } + if (string) { + update_512(new->state, buf.buf, buf.len); + PyBuffer_Release(&buf); + } + + return (PyObject *)new; +} + +/* List of functions exported by this module */ + +static struct PyMethodDef SHA2_functions[] = { + _SHA2_SHA256_METHODDEF + _SHA2_SHA224_METHODDEF + _SHA2_SHA512_METHODDEF + _SHA2_SHA384_METHODDEF + {NULL, NULL} /* Sentinel */ +}; + +static int +_sha2_traverse(PyObject *module, visitproc visit, void *arg) +{ + sha2_state *state = sha2_get_state(module); + Py_VISIT(state->sha224_type); + Py_VISIT(state->sha256_type); + Py_VISIT(state->sha384_type); + Py_VISIT(state->sha512_type); + return 0; +} + +static int +_sha2_clear(PyObject *module) +{ + sha2_state *state = sha2_get_state(module); + Py_CLEAR(state->sha224_type); + Py_CLEAR(state->sha256_type); + Py_CLEAR(state->sha384_type); + Py_CLEAR(state->sha512_type); + return 0; +} + +static void +_sha2_free(void *module) +{ + _sha2_clear((PyObject *)module); +} + +/* Initialize this module. */ +static int sha2_exec(PyObject *module) +{ + sha2_state *state = sha2_get_state(module); + + state->sha224_type = (PyTypeObject *)PyType_FromModuleAndSpec( + module, &sha224_type_spec, NULL); + if (state->sha224_type == NULL) { + return -1; + } + state->sha256_type = (PyTypeObject *)PyType_FromModuleAndSpec( + module, &sha256_type_spec, NULL); + if (state->sha256_type == NULL) { + return -1; + } + state->sha384_type = (PyTypeObject *)PyType_FromModuleAndSpec( + module, &sha384_type_spec, NULL); + if (state->sha384_type == NULL) { + return -1; + } + state->sha512_type = (PyTypeObject *)PyType_FromModuleAndSpec( + module, &sha512_type_spec, NULL); + if (state->sha512_type == NULL) { + return -1; + } + + if (PyModule_AddType(module, state->sha224_type) < 0) { + return -1; + } + if (PyModule_AddType(module, state->sha256_type) < 0) { + return -1; + } + if (PyModule_AddType(module, state->sha384_type) < 0) { + return -1; + } + if (PyModule_AddType(module, state->sha512_type) < 0) { + return -1; + } + + return 0; +} + +static PyModuleDef_Slot _sha2_slots[] = { + {Py_mod_exec, sha2_exec}, + {0, NULL} +}; + +static struct PyModuleDef _sha2module = { + PyModuleDef_HEAD_INIT, + .m_name = "_sha2", + .m_size = sizeof(sha2_state), + .m_methods = SHA2_functions, + .m_slots = _sha2_slots, + .m_traverse = _sha2_traverse, + .m_clear = _sha2_clear, + .m_free = _sha2_free +}; + +PyMODINIT_FUNC +PyInit__sha2(void) +{ + return PyModuleDef_Init(&_sha2module); +} diff --git a/Modules/sha512module.c b/Modules/sha512module.c deleted file mode 100644 index d7dfed4e5db0..000000000000 --- a/Modules/sha512module.c +++ /dev/null @@ -1,456 +0,0 @@ -/* SHA512 module */ - -/* This module provides an interface to NIST's SHA-512 and SHA-384 Algorithms */ - -/* See below for information about the original code this module was - based upon. 
Additional work performed by: - - Andrew Kuchling (amk at amk.ca) - Greg Stein (gstein at lyra.org) - Trevor Perrin (trevp at trevp.net) - Jonathan Protzenko (jonathan at protzenko.fr) - - Copyright (C) 2005-2007 Gregory P. Smith (greg at krypto.org) - Licensed to PSF under a Contributor Agreement. - -*/ - -/* SHA objects */ -#ifndef Py_BUILD_CORE_BUILTIN -# define Py_BUILD_CORE_MODULE 1 -#endif - -#include "Python.h" -#include "pycore_bitutils.h" // _Py_bswap64() -#include "pycore_strhex.h" // _Py_strhex() -#include "structmember.h" // PyMemberDef -#include "hashlib.h" - -/*[clinic input] -module _sha512 -class SHA512Type "SHAobject *" "&PyType_Type" -[clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=81a3ccde92bcfe8d]*/ - - -/* The SHA block size and message digest sizes, in bytes */ - -#define SHA_BLOCKSIZE 128 -#define SHA_DIGESTSIZE 64 - -/* The SHA2-384 and SHA2-512 implementations defer to the HACL* verified - * library. */ - -#include "_hacl/Hacl_Streaming_SHA2.h" - -typedef struct { - PyObject_HEAD - int digestsize; - Hacl_Streaming_SHA2_state_sha2_512 *state; -} SHAobject; - -#include "clinic/sha512module.c.h" - - -static void SHAcopy(SHAobject *src, SHAobject *dest) -{ - dest->digestsize = src->digestsize; - dest->state = Hacl_Streaming_SHA2_copy_512(src->state); -} - -typedef struct { - PyTypeObject* sha384_type; - PyTypeObject* sha512_type; -} SHA512State; - -static inline SHA512State* -sha512_get_state(PyObject *module) -{ - void *state = PyModule_GetState(module); - assert(state != NULL); - return (SHA512State *)state; -} - -static SHAobject * -newSHA384object(SHA512State *st) -{ - SHAobject *sha = (SHAobject *)PyObject_GC_New(SHAobject, st->sha384_type); - PyObject_GC_Track(sha); - return sha; -} - -static SHAobject * -newSHA512object(SHA512State *st) -{ - SHAobject *sha = (SHAobject *)PyObject_GC_New(SHAobject, st->sha512_type); - PyObject_GC_Track(sha); - return sha; -} - -/* Internal methods for a hash object */ -static int -SHA_traverse(PyObject *ptr, visitproc visit, void *arg) -{ - Py_VISIT(Py_TYPE(ptr)); - return 0; -} - -static void -SHA512_dealloc(SHAobject *ptr) -{ - Hacl_Streaming_SHA2_free_512(ptr->state); - PyTypeObject *tp = Py_TYPE(ptr); - PyObject_GC_UnTrack(ptr); - PyObject_GC_Del(ptr); - Py_DECREF(tp); -} - -/* HACL* takes a uint32_t for the length of its parameter, but Py_ssize_t can be - * 64 bits. */ -static void update_512(Hacl_Streaming_SHA2_state_sha2_512 *state, uint8_t *buf, Py_ssize_t len) { - /* Note: we explicitly ignore the error code on the basis that it would take > - * 1 billion years to overflow the maximum admissible length for this API - * (namely, 2^64-1 bytes). */ - while (len > UINT32_MAX) { - Hacl_Streaming_SHA2_update_512(state, buf, UINT32_MAX); - len -= UINT32_MAX; - buf += UINT32_MAX; - } - /* Cast to uint32_t is safe: upon exiting the loop, len <= UINT32_MAX, and - * therefore fits in a uint32_t */ - Hacl_Streaming_SHA2_update_512(state, buf, (uint32_t) len); -} - - -/* External methods for a hash object */ - -/*[clinic input] -SHA512Type.copy - - cls: defining_class - -Return a copy of the hash object. 
-[clinic start generated code]*/ - -static PyObject * -SHA512Type_copy_impl(SHAobject *self, PyTypeObject *cls) -/*[clinic end generated code: output=85ea5b47837a08e6 input=f673a18f66527c90]*/ -{ - SHAobject *newobj; - SHA512State *st = PyType_GetModuleState(cls); - - if (Py_IS_TYPE((PyObject*)self, st->sha512_type)) { - if ( (newobj = newSHA512object(st))==NULL) { - return NULL; - } - } - else { - if ( (newobj = newSHA384object(st))==NULL) { - return NULL; - } - } - - SHAcopy(self, newobj); - return (PyObject *)newobj; -} - -/*[clinic input] -SHA512Type.digest - -Return the digest value as a bytes object. -[clinic start generated code]*/ - -static PyObject * -SHA512Type_digest_impl(SHAobject *self) -/*[clinic end generated code: output=1080bbeeef7dde1b input=f6470dd359071f4b]*/ -{ - uint8_t digest[SHA_DIGESTSIZE]; - // HACL performs copies under the hood so that self->state remains valid - // after this call. - Hacl_Streaming_SHA2_finish_512(self->state, digest); - return PyBytes_FromStringAndSize((const char *)digest, self->digestsize); -} - -/*[clinic input] -SHA512Type.hexdigest - -Return the digest value as a string of hexadecimal digits. -[clinic start generated code]*/ - -static PyObject * -SHA512Type_hexdigest_impl(SHAobject *self) -/*[clinic end generated code: output=7373305b8601e18b input=498b877b25cbe0a2]*/ -{ - uint8_t digest[SHA_DIGESTSIZE]; - Hacl_Streaming_SHA2_finish_512(self->state, digest); - return _Py_strhex((const char *)digest, self->digestsize); -} - -/*[clinic input] -SHA512Type.update - - obj: object - / - -Update this hash object's state with the provided string. -[clinic start generated code]*/ - -static PyObject * -SHA512Type_update(SHAobject *self, PyObject *obj) -/*[clinic end generated code: output=1cf333e73995a79e input=ded2b46656566283]*/ -{ - Py_buffer buf; - - GET_BUFFER_VIEW_OR_ERROUT(obj, &buf); - - update_512(self->state, buf.buf, buf.len); - - PyBuffer_Release(&buf); - Py_RETURN_NONE; -} - -static PyMethodDef SHA_methods[] = { - SHA512TYPE_COPY_METHODDEF - SHA512TYPE_DIGEST_METHODDEF - SHA512TYPE_HEXDIGEST_METHODDEF - SHA512TYPE_UPDATE_METHODDEF - {NULL, NULL} /* sentinel */ -}; - -static PyObject * -SHA512_get_block_size(PyObject *self, void *closure) -{ - return PyLong_FromLong(SHA_BLOCKSIZE); -} - -static PyObject * -SHA512_get_name(PyObject *self, void *closure) -{ - if (((SHAobject *)self)->digestsize == 64) - return PyUnicode_FromStringAndSize("sha512", 6); - else - return PyUnicode_FromStringAndSize("sha384", 6); -} - -static PyGetSetDef SHA_getseters[] = { - {"block_size", - (getter)SHA512_get_block_size, NULL, - NULL, - NULL}, - {"name", - (getter)SHA512_get_name, NULL, - NULL, - NULL}, - {NULL} /* Sentinel */ -}; - -static PyMemberDef SHA_members[] = { - {"digest_size", T_INT, offsetof(SHAobject, digestsize), READONLY, NULL}, - {NULL} /* Sentinel */ -}; - -static PyType_Slot sha512_sha384_type_slots[] = { - {Py_tp_dealloc, SHA512_dealloc}, - {Py_tp_methods, SHA_methods}, - {Py_tp_members, SHA_members}, - {Py_tp_getset, SHA_getseters}, - {Py_tp_traverse, SHA_traverse}, - {0,0} -}; - -static PyType_Spec sha512_sha384_type_spec = { - .name = "_sha512.sha384", - .basicsize = sizeof(SHAobject), - .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_DISALLOW_INSTANTIATION | - Py_TPFLAGS_IMMUTABLETYPE | Py_TPFLAGS_HAVE_GC), - .slots = sha512_sha384_type_slots -}; - -// Using PyType_GetModuleState() on this type is safe since -// it cannot be subclassed: it does not have the Py_TPFLAGS_BASETYPE flag. 
-static PyType_Spec sha512_sha512_type_spec = { - .name = "_sha512.sha512", - .basicsize = sizeof(SHAobject), - .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_DISALLOW_INSTANTIATION | - Py_TPFLAGS_IMMUTABLETYPE | Py_TPFLAGS_HAVE_GC), - .slots = sha512_sha384_type_slots -}; - -/* The single module-level function: new() */ - -/*[clinic input] -_sha512.sha512 - - string: object(c_default="NULL") = b'' - * - usedforsecurity: bool = True - -Return a new SHA-512 hash object; optionally initialized with a string. -[clinic start generated code]*/ - -static PyObject * -_sha512_sha512_impl(PyObject *module, PyObject *string, int usedforsecurity) -/*[clinic end generated code: output=a8d9e5f9e6a0831c input=23b4daebc2ebb9c9]*/ -{ - SHAobject *new; - Py_buffer buf; - - SHA512State *st = sha512_get_state(module); - - if (string) - GET_BUFFER_VIEW_OR_ERROUT(string, &buf); - - if ((new = newSHA512object(st)) == NULL) { - if (string) - PyBuffer_Release(&buf); - return NULL; - } - - new->state = Hacl_Streaming_SHA2_create_in_512(); - new->digestsize = 64; - - if (PyErr_Occurred()) { - Py_DECREF(new); - if (string) - PyBuffer_Release(&buf); - return NULL; - } - if (string) { - update_512(new->state, buf.buf, buf.len); - PyBuffer_Release(&buf); - } - - return (PyObject *)new; -} - -/*[clinic input] -_sha512.sha384 - - string: object(c_default="NULL") = b'' - * - usedforsecurity: bool = True - -Return a new SHA-384 hash object; optionally initialized with a string. -[clinic start generated code]*/ - -static PyObject * -_sha512_sha384_impl(PyObject *module, PyObject *string, int usedforsecurity) -/*[clinic end generated code: output=da7d594a08027ac3 input=59ef72f039a6b431]*/ -{ - SHAobject *new; - Py_buffer buf; - - SHA512State *st = sha512_get_state(module); - - if (string) - GET_BUFFER_VIEW_OR_ERROUT(string, &buf); - - if ((new = newSHA384object(st)) == NULL) { - if (string) - PyBuffer_Release(&buf); - return NULL; - } - - new->state = Hacl_Streaming_SHA2_create_in_384(); - new->digestsize = 48; - - if (PyErr_Occurred()) { - Py_DECREF(new); - if (string) - PyBuffer_Release(&buf); - return NULL; - } - if (string) { - update_512(new->state, buf.buf, buf.len); - PyBuffer_Release(&buf); - } - - return (PyObject *)new; -} - - -/* List of functions exported by this module */ - -static struct PyMethodDef SHA_functions[] = { - _SHA512_SHA512_METHODDEF - _SHA512_SHA384_METHODDEF - {NULL, NULL} /* Sentinel */ -}; - -static int -_sha512_traverse(PyObject *module, visitproc visit, void *arg) -{ - SHA512State *state = sha512_get_state(module); - Py_VISIT(state->sha384_type); - Py_VISIT(state->sha512_type); - return 0; -} - -static int -_sha512_clear(PyObject *module) -{ - SHA512State *state = sha512_get_state(module); - Py_CLEAR(state->sha384_type); - Py_CLEAR(state->sha512_type); - return 0; -} - -static void -_sha512_free(void *module) -{ - _sha512_clear((PyObject *)module); -} - - -/* Initialize this module. 
*/ -static int -_sha512_exec(PyObject *m) -{ - SHA512State* st = sha512_get_state(m); - - st->sha384_type = (PyTypeObject *)PyType_FromModuleAndSpec( - m, &sha512_sha384_type_spec, NULL); - - st->sha512_type = (PyTypeObject *)PyType_FromModuleAndSpec( - m, &sha512_sha512_type_spec, NULL); - - if (st->sha384_type == NULL || st->sha512_type == NULL) { - return -1; - } - - Py_INCREF(st->sha384_type); - if (PyModule_AddObject(m, "SHA384Type", (PyObject *)st->sha384_type) < 0) { - Py_DECREF(st->sha384_type); - return -1; - } - - Py_INCREF(st->sha512_type); - if (PyModule_AddObject(m, "SHA384Type", (PyObject *)st->sha512_type) < 0) { - Py_DECREF(st->sha512_type); - return -1; - } - - return 0; -} - -static PyModuleDef_Slot _sha512_slots[] = { - {Py_mod_exec, _sha512_exec}, - {0, NULL} -}; - -static struct PyModuleDef _sha512module = { - PyModuleDef_HEAD_INIT, - .m_name = "_sha512", - .m_size = sizeof(SHA512State), - .m_methods = SHA_functions, - .m_slots = _sha512_slots, - .m_traverse = _sha512_traverse, - .m_clear = _sha512_clear, - .m_free = _sha512_free -}; - -PyMODINIT_FUNC -PyInit__sha512(void) -{ - return PyModuleDef_Init(&_sha512module); -} diff --git a/PC/config.c b/PC/config.c index cdb5db23c4ae..b1481d79e650 100644 --- a/PC/config.c +++ b/PC/config.c @@ -20,8 +20,7 @@ extern PyObject* PyInit_nt(void); extern PyObject* PyInit__operator(void); extern PyObject* PyInit__signal(void); extern PyObject* PyInit__sha1(void); -extern PyObject* PyInit__sha256(void); -extern PyObject* PyInit__sha512(void); +extern PyObject* PyInit__sha2(void); extern PyObject* PyInit__sha3(void); extern PyObject* PyInit__statistics(void); extern PyObject* PyInit__typing(void); @@ -98,8 +97,7 @@ struct _inittab _PyImport_Inittab[] = { {"_signal", PyInit__signal}, {"_md5", PyInit__md5}, {"_sha1", PyInit__sha1}, - {"_sha256", PyInit__sha256}, - {"_sha512", PyInit__sha512}, + {"_sha2", PyInit__sha2}, {"_sha3", PyInit__sha3}, {"_blake2", PyInit__blake2}, {"time", PyInit_time}, diff --git a/PCbuild/pythoncore.vcxproj b/PCbuild/pythoncore.vcxproj index e8e9ff01e306..222963bc42d1 100644 --- a/PCbuild/pythoncore.vcxproj +++ b/PCbuild/pythoncore.vcxproj @@ -408,8 +408,7 @@ - - + diff --git a/PCbuild/pythoncore.vcxproj.filters b/PCbuild/pythoncore.vcxproj.filters index 4820db6f2c32..efb96222043a 100644 --- a/PCbuild/pythoncore.vcxproj.filters +++ b/PCbuild/pythoncore.vcxproj.filters @@ -869,10 +869,7 @@ Modules - - Modules - - + Modules diff --git a/Python/stdlib_module_names.h b/Python/stdlib_module_names.h index 4e7dfb14d19d..e9f0061a59d3 100644 --- a/Python/stdlib_module_names.h +++ b/Python/stdlib_module_names.h @@ -63,9 +63,8 @@ static const char* _Py_stdlib_module_names[] = { "_random", "_scproxy", "_sha1", -"_sha256", +"_sha2", "_sha3", -"_sha512", "_signal", "_sitebuiltins", "_socket", diff --git a/configure b/configure index c00a1e1d2ec9..7c4254f3cb17 100755 --- a/configure +++ b/configure @@ -686,10 +686,8 @@ MODULE__BLAKE2_FALSE MODULE__BLAKE2_TRUE MODULE__SHA3_FALSE MODULE__SHA3_TRUE -MODULE__SHA512_FALSE -MODULE__SHA512_TRUE -MODULE__SHA256_FALSE -MODULE__SHA256_TRUE +MODULE__SHA2_FALSE +MODULE__SHA2_TRUE MODULE__SHA1_FALSE MODULE__SHA1_TRUE MODULE__MD5_FALSE @@ -1891,9 +1889,9 @@ Optional Packages: leave OpenSSL's defaults untouched, STRING: use a custom string, python and STRING also set TLS 1.2 as minimum TLS version - --with-builtin-hashlib-hashes=md5,sha1,sha256,sha512,sha3,blake2 - builtin hash modules, md5, sha1, sha256, sha512, - sha3 (with shake), blake2 + 
--with-builtin-hashlib-hashes=md5,sha1,sha2,sha3,blake2 + builtin hash modules, md5, sha1, sha2, sha3 (with + shake), blake2 Some influential environment variables: PKG_CONFIG path to pkg-config utility @@ -25346,7 +25344,7 @@ fi # builtin hash modules -default_hashlib_hashes="md5,sha1,sha256,sha512,sha3,blake2" +default_hashlib_hashes="md5,sha1,sha2,sha3,blake2" $as_echo "#define PY_BUILTIN_HASHLIB_HASHES /**/" >>confdefs.h @@ -25386,10 +25384,8 @@ for builtin_hash in $with_builtin_hashlib_hashes; do with_builtin_md5=yes ;; #( sha1) : with_builtin_sha1=yes ;; #( - sha256) : - with_builtin_sha256=yes ;; #( - sha512) : - with_builtin_sha512=yes ;; #( + sha2) : + with_builtin_sha2=yes ;; #( sha3) : with_builtin_sha3=yes ;; #( blake2) : @@ -26898,72 +26894,38 @@ fi $as_echo "$py_cv_module__sha1" >&6; } - { $as_echo "$as_me:${as_lineno-$LINENO}: checking for stdlib extension module _sha256" >&5 -$as_echo_n "checking for stdlib extension module _sha256... " >&6; } - if test "$py_cv_module__sha256" != "n/a"; then : + { $as_echo "$as_me:${as_lineno-$LINENO}: checking for stdlib extension module _sha2" >&5 +$as_echo_n "checking for stdlib extension module _sha2... " >&6; } + if test "$py_cv_module__sha2" != "n/a"; then : - if test "$with_builtin_sha256" = yes; then : + if test "$with_builtin_sha2" = yes; then : if true; then : - py_cv_module__sha256=yes + py_cv_module__sha2=yes else - py_cv_module__sha256=missing + py_cv_module__sha2=missing fi else - py_cv_module__sha256=disabled + py_cv_module__sha2=disabled fi fi - as_fn_append MODULE_BLOCK "MODULE__SHA256_STATE=$py_cv_module__sha256$as_nl" - if test "x$py_cv_module__sha256" = xyes; then : + as_fn_append MODULE_BLOCK "MODULE__SHA2_STATE=$py_cv_module__sha2$as_nl" + if test "x$py_cv_module__sha2" = xyes; then : - as_fn_append MODULE_BLOCK "MODULE__SHA256_CFLAGS=-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE$as_nl" + as_fn_append MODULE_BLOCK "MODULE__SHA2_CFLAGS=-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE$as_nl" fi - if test "$py_cv_module__sha256" = yes; then - MODULE__SHA256_TRUE= - MODULE__SHA256_FALSE='#' + if test "$py_cv_module__sha2" = yes; then + MODULE__SHA2_TRUE= + MODULE__SHA2_FALSE='#' else - MODULE__SHA256_TRUE='#' - MODULE__SHA256_FALSE= + MODULE__SHA2_TRUE='#' + MODULE__SHA2_FALSE= fi - { $as_echo "$as_me:${as_lineno-$LINENO}: result: $py_cv_module__sha256" >&5 -$as_echo "$py_cv_module__sha256" >&6; } - - - { $as_echo "$as_me:${as_lineno-$LINENO}: checking for stdlib extension module _sha512" >&5 -$as_echo_n "checking for stdlib extension module _sha512... 
" >&6; } - if test "$py_cv_module__sha512" != "n/a"; then : - - if test "$with_builtin_sha512" = yes; then : - if true; then : - py_cv_module__sha512=yes -else - py_cv_module__sha512=missing -fi -else - py_cv_module__sha512=disabled -fi - -fi - as_fn_append MODULE_BLOCK "MODULE__SHA512_STATE=$py_cv_module__sha512$as_nl" - if test "x$py_cv_module__sha512" = xyes; then : - - as_fn_append MODULE_BLOCK "MODULE__SHA512_CFLAGS=-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE$as_nl" - - -fi - if test "$py_cv_module__sha512" = yes; then - MODULE__SHA512_TRUE= - MODULE__SHA512_FALSE='#' -else - MODULE__SHA512_TRUE='#' - MODULE__SHA512_FALSE= -fi - - { $as_echo "$as_me:${as_lineno-$LINENO}: result: $py_cv_module__sha512" >&5 -$as_echo "$py_cv_module__sha512" >&6; } + { $as_echo "$as_me:${as_lineno-$LINENO}: result: $py_cv_module__sha2" >&5 +$as_echo "$py_cv_module__sha2" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for stdlib extension module _sha3" >&5 @@ -28337,12 +28299,8 @@ if test -z "${MODULE__SHA1_TRUE}" && test -z "${MODULE__SHA1_FALSE}"; then as_fn_error $? "conditional \"MODULE__SHA1\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi -if test -z "${MODULE__SHA256_TRUE}" && test -z "${MODULE__SHA256_FALSE}"; then - as_fn_error $? "conditional \"MODULE__SHA256\" was never defined. -Usually this means the macro was only invoked conditionally." "$LINENO" 5 -fi -if test -z "${MODULE__SHA512_TRUE}" && test -z "${MODULE__SHA512_FALSE}"; then - as_fn_error $? "conditional \"MODULE__SHA512\" was never defined. +if test -z "${MODULE__SHA2_TRUE}" && test -z "${MODULE__SHA2_FALSE}"; then + as_fn_error $? "conditional \"MODULE__SHA2\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${MODULE__SHA3_TRUE}" && test -z "${MODULE__SHA3_FALSE}"; then diff --git a/configure.ac b/configure.ac index 92a05c011026..370bbe07c576 100644 --- a/configure.ac +++ b/configure.ac @@ -6928,14 +6928,14 @@ AC_DEFINE(PY_SSL_DEFAULT_CIPHERS, 1) ]) # builtin hash modules -default_hashlib_hashes="md5,sha1,sha256,sha512,sha3,blake2" +default_hashlib_hashes="md5,sha1,sha2,sha3,blake2" AC_DEFINE([PY_BUILTIN_HASHLIB_HASHES], [], [enabled builtin hash modules] ) AC_MSG_CHECKING(for --with-builtin-hashlib-hashes) AC_ARG_WITH(builtin-hashlib-hashes, - AS_HELP_STRING([--with-builtin-hashlib-hashes=md5,sha1,sha256,sha512,sha3,blake2], + AS_HELP_STRING([--with-builtin-hashlib-hashes=md5,sha1,sha2,sha3,blake2], [builtin hash modules, - md5, sha1, sha256, sha512, sha3 (with shake), blake2]), + md5, sha1, sha2, sha3 (with shake), blake2]), [ AS_CASE([$with_builtin_hashlib_hashes], [yes], [with_builtin_hashlib_hashes=$default_hashlib_hashes], @@ -6952,8 +6952,7 @@ for builtin_hash in $with_builtin_hashlib_hashes; do AS_CASE($builtin_hash, [md5], [with_builtin_md5=yes], [sha1], [with_builtin_sha1=yes], - [sha256], [with_builtin_sha256=yes], - [sha512], [with_builtin_sha512=yes], + [sha2], [with_builtin_sha2=yes], [sha3], [with_builtin_sha3=yes], [blake2], [with_builtin_blake2=yes] ) @@ -7197,11 +7196,8 @@ dnl By default we always compile these even when OpenSSL is available dnl (issue #14693). The modules are small. 
PY_STDLIB_MOD([_md5], [test "$with_builtin_md5" = yes]) PY_STDLIB_MOD([_sha1], [test "$with_builtin_sha1" = yes]) -PY_STDLIB_MOD([_sha256], - [test "$with_builtin_sha256" = yes], [], - [-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE]) -PY_STDLIB_MOD([_sha512], - [test "$with_builtin_sha512" = yes], [], +PY_STDLIB_MOD([_sha2], + [test "$with_builtin_sha2" = yes], [], [-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE]) PY_STDLIB_MOD([_sha3], [test "$with_builtin_sha3" = yes]) PY_STDLIB_MOD([_blake2], From webhook-mailer at python.org Thu Feb 16 06:31:49 2023 From: webhook-mailer at python.org (iritkatriel) Date: Thu, 16 Feb 2023 11:31:49 -0000 Subject: [Python-checkins] gh-101928: fix crash in compiler on multi-line lambda in function call (#101933) Message-ID: https://github.com/python/cpython/commit/df7ccf6138b1a2ce0b82ff06aa3497ca4d38c90d commit: df7ccf6138b1a2ce0b82ff06aa3497ca4d38c90d branch: main author: penguin_wwy <940375606 at qq.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-16T11:31:41Z summary: gh-101928: fix crash in compiler on multi-line lambda in function call (#101933) files: M Lib/test/test_compile.py M Python/compile.c diff --git a/Lib/test/test_compile.py b/Lib/test/test_compile.py index 90b067bcf309..a77742c0cfa6 100644 --- a/Lib/test/test_compile.py +++ b/Lib/test/test_compile.py @@ -1155,6 +1155,17 @@ def test_if_expression_expression_empty_block(self): with self.subTest(expr=expr): compile(expr, "", "exec") + def test_multi_line_lambda_as_argument(self): + # See gh-101928 + compile(""" +def foo(param, lambda_exp): + pass + +foo(param=0, + lambda_exp=lambda: + 1) + """, "", "exec") + @requires_debug_ranges() class TestSourcePositions(unittest.TestCase): diff --git a/Python/compile.c b/Python/compile.c index 0534b536e3d1..c3b344c7af2a 100644 --- a/Python/compile.c +++ b/Python/compile.c @@ -9085,8 +9085,8 @@ optimize_basic_block(PyObject *const_cache, basicblock *bb, PyObject *consts) Py_DECREF(cnt); break; case RETURN_VALUE: - INSTR_SET_OP1(inst, RETURN_CONST, oparg); - INSTR_SET_OP0(&bb->b_instr[i + 1], NOP); + INSTR_SET_OP0(inst, NOP); + INSTR_SET_OP1(&bb->b_instr[++i], RETURN_CONST, oparg); break; } break; From webhook-mailer at python.org Thu Feb 16 07:32:20 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 16 Feb 2023 12:32:20 -0000 Subject: [Python-checkins] gh-101951: use textwrap.dedent in compiler tests to make them more readable (GH-101950) Message-ID: https://github.com/python/cpython/commit/36b139af638cdeb671cb6b8b0315b254148688f7 commit: 36b139af638cdeb671cb6b8b0315b254148688f7 branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-16T04:31:59-08:00 summary: gh-101951: use textwrap.dedent in compiler tests to make them more readable (GH-101950) Fixes #101951. 
Automerge-Triggered-By: GH:iritkatriel files: M Lib/test/test_compile.py diff --git a/Lib/test/test_compile.py b/Lib/test/test_compile.py index a77742c0cfa6..fe775779c50f 100644 --- a/Lib/test/test_compile.py +++ b/Lib/test/test_compile.py @@ -115,24 +115,24 @@ def test_extended_arg(self): repeat = 2000 longexpr = 'x = x or ' + '-x' * repeat g = {} - code = ''' -def f(x): - %s - %s - %s - %s - %s - %s - %s - %s - %s - %s - # the expressions above have no effect, x == argument - while x: - x -= 1 - # EXTENDED_ARG/JUMP_ABSOLUTE here - return x -''' % ((longexpr,)*10) + code = textwrap.dedent(''' + def f(x): + %s + %s + %s + %s + %s + %s + %s + %s + %s + %s + # the expressions above have no effect, x == argument + while x: + x -= 1 + # EXTENDED_ARG/JUMP_ABSOLUTE here + return x + ''' % ((longexpr,)*10)) exec(code, g) self.assertEqual(g['f'](5), 0) @@ -148,10 +148,11 @@ def test_float_literals(self): def test_indentation(self): # testing compile() of indented block w/o trailing newline" - s = """ -if 1: - if 2: - pass""" + s = textwrap.dedent(""" + if 1: + if 2: + pass + """) compile(s, "", "exec") # This test is probably specific to CPython and may not generalize @@ -1157,14 +1158,15 @@ def test_if_expression_expression_empty_block(self): def test_multi_line_lambda_as_argument(self): # See gh-101928 - compile(""" -def foo(param, lambda_exp): - pass + code = textwrap.dedent(""" + def foo(param, lambda_exp): + pass -foo(param=0, - lambda_exp=lambda: - 1) - """, "", "exec") + foo(param=0, + lambda_exp=lambda: + 1) + """) + compile(code, "", "exec") @requires_debug_ranges() @@ -1252,24 +1254,24 @@ def test_compiles_to_extended_op_arg(self): column=2, end_column=9, occurrence=2) def test_multiline_expression(self): - snippet = """\ -f( - 1, 2, 3, 4 -) -""" + snippet = textwrap.dedent("""\ + f( + 1, 2, 3, 4 + ) + """) compiled_code, _ = self.check_positions_against_ast(snippet) self.assertOpcodeSourcePositionIs(compiled_code, 'CALL', line=1, end_line=3, column=0, end_column=1) @requires_specialization def test_multiline_boolean_expression(self): - snippet = """\ -if (a or - (b and not c) or - not ( - d > 0)): - x = 42 -""" + snippet = textwrap.dedent("""\ + if (a or + (b and not c) or + not ( + d > 0)): + x = 42 + """) compiled_code, _ = self.check_positions_against_ast(snippet) # jump if a is true: self.assertOpcodeSourcePositionIs(compiled_code, 'POP_JUMP_IF_TRUE', @@ -1288,11 +1290,11 @@ def test_multiline_boolean_expression(self): line=4, end_line=4, column=8, end_column=13, occurrence=2) def test_multiline_assert(self): - snippet = """\ -assert (a > 0 and - bb > 0 and - ccc == 4), "error msg" -""" + snippet = textwrap.dedent("""\ + assert (a > 0 and + bb > 0 and + ccc == 4), "error msg" + """) compiled_code, _ = self.check_positions_against_ast(snippet) self.assertOpcodeSourcePositionIs(compiled_code, 'LOAD_ASSERTION_ERROR', line=1, end_line=3, column=0, end_column=30, occurrence=1) @@ -1305,14 +1307,14 @@ def test_multiline_assert(self): line=1, end_line=3, column=0, end_column=30, occurrence=1) def test_multiline_generator_expression(self): - snippet = """\ -((x, - 2*x) - for x - in [1,2,3] if (x > 0 - and x < 100 - and x != 50)) -""" + snippet = textwrap.dedent("""\ + ((x, + 2*x) + for x + in [1,2,3] if (x > 0 + and x < 100 + and x != 50)) + """) compiled_code, _ = self.check_positions_against_ast(snippet) compiled_code = compiled_code.co_consts[0] self.assertIsInstance(compiled_code, types.CodeType) @@ -1324,14 +1326,14 @@ def test_multiline_generator_expression(self): line=1, end_line=6, 
column=0, end_column=32, occurrence=1) def test_multiline_async_generator_expression(self): - snippet = """\ -((x, - 2*x) - async for x - in [1,2,3] if (x > 0 - and x < 100 - and x != 50)) -""" + snippet = textwrap.dedent("""\ + ((x, + 2*x) + async for x + in [1,2,3] if (x > 0 + and x < 100 + and x != 50)) + """) compiled_code, _ = self.check_positions_against_ast(snippet) compiled_code = compiled_code.co_consts[0] self.assertIsInstance(compiled_code, types.CodeType) @@ -1341,14 +1343,14 @@ def test_multiline_async_generator_expression(self): line=1, end_line=6, column=0, end_column=32, occurrence=1) def test_multiline_list_comprehension(self): - snippet = """\ -[(x, - 2*x) - for x - in [1,2,3] if (x > 0 - and x < 100 - and x != 50)] -""" + snippet = textwrap.dedent("""\ + [(x, + 2*x) + for x + in [1,2,3] if (x > 0 + and x < 100 + and x != 50)] + """) compiled_code, _ = self.check_positions_against_ast(snippet) compiled_code = compiled_code.co_consts[0] self.assertIsInstance(compiled_code, types.CodeType) @@ -1360,15 +1362,15 @@ def test_multiline_list_comprehension(self): line=1, end_line=6, column=0, end_column=32, occurrence=1) def test_multiline_async_list_comprehension(self): - snippet = """\ -async def f(): - [(x, - 2*x) - async for x - in [1,2,3] if (x > 0 - and x < 100 - and x != 50)] -""" + snippet = textwrap.dedent("""\ + async def f(): + [(x, + 2*x) + async for x + in [1,2,3] if (x > 0 + and x < 100 + and x != 50)] + """) compiled_code, _ = self.check_positions_against_ast(snippet) g = {} eval(compiled_code, g) @@ -1382,14 +1384,14 @@ async def f(): line=2, end_line=7, column=4, end_column=36, occurrence=1) def test_multiline_set_comprehension(self): - snippet = """\ -{(x, - 2*x) - for x - in [1,2,3] if (x > 0 - and x < 100 - and x != 50)} -""" + snippet = textwrap.dedent("""\ + {(x, + 2*x) + for x + in [1,2,3] if (x > 0 + and x < 100 + and x != 50)} + """) compiled_code, _ = self.check_positions_against_ast(snippet) compiled_code = compiled_code.co_consts[0] self.assertIsInstance(compiled_code, types.CodeType) @@ -1401,15 +1403,15 @@ def test_multiline_set_comprehension(self): line=1, end_line=6, column=0, end_column=32, occurrence=1) def test_multiline_async_set_comprehension(self): - snippet = """\ -async def f(): - {(x, - 2*x) - async for x - in [1,2,3] if (x > 0 - and x < 100 - and x != 50)} -""" + snippet = textwrap.dedent("""\ + async def f(): + {(x, + 2*x) + async for x + in [1,2,3] if (x > 0 + and x < 100 + and x != 50)} + """) compiled_code, _ = self.check_positions_against_ast(snippet) g = {} eval(compiled_code, g) @@ -1423,14 +1425,14 @@ async def f(): line=2, end_line=7, column=4, end_column=36, occurrence=1) def test_multiline_dict_comprehension(self): - snippet = """\ -{x: - 2*x - for x - in [1,2,3] if (x > 0 - and x < 100 - and x != 50)} -""" + snippet = textwrap.dedent("""\ + {x: + 2*x + for x + in [1,2,3] if (x > 0 + and x < 100 + and x != 50)} + """) compiled_code, _ = self.check_positions_against_ast(snippet) compiled_code = compiled_code.co_consts[0] self.assertIsInstance(compiled_code, types.CodeType) @@ -1442,15 +1444,15 @@ def test_multiline_dict_comprehension(self): line=1, end_line=6, column=0, end_column=32, occurrence=1) def test_multiline_async_dict_comprehension(self): - snippet = """\ -async def f(): - {x: - 2*x - async for x - in [1,2,3] if (x > 0 - and x < 100 - and x != 50)} -""" + snippet = textwrap.dedent("""\ + async def f(): + {x: + 2*x + async for x + in [1,2,3] if (x > 0 + and x < 100 + and x != 50)} + """) compiled_code, _ = 
self.check_positions_against_ast(snippet) g = {} eval(compiled_code, g) @@ -1464,11 +1466,11 @@ async def f(): line=2, end_line=7, column=4, end_column=36, occurrence=1) def test_matchcase_sequence(self): - snippet = """\ -match x: - case a, b: - pass -""" + snippet = textwrap.dedent("""\ + match x: + case a, b: + pass + """) compiled_code, _ = self.check_positions_against_ast(snippet) self.assertOpcodeSourcePositionIs(compiled_code, 'MATCH_SEQUENCE', line=2, end_line=2, column=9, end_column=13, occurrence=1) @@ -1480,11 +1482,11 @@ def test_matchcase_sequence(self): line=2, end_line=2, column=9, end_column=13, occurrence=2) def test_matchcase_sequence_wildcard(self): - snippet = """\ -match x: - case a, *b, c: - pass -""" + snippet = textwrap.dedent("""\ + match x: + case a, *b, c: + pass + """) compiled_code, _ = self.check_positions_against_ast(snippet) self.assertOpcodeSourcePositionIs(compiled_code, 'MATCH_SEQUENCE', line=2, end_line=2, column=9, end_column=17, occurrence=1) @@ -1498,11 +1500,11 @@ def test_matchcase_sequence_wildcard(self): line=2, end_line=2, column=9, end_column=17, occurrence=3) def test_matchcase_mapping(self): - snippet = """\ -match x: - case {"a" : a, "b": b}: - pass -""" + snippet = textwrap.dedent("""\ + match x: + case {"a" : a, "b": b}: + pass + """) compiled_code, _ = self.check_positions_against_ast(snippet) self.assertOpcodeSourcePositionIs(compiled_code, 'MATCH_MAPPING', line=2, end_line=2, column=9, end_column=26, occurrence=1) @@ -1514,11 +1516,11 @@ def test_matchcase_mapping(self): line=2, end_line=2, column=9, end_column=26, occurrence=2) def test_matchcase_mapping_wildcard(self): - snippet = """\ -match x: - case {"a" : a, "b": b, **c}: - pass -""" + snippet = textwrap.dedent("""\ + match x: + case {"a" : a, "b": b, **c}: + pass + """) compiled_code, _ = self.check_positions_against_ast(snippet) self.assertOpcodeSourcePositionIs(compiled_code, 'MATCH_MAPPING', line=2, end_line=2, column=9, end_column=31, occurrence=1) @@ -1530,11 +1532,11 @@ def test_matchcase_mapping_wildcard(self): line=2, end_line=2, column=9, end_column=31, occurrence=2) def test_matchcase_class(self): - snippet = """\ -match x: - case C(a, b): - pass -""" + snippet = textwrap.dedent("""\ + match x: + case C(a, b): + pass + """) compiled_code, _ = self.check_positions_against_ast(snippet) self.assertOpcodeSourcePositionIs(compiled_code, 'MATCH_CLASS', line=2, end_line=2, column=9, end_column=16, occurrence=1) @@ -1546,11 +1548,11 @@ def test_matchcase_class(self): line=2, end_line=2, column=9, end_column=16, occurrence=2) def test_matchcase_or(self): - snippet = """\ -match x: - case C(1) | C(2): - pass -""" + snippet = textwrap.dedent("""\ + match x: + case C(1) | C(2): + pass + """) compiled_code, _ = self.check_positions_against_ast(snippet) self.assertOpcodeSourcePositionIs(compiled_code, 'MATCH_CLASS', line=2, end_line=2, column=9, end_column=13, occurrence=1) From webhook-mailer at python.org Thu Feb 16 09:52:55 2023 From: webhook-mailer at python.org (zooba) Date: Thu, 16 Feb 2023 14:52:55 -0000 Subject: [Python-checkins] gh-101881: Support (non-)blocking read/write functions on Windows pipes (GH-101882) Message-ID: https://github.com/python/cpython/commit/739c026f4488bd2e37d500a2c3d948aaf929b641 commit: 739c026f4488bd2e37d500a2c3d948aaf929b641 branch: main author: Rayyan Ansari committer: zooba date: 2023-02-16T14:52:24Z summary: gh-101881: Support (non-)blocking read/write functions on Windows pipes (GH-101882) * fileutils: handle non-blocking pipe IO on Windows 
Handle erroring operations on non-blocking pipes by reading the _doserrno code. Limit writes on non-blocking pipes that are too large. * Support blocking functions on Windows Use the GetNamedPipeHandleState and SetNamedPipeHandleState Win32 API functions to add support for os.get_blocking and os.set_blocking. files: A Misc/NEWS.d/next/Windows/2023-02-13-18-05-49.gh-issue-101881._TnHzN.rst A Misc/NEWS.d/next/Windows/2023-02-15-11-08-10.gh-issue-101881.fScr3m.rst M Doc/library/os.rst M Include/internal/pycore_fileutils.h M Lib/test/test_os.py M Modules/clinic/posixmodule.c.h M Modules/posixmodule.c M Python/fileutils.c diff --git a/Doc/library/os.rst b/Doc/library/os.rst index fb091176767f..85924d0e4836 100644 --- a/Doc/library/os.rst +++ b/Doc/library/os.rst @@ -1091,13 +1091,17 @@ as internal buffering of data. See also :func:`set_blocking` and :meth:`socket.socket.setblocking`. - .. availability:: Unix. + .. availability:: Unix, Windows. The function is limited on Emscripten and WASI, see :ref:`wasm-availability` for more information. + On Windows, this function is limited to pipes. + .. versionadded:: 3.5 + .. versionchanged:: 3.12 + Added support for pipes on Windows. .. function:: isatty(fd, /) @@ -1565,13 +1569,17 @@ or `the MSDN `_ on Windo See also :func:`get_blocking` and :meth:`socket.socket.setblocking`. - .. availability:: Unix. + .. availability:: Unix, Windows. The function is limited on Emscripten and WASI, see :ref:`wasm-availability` for more information. + On Windows, this function is limited to pipes. + .. versionadded:: 3.5 + .. versionchanged:: 3.12 + Added support for pipes on Windows. .. data:: SF_NODISKIO SF_MNOWAIT diff --git a/Include/internal/pycore_fileutils.h b/Include/internal/pycore_fileutils.h index ac89c43d569c..f8e2bf225908 100644 --- a/Include/internal/pycore_fileutils.h +++ b/Include/internal/pycore_fileutils.h @@ -160,11 +160,11 @@ PyAPI_FUNC(int) _Py_set_inheritable_async_safe(int fd, int inheritable, PyAPI_FUNC(int) _Py_dup(int fd); -#ifndef MS_WINDOWS PyAPI_FUNC(int) _Py_get_blocking(int fd); PyAPI_FUNC(int) _Py_set_blocking(int fd, int blocking); -#else /* MS_WINDOWS */ + +#ifdef MS_WINDOWS PyAPI_FUNC(void*) _Py_get_osfhandle_noraise(int fd); PyAPI_FUNC(void*) _Py_get_osfhandle(int fd); diff --git a/Lib/test/test_os.py b/Lib/test/test_os.py index 387d2581c06f..deea207bfdad 100644 --- a/Lib/test/test_os.py +++ b/Lib/test/test_os.py @@ -4099,6 +4099,7 @@ def test_path_t_converter_and_custom_class(self): @unittest.skipUnless(hasattr(os, 'get_blocking'), 'needs os.get_blocking() and os.set_blocking()') @unittest.skipIf(support.is_emscripten, "Cannot unset blocking flag") + at unittest.skipIf(sys.platform == 'win32', 'Windows only supports blocking on pipes') class BlockingTests(unittest.TestCase): def test_blocking(self): fd = os.open(__file__, os.O_RDONLY) diff --git a/Misc/NEWS.d/next/Windows/2023-02-13-18-05-49.gh-issue-101881._TnHzN.rst b/Misc/NEWS.d/next/Windows/2023-02-13-18-05-49.gh-issue-101881._TnHzN.rst new file mode 100644 index 000000000000..ba58dd4f5cb4 --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-13-18-05-49.gh-issue-101881._TnHzN.rst @@ -0,0 +1 @@ +Add support for the os.get_blocking() and os.set_blocking() functions on Windows. 
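[Editor's note, not part of the commit above] The change recorded in this email makes os.get_blocking() and os.set_blocking() usable on Windows pipe file descriptors and maps the "empty non-blocking pipe" Windows error onto EAGAIN so reads behave as they do on POSIX. The following is a minimal sketch of how the new behaviour would be exercised from Python; it assumes a CPython build that includes this commit, and the BlockingIOError shown reflects how CPython normally surfaces EAGAIN rather than anything stated verbatim in the diff.

    import os

    # An anonymous pipe; on Windows this is backed by a (named) pipe handle.
    r, w = os.pipe()

    # Pipes start out blocking; with this commit, querying that also works on Windows.
    assert os.get_blocking(r)

    # Switch the read end to non-blocking mode.
    os.set_blocking(r, False)
    assert not os.get_blocking(r)

    try:
        os.read(r, 1024)          # nothing has been written yet
    except BlockingIOError:
        # POSIX has always behaved this way; with this commit the same is
        # expected on Windows, because ERROR_NO_DATA is mapped to EAGAIN.
        print("read would block, as expected")

    os.write(w, b"hello")
    print(os.read(r, 1024))       # b'hello'

    os.close(r)
    os.close(w)
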
diff --git a/Misc/NEWS.d/next/Windows/2023-02-15-11-08-10.gh-issue-101881.fScr3m.rst b/Misc/NEWS.d/next/Windows/2023-02-15-11-08-10.gh-issue-101881.fScr3m.rst new file mode 100644 index 000000000000..099b2c1c07a6 --- /dev/null +++ b/Misc/NEWS.d/next/Windows/2023-02-15-11-08-10.gh-issue-101881.fScr3m.rst @@ -0,0 +1 @@ +Handle read and write operations on non-blocking pipes properly on Windows. diff --git a/Modules/clinic/posixmodule.c.h b/Modules/clinic/posixmodule.c.h index 5e04507ddd69..dcd25c28370c 100644 --- a/Modules/clinic/posixmodule.c.h +++ b/Modules/clinic/posixmodule.c.h @@ -10402,8 +10402,6 @@ os_set_handle_inheritable(PyObject *module, PyObject *const *args, Py_ssize_t na #endif /* defined(MS_WINDOWS) */ -#if !defined(MS_WINDOWS) - PyDoc_STRVAR(os_get_blocking__doc__, "get_blocking($module, fd, /)\n" "--\n" @@ -10439,10 +10437,6 @@ os_get_blocking(PyObject *module, PyObject *arg) return return_value; } -#endif /* !defined(MS_WINDOWS) */ - -#if !defined(MS_WINDOWS) - PyDoc_STRVAR(os_set_blocking__doc__, "set_blocking($module, fd, blocking, /)\n" "--\n" @@ -10482,8 +10476,6 @@ os_set_blocking(PyObject *module, PyObject *const *args, Py_ssize_t nargs) return return_value; } -#endif /* !defined(MS_WINDOWS) */ - PyDoc_STRVAR(os_DirEntry_is_symlink__doc__, "is_symlink($self, /)\n" "--\n" @@ -11789,14 +11781,6 @@ os_waitstatus_to_exitcode(PyObject *module, PyObject *const *args, Py_ssize_t na #define OS_SET_HANDLE_INHERITABLE_METHODDEF #endif /* !defined(OS_SET_HANDLE_INHERITABLE_METHODDEF) */ -#ifndef OS_GET_BLOCKING_METHODDEF - #define OS_GET_BLOCKING_METHODDEF -#endif /* !defined(OS_GET_BLOCKING_METHODDEF) */ - -#ifndef OS_SET_BLOCKING_METHODDEF - #define OS_SET_BLOCKING_METHODDEF -#endif /* !defined(OS_SET_BLOCKING_METHODDEF) */ - #ifndef OS_GETRANDOM_METHODDEF #define OS_GETRANDOM_METHODDEF #endif /* !defined(OS_GETRANDOM_METHODDEF) */ @@ -11812,4 +11796,4 @@ os_waitstatus_to_exitcode(PyObject *module, PyObject *const *args, Py_ssize_t na #ifndef OS_WAITSTATUS_TO_EXITCODE_METHODDEF #define OS_WAITSTATUS_TO_EXITCODE_METHODDEF #endif /* !defined(OS_WAITSTATUS_TO_EXITCODE_METHODDEF) */ -/*[clinic end generated code: output=a3f76228b549e8ec input=a9049054013a1b77]*/ +/*[clinic end generated code: output=1b0eb6a76b1a0e28 input=a9049054013a1b77]*/ diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c index d9e93473aead..524dc7eb1ccc 100644 --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -13930,7 +13930,6 @@ os_set_handle_inheritable_impl(PyObject *module, intptr_t handle, } #endif /* MS_WINDOWS */ -#ifndef MS_WINDOWS /*[clinic input] os.get_blocking -> bool fd: int @@ -13978,7 +13977,6 @@ os_set_blocking_impl(PyObject *module, int fd, int blocking) return NULL; Py_RETURN_NONE; } -#endif /* !MS_WINDOWS */ /*[clinic input] diff --git a/Python/fileutils.c b/Python/fileutils.c index 22b2257a56d0..897c2f9f4ea1 100644 --- a/Python/fileutils.c +++ b/Python/fileutils.c @@ -1750,7 +1750,15 @@ _Py_read(int fd, void *buf, size_t count) Py_BEGIN_ALLOW_THREADS errno = 0; #ifdef MS_WINDOWS + _doserrno = 0; n = read(fd, buf, (int)count); + // read() on a non-blocking empty pipe fails with EINVAL, which is + // mapped from the Windows error code ERROR_NO_DATA. 
+ if (n < 0 && errno == EINVAL) { + if (_doserrno == ERROR_NO_DATA) { + errno = EAGAIN; + } + } #else n = read(fd, buf, count); #endif @@ -1804,6 +1812,7 @@ _Py_write_impl(int fd, const void *buf, size_t count, int gil_held) } } } + #endif if (count > _PY_WRITE_MAX) { count = _PY_WRITE_MAX; @@ -1814,7 +1823,18 @@ _Py_write_impl(int fd, const void *buf, size_t count, int gil_held) Py_BEGIN_ALLOW_THREADS errno = 0; #ifdef MS_WINDOWS - n = write(fd, buf, (int)count); + // write() on a non-blocking pipe fails with ENOSPC on Windows if + // the pipe lacks available space for the entire buffer. + int c = (int)count; + do { + _doserrno = 0; + n = write(fd, buf, c); + if (n >= 0 || errno != ENOSPC || _doserrno != 0) { + break; + } + errno = EAGAIN; + c /= 2; + } while (c > 0); #else n = write(fd, buf, count); #endif @@ -1829,7 +1849,18 @@ _Py_write_impl(int fd, const void *buf, size_t count, int gil_held) do { errno = 0; #ifdef MS_WINDOWS - n = write(fd, buf, (int)count); + // write() on a non-blocking pipe fails with ENOSPC on Windows if + // the pipe lacks available space for the entire buffer. + int c = (int)count; + do { + _doserrno = 0; + n = write(fd, buf, c); + if (n >= 0 || errno != ENOSPC || _doserrno != 0) { + break; + } + errno = EAGAIN; + c /= 2; + } while (c > 0); #else n = write(fd, buf, count); #endif @@ -2450,6 +2481,64 @@ _Py_set_blocking(int fd, int blocking) return -1; } #else /* MS_WINDOWS */ +int +_Py_get_blocking(int fd) +{ + HANDLE handle; + DWORD mode; + BOOL success; + + handle = _Py_get_osfhandle(fd); + if (handle == INVALID_HANDLE_VALUE) { + return -1; + } + + Py_BEGIN_ALLOW_THREADS + success = GetNamedPipeHandleStateW(handle, &mode, + NULL, NULL, NULL, NULL, 0); + Py_END_ALLOW_THREADS + + if (!success) { + PyErr_SetFromWindowsErr(0); + return -1; + } + + return !(mode & PIPE_NOWAIT); +} + +int +_Py_set_blocking(int fd, int blocking) +{ + HANDLE handle; + DWORD mode; + BOOL success; + + handle = _Py_get_osfhandle(fd); + if (handle == INVALID_HANDLE_VALUE) { + return -1; + } + + Py_BEGIN_ALLOW_THREADS + success = GetNamedPipeHandleStateW(handle, &mode, + NULL, NULL, NULL, NULL, 0); + if (success) { + if (blocking) { + mode &= ~PIPE_NOWAIT; + } + else { + mode |= PIPE_NOWAIT; + } + success = SetNamedPipeHandleState(handle, &mode, NULL, NULL); + } + Py_END_ALLOW_THREADS + + if (!success) { + PyErr_SetFromWindowsErr(0); + return -1; + } + return 0; +} + void* _Py_get_osfhandle_noraise(int fd) { From webhook-mailer at python.org Thu Feb 16 10:14:02 2023 From: webhook-mailer at python.org (JulienPalard) Date: Thu, 16 Feb 2023 15:14:02 -0000 Subject: [Python-checkins] gh-93573: Replace wrong example domains in configparser doc (GH-93574) Message-ID: https://github.com/python/cpython/commit/924a3bfa28578802eb9ca77a66fb5d4762a62f14 commit: 924a3bfa28578802eb9ca77a66fb5d4762a62f14 branch: main author: sblondon committer: JulienPalard date: 2023-02-16T16:13:21+01:00 summary: gh-93573: Replace wrong example domains in configparser doc (GH-93574) * Replace bitbucket.org domain by forge.example * Update example to python.org * Use explicitly invalid domain topsecret.server.com domain is not controled by PSF. It's replaced by invalid topsecret.server.example domain. It follows RFC 2606, which advise .example as TLD for documentation. 
files: M Doc/library/configparser.rst diff --git a/Doc/library/configparser.rst b/Doc/library/configparser.rst index a925a3dd4fb9..a7f75fd6e84f 100644 --- a/Doc/library/configparser.rst +++ b/Doc/library/configparser.rst @@ -69,10 +69,10 @@ Let's take a very basic configuration file that looks like this: CompressionLevel = 9 ForwardX11 = yes - [bitbucket.org] + [forge.example] User = hg - [topsecret.server.com] + [topsecret.server.example] Port = 50022 ForwardX11 = no @@ -89,10 +89,10 @@ creating the above configuration file programmatically. >>> config['DEFAULT'] = {'ServerAliveInterval': '45', ... 'Compression': 'yes', ... 'CompressionLevel': '9'} - >>> config['bitbucket.org'] = {} - >>> config['bitbucket.org']['User'] = 'hg' - >>> config['topsecret.server.com'] = {} - >>> topsecret = config['topsecret.server.com'] + >>> config['forge.example'] = {} + >>> config['forge.example']['User'] = 'hg' + >>> config['topsecret.server.example'] = {} + >>> topsecret = config['topsecret.server.example'] >>> topsecret['Port'] = '50022' # mutates the parser >>> topsecret['ForwardX11'] = 'no' # same here >>> config['DEFAULT']['ForwardX11'] = 'yes' @@ -115,28 +115,28 @@ back and explore the data it holds. >>> config.read('example.ini') ['example.ini'] >>> config.sections() - ['bitbucket.org', 'topsecret.server.com'] - >>> 'bitbucket.org' in config + ['forge.example', 'topsecret.server.example'] + >>> 'forge.example' in config True - >>> 'bytebong.com' in config + >>> 'python.org' in config False - >>> config['bitbucket.org']['User'] + >>> config['forge.example']['User'] 'hg' >>> config['DEFAULT']['Compression'] 'yes' - >>> topsecret = config['topsecret.server.com'] + >>> topsecret = config['topsecret.server.example'] >>> topsecret['ForwardX11'] 'no' >>> topsecret['Port'] '50022' - >>> for key in config['bitbucket.org']: # doctest: +SKIP + >>> for key in config['forge.example']: # doctest: +SKIP ... print(key) user compressionlevel serveraliveinterval compression forwardx11 - >>> config['bitbucket.org']['ForwardX11'] + >>> config['forge.example']['ForwardX11'] 'yes' As we can see above, the API is pretty straightforward. The only bit of magic @@ -154,15 +154,15 @@ configuration while the previously existing keys are retained. 
>>> another_config = configparser.ConfigParser() >>> another_config.read('example.ini') ['example.ini'] - >>> another_config['topsecret.server.com']['Port'] + >>> another_config['topsecret.server.example']['Port'] '50022' - >>> another_config.read_string("[topsecret.server.com]\nPort=48484") - >>> another_config['topsecret.server.com']['Port'] + >>> another_config.read_string("[topsecret.server.example]\nPort=48484") + >>> another_config['topsecret.server.example']['Port'] '48484' - >>> another_config.read_dict({"topsecret.server.com": {"Port": 21212}}) - >>> another_config['topsecret.server.com']['Port'] + >>> another_config.read_dict({"topsecret.server.example": {"Port": 21212}}) + >>> another_config['topsecret.server.example']['Port'] '21212' - >>> another_config['topsecret.server.com']['ForwardX11'] + >>> another_config['topsecret.server.example']['ForwardX11'] 'no' This behaviour is equivalent to a :meth:`ConfigParser.read` call with several @@ -195,9 +195,9 @@ recognizes Boolean values from ``'yes'``/``'no'``, ``'on'``/``'off'``, >>> topsecret.getboolean('ForwardX11') False - >>> config['bitbucket.org'].getboolean('ForwardX11') + >>> config['forge.example'].getboolean('ForwardX11') True - >>> config.getboolean('bitbucket.org', 'Compression') + >>> config.getboolean('forge.example', 'Compression') True Apart from :meth:`~ConfigParser.getboolean`, config parsers also @@ -224,7 +224,7 @@ provide fallback values: Please note that default values have precedence over fallback values. For instance, in our example the ``'CompressionLevel'`` key was specified only in the ``'DEFAULT'`` section. If we try to get it from -the section ``'topsecret.server.com'``, we will always get the default, +the section ``'topsecret.server.example'``, we will always get the default, even if we specify a fallback: .. doctest:: @@ -239,7 +239,7 @@ the ``fallback`` keyword-only argument: .. doctest:: - >>> config.get('bitbucket.org', 'monster', + >>> config.get('forge.example', 'monster', ... fallback='No such things as monsters') 'No such things as monsters' From webhook-mailer at python.org Thu Feb 16 10:22:30 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 16 Feb 2023 15:22:30 -0000 Subject: [Python-checkins] gh-93573: Replace wrong example domains in configparser doc (GH-93574) Message-ID: https://github.com/python/cpython/commit/4d74bb4726697fad7a72430503b5faa6dc4dadf6 commit: 4d74bb4726697fad7a72430503b5faa6dc4dadf6 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-16T07:22:23-08:00 summary: gh-93573: Replace wrong example domains in configparser doc (GH-93574) * Replace bitbucket.org domain by forge.example * Update example to python.org * Use explicitly invalid domain topsecret.server.com domain is not controled by PSF. It's replaced by invalid topsecret.server.example domain. It follows RFC 2606, which advise .example as TLD for documentation. 
(cherry picked from commit 924a3bfa28578802eb9ca77a66fb5d4762a62f14) Co-authored-by: sblondon files: M Doc/library/configparser.rst diff --git a/Doc/library/configparser.rst b/Doc/library/configparser.rst index 72aa20d73d8b..4102c4f35fc4 100644 --- a/Doc/library/configparser.rst +++ b/Doc/library/configparser.rst @@ -65,10 +65,10 @@ Let's take a very basic configuration file that looks like this: CompressionLevel = 9 ForwardX11 = yes - [bitbucket.org] + [forge.example] User = hg - [topsecret.server.com] + [topsecret.server.example] Port = 50022 ForwardX11 = no @@ -85,10 +85,10 @@ creating the above configuration file programmatically. >>> config['DEFAULT'] = {'ServerAliveInterval': '45', ... 'Compression': 'yes', ... 'CompressionLevel': '9'} - >>> config['bitbucket.org'] = {} - >>> config['bitbucket.org']['User'] = 'hg' - >>> config['topsecret.server.com'] = {} - >>> topsecret = config['topsecret.server.com'] + >>> config['forge.example'] = {} + >>> config['forge.example']['User'] = 'hg' + >>> config['topsecret.server.example'] = {} + >>> topsecret = config['topsecret.server.example'] >>> topsecret['Port'] = '50022' # mutates the parser >>> topsecret['ForwardX11'] = 'no' # same here >>> config['DEFAULT']['ForwardX11'] = 'yes' @@ -111,28 +111,28 @@ back and explore the data it holds. >>> config.read('example.ini') ['example.ini'] >>> config.sections() - ['bitbucket.org', 'topsecret.server.com'] - >>> 'bitbucket.org' in config + ['forge.example', 'topsecret.server.example'] + >>> 'forge.example' in config True - >>> 'bytebong.com' in config + >>> 'python.org' in config False - >>> config['bitbucket.org']['User'] + >>> config['forge.example']['User'] 'hg' >>> config['DEFAULT']['Compression'] 'yes' - >>> topsecret = config['topsecret.server.com'] + >>> topsecret = config['topsecret.server.example'] >>> topsecret['ForwardX11'] 'no' >>> topsecret['Port'] '50022' - >>> for key in config['bitbucket.org']: # doctest: +SKIP + >>> for key in config['forge.example']: # doctest: +SKIP ... print(key) user compressionlevel serveraliveinterval compression forwardx11 - >>> config['bitbucket.org']['ForwardX11'] + >>> config['forge.example']['ForwardX11'] 'yes' As we can see above, the API is pretty straightforward. The only bit of magic @@ -150,15 +150,15 @@ configuration while the previously existing keys are retained. 
>>> another_config = configparser.ConfigParser() >>> another_config.read('example.ini') ['example.ini'] - >>> another_config['topsecret.server.com']['Port'] + >>> another_config['topsecret.server.example']['Port'] '50022' - >>> another_config.read_string("[topsecret.server.com]\nPort=48484") - >>> another_config['topsecret.server.com']['Port'] + >>> another_config.read_string("[topsecret.server.example]\nPort=48484") + >>> another_config['topsecret.server.example']['Port'] '48484' - >>> another_config.read_dict({"topsecret.server.com": {"Port": 21212}}) - >>> another_config['topsecret.server.com']['Port'] + >>> another_config.read_dict({"topsecret.server.example": {"Port": 21212}}) + >>> another_config['topsecret.server.example']['Port'] '21212' - >>> another_config['topsecret.server.com']['ForwardX11'] + >>> another_config['topsecret.server.example']['ForwardX11'] 'no' This behaviour is equivalent to a :meth:`ConfigParser.read` call with several @@ -191,9 +191,9 @@ recognizes Boolean values from ``'yes'``/``'no'``, ``'on'``/``'off'``, >>> topsecret.getboolean('ForwardX11') False - >>> config['bitbucket.org'].getboolean('ForwardX11') + >>> config['forge.example'].getboolean('ForwardX11') True - >>> config.getboolean('bitbucket.org', 'Compression') + >>> config.getboolean('forge.example', 'Compression') True Apart from :meth:`~ConfigParser.getboolean`, config parsers also @@ -220,7 +220,7 @@ provide fallback values: Please note that default values have precedence over fallback values. For instance, in our example the ``'CompressionLevel'`` key was specified only in the ``'DEFAULT'`` section. If we try to get it from -the section ``'topsecret.server.com'``, we will always get the default, +the section ``'topsecret.server.example'``, we will always get the default, even if we specify a fallback: .. doctest:: @@ -235,7 +235,7 @@ the ``fallback`` keyword-only argument: .. doctest:: - >>> config.get('bitbucket.org', 'monster', + >>> config.get('forge.example', 'monster', ... fallback='No such things as monsters') 'No such things as monsters' From webhook-mailer at python.org Thu Feb 16 10:25:38 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 16 Feb 2023 15:25:38 -0000 Subject: [Python-checkins] gh-93573: Replace wrong example domains in configparser doc (GH-93574) Message-ID: https://github.com/python/cpython/commit/aedf38391a1e2897bc4cb0a6985da22f2cde1919 commit: aedf38391a1e2897bc4cb0a6985da22f2cde1919 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-16T07:25:31-08:00 summary: gh-93573: Replace wrong example domains in configparser doc (GH-93574) * Replace bitbucket.org domain by forge.example * Update example to python.org * Use explicitly invalid domain topsecret.server.com domain is not controled by PSF. It's replaced by invalid topsecret.server.example domain. It follows RFC 2606, which advise .example as TLD for documentation. 
(cherry picked from commit 924a3bfa28578802eb9ca77a66fb5d4762a62f14) Co-authored-by: sblondon files: M Doc/library/configparser.rst diff --git a/Doc/library/configparser.rst b/Doc/library/configparser.rst index bfd6a7f58b3e..846eb57f5994 100644 --- a/Doc/library/configparser.rst +++ b/Doc/library/configparser.rst @@ -69,10 +69,10 @@ Let's take a very basic configuration file that looks like this: CompressionLevel = 9 ForwardX11 = yes - [bitbucket.org] + [forge.example] User = hg - [topsecret.server.com] + [topsecret.server.example] Port = 50022 ForwardX11 = no @@ -89,10 +89,10 @@ creating the above configuration file programmatically. >>> config['DEFAULT'] = {'ServerAliveInterval': '45', ... 'Compression': 'yes', ... 'CompressionLevel': '9'} - >>> config['bitbucket.org'] = {} - >>> config['bitbucket.org']['User'] = 'hg' - >>> config['topsecret.server.com'] = {} - >>> topsecret = config['topsecret.server.com'] + >>> config['forge.example'] = {} + >>> config['forge.example']['User'] = 'hg' + >>> config['topsecret.server.example'] = {} + >>> topsecret = config['topsecret.server.example'] >>> topsecret['Port'] = '50022' # mutates the parser >>> topsecret['ForwardX11'] = 'no' # same here >>> config['DEFAULT']['ForwardX11'] = 'yes' @@ -115,28 +115,28 @@ back and explore the data it holds. >>> config.read('example.ini') ['example.ini'] >>> config.sections() - ['bitbucket.org', 'topsecret.server.com'] - >>> 'bitbucket.org' in config + ['forge.example', 'topsecret.server.example'] + >>> 'forge.example' in config True - >>> 'bytebong.com' in config + >>> 'python.org' in config False - >>> config['bitbucket.org']['User'] + >>> config['forge.example']['User'] 'hg' >>> config['DEFAULT']['Compression'] 'yes' - >>> topsecret = config['topsecret.server.com'] + >>> topsecret = config['topsecret.server.example'] >>> topsecret['ForwardX11'] 'no' >>> topsecret['Port'] '50022' - >>> for key in config['bitbucket.org']: # doctest: +SKIP + >>> for key in config['forge.example']: # doctest: +SKIP ... print(key) user compressionlevel serveraliveinterval compression forwardx11 - >>> config['bitbucket.org']['ForwardX11'] + >>> config['forge.example']['ForwardX11'] 'yes' As we can see above, the API is pretty straightforward. The only bit of magic @@ -154,15 +154,15 @@ configuration while the previously existing keys are retained. 
>>> another_config = configparser.ConfigParser() >>> another_config.read('example.ini') ['example.ini'] - >>> another_config['topsecret.server.com']['Port'] + >>> another_config['topsecret.server.example']['Port'] '50022' - >>> another_config.read_string("[topsecret.server.com]\nPort=48484") - >>> another_config['topsecret.server.com']['Port'] + >>> another_config.read_string("[topsecret.server.example]\nPort=48484") + >>> another_config['topsecret.server.example']['Port'] '48484' - >>> another_config.read_dict({"topsecret.server.com": {"Port": 21212}}) - >>> another_config['topsecret.server.com']['Port'] + >>> another_config.read_dict({"topsecret.server.example": {"Port": 21212}}) + >>> another_config['topsecret.server.example']['Port'] '21212' - >>> another_config['topsecret.server.com']['ForwardX11'] + >>> another_config['topsecret.server.example']['ForwardX11'] 'no' This behaviour is equivalent to a :meth:`ConfigParser.read` call with several @@ -195,9 +195,9 @@ recognizes Boolean values from ``'yes'``/``'no'``, ``'on'``/``'off'``, >>> topsecret.getboolean('ForwardX11') False - >>> config['bitbucket.org'].getboolean('ForwardX11') + >>> config['forge.example'].getboolean('ForwardX11') True - >>> config.getboolean('bitbucket.org', 'Compression') + >>> config.getboolean('forge.example', 'Compression') True Apart from :meth:`~ConfigParser.getboolean`, config parsers also @@ -224,7 +224,7 @@ provide fallback values: Please note that default values have precedence over fallback values. For instance, in our example the ``'CompressionLevel'`` key was specified only in the ``'DEFAULT'`` section. If we try to get it from -the section ``'topsecret.server.com'``, we will always get the default, +the section ``'topsecret.server.example'``, we will always get the default, even if we specify a fallback: .. doctest:: @@ -239,7 +239,7 @@ the ``fallback`` keyword-only argument: .. doctest:: - >>> config.get('bitbucket.org', 'monster', + >>> config.get('forge.example', 'monster', ... fallback='No such things as monsters') 'No such things as monsters' From webhook-mailer at python.org Thu Feb 16 12:47:01 2023 From: webhook-mailer at python.org (gvanrossum) Date: Thu, 16 Feb 2023 17:47:01 -0000 Subject: [Python-checkins] gh-101952: Fix possible segfault in `BUILD_SET` opcode (#101958) Message-ID: https://github.com/python/cpython/commit/68bd8c5e2efab64ff9d38a214775164182179431 commit: 68bd8c5e2efab64ff9d38a214775164182179431 branch: main author: Eclips4 <80244920+Eclips4 at users.noreply.github.com> committer: gvanrossum date: 2023-02-16T09:46:43-08:00 summary: gh-101952: Fix possible segfault in `BUILD_SET` opcode (#101958) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-16-16-57-23.gh-issue-101952.Zo1dlq.rst M Python/bytecodes.c M Python/generated_cases.c.h diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-16-16-57-23.gh-issue-101952.Zo1dlq.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-16-16-57-23.gh-issue-101952.Zo1dlq.rst new file mode 100644 index 000000000000..3902c988c8bf --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-16-16-57-23.gh-issue-101952.Zo1dlq.rst @@ -0,0 +1 @@ +Fix possible segfault in ``BUILD_SET`` opcode, when new set created. 
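For context, BUILD_SET is the opcode a set display compiles to, so the crash could only be reached while constructing a set literal and the allocation of the new set failed. A quick illustrative way to see the opcode (exact bytecode details vary by interpreter version):

    import dis

    # the disassembly listing includes a BUILD_SET 3 instruction
    dis.dis("{a, b, c}")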
diff --git a/Python/bytecodes.c b/Python/bytecodes.c index d5d5034cbfbf..84747f1758e0 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -1302,6 +1302,8 @@ dummy_func( inst(BUILD_SET, (values[oparg] -- set)) { set = PySet_New(NULL); + if (set == NULL) + goto error; int err = 0; for (int i = 0; i < oparg; i++) { PyObject *item = values[i]; diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 8b8a7161ad89..730dfb7426ac 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -1649,6 +1649,8 @@ PyObject **values = &PEEK(oparg); PyObject *set; set = PySet_New(NULL); + if (set == NULL) + goto error; int err = 0; for (int i = 0; i < oparg; i++) { PyObject *item = values[i]; From webhook-mailer at python.org Thu Feb 16 12:58:08 2023 From: webhook-mailer at python.org (FFY00) Date: Thu, 16 Feb 2023 17:58:08 -0000 Subject: [Python-checkins] gh-99942: correct the pkg-config/python-config flags for cygwin/android Message-ID: https://github.com/python/cpython/commit/226484e47599a93f5bf033ac47198e68ff401432 commit: 226484e47599a93f5bf033ac47198e68ff401432 branch: main author: Eli Schwartz committer: FFY00 date: 2023-02-16T17:57:59Z summary: gh-99942: correct the pkg-config/python-config flags for cygwin/android files: A Misc/NEWS.d/next/Build/2023-01-12-00-49-16.gh-issue-99942.DUR8b4.rst M configure M configure.ac diff --git a/Misc/NEWS.d/next/Build/2023-01-12-00-49-16.gh-issue-99942.DUR8b4.rst b/Misc/NEWS.d/next/Build/2023-01-12-00-49-16.gh-issue-99942.DUR8b4.rst new file mode 100644 index 000000000000..5b692c3cc458 --- /dev/null +++ b/Misc/NEWS.d/next/Build/2023-01-12-00-49-16.gh-issue-99942.DUR8b4.rst @@ -0,0 +1,2 @@ +On Android, in a static build, python-config in embed mode no longer +incorrectly reports a library to link to. diff --git a/configure b/configure index 7c4254f3cb17..17dc62fb63de 100755 --- a/configure +++ b/configure @@ -21496,7 +21496,7 @@ $as_echo "$LDVERSION" >&6; } # On Android and Cygwin the shared libraries must be linked with libpython. -if test -n "$ANDROID_API_LEVEL" -o "$MACHDEP" = "cygwin"; then +if test "$PY_ENABLE_SHARED" = "1" && ( test -n "$ANDROID_API_LEVEL" || test "$MACHDEP" = "cygwin"); then LIBPYTHON="-lpython${VERSION}${ABIFLAGS}" else LIBPYTHON='' diff --git a/configure.ac b/configure.ac index 370bbe07c576..bc288b86cfa5 100644 --- a/configure.ac +++ b/configure.ac @@ -5759,7 +5759,7 @@ AC_MSG_RESULT($LDVERSION) # On Android and Cygwin the shared libraries must be linked with libpython. AC_SUBST(LIBPYTHON) -if test -n "$ANDROID_API_LEVEL" -o "$MACHDEP" = "cygwin"; then +if test "$PY_ENABLE_SHARED" = "1" && ( test -n "$ANDROID_API_LEVEL" || test "$MACHDEP" = "cygwin"); then LIBPYTHON="-lpython${VERSION}${ABIFLAGS}" else LIBPYTHON='' From webhook-mailer at python.org Thu Feb 16 13:48:34 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Thu, 16 Feb 2023 18:48:34 -0000 Subject: [Python-checkins] GH-96764: rewrite `asyncio.wait_for` to use `asyncio.timeout` (#98518) Message-ID: https://github.com/python/cpython/commit/a5024a261a75dafa4fb6613298dcb64a9603d9c7 commit: a5024a261a75dafa4fb6613298dcb64a9603d9c7 branch: main author: Kumar Aditya <59607654+kumaraditya303 at users.noreply.github.com> committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-17T00:18:21+05:30 summary: GH-96764: rewrite `asyncio.wait_for` to use `asyncio.timeout` (#98518) Changes `asyncio.wait_for` to use `asyncio.timeout` as its underlying implementation. 
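Roughly, wait_for now delegates to the asyncio.timeout context manager; an illustrative sketch of the correspondence (not the exact internal code path):

    import asyncio

    async def slow():
        await asyncio.sleep(10)

    async def main():
        try:
            await asyncio.wait_for(slow(), timeout=0.1)
        except TimeoutError:
            print("wait_for timed out")

        # wait_for is now built on top of asyncio.timeout:
        try:
            async with asyncio.timeout(0.1):
                await slow()
        except TimeoutError:
            print("timeout() expired")

    asyncio.run(main())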
files: A Misc/NEWS.d/next/Library/2022-10-22-09-26-43.gh-issue-96764.Dh9Y5L.rst M Lib/asyncio/tasks.py M Lib/test/test_asyncio/test_futures2.py M Lib/test/test_asyncio/test_waitfor.py diff --git a/Lib/asyncio/tasks.py b/Lib/asyncio/tasks.py index e78719de216f..a2e06d5ef72f 100644 --- a/Lib/asyncio/tasks.py +++ b/Lib/asyncio/tasks.py @@ -24,6 +24,7 @@ from . import events from . import exceptions from . import futures +from . import timeouts from .coroutines import _is_coroutine # Helper to generate new task names @@ -437,65 +438,44 @@ async def wait_for(fut, timeout): If the wait is cancelled, the task is also cancelled. + If the task supresses the cancellation and returns a value instead, + that value is returned. + This function is a coroutine. """ - loop = events.get_running_loop() + # The special case for timeout <= 0 is for the following case: + # + # async def test_waitfor(): + # func_started = False + # + # async def func(): + # nonlocal func_started + # func_started = True + # + # try: + # await asyncio.wait_for(func(), 0) + # except asyncio.TimeoutError: + # assert not func_started + # else: + # assert False + # + # asyncio.run(test_waitfor()) - if timeout is None: - return await fut - if timeout <= 0: - fut = ensure_future(fut, loop=loop) + if timeout is not None and timeout <= 0: + fut = ensure_future(fut) if fut.done(): return fut.result() - await _cancel_and_wait(fut, loop=loop) + await _cancel_and_wait(fut) try: return fut.result() except exceptions.CancelledError as exc: - raise exceptions.TimeoutError() from exc - - waiter = loop.create_future() - timeout_handle = loop.call_later(timeout, _release_waiter, waiter) - cb = functools.partial(_release_waiter, waiter) - - fut = ensure_future(fut, loop=loop) - fut.add_done_callback(cb) - - try: - # wait until the future completes or the timeout - try: - await waiter - except exceptions.CancelledError: - if fut.done(): - return fut.result() - else: - fut.remove_done_callback(cb) - # We must ensure that the task is not running - # after wait_for() returns. - # See https://bugs.python.org/issue32751 - await _cancel_and_wait(fut, loop=loop) - raise - - if fut.done(): - return fut.result() - else: - fut.remove_done_callback(cb) - # We must ensure that the task is not running - # after wait_for() returns. - # See https://bugs.python.org/issue32751 - await _cancel_and_wait(fut, loop=loop) - # In case task cancellation failed with some - # exception, we should re-raise it - # See https://bugs.python.org/issue40607 - try: - return fut.result() - except exceptions.CancelledError as exc: - raise exceptions.TimeoutError() from exc - finally: - timeout_handle.cancel() + raise TimeoutError from exc + async with timeouts.timeout(timeout): + return await fut async def _wait(fs, timeout, return_when, loop): """Internal helper for wait(). 
@@ -541,9 +521,10 @@ def _on_completion(f): return done, pending -async def _cancel_and_wait(fut, loop): +async def _cancel_and_wait(fut): """Cancel the *fut* future or task and wait until it completes.""" + loop = events.get_running_loop() waiter = loop.create_future() cb = functools.partial(_release_waiter, waiter) fut.add_done_callback(cb) diff --git a/Lib/test/test_asyncio/test_futures2.py b/Lib/test/test_asyncio/test_futures2.py index 9e7a5775a703..b7cfffb76bd8 100644 --- a/Lib/test/test_asyncio/test_futures2.py +++ b/Lib/test/test_asyncio/test_futures2.py @@ -86,10 +86,9 @@ async def test_recursive_repr_for_pending_tasks(self): async def func(): return asyncio.all_tasks() - # The repr() call should not raise RecursiveError at first. - # The check for returned string is not very reliable but - # exact comparison for the whole string is even weaker. - self.assertIn('...', repr(await asyncio.wait_for(func(), timeout=10))) + # The repr() call should not raise RecursionError at first. + waiter = await asyncio.wait_for(asyncio.Task(func()),timeout=10) + self.assertIn('...', repr(waiter)) if __name__ == '__main__': diff --git a/Lib/test/test_asyncio/test_waitfor.py b/Lib/test/test_asyncio/test_waitfor.py index 45498fa097f6..ed80540b2b38 100644 --- a/Lib/test/test_asyncio/test_waitfor.py +++ b/Lib/test/test_asyncio/test_waitfor.py @@ -237,33 +237,6 @@ async def inner(): with self.assertRaises(FooException): await foo() - async def test_wait_for_self_cancellation(self): - async def inner(): - try: - await asyncio.sleep(0.3) - except asyncio.CancelledError: - try: - await asyncio.sleep(0.3) - except asyncio.CancelledError: - await asyncio.sleep(0.3) - - return 42 - - inner_task = asyncio.create_task(inner()) - - wait = asyncio.wait_for(inner_task, timeout=0.1) - - # Test that wait_for itself is properly cancellable - # even when the initial task holds up the initial cancellation. - task = asyncio.create_task(wait) - await asyncio.sleep(0.2) - task.cancel() - - with self.assertRaises(asyncio.CancelledError): - await task - - self.assertEqual(await inner_task, 42) - async def _test_cancel_wait_for(self, timeout): loop = asyncio.get_running_loop() @@ -289,6 +262,106 @@ async def test_cancel_blocking_wait_for(self): async def test_cancel_wait_for(self): await self._test_cancel_wait_for(60.0) + async def test_wait_for_cancel_suppressed(self): + # GH-86296: Supressing CancelledError is discouraged + # but if a task subpresses CancelledError and returns a value, + # `wait_for` should return the value instead of raising CancelledError. + # This is the same behavior as `asyncio.timeout`. + + async def return_42(): + try: + await asyncio.sleep(10) + except asyncio.CancelledError: + return 42 + + res = await asyncio.wait_for(return_42(), timeout=0.1) + self.assertEqual(res, 42) + + + async def test_wait_for_issue86296(self): + # GH-86296: The task should get cancelled and not run to completion. + # inner completes in one cycle of the event loop so it + # completes before the task is cancelled. 
+ + async def inner(): + return 'done' + + inner_task = asyncio.create_task(inner()) + reached_end = False + + async def wait_for_coro(): + await asyncio.wait_for(inner_task, timeout=100) + await asyncio.sleep(1) + nonlocal reached_end + reached_end = True + + task = asyncio.create_task(wait_for_coro()) + self.assertFalse(task.done()) + # Run the task + await asyncio.sleep(0) + task.cancel() + with self.assertRaises(asyncio.CancelledError): + await task + self.assertTrue(inner_task.done()) + self.assertEqual(await inner_task, 'done') + self.assertFalse(reached_end) + + +class WaitForShieldTests(unittest.IsolatedAsyncioTestCase): + + async def test_zero_timeout(self): + # `asyncio.shield` creates a new task which wraps the passed in + # awaitable and shields it from cancellation so with timeout=0 + # the task returned by `asyncio.shield` aka shielded_task gets + # cancelled immediately and the task wrapped by it is scheduled + # to run. + + async def coro(): + await asyncio.sleep(0.01) + return 'done' + + task = asyncio.create_task(coro()) + with self.assertRaises(asyncio.TimeoutError): + shielded_task = asyncio.shield(task) + await asyncio.wait_for(shielded_task, timeout=0) + + # Task is running in background + self.assertFalse(task.done()) + self.assertFalse(task.cancelled()) + self.assertTrue(shielded_task.cancelled()) + + # Wait for the task to complete + await asyncio.sleep(0.1) + self.assertTrue(task.done()) + + + async def test_none_timeout(self): + # With timeout=None the timeout is disabled so it + # runs till completion. + async def coro(): + await asyncio.sleep(0.1) + return 'done' + + task = asyncio.create_task(coro()) + await asyncio.wait_for(asyncio.shield(task), timeout=None) + + self.assertTrue(task.done()) + self.assertEqual(await task, "done") + + async def test_shielded_timeout(self): + # shield prevents the task from being cancelled. + async def coro(): + await asyncio.sleep(0.1) + return 'done' + + task = asyncio.create_task(coro()) + with self.assertRaises(asyncio.TimeoutError): + await asyncio.wait_for(asyncio.shield(task), timeout=0.01) + + self.assertFalse(task.done()) + self.assertFalse(task.cancelled()) + self.assertEqual(await task, "done") + if __name__ == '__main__': unittest.main() diff --git a/Misc/NEWS.d/next/Library/2022-10-22-09-26-43.gh-issue-96764.Dh9Y5L.rst b/Misc/NEWS.d/next/Library/2022-10-22-09-26-43.gh-issue-96764.Dh9Y5L.rst new file mode 100644 index 000000000000..a0174291cbc3 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2022-10-22-09-26-43.gh-issue-96764.Dh9Y5L.rst @@ -0,0 +1 @@ +:func:`asyncio.wait_for` now uses :func:`asyncio.timeout` as its underlying implementation. Patch by Kumar Aditya. From webhook-mailer at python.org Thu Feb 16 16:05:44 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Thu, 16 Feb 2023 21:05:44 -0000 Subject: [Python-checkins] gh-101758: Add _PyState_AddModule() Back for the Stable ABI (gh-101956) Message-ID: https://github.com/python/cpython/commit/4d8959b73ac194ca9a2f623dcb5c23680f7d8536 commit: 4d8959b73ac194ca9a2f623dcb5c23680f7d8536 branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-16T14:05:31-07:00 summary: gh-101758: Add _PyState_AddModule() Back for the Stable ABI (gh-101956) We're adding the function back, only for the stable ABI symbol and not as any form of API. I had removed it yesterday. 
This undocumented "private" function was added with the implementation for PEP 3121 (3.0, 2007) for internal use and later moved out of the limited API (3.6, 2016) and then into the internal API (3.9, 2019). I removed it completely yesterday, including from the stable ABI manifest (where it was added because the symbol happened to be exported). It's unlikely that anyone is using _PyState_AddModule(), especially any stable ABI extensions built against 3.2-3.5, but we're playing it safe. https://github.com/python/cpython/issues/101758 files: M Include/internal/pycore_pystate.h M Lib/test/test_stable_abi_ctypes.py M Misc/stable_abi.toml M PC/python3dll.c M Python/import.c diff --git a/Include/internal/pycore_pystate.h b/Include/internal/pycore_pystate.h index 638b86253879..7046ec8d9ada 100644 --- a/Include/internal/pycore_pystate.h +++ b/Include/internal/pycore_pystate.h @@ -152,6 +152,12 @@ extern void _PySignal_AfterFork(void); #endif +PyAPI_FUNC(int) _PyState_AddModule( + PyThreadState *tstate, + PyObject* module, + PyModuleDef* def); + + PyAPI_FUNC(int) _PyOS_InterruptOccurred(PyThreadState *tstate); #define HEAD_LOCK(runtime) \ diff --git a/Lib/test/test_stable_abi_ctypes.py b/Lib/test/test_stable_abi_ctypes.py index 7e50fbda2c07..e77c1c840988 100644 --- a/Lib/test/test_stable_abi_ctypes.py +++ b/Lib/test/test_stable_abi_ctypes.py @@ -864,6 +864,7 @@ def test_windows_feature_macros(self): "_PyObject_GC_Resize", "_PyObject_New", "_PyObject_NewVar", + "_PyState_AddModule", "_PyThreadState_Init", "_PyThreadState_Prealloc", "_PyWeakref_CallableProxyType", diff --git a/Misc/stable_abi.toml b/Misc/stable_abi.toml index c04a3a228caf..21ff96161334 100644 --- a/Misc/stable_abi.toml +++ b/Misc/stable_abi.toml @@ -1684,6 +1684,9 @@ [function._PyObject_NewVar] added = '3.2' abi_only = true +[function._PyState_AddModule] + added = '3.2' + abi_only = true [function._PyThreadState_Init] added = '3.2' abi_only = true diff --git a/PC/python3dll.c b/PC/python3dll.c index 79f09037282f..e30081936575 100755 --- a/PC/python3dll.c +++ b/PC/python3dll.c @@ -34,6 +34,7 @@ EXPORT_FUNC(_PyObject_GC_NewVar) EXPORT_FUNC(_PyObject_GC_Resize) EXPORT_FUNC(_PyObject_New) EXPORT_FUNC(_PyObject_NewVar) +EXPORT_FUNC(_PyState_AddModule) EXPORT_FUNC(_PyThreadState_Init) EXPORT_FUNC(_PyThreadState_Prealloc) EXPORT_FUNC(Py_AddPendingCall) diff --git a/Python/import.c b/Python/import.c index ec126f28b858..fabf03b1c5d6 100644 --- a/Python/import.c +++ b/Python/import.c @@ -487,6 +487,26 @@ PyState_FindModule(PyModuleDef* module) return _modules_by_index_get(interp, module); } +/* _PyState_AddModule() has been completely removed from the C-API + (and was removed from the limited API in 3.6). However, we're + playing it safe and keeping it around for any stable ABI extensions + built against 3.2-3.5. 
*/ +int +_PyState_AddModule(PyThreadState *tstate, PyObject* module, PyModuleDef* def) +{ + if (!def) { + assert(_PyErr_Occurred(tstate)); + return -1; + } + if (def->m_slots) { + _PyErr_SetString(tstate, + PyExc_SystemError, + "PyState_AddModule called on module with slots"); + return -1; + } + return _modules_by_index_set(tstate->interp, def, module); +} + int PyState_AddModule(PyObject* module, PyModuleDef* def) { From webhook-mailer at python.org Thu Feb 16 19:21:58 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Fri, 17 Feb 2023 00:21:58 -0000 Subject: [Python-checkins] gh-101758: Fix Refleak-Related Failures in test_singlephase_variants (gh-101969) Message-ID: https://github.com/python/cpython/commit/984f8ab018f847fe8d66768af962f69ec0e81849 commit: 984f8ab018f847fe8d66768af962f69ec0e81849 branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-16T17:21:22-07:00 summary: gh-101758: Fix Refleak-Related Failures in test_singlephase_variants (gh-101969) gh-101891 is causing failures under `$> ./python -m test test_imp -R 3:3`. Furthermore, with that fixed, "test_singlephase_variants" is leaking references. This change addresses the first part, but skips the leaking tests until we can follow up with a fix. https://github.com/python/cpython/issues/101758 files: M Lib/test/test_imp.py diff --git a/Lib/test/test_imp.py b/Lib/test/test_imp.py index 5997ffad8e12..2292bb209395 100644 --- a/Lib/test/test_imp.py +++ b/Lib/test/test_imp.py @@ -263,6 +263,7 @@ def test_issue16421_multiple_modules_in_one_dll(self): with self.assertRaises(ImportError): imp.load_dynamic('nonexistent', pathname) + @unittest.skip('known refleak (temporarily skipping)') @requires_subinterpreters @requires_load_dynamic def test_singlephase_multiple_interpreters(self): @@ -329,9 +330,10 @@ def clean_up(): # However, globals are still shared. _interpreters.run_string(interp2, script % 2) + @unittest.skip('known refleak (temporarily skipping)') @requires_load_dynamic def test_singlephase_variants(self): - '''Exercise the most meaningful variants described in Python/import.c.''' + # Exercise the most meaningful variants described in Python/import.c. self.maxDiff = None basename = '_testsinglephase' @@ -343,6 +345,11 @@ def clean_up(): _testsinglephase._clear_globals() self.addCleanup(clean_up) + def add_ext_cleanup(name): + def clean_up(): + _testinternalcapi.clear_extension(name, pathname) + self.addCleanup(clean_up) + modules = {} def load(name): assert name not in modules @@ -440,6 +447,7 @@ def check_with_reinit_reloaded(module, lookedup, initialized, # Check the "basic" module. name = basename + add_ext_cleanup(name) expected_init_count = 1 with self.subTest(name): mod = load(name) @@ -457,6 +465,7 @@ def check_with_reinit_reloaded(module, lookedup, initialized, # Check its indirect variants. name = f'{basename}_basic_wrapper' + add_ext_cleanup(name) expected_init_count += 1 with self.subTest(name): mod = load(name) @@ -480,6 +489,7 @@ def check_with_reinit_reloaded(module, lookedup, initialized, # Check its direct variant. name = f'{basename}_basic_copy' + add_ext_cleanup(name) expected_init_count += 1 with self.subTest(name): mod = load(name) @@ -500,6 +510,7 @@ def check_with_reinit_reloaded(module, lookedup, initialized, # Check the non-basic variant that has no state. 
name = f'{basename}_with_reinit' + add_ext_cleanup(name) with self.subTest(name): mod = load(name) lookedup, initialized, cached = check_common(name, mod) @@ -518,6 +529,7 @@ def check_with_reinit_reloaded(module, lookedup, initialized, # Check the basic variant that has state. name = f'{basename}_with_state' + add_ext_cleanup(name) with self.subTest(name): mod = load(name) lookedup, initialized, cached = check_common(name, mod) From webhook-mailer at python.org Fri Feb 17 03:43:14 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Fri, 17 Feb 2023 08:43:14 -0000 Subject: [Python-checkins] gh-101973: Fix parameter reference for PyModule_FromDefAndSpec (#101976) Message-ID: https://github.com/python/cpython/commit/a3bb7fbe7eecfae6bf7b2f0912f9b2b12fac8db1 commit: a3bb7fbe7eecfae6bf7b2f0912f9b2b12fac8db1 branch: main author: Oleg Iarygin committer: erlend-aasland date: 2023-02-17T09:43:07+01:00 summary: gh-101973: Fix parameter reference for PyModule_FromDefAndSpec (#101976) files: M Doc/c-api/module.rst diff --git a/Doc/c-api/module.rst b/Doc/c-api/module.rst index e2ba157b32c7..c0351c8a6c72 100644 --- a/Doc/c-api/module.rst +++ b/Doc/c-api/module.rst @@ -388,7 +388,7 @@ objects dynamically. Note that both ``PyModule_FromDefAndSpec`` and .. c:function:: PyObject * PyModule_FromDefAndSpec(PyModuleDef *def, PyObject *spec) - Create a new module object, given the definition in *module* and the + Create a new module object, given the definition in *def* and the ModuleSpec *spec*. This behaves like :c:func:`PyModule_FromDefAndSpec2` with *module_api_version* set to :const:`PYTHON_API_VERSION`. @@ -396,7 +396,7 @@ objects dynamically. Note that both ``PyModule_FromDefAndSpec`` and .. c:function:: PyObject * PyModule_FromDefAndSpec2(PyModuleDef *def, PyObject *spec, int module_api_version) - Create a new module object, given the definition in *module* and the + Create a new module object, given the definition in *def* and the ModuleSpec *spec*, assuming the API version *module_api_version*. If that version does not match the version of the running interpreter, a :exc:`RuntimeWarning` is emitted. From webhook-mailer at python.org Fri Feb 17 03:47:09 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 17 Feb 2023 08:47:09 -0000 Subject: [Python-checkins] Docs: fix typos in PyFunction_WatchCallback docs and in 3.12 NEWS (GH-101980) Message-ID: https://github.com/python/cpython/commit/3c0a31cbfd1258bd96153a007dd44a96f2947dbf commit: 3c0a31cbfd1258bd96153a007dd44a96f2947dbf branch: main author: Yeojin Kim committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-17T00:47:02-08:00 summary: Docs: fix typos in PyFunction_WatchCallback docs and in 3.12 NEWS (GH-101980) - possitibility => possibility - disaallowed => disallowed files: M Doc/c-api/function.rst M Misc/NEWS.d/3.12.0a2.rst diff --git a/Doc/c-api/function.rst b/Doc/c-api/function.rst index 3cce18bdde30..bc7569d0add9 100644 --- a/Doc/c-api/function.rst +++ b/Doc/c-api/function.rst @@ -169,7 +169,7 @@ There are a few functions specific to Python functions. before the modification to *func* takes place, so the prior state of *func* can be inspected. The runtime is permitted to optimize away the creation of function objects when possible. In such cases no event will be emitted. 
- Although this creates the possitibility of an observable difference of + Although this creates the possibility of an observable difference of runtime behavior depending on optimization decisions, it does not change the semantics of the Python code being executed. diff --git a/Misc/NEWS.d/3.12.0a2.rst b/Misc/NEWS.d/3.12.0a2.rst index 318f3f71f115..41ad8cd22b5d 100644 --- a/Misc/NEWS.d/3.12.0a2.rst +++ b/Misc/NEWS.d/3.12.0a2.rst @@ -1060,7 +1060,7 @@ Add ``getbufferproc`` and ``releasebufferproc`` to the stable API. Some configurable capabilities of sub-interpreters have changed. They always allow subprocesses (:mod:`subprocess`) now, whereas before subprocesses -could be optionally disaallowed for a sub-interpreter. Instead +could be optionally disallowed for a sub-interpreter. Instead :func:`os.exec` can now be disallowed. Disallowing daemon threads is now supported. Disallowing all threads is still allowed, but is never done by default. Note that the optional restrictions are only available through From webhook-mailer at python.org Fri Feb 17 03:51:13 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 17 Feb 2023 08:51:13 -0000 Subject: [Python-checkins] gh-101973: Fix parameter reference for PyModule_FromDefAndSpec (GH-101976) Message-ID: https://github.com/python/cpython/commit/559d0e8073e223acad1f41f24c8a8fd6aa6abb81 commit: 559d0e8073e223acad1f41f24c8a8fd6aa6abb81 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-17T00:50:46-08:00 summary: gh-101973: Fix parameter reference for PyModule_FromDefAndSpec (GH-101976) (cherry picked from commit a3bb7fbe7eecfae6bf7b2f0912f9b2b12fac8db1) Co-authored-by: Oleg Iarygin files: M Doc/c-api/module.rst diff --git a/Doc/c-api/module.rst b/Doc/c-api/module.rst index e2ba157b32c7..c0351c8a6c72 100644 --- a/Doc/c-api/module.rst +++ b/Doc/c-api/module.rst @@ -388,7 +388,7 @@ objects dynamically. Note that both ``PyModule_FromDefAndSpec`` and .. c:function:: PyObject * PyModule_FromDefAndSpec(PyModuleDef *def, PyObject *spec) - Create a new module object, given the definition in *module* and the + Create a new module object, given the definition in *def* and the ModuleSpec *spec*. This behaves like :c:func:`PyModule_FromDefAndSpec2` with *module_api_version* set to :const:`PYTHON_API_VERSION`. @@ -396,7 +396,7 @@ objects dynamically. Note that both ``PyModule_FromDefAndSpec`` and .. c:function:: PyObject * PyModule_FromDefAndSpec2(PyModuleDef *def, PyObject *spec, int module_api_version) - Create a new module object, given the definition in *module* and the + Create a new module object, given the definition in *def* and the ModuleSpec *spec*, assuming the API version *module_api_version*. If that version does not match the version of the running interpreter, a :exc:`RuntimeWarning` is emitted. 
From webhook-mailer at python.org Fri Feb 17 03:51:31 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 17 Feb 2023 08:51:31 -0000 Subject: [Python-checkins] gh-101973: Fix parameter reference for PyModule_FromDefAndSpec (GH-101976) Message-ID: https://github.com/python/cpython/commit/f4f5dd5c391d8b632f4f88d781ca73788d210755 commit: f4f5dd5c391d8b632f4f88d781ca73788d210755 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-17T00:51:24-08:00 summary: gh-101973: Fix parameter reference for PyModule_FromDefAndSpec (GH-101976) (cherry picked from commit a3bb7fbe7eecfae6bf7b2f0912f9b2b12fac8db1) Co-authored-by: Oleg Iarygin files: M Doc/c-api/module.rst diff --git a/Doc/c-api/module.rst b/Doc/c-api/module.rst index e2ba157b32c7..c0351c8a6c72 100644 --- a/Doc/c-api/module.rst +++ b/Doc/c-api/module.rst @@ -388,7 +388,7 @@ objects dynamically. Note that both ``PyModule_FromDefAndSpec`` and .. c:function:: PyObject * PyModule_FromDefAndSpec(PyModuleDef *def, PyObject *spec) - Create a new module object, given the definition in *module* and the + Create a new module object, given the definition in *def* and the ModuleSpec *spec*. This behaves like :c:func:`PyModule_FromDefAndSpec2` with *module_api_version* set to :const:`PYTHON_API_VERSION`. @@ -396,7 +396,7 @@ objects dynamically. Note that both ``PyModule_FromDefAndSpec`` and .. c:function:: PyObject * PyModule_FromDefAndSpec2(PyModuleDef *def, PyObject *spec, int module_api_version) - Create a new module object, given the definition in *module* and the + Create a new module object, given the definition in *def* and the ModuleSpec *spec*, assuming the API version *module_api_version*. If that version does not match the version of the running interpreter, a :exc:`RuntimeWarning` is emitted. From webhook-mailer at python.org Fri Feb 17 05:14:14 2023 From: webhook-mailer at python.org (corona10) Date: Fri, 17 Feb 2023 10:14:14 -0000 Subject: [Python-checkins] gh-101766: Fix refleak for _BlockingOnManager resources (gh-101942) Message-ID: https://github.com/python/cpython/commit/775f8819e319127f9bfb046773b74bcc62c68b6a commit: 775f8819e319127f9bfb046773b74bcc62c68b6a branch: main author: Dong-hee Na committer: corona10 date: 2023-02-17T19:14:07+09:00 summary: gh-101766: Fix refleak for _BlockingOnManager resources (gh-101942) files: M Lib/importlib/_bootstrap.py diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py index bebe7e15cbce..1ef7b6adb044 100644 --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -85,6 +85,11 @@ def __enter__(self): def __exit__(self, *args, **kwargs): """Remove self.lock from this thread's _blocking_on list.""" self.blocked_on.remove(self.lock) + if len(self.blocked_on) == 0: + # gh-101766: glboal cache should be cleaned-up + # if there is no more _blocking_on for this thread. 
+ del _blocking_on[self.thread_id] + del self.blocked_on class _DeadlockError(RuntimeError): From webhook-mailer at python.org Fri Feb 17 09:05:46 2023 From: webhook-mailer at python.org (zooba) Date: Fri, 17 Feb 2023 14:05:46 -0000 Subject: [Python-checkins] gh-101360: Fix anchor matching in pathlib.PureWindowsPath.match() (GH-101363) Message-ID: https://github.com/python/cpython/commit/d401b20630965c0e1d2a5a0d60d5fc51aa5a8d80 commit: d401b20630965c0e1d2a5a0d60d5fc51aa5a8d80 branch: main author: Barney Gale committer: zooba date: 2023-02-17T14:05:38Z summary: gh-101360: Fix anchor matching in pathlib.PureWindowsPath.match() (GH-101363) Use `fnmatch` to match path and pattern anchors, just as we do for other path parts. This allows patterns such as `'*:/Users/*'` to be matched. files: A Misc/NEWS.d/next/Library/2023-01-27-02-53-50.gh-issue-101360.bPB7SL.rst M Lib/pathlib.py M Lib/test/test_ntpath.py M Lib/test/test_pathlib.py diff --git a/Lib/pathlib.py b/Lib/pathlib.py index 17659bcd3e2d..d7994a331d5e 100644 --- a/Lib/pathlib.py +++ b/Lib/pathlib.py @@ -647,15 +647,10 @@ def match(self, path_pattern): drv, root, pat_parts = self._parse_parts((path_pattern,)) if not pat_parts: raise ValueError("empty pattern") - elif drv and drv != self._flavour.normcase(self._drv): - return False - elif root and root != self._root: - return False parts = self._parts_normcase if drv or root: if len(pat_parts) != len(parts): return False - pat_parts = pat_parts[1:] elif len(pat_parts) > len(parts): return False for part, pat in zip(reversed(parts), reversed(pat_parts)): diff --git a/Lib/test/test_ntpath.py b/Lib/test/test_ntpath.py index b32900697874..08c8a7a1f94b 100644 --- a/Lib/test/test_ntpath.py +++ b/Lib/test/test_ntpath.py @@ -200,6 +200,10 @@ def test_splitroot(self): tester('ntpath.splitroot("//x")', ("//x", "", "")) # non-empty server & missing share tester('ntpath.splitroot("//x/")', ("//x/", "", "")) # non-empty server & empty share + # gh-101363: match GetFullPathNameW() drive letter parsing behaviour + tester('ntpath.splitroot(" :/foo")', (" :", "/", "foo")) + tester('ntpath.splitroot("/:/foo")', ("", "/", ":/foo")) + def test_split(self): tester('ntpath.split("c:\\foo\\bar")', ('c:\\foo', 'bar')) tester('ntpath.split("\\\\conky\\mountpoint\\foo\\bar")', diff --git a/Lib/test/test_pathlib.py b/Lib/test/test_pathlib.py index a596795b44f0..b8683796d960 100644 --- a/Lib/test/test_pathlib.py +++ b/Lib/test/test_pathlib.py @@ -852,8 +852,7 @@ def test_as_uri(self): def test_match_common(self): P = self.cls # Absolute patterns. - self.assertTrue(P('c:/b.py').match('/*.py')) - self.assertTrue(P('c:/b.py').match('c:*.py')) + self.assertTrue(P('c:/b.py').match('*:/*.py')) self.assertTrue(P('c:/b.py').match('c:/*.py')) self.assertFalse(P('d:/b.py').match('c:/*.py')) # wrong drive self.assertFalse(P('b.py').match('/*.py')) @@ -864,7 +863,7 @@ def test_match_common(self): self.assertFalse(P('/b.py').match('c:*.py')) self.assertFalse(P('/b.py').match('c:/*.py')) # UNC patterns. 
- self.assertTrue(P('//some/share/a.py').match('/*.py')) + self.assertTrue(P('//some/share/a.py').match('//*/*/*.py')) self.assertTrue(P('//some/share/a.py').match('//some/share/*.py')) self.assertFalse(P('//other/share/a.py').match('//some/share/*.py')) self.assertFalse(P('//some/share/a/b.py').match('//some/share/*.py')) @@ -872,6 +871,10 @@ def test_match_common(self): self.assertTrue(P('B.py').match('b.PY')) self.assertTrue(P('c:/a/B.Py').match('C:/A/*.pY')) self.assertTrue(P('//Some/Share/B.Py').match('//somE/sharE/*.pY')) + # Path anchor doesn't match pattern anchor + self.assertFalse(P('c:/b.py').match('/*.py')) # 'c:/' vs '/' + self.assertFalse(P('c:/b.py').match('c:*.py')) # 'c:/' vs 'c:' + self.assertFalse(P('//some/share/a.py').match('/*.py')) # '//some/share/' vs '/' def test_ordering_common(self): # Case-insensitivity. diff --git a/Misc/NEWS.d/next/Library/2023-01-27-02-53-50.gh-issue-101360.bPB7SL.rst b/Misc/NEWS.d/next/Library/2023-01-27-02-53-50.gh-issue-101360.bPB7SL.rst new file mode 100644 index 000000000000..4cfb136c5db8 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-01-27-02-53-50.gh-issue-101360.bPB7SL.rst @@ -0,0 +1,3 @@ +Fix anchor matching in :meth:`pathlib.PureWindowsPath.match`. Path and +pattern anchors are now matched with :mod:`fnmatch`, just like other path +parts. This allows patterns such as ``"*:/Users/*"`` to be matched. From webhook-mailer at python.org Fri Feb 17 09:08:22 2023 From: webhook-mailer at python.org (zooba) Date: Fri, 17 Feb 2023 14:08:22 -0000 Subject: [Python-checkins] gh-100809: Fix handling of drive-relative paths in pathlib.Path.absolute() (GH-100812) Message-ID: https://github.com/python/cpython/commit/072011b3c38f871cdc3ab62630ea2234d09456d1 commit: 072011b3c38f871cdc3ab62630ea2234d09456d1 branch: main author: Barney Gale committer: zooba date: 2023-02-17T14:08:14Z summary: gh-100809: Fix handling of drive-relative paths in pathlib.Path.absolute() (GH-100812) Resolving the drive independently uses the OS API, which ensures it starts from the current directory on that drive. files: A Misc/NEWS.d/next/Library/2023-01-06-21-14-41.gh-issue-100809.I697UT.rst M Lib/pathlib.py M Lib/test/support/os_helper.py M Lib/test/test_pathlib.py diff --git a/Lib/pathlib.py b/Lib/pathlib.py index d7994a331d5e..dde573592fdd 100644 --- a/Lib/pathlib.py +++ b/Lib/pathlib.py @@ -816,7 +816,12 @@ def absolute(self): """ if self.is_absolute(): return self - return self._from_parts([os.getcwd()] + self._parts) + elif self._drv: + # There is a CWD on each drive-letter drive. 
+ cwd = self._flavour.abspath(self._drv) + else: + cwd = os.getcwd() + return self._from_parts([cwd] + self._parts) def resolve(self, strict=False): """ diff --git a/Lib/test/support/os_helper.py b/Lib/test/support/os_helper.py index 2d4356a1191b..821a4b1ffd50 100644 --- a/Lib/test/support/os_helper.py +++ b/Lib/test/support/os_helper.py @@ -4,6 +4,7 @@ import os import re import stat +import string import sys import time import unittest @@ -716,3 +717,37 @@ def __exit__(self, *ignore_exc): else: self._environ[k] = v os.environ = self._environ + + +try: + import ctypes + kernel32 = ctypes.WinDLL('kernel32', use_last_error=True) + + ERROR_FILE_NOT_FOUND = 2 + DDD_REMOVE_DEFINITION = 2 + DDD_EXACT_MATCH_ON_REMOVE = 4 + DDD_NO_BROADCAST_SYSTEM = 8 +except (ImportError, AttributeError): + def subst_drive(path): + raise unittest.SkipTest('ctypes or kernel32 is not available') +else: + @contextlib.contextmanager + def subst_drive(path): + """Temporarily yield a substitute drive for a given path.""" + for c in reversed(string.ascii_uppercase): + drive = f'{c}:' + if (not kernel32.QueryDosDeviceW(drive, None, 0) and + ctypes.get_last_error() == ERROR_FILE_NOT_FOUND): + break + else: + raise unittest.SkipTest('no available logical drive') + if not kernel32.DefineDosDeviceW( + DDD_NO_BROADCAST_SYSTEM, drive, path): + raise ctypes.WinError(ctypes.get_last_error()) + try: + yield drive + finally: + if not kernel32.DefineDosDeviceW( + DDD_REMOVE_DEFINITION | DDD_EXACT_MATCH_ON_REMOVE, + drive, path): + raise ctypes.WinError(ctypes.get_last_error()) diff --git a/Lib/test/test_pathlib.py b/Lib/test/test_pathlib.py index b8683796d960..4de91d52c6d1 100644 --- a/Lib/test/test_pathlib.py +++ b/Lib/test/test_pathlib.py @@ -2973,6 +2973,26 @@ def test_absolute(self): self.assertEqual(str(P('a', 'b', 'c').absolute()), os.path.join(share, 'a', 'b', 'c')) + drive = os.path.splitdrive(BASE)[0] + with os_helper.change_cwd(BASE): + # Relative path with root + self.assertEqual(str(P('\\').absolute()), drive + '\\') + self.assertEqual(str(P('\\foo').absolute()), drive + '\\foo') + + # Relative path on current drive + self.assertEqual(str(P(drive).absolute()), BASE) + self.assertEqual(str(P(drive + 'foo').absolute()), os.path.join(BASE, 'foo')) + + with os_helper.subst_drive(BASE) as other_drive: + # Set the working directory on the substitute drive + saved_cwd = os.getcwd() + other_cwd = f'{other_drive}\\dirA' + os.chdir(other_cwd) + os.chdir(saved_cwd) + + # Relative path on another drive + self.assertEqual(str(P(other_drive).absolute()), other_cwd) + self.assertEqual(str(P(other_drive + 'foo').absolute()), other_cwd + '\\foo') def test_glob(self): P = self.cls diff --git a/Misc/NEWS.d/next/Library/2023-01-06-21-14-41.gh-issue-100809.I697UT.rst b/Misc/NEWS.d/next/Library/2023-01-06-21-14-41.gh-issue-100809.I697UT.rst new file mode 100644 index 000000000000..54082de88ccf --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-01-06-21-14-41.gh-issue-100809.I697UT.rst @@ -0,0 +1,3 @@ +Fix handling of drive-relative paths (like 'C:' and 'C:foo') in +:meth:`pathlib.Path.absolute`. This method now uses the OS API +to retrieve the correct current working directory for the drive. 
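As a quick illustration of the corrected behaviour on Windows (hypothetical session; the per-drive working directory C:\Users\me is an assumption made for the example):

    from pathlib import Path

    # Drive-relative paths now resolve against that drive's own working directory:
    Path('C:').absolute()      # WindowsPath('C:/Users/me')      (hypothetical output)
    Path('C:foo').absolute()   # WindowsPath('C:/Users/me/foo')  (hypothetical output)

    # Paths that already carry a drive and root are returned unchanged:
    Path('C:/bar').absolute()  # WindowsPath('C:/bar')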
From webhook-mailer at python.org Fri Feb 17 10:18:56 2023 From: webhook-mailer at python.org (corona10) Date: Fri, 17 Feb 2023 15:18:56 -0000 Subject: [Python-checkins] gh-101766: Fix refleak for _BlockingOnManager resources from test suite level (gh-101988) Message-ID: https://github.com/python/cpython/commit/f482ade4c7887c49dfd8bba3be76f839e562608d commit: f482ade4c7887c49dfd8bba3be76f839e562608d branch: main author: Dong-hee Na committer: corona10 date: 2023-02-18T00:18:47+09:00 summary: gh-101766: Fix refleak for _BlockingOnManager resources from test suite level (gh-101988) files: M Lib/importlib/_bootstrap.py M Lib/test/test_importlib/test_locks.py diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py index 1ef7b6adb044..bebe7e15cbce 100644 --- a/Lib/importlib/_bootstrap.py +++ b/Lib/importlib/_bootstrap.py @@ -85,11 +85,6 @@ def __enter__(self): def __exit__(self, *args, **kwargs): """Remove self.lock from this thread's _blocking_on list.""" self.blocked_on.remove(self.lock) - if len(self.blocked_on) == 0: - # gh-101766: glboal cache should be cleaned-up - # if there is no more _blocking_on for this thread. - del _blocking_on[self.thread_id] - del self.blocked_on class _DeadlockError(RuntimeError): diff --git a/Lib/test/test_importlib/test_locks.py b/Lib/test/test_importlib/test_locks.py index 56d73c496e6b..ba9cf51c261d 100644 --- a/Lib/test/test_importlib/test_locks.py +++ b/Lib/test/test_importlib/test_locks.py @@ -33,6 +33,11 @@ class ModuleLockAsRLockTests: test_repr = None test_locked_repr = None + def tearDown(self): + for splitinit in init.values(): + splitinit._bootstrap._blocking_on.clear() + + LOCK_TYPES = {kind: splitinit._bootstrap._ModuleLock for kind, splitinit in init.items()} From webhook-mailer at python.org Fri Feb 17 14:30:37 2023 From: webhook-mailer at python.org (terryjreedy) Date: Fri, 17 Feb 2023 19:30:37 -0000 Subject: [Python-checkins] gh-101992: update plistlib examples to be runnable (#101994) Message-ID: https://github.com/python/cpython/commit/a1723caabfcdca5d675c4cb04554fb04c7edf601 commit: a1723caabfcdca5d675c4cb04554fb04c7edf601 branch: main author: Dustin Rodrigues committer: terryjreedy date: 2023-02-17T14:30:29-05:00 summary: gh-101992: update plistlib examples to be runnable (#101994) * gh-101992: update plistlib examples to be runnable * Update Doc/library/plistlib.rst --------- Co-authored-by: Terry Jan Reedy files: M Doc/library/plistlib.rst diff --git a/Doc/library/plistlib.rst b/Doc/library/plistlib.rst index 5ded9661f080..7aad15ec91a0 100644 --- a/Doc/library/plistlib.rst +++ b/Doc/library/plistlib.rst @@ -159,6 +159,9 @@ Examples Generating a plist:: + import datetime + import plistlib + pl = dict( aString = "Doodah", aList = ["A", "B", 12, 32.1, [1, 2, 3]], @@ -172,13 +175,19 @@ Generating a plist:: ), someData = b"", someMoreData = b"" * 10, - aDate = datetime.datetime.fromtimestamp(time.mktime(time.gmtime())), + aDate = datetime.datetime.now() ) - with open(fileName, 'wb') as fp: - dump(pl, fp) + print(plistlib.dumps(pl).decode()) Parsing a plist:: - with open(fileName, 'rb') as fp: - pl = load(fp) - print(pl["aKey"]) + import plistlib + + plist = b""" + + foo + bar + + """ + pl = plistlib.loads(plist) + print(pl["foo"]) From webhook-mailer at python.org Fri Feb 17 14:39:21 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 17 Feb 2023 19:39:21 -0000 Subject: [Python-checkins] gh-101992: update plistlib examples to be runnable (GH-101994) Message-ID: 
https://github.com/python/cpython/commit/be69ac3c9687f4f7520c8778409ddbc6241bf003 commit: be69ac3c9687f4f7520c8778409ddbc6241bf003 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-17T11:39:14-08:00 summary: gh-101992: update plistlib examples to be runnable (GH-101994) * gh-101992: update plistlib examples to be runnable * Update Doc/library/plistlib.rst --------- (cherry picked from commit a1723caabfcdca5d675c4cb04554fb04c7edf601) Co-authored-by: Dustin Rodrigues Co-authored-by: Terry Jan Reedy files: M Doc/library/plistlib.rst diff --git a/Doc/library/plistlib.rst b/Doc/library/plistlib.rst index 5ded9661f080..7aad15ec91a0 100644 --- a/Doc/library/plistlib.rst +++ b/Doc/library/plistlib.rst @@ -159,6 +159,9 @@ Examples Generating a plist:: + import datetime + import plistlib + pl = dict( aString = "Doodah", aList = ["A", "B", 12, 32.1, [1, 2, 3]], @@ -172,13 +175,19 @@ Generating a plist:: ), someData = b"", someMoreData = b"" * 10, - aDate = datetime.datetime.fromtimestamp(time.mktime(time.gmtime())), + aDate = datetime.datetime.now() ) - with open(fileName, 'wb') as fp: - dump(pl, fp) + print(plistlib.dumps(pl).decode()) Parsing a plist:: - with open(fileName, 'rb') as fp: - pl = load(fp) - print(pl["aKey"]) + import plistlib + + plist = b""" + + foo + bar + + """ + pl = plistlib.loads(plist) + print(pl["foo"]) From webhook-mailer at python.org Fri Feb 17 14:40:27 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 17 Feb 2023 19:40:27 -0000 Subject: [Python-checkins] gh-101992: update plistlib examples to be runnable (GH-101994) Message-ID: https://github.com/python/cpython/commit/b6156787fa123afdb944d78fbf560da7fd054120 commit: b6156787fa123afdb944d78fbf560da7fd054120 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-17T11:40:20-08:00 summary: gh-101992: update plistlib examples to be runnable (GH-101994) * gh-101992: update plistlib examples to be runnable * Update Doc/library/plistlib.rst --------- (cherry picked from commit a1723caabfcdca5d675c4cb04554fb04c7edf601) Co-authored-by: Dustin Rodrigues Co-authored-by: Terry Jan Reedy files: M Doc/library/plistlib.rst diff --git a/Doc/library/plistlib.rst b/Doc/library/plistlib.rst index 5ded9661f080..7aad15ec91a0 100644 --- a/Doc/library/plistlib.rst +++ b/Doc/library/plistlib.rst @@ -159,6 +159,9 @@ Examples Generating a plist:: + import datetime + import plistlib + pl = dict( aString = "Doodah", aList = ["A", "B", 12, 32.1, [1, 2, 3]], @@ -172,13 +175,19 @@ Generating a plist:: ), someData = b"", someMoreData = b"" * 10, - aDate = datetime.datetime.fromtimestamp(time.mktime(time.gmtime())), + aDate = datetime.datetime.now() ) - with open(fileName, 'wb') as fp: - dump(pl, fp) + print(plistlib.dumps(pl).decode()) Parsing a plist:: - with open(fileName, 'rb') as fp: - pl = load(fp) - print(pl["aKey"]) + import plistlib + + plist = b""" + + foo + bar + + """ + pl = plistlib.loads(plist) + print(pl["foo"]) From webhook-mailer at python.org Fri Feb 17 16:01:36 2023 From: webhook-mailer at python.org (gvanrossum) Date: Fri, 17 Feb 2023 21:01:36 -0000 Subject: [Python-checkins] gh-100226: Clarify StreamReader.read behavior (#101807) Message-ID: https://github.com/python/cpython/commit/77d95c83733722ada35eb1ef89ae5b84a51ddd32 commit: 
77d95c83733722ada35eb1ef89ae5b84a51ddd32 branch: main author: Jan Gosmann committer: gvanrossum date: 2023-02-17T13:01:26-08:00 summary: gh-100226: Clarify StreamReader.read behavior (#101807) files: M Doc/library/asyncio-stream.rst M Lib/asyncio/streams.py diff --git a/Doc/library/asyncio-stream.rst b/Doc/library/asyncio-stream.rst index 3b3c68ab6ef6..bbac1c32b569 100644 --- a/Doc/library/asyncio-stream.rst +++ b/Doc/library/asyncio-stream.rst @@ -206,12 +206,20 @@ StreamReader .. coroutinemethod:: read(n=-1) - Read up to *n* bytes. If *n* is not provided, or set to ``-1``, - read until EOF and return all read bytes. + Read up to *n* bytes from the stream. + If *n* is not provided or set to ``-1``, + read until EOF, then return all read :class:`bytes`. If EOF was received and the internal buffer is empty, return an empty ``bytes`` object. + If *n* is ``0``, return an empty ``bytes`` object immediately. + + If *n* is positive, return at most *n* available ``bytes`` + as soon as at least 1 byte is available in the internal buffer. + If EOF is received before any byte is read, return an empty + ``bytes`` object. + .. coroutinemethod:: readline() Read one line, where "line" is a sequence of bytes diff --git a/Lib/asyncio/streams.py b/Lib/asyncio/streams.py index 7d13e961bd2d..bf15f517e50d 100644 --- a/Lib/asyncio/streams.py +++ b/Lib/asyncio/streams.py @@ -649,16 +649,17 @@ async def readuntil(self, separator=b'\n'): async def read(self, n=-1): """Read up to `n` bytes from the stream. - If n is not provided, or set to -1, read until EOF and return all read - bytes. If the EOF was received and the internal buffer is empty, return - an empty bytes object. + If `n` is not provided or set to -1, + read until EOF, then return all read bytes. + If EOF was received and the internal buffer is empty, + return an empty bytes object. - If n is zero, return empty bytes object immediately. + If `n` is 0, return an empty bytes object immediately. - If n is positive, this function try to read `n` bytes, and may return - less or equal bytes than requested, but at least one byte. If EOF was - received before any byte is read, this function returns empty byte - object. + If `n` is positive, return at most `n` available bytes + as soon as at least 1 byte is available in the internal buffer. + If EOF is received before any byte is read, return an empty + bytes object. Returned value is not limited with limit, configured at stream creation. From webhook-mailer at python.org Fri Feb 17 16:24:51 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 17 Feb 2023 21:24:51 -0000 Subject: [Python-checkins] gh-100226: Clarify StreamReader.read behavior (GH-101807) Message-ID: https://github.com/python/cpython/commit/3c1b495cac84d4122c31be6657767f5c2a17bdc3 commit: 3c1b495cac84d4122c31be6657767f5c2a17bdc3 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-17T13:24:42-08:00 summary: gh-100226: Clarify StreamReader.read behavior (GH-101807) (cherry picked from commit 77d95c83733722ada35eb1ef89ae5b84a51ddd32) Co-authored-by: Jan Gosmann files: M Doc/library/asyncio-stream.rst M Lib/asyncio/streams.py diff --git a/Doc/library/asyncio-stream.rst b/Doc/library/asyncio-stream.rst index f0fcac1a7937..a27bb7c98c71 100644 --- a/Doc/library/asyncio-stream.rst +++ b/Doc/library/asyncio-stream.rst @@ -192,12 +192,20 @@ StreamReader .. 
coroutinemethod:: read(n=-1) - Read up to *n* bytes. If *n* is not provided, or set to ``-1``, - read until EOF and return all read bytes. + Read up to *n* bytes from the stream. + If *n* is not provided or set to ``-1``, + read until EOF, then return all read :class:`bytes`. If EOF was received and the internal buffer is empty, return an empty ``bytes`` object. + If *n* is ``0``, return an empty ``bytes`` object immediately. + + If *n* is positive, return at most *n* available ``bytes`` + as soon as at least 1 byte is available in the internal buffer. + If EOF is received before any byte is read, return an empty + ``bytes`` object. + .. coroutinemethod:: readline() Read one line, where "line" is a sequence of bytes diff --git a/Lib/asyncio/streams.py b/Lib/asyncio/streams.py index 4f3ee94123bc..37fa7dca7d8a 100644 --- a/Lib/asyncio/streams.py +++ b/Lib/asyncio/streams.py @@ -627,16 +627,17 @@ async def readuntil(self, separator=b'\n'): async def read(self, n=-1): """Read up to `n` bytes from the stream. - If n is not provided, or set to -1, read until EOF and return all read - bytes. If the EOF was received and the internal buffer is empty, return - an empty bytes object. + If `n` is not provided or set to -1, + read until EOF, then return all read bytes. + If EOF was received and the internal buffer is empty, + return an empty bytes object. - If n is zero, return empty bytes object immediately. + If `n` is 0, return an empty bytes object immediately. - If n is positive, this function try to read `n` bytes, and may return - less or equal bytes than requested, but at least one byte. If EOF was - received before any byte is read, this function returns empty byte - object. + If `n` is positive, return at most `n` available bytes + as soon as at least 1 byte is available in the internal buffer. + If EOF is received before any byte is read, return an empty + bytes object. Returned value is not limited with limit, configured at stream creation. From webhook-mailer at python.org Fri Feb 17 16:36:54 2023 From: webhook-mailer at python.org (ethanfurman) Date: Fri, 17 Feb 2023 21:36:54 -0000 Subject: [Python-checkins] gh-101739: [Enum] update docs - default boundary for Flag is CONFORM (GH-101746) Message-ID: https://github.com/python/cpython/commit/7f1c72175600b21c1c840e8988cc6e6b4b244582 commit: 7f1c72175600b21c1c840e8988cc6e6b4b244582 branch: main author: Owain Davies <116417456+OTheDev at users.noreply.github.com> committer: ethanfurman date: 2023-02-17T13:36:47-08:00 summary: gh-101739: [Enum] update docs - default boundary for Flag is CONFORM (GH-101746) files: M Doc/library/enum.rst diff --git a/Doc/library/enum.rst b/Doc/library/enum.rst index 13591a1bdc73..24b6dbfe37cd 100644 --- a/Doc/library/enum.rst +++ b/Doc/library/enum.rst @@ -696,10 +696,9 @@ Data Types .. attribute:: STRICT - Out-of-range values cause a :exc:`ValueError` to be raised. This is the - default for :class:`Flag`:: + Out-of-range values cause a :exc:`ValueError` to be raised:: - >>> from enum import Flag, STRICT + >>> from enum import Flag, STRICT, auto >>> class StrictFlag(Flag, boundary=STRICT): ... RED = auto() ... GREEN = auto() @@ -715,9 +714,9 @@ Data Types .. attribute:: CONFORM Out-of-range values have invalid values removed, leaving a valid *Flag* - value:: + value. This is the default for :class:`Flag`:: - >>> from enum import Flag, CONFORM + >>> from enum import Flag, CONFORM, auto >>> class ConformFlag(Flag, boundary=CONFORM): ... RED = auto() ... 
GREEN = auto() @@ -731,7 +730,7 @@ Data Types Out-of-range values lose their *Flag* membership and revert to :class:`int`. This is the default for :class:`IntFlag`:: - >>> from enum import Flag, EJECT + >>> from enum import Flag, EJECT, auto >>> class EjectFlag(Flag, boundary=EJECT): ... RED = auto() ... GREEN = auto() @@ -742,10 +741,10 @@ Data Types .. attribute:: KEEP - Out-of-range values are kept, and the *Flag* membership is kept. This is - used for some stdlib flags: + Out-of-range values are kept, and the *Flag* membership is kept. This is + used for some stdlib flags:: - >>> from enum import Flag, KEEP + >>> from enum import Flag, KEEP, auto >>> class KeepFlag(Flag, boundary=KEEP): ... RED = auto() ... GREEN = auto() From webhook-mailer at python.org Fri Feb 17 16:44:52 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 17 Feb 2023 21:44:52 -0000 Subject: [Python-checkins] gh-101739: [Enum] update docs - default boundary for Flag is CONFORM (GH-101746) Message-ID: https://github.com/python/cpython/commit/b7a49eb5ff541157a87fefb8fa0527a5cf2b3abc commit: b7a49eb5ff541157a87fefb8fa0527a5cf2b3abc branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-17T13:44:45-08:00 summary: gh-101739: [Enum] update docs - default boundary for Flag is CONFORM (GH-101746) (cherry picked from commit 7f1c72175600b21c1c840e8988cc6e6b4b244582) Co-authored-by: Owain Davies <116417456+OTheDev at users.noreply.github.com> files: M Doc/library/enum.rst diff --git a/Doc/library/enum.rst b/Doc/library/enum.rst index 1973578e788c..180399938208 100644 --- a/Doc/library/enum.rst +++ b/Doc/library/enum.rst @@ -692,10 +692,9 @@ Data Types .. attribute:: STRICT - Out-of-range values cause a :exc:`ValueError` to be raised. This is the - default for :class:`Flag`:: + Out-of-range values cause a :exc:`ValueError` to be raised:: - >>> from enum import Flag, STRICT + >>> from enum import Flag, STRICT, auto >>> class StrictFlag(Flag, boundary=STRICT): ... RED = auto() ... GREEN = auto() @@ -710,9 +709,9 @@ Data Types .. attribute:: CONFORM Out-of-range values have invalid values removed, leaving a valid *Flag* - value:: + value. This is the default for :class:`Flag`:: - >>> from enum import Flag, CONFORM + >>> from enum import Flag, CONFORM, auto >>> class ConformFlag(Flag, boundary=CONFORM): ... RED = auto() ... GREEN = auto() @@ -725,7 +724,7 @@ Data Types Out-of-range values lose their *Flag* membership and revert to :class:`int`. This is the default for :class:`IntFlag`:: - >>> from enum import Flag, EJECT + >>> from enum import Flag, EJECT, auto >>> class EjectFlag(Flag, boundary=EJECT): ... RED = auto() ... GREEN = auto() @@ -735,10 +734,10 @@ Data Types .. attribute:: KEEP - Out-of-range values are kept, and the *Flag* membership is kept. This is - used for some stdlib flags: + Out-of-range values are kept, and the *Flag* membership is kept. This is + used for some stdlib flags:: - >>> from enum import Flag, KEEP + >>> from enum import Flag, KEEP, auto >>> class KeepFlag(Flag, boundary=KEEP): ... RED = auto() ... 
GREEN = auto() From webhook-mailer at python.org Fri Feb 17 19:52:38 2023 From: webhook-mailer at python.org (iritkatriel) Date: Sat, 18 Feb 2023 00:52:38 -0000 Subject: [Python-checkins] gh-101967: add a missing error check (#101968) Message-ID: https://github.com/python/cpython/commit/89413bbccb9261b72190e275eefe4b0d49671477 commit: 89413bbccb9261b72190e275eefe4b0d49671477 branch: main author: Eclips4 <80244920+Eclips4 at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-18T00:52:23Z summary: gh-101967: add a missing error check (#101968) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst M Python/ceval.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst new file mode 100644 index 000000000000..6e681f910f53 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst @@ -0,0 +1 @@ +Fix possible segfault in ``positional_only_passed_as_keyword`` function, when new list created. diff --git a/Python/ceval.c b/Python/ceval.c index 09fd2f29266c..308ef52259df 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -1255,7 +1255,9 @@ positional_only_passed_as_keyword(PyThreadState *tstate, PyCodeObject *co, { int posonly_conflicts = 0; PyObject* posonly_names = PyList_New(0); - + if (posonly_names == NULL) { + goto fail; + } for(int k=0; k < co->co_posonlyargcount; k++){ PyObject* posonly_name = PyTuple_GET_ITEM(co->co_localsplusnames, k); From webhook-mailer at python.org Fri Feb 17 20:13:48 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 18 Feb 2023 01:13:48 -0000 Subject: [Python-checkins] gh-101967: add a missing error check (GH-101968) Message-ID: https://github.com/python/cpython/commit/92050e87677548e9283f32c18c984f1cf1d4fc76 commit: 92050e87677548e9283f32c18c984f1cf1d4fc76 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-17T17:13:33-08:00 summary: gh-101967: add a missing error check (GH-101968) (cherry picked from commit 89413bbccb9261b72190e275eefe4b0d49671477) Co-authored-by: Eclips4 <80244920+Eclips4 at users.noreply.github.com> files: A Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst M Python/ceval.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst new file mode 100644 index 000000000000..6e681f910f53 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst @@ -0,0 +1 @@ +Fix possible segfault in ``positional_only_passed_as_keyword`` function, when new list created. 
diff --git a/Python/ceval.c b/Python/ceval.c index 9719177e19ba..9f4ef6be0e1f 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -4718,7 +4718,9 @@ positional_only_passed_as_keyword(PyThreadState *tstate, PyCodeObject *co, { int posonly_conflicts = 0; PyObject* posonly_names = PyList_New(0); - + if (posonly_names == NULL) { + goto fail; + } for(int k=0; k < co->co_posonlyargcount; k++){ PyObject* posonly_name = PyTuple_GET_ITEM(co->co_varnames, k); From webhook-mailer at python.org Sat Feb 18 10:46:40 2023 From: webhook-mailer at python.org (facundobatista) Date: Sat, 18 Feb 2023 15:46:40 -0000 Subject: [Python-checkins] gh-101536: [docs] Improve attributes of `urllib.error.HTTPError` (#101612) Message-ID: https://github.com/python/cpython/commit/af446bbb76f64e67831444a0ceee6863a1527088 commit: af446bbb76f64e67831444a0ceee6863a1527088 branch: main author: Nikita Sobolev committer: facundobatista date: 2023-02-18T12:46:33-03:00 summary: gh-101536: [docs] Improve attributes of `urllib.error.HTTPError` (#101612) * gh-101536: [docs] Improve attributes of `urllib.error.HTTPError` * Address review files: M Doc/library/urllib.error.rst diff --git a/Doc/library/urllib.error.rst b/Doc/library/urllib.error.rst index f7d47ed76aca..3adbdd261322 100644 --- a/Doc/library/urllib.error.rst +++ b/Doc/library/urllib.error.rst @@ -31,7 +31,7 @@ The following exceptions are raised by :mod:`urllib.error` as appropriate: of :exc:`IOError`. -.. exception:: HTTPError +.. exception:: HTTPError(url, code, msg, hdrs, fp) Though being an exception (a subclass of :exc:`URLError`), an :exc:`HTTPError` can also function as a non-exceptional file-like return @@ -39,6 +39,11 @@ The following exceptions are raised by :mod:`urllib.error` as appropriate: is useful when handling exotic HTTP errors, such as requests for authentication. + .. attribute:: url + + Contains the request URL. + An alias for *filename* attribute. + .. attribute:: code An HTTP status code as defined in :rfc:`2616`. This numeric value corresponds @@ -48,14 +53,20 @@ The following exceptions are raised by :mod:`urllib.error` as appropriate: .. attribute:: reason This is usually a string explaining the reason for this error. + An alias for *msg* attribute. .. attribute:: headers The HTTP response headers for the HTTP request that caused the :exc:`HTTPError`. + An alias for *hdrs* attribute. .. versionadded:: 3.4 + .. attribute:: fp + + A file-like object where the HTTP error body can be read from. + .. 
exception:: ContentTooShortError(msg, content) This exception is raised when the :func:`~urllib.request.urlretrieve` From webhook-mailer at python.org Sat Feb 18 13:45:01 2023 From: webhook-mailer at python.org (iritkatriel) Date: Sat, 18 Feb 2023 18:45:01 -0000 Subject: [Python-checkins] bpo-46978: Correct docstrings for in-place builtin operators (#31802) Message-ID: https://github.com/python/cpython/commit/128379b8cdb88a6d3d7fed24df082c9a654b3fb8 commit: 128379b8cdb88a6d3d7fed24df082c9a654b3fb8 branch: main author: Nicko van Someren committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-18T18:44:41Z summary: bpo-46978: Correct docstrings for in-place builtin operators (#31802) files: A Misc/NEWS.d/next/Core and Builtins/2022-03-10-21-48-05.bpo-46978.f5QFfw.rst M Objects/typeobject.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2022-03-10-21-48-05.bpo-46978.f5QFfw.rst b/Misc/NEWS.d/next/Core and Builtins/2022-03-10-21-48-05.bpo-46978.f5QFfw.rst new file mode 100644 index 000000000000..72291d042a03 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2022-03-10-21-48-05.bpo-46978.f5QFfw.rst @@ -0,0 +1 @@ +Fixed docstrings for in-place operators of built-in types. diff --git a/Objects/typeobject.c b/Objects/typeobject.c index bf6ccdb77a90..f2e8092aa37e 100644 --- a/Objects/typeobject.c +++ b/Objects/typeobject.c @@ -8564,7 +8564,7 @@ an all-zero entry. #NAME "($self, /)\n--\n\n" DOC) #define IBSLOT(NAME, SLOT, FUNCTION, WRAPPER, DOC) \ ETSLOT(NAME, as_number.SLOT, FUNCTION, WRAPPER, \ - #NAME "($self, value, /)\n--\n\nReturn self" DOC "value.") + #NAME "($self, value, /)\n--\n\nCompute self " DOC " value.") #define BINSLOT(NAME, SLOT, FUNCTION, DOC) \ ETSLOT(NAME, as_number.SLOT, FUNCTION, wrap_binaryfunc_l, \ #NAME "($self, value, /)\n--\n\nReturn self" DOC "value.") From webhook-mailer at python.org Sat Feb 18 16:29:28 2023 From: webhook-mailer at python.org (jaraco) Date: Sat, 18 Feb 2023 21:29:28 -0000 Subject: [Python-checkins] gh-97930: Apply changes from importlib_resources 5.12. (GH-102010) Message-ID: https://github.com/python/cpython/commit/5170caf3059fdacc92d7370eecb9fe4f0c5a1c76 commit: 5170caf3059fdacc92d7370eecb9fe4f0c5a1c76 branch: main author: Jason R. Coombs committer: jaraco date: 2023-02-18T16:29:22-05:00 summary: gh-97930: Apply changes from importlib_resources 5.12. 
(GH-102010) files: A Lib/test/test_importlib/resources/data02/subdirectory/subsubdir/resource.txt A Lib/test/test_importlib/resources/test_custom.py A Misc/NEWS.d/next/Library/2023-02-17-19-00-58.gh-issue-97930.C_nQjb.rst M Lib/importlib/resources/_adapters.py M Lib/importlib/resources/_itertools.py M Lib/importlib/resources/readers.py M Lib/test/test_importlib/resources/_path.py M Lib/test/test_importlib/resources/test_compatibilty_files.py M Lib/test/test_importlib/resources/test_files.py M Lib/test/test_importlib/resources/test_open.py M Lib/test/test_importlib/resources/test_path.py M Lib/test/test_importlib/resources/test_read.py M Lib/test/test_importlib/resources/test_reader.py M Lib/test/test_importlib/resources/test_resource.py M Lib/test/test_importlib/resources/util.py diff --git a/Lib/importlib/resources/_adapters.py b/Lib/importlib/resources/_adapters.py index f22f6bc509a3..50688fbb6666 100644 --- a/Lib/importlib/resources/_adapters.py +++ b/Lib/importlib/resources/_adapters.py @@ -34,9 +34,7 @@ def _io_wrapper(file, mode='r', *args, **kwargs): return TextIOWrapper(file, *args, **kwargs) elif mode == 'rb': return file - raise ValueError( - f"Invalid mode value '{mode}', only 'r' and 'rb' are supported" - ) + raise ValueError(f"Invalid mode value '{mode}', only 'r' and 'rb' are supported") class CompatibilityFiles: diff --git a/Lib/importlib/resources/_itertools.py b/Lib/importlib/resources/_itertools.py index cce05582ffc6..7b775ef5ae89 100644 --- a/Lib/importlib/resources/_itertools.py +++ b/Lib/importlib/resources/_itertools.py @@ -1,35 +1,38 @@ -from itertools import filterfalse +# from more_itertools 9.0 +def only(iterable, default=None, too_long=None): + """If *iterable* has only one item, return it. + If it has zero items, return *default*. + If it has more than one item, raise the exception given by *too_long*, + which is ``ValueError`` by default. + >>> only([], default='missing') + 'missing' + >>> only([1]) + 1 + >>> only([1, 2]) # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... + ValueError: Expected exactly one item in iterable, but got 1, 2, + and perhaps more.' + >>> only([1, 2], too_long=TypeError) # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... + TypeError + Note that :func:`only` attempts to advance *iterable* twice to ensure there + is only one item. See :func:`spy` or :func:`peekable` to check + iterable contents less destructively. + """ + it = iter(iterable) + first_value = next(it, default) -from typing import ( - Callable, - Iterable, - Iterator, - Optional, - Set, - TypeVar, - Union, -) - -# Type and type variable definitions -_T = TypeVar('_T') -_U = TypeVar('_U') - - -def unique_everseen( - iterable: Iterable[_T], key: Optional[Callable[[_T], _U]] = None -) -> Iterator[_T]: - "List unique elements, preserving order. Remember all elements ever seen." 
- # unique_everseen('AAAABBBCCDAABBB') --> A B C D - # unique_everseen('ABBCcAD', str.lower) --> A B C D - seen: Set[Union[_T, _U]] = set() - seen_add = seen.add - if key is None: - for element in filterfalse(seen.__contains__, iterable): - seen_add(element) - yield element + try: + second_value = next(it) + except StopIteration: + pass else: - for element in iterable: - k = key(element) - if k not in seen: - seen_add(k) - yield element + msg = ( + 'Expected exactly one item in iterable, but got {!r}, {!r}, ' + 'and perhaps more.'.format(first_value, second_value) + ) + raise too_long or ValueError(msg) + + return first_value diff --git a/Lib/importlib/resources/readers.py b/Lib/importlib/resources/readers.py index 80cb320dd8bd..c3cdf769cbec 100644 --- a/Lib/importlib/resources/readers.py +++ b/Lib/importlib/resources/readers.py @@ -1,11 +1,12 @@ import collections -import operator +import itertools import pathlib +import operator import zipfile from . import abc -from ._itertools import unique_everseen +from ._itertools import only def remove_duplicates(items): @@ -41,8 +42,10 @@ def open_resource(self, resource): raise FileNotFoundError(exc.args[0]) def is_resource(self, path): - # workaround for `zipfile.Path.is_file` returning true - # for non-existent paths. + """ + Workaround for `zipfile.Path.is_file` returning true + for non-existent paths. + """ target = self.files().joinpath(path) return target.is_file() and target.exists() @@ -67,8 +70,10 @@ def __init__(self, *paths): raise NotADirectoryError('MultiplexedPath only supports directories') def iterdir(self): - files = (file for path in self._paths for file in path.iterdir()) - return unique_everseen(files, key=operator.attrgetter('name')) + children = (child for path in self._paths for child in path.iterdir()) + by_name = operator.attrgetter('name') + groups = itertools.groupby(sorted(children, key=by_name), key=by_name) + return map(self._follow, (locs for name, locs in groups)) def read_bytes(self): raise FileNotFoundError(f'{self} is not a file') @@ -90,6 +95,25 @@ def joinpath(self, *descendants): # Just return something that will not exist. return self._paths[0].joinpath(*descendants) + @classmethod + def _follow(cls, children): + """ + Construct a MultiplexedPath if needed. + + If children contains a sole element, return it. + Otherwise, return a MultiplexedPath of the items. + Unless one of the items is not a Directory, then return the first. + """ + subdirs, one_dir, one_file = itertools.tee(children, 3) + + try: + return only(one_dir) + except ValueError: + try: + return cls(*subdirs) + except NotADirectoryError: + return next(one_file) + def open(self, *args, **kwargs): raise FileNotFoundError(f'{self} is not a file') diff --git a/Lib/test/test_importlib/resources/_path.py b/Lib/test/test_importlib/resources/_path.py index c630e4d3d3f3..1f97c9614696 100644 --- a/Lib/test/test_importlib/resources/_path.py +++ b/Lib/test/test_importlib/resources/_path.py @@ -1,12 +1,16 @@ import pathlib import functools +from typing import Dict, Union + #### -# from jaraco.path 3.4 +# from jaraco.path 3.4.1 + +FilesSpec = Dict[str, Union[str, bytes, 'FilesSpec']] # type: ignore -def build(spec, prefix=pathlib.Path()): +def build(spec: FilesSpec, prefix=pathlib.Path()): """ Build a set of files/directories, as described by the spec. @@ -23,15 +27,17 @@ def build(spec, prefix=pathlib.Path()): ... "baz.py": "# Some code", ... } ... 
} - >>> tmpdir = getfixture('tmpdir') - >>> build(spec, tmpdir) + >>> target = getfixture('tmp_path') + >>> build(spec, target) + >>> target.joinpath('foo/baz.py').read_text(encoding='utf-8') + '# Some code' """ for name, contents in spec.items(): create(contents, pathlib.Path(prefix) / name) @functools.singledispatch -def create(content, path): +def create(content: Union[str, bytes, FilesSpec], path): path.mkdir(exist_ok=True) build(content, prefix=path) # type: ignore @@ -43,7 +49,7 @@ def _(content: bytes, path): @create.register def _(content: str, path): - path.write_text(content) + path.write_text(content, encoding='utf-8') # end from jaraco.path diff --git a/Lib/test/test_importlib/resources/data02/subdirectory/subsubdir/resource.txt b/Lib/test/test_importlib/resources/data02/subdirectory/subsubdir/resource.txt new file mode 100644 index 000000000000..48f587a2d0ac --- /dev/null +++ b/Lib/test/test_importlib/resources/data02/subdirectory/subsubdir/resource.txt @@ -0,0 +1 @@ +a resource \ No newline at end of file diff --git a/Lib/test/test_importlib/resources/test_compatibilty_files.py b/Lib/test/test_importlib/resources/test_compatibilty_files.py index 6fa18a24973f..bcf608d9e2cb 100644 --- a/Lib/test/test_importlib/resources/test_compatibilty_files.py +++ b/Lib/test/test_importlib/resources/test_compatibilty_files.py @@ -64,11 +64,13 @@ def test_orphan_path_name(self): def test_spec_path_open(self): self.assertEqual(self.files.read_bytes(), b'Hello, world!') - self.assertEqual(self.files.read_text(), 'Hello, world!') + self.assertEqual(self.files.read_text(encoding='utf-8'), 'Hello, world!') def test_child_path_open(self): self.assertEqual((self.files / 'a').read_bytes(), b'Hello, world!') - self.assertEqual((self.files / 'a').read_text(), 'Hello, world!') + self.assertEqual( + (self.files / 'a').read_text(encoding='utf-8'), 'Hello, world!' + ) def test_orphan_path_open(self): with self.assertRaises(FileNotFoundError): diff --git a/Lib/test/test_importlib/resources/test_custom.py b/Lib/test/test_importlib/resources/test_custom.py new file mode 100644 index 000000000000..73127209a276 --- /dev/null +++ b/Lib/test/test_importlib/resources/test_custom.py @@ -0,0 +1,46 @@ +import unittest +import contextlib +import pathlib + +from test.support import os_helper + +from importlib import resources +from importlib.resources.abc import TraversableResources, ResourceReader +from . import util + + +class SimpleLoader: + """ + A simple loader that only implements a resource reader. + """ + + def __init__(self, reader: ResourceReader): + self.reader = reader + + def get_resource_reader(self, package): + return self.reader + + +class MagicResources(TraversableResources): + """ + Magically returns the resources at path. 
+ """ + + def __init__(self, path: pathlib.Path): + self.path = path + + def files(self): + return self.path + + +class CustomTraversableResourcesTests(unittest.TestCase): + def setUp(self): + self.fixtures = contextlib.ExitStack() + self.addCleanup(self.fixtures.close) + + def test_custom_loader(self): + temp_dir = self.fixtures.enter_context(os_helper.temp_dir()) + loader = SimpleLoader(MagicResources(temp_dir)) + pkg = util.create_package_from_loader(loader) + files = resources.files(pkg) + assert files is temp_dir diff --git a/Lib/test/test_importlib/resources/test_files.py b/Lib/test/test_importlib/resources/test_files.py index fe813ae7d088..1450cfb31092 100644 --- a/Lib/test/test_importlib/resources/test_files.py +++ b/Lib/test/test_importlib/resources/test_files.py @@ -85,7 +85,7 @@ def test_module_resources(self): _path.build(spec, self.site_dir) import mod - actual = resources.files(mod).joinpath('res.txt').read_text() + actual = resources.files(mod).joinpath('res.txt').read_text(encoding='utf-8') assert actual == spec['res.txt'] @@ -99,7 +99,7 @@ def test_implicit_files(self): '__init__.py': textwrap.dedent( """ import importlib.resources as res - val = res.files().joinpath('res.txt').read_text() + val = res.files().joinpath('res.txt').read_text(encoding='utf-8') """ ), 'res.txt': 'resources are the best', diff --git a/Lib/test/test_importlib/resources/test_open.py b/Lib/test/test_importlib/resources/test_open.py index 0554c41ba67d..86becb4bfaad 100644 --- a/Lib/test/test_importlib/resources/test_open.py +++ b/Lib/test/test_importlib/resources/test_open.py @@ -15,7 +15,7 @@ def execute(self, package, path): class CommonTextTests(util.CommonTests, unittest.TestCase): def execute(self, package, path): target = resources.files(package).joinpath(path) - with target.open(): + with target.open(encoding='utf-8'): pass @@ -28,7 +28,7 @@ def test_open_binary(self): def test_open_text_default_encoding(self): target = resources.files(self.data) / 'utf-8.file' - with target.open() as fp: + with target.open(encoding='utf-8') as fp: result = fp.read() self.assertEqual(result, 'Hello, UTF-8 world!\n') @@ -39,7 +39,9 @@ def test_open_text_given_encoding(self): self.assertEqual(result, 'Hello, UTF-16 world!\n') def test_open_text_with_errors(self): - # Raises UnicodeError without the 'errors' argument. + """ + Raises UnicodeError without the 'errors' argument. + """ target = resources.files(self.data) / 'utf-16.file' with target.open(encoding='utf-8', errors='strict') as fp: self.assertRaises(UnicodeError, fp.read) @@ -54,11 +56,13 @@ def test_open_text_with_errors(self): def test_open_binary_FileNotFoundError(self): target = resources.files(self.data) / 'does-not-exist' - self.assertRaises(FileNotFoundError, target.open, 'rb') + with self.assertRaises(FileNotFoundError): + target.open('rb') def test_open_text_FileNotFoundError(self): target = resources.files(self.data) / 'does-not-exist' - self.assertRaises(FileNotFoundError, target.open) + with self.assertRaises(FileNotFoundError): + target.open(encoding='utf-8') class OpenDiskTests(OpenTests, unittest.TestCase): diff --git a/Lib/test/test_importlib/resources/test_path.py b/Lib/test/test_importlib/resources/test_path.py index adcf75feea78..34a6bdd2d58b 100644 --- a/Lib/test/test_importlib/resources/test_path.py +++ b/Lib/test/test_importlib/resources/test_path.py @@ -14,9 +14,12 @@ def execute(self, package, path): class PathTests: def test_reading(self): - # Path should be readable. 
- # Test also implicitly verifies the returned object is a pathlib.Path - # instance. + """ + Path should be readable. + + Test also implicitly verifies the returned object is a pathlib.Path + instance. + """ target = resources.files(self.data) / 'utf-8.file' with resources.as_file(target) as path: self.assertTrue(path.name.endswith("utf-8.file"), repr(path)) @@ -51,8 +54,10 @@ def setUp(self): class PathZipTests(PathTests, util.ZipSetup, unittest.TestCase): def test_remove_in_context_manager(self): - # It is not an error if the file that was temporarily stashed on the - # file system is removed inside the `with` stanza. + """ + It is not an error if the file that was temporarily stashed on the + file system is removed inside the `with` stanza. + """ target = resources.files(self.data) / 'utf-8.file' with resources.as_file(target) as path: path.unlink() diff --git a/Lib/test/test_importlib/resources/test_read.py b/Lib/test/test_importlib/resources/test_read.py index 0ca8ee9d0285..088982681e8b 100644 --- a/Lib/test/test_importlib/resources/test_read.py +++ b/Lib/test/test_importlib/resources/test_read.py @@ -12,7 +12,7 @@ def execute(self, package, path): class CommonTextTests(util.CommonTests, unittest.TestCase): def execute(self, package, path): - resources.files(package).joinpath(path).read_text() + resources.files(package).joinpath(path).read_text(encoding='utf-8') class ReadTests: @@ -21,7 +21,11 @@ def test_read_bytes(self): self.assertEqual(result, b'\0\1\2\3') def test_read_text_default_encoding(self): - result = resources.files(self.data).joinpath('utf-8.file').read_text() + result = ( + resources.files(self.data) + .joinpath('utf-8.file') + .read_text(encoding='utf-8') + ) self.assertEqual(result, 'Hello, UTF-8 world!\n') def test_read_text_given_encoding(self): @@ -33,7 +37,9 @@ def test_read_text_given_encoding(self): self.assertEqual(result, 'Hello, UTF-16 world!\n') def test_read_text_with_errors(self): - # Raises UnicodeError without the 'errors' argument. + """ + Raises UnicodeError without the 'errors' argument. + """ target = resources.files(self.data) / 'utf-16.file' self.assertRaises(UnicodeError, target.read_text, encoding='utf-8') result = target.read_text(encoding='utf-8', errors='ignore') diff --git a/Lib/test/test_importlib/resources/test_reader.py b/Lib/test/test_importlib/resources/test_reader.py index 4fd9e6bbe428..8670f72a3345 100644 --- a/Lib/test/test_importlib/resources/test_reader.py +++ b/Lib/test/test_importlib/resources/test_reader.py @@ -81,6 +81,17 @@ def test_join_path_compound(self): path = MultiplexedPath(self.folder) assert not path.joinpath('imaginary/foo.py').exists() + def test_join_path_common_subdir(self): + prefix = os.path.abspath(os.path.join(__file__, '..')) + data01 = os.path.join(prefix, 'data01') + data02 = os.path.join(prefix, 'data02') + path = MultiplexedPath(data01, data02) + self.assertIsInstance(path.joinpath('subdirectory'), MultiplexedPath) + self.assertEqual( + str(path.joinpath('subdirectory', 'subsubdir'))[len(prefix) + 1 :], + os.path.join('data02', 'subdirectory', 'subsubdir'), + ) + def test_repr(self): self.assertEqual( repr(MultiplexedPath(self.folder)), diff --git a/Lib/test/test_importlib/resources/test_resource.py b/Lib/test/test_importlib/resources/test_resource.py index f7e3abbdc805..6f75cf57f03d 100644 --- a/Lib/test/test_importlib/resources/test_resource.py +++ b/Lib/test/test_importlib/resources/test_resource.py @@ -1,3 +1,4 @@ +import contextlib import sys import unittest import uuid @@ -7,7 +8,7 @@ from . 
import zipdata01, zipdata02 from . import util from importlib import resources, import_module -from test.support import import_helper +from test.support import import_helper, os_helper from test.support.os_helper import unlink @@ -69,10 +70,12 @@ def test_resource_missing(self): class ResourceCornerCaseTests(unittest.TestCase): def test_package_has_no_reader_fallback(self): - # Test odd ball packages which: + """ + Test odd ball packages which: # 1. Do not have a ResourceReader as a loader # 2. Are not on the file system # 3. Are not in a zip file + """ module = util.create_package( file=data01, path=data01.__file__, contents=['A', 'B', 'C'] ) @@ -138,82 +141,71 @@ def test_unrelated_contents(self): ) + at contextlib.contextmanager +def zip_on_path(dir): + data_path = pathlib.Path(zipdata01.__file__) + source_zip_path = data_path.parent.joinpath('ziptestdata.zip') + zip_path = pathlib.Path(dir) / f'{uuid.uuid4()}.zip' + zip_path.write_bytes(source_zip_path.read_bytes()) + sys.path.append(str(zip_path)) + import_module('ziptestdata') + + try: + yield + finally: + with contextlib.suppress(ValueError): + sys.path.remove(str(zip_path)) + + with contextlib.suppress(KeyError): + del sys.path_importer_cache[str(zip_path)] + del sys.modules['ziptestdata'] + + with contextlib.suppress(OSError): + unlink(zip_path) + + class DeletingZipsTest(unittest.TestCase): """Having accessed resources in a zip file should not keep an open reference to the zip. """ - ZIP_MODULE = zipdata01 - def setUp(self): + self.fixtures = contextlib.ExitStack() + self.addCleanup(self.fixtures.close) + modules = import_helper.modules_setup() self.addCleanup(import_helper.modules_cleanup, *modules) - data_path = pathlib.Path(self.ZIP_MODULE.__file__) - data_dir = data_path.parent - self.source_zip_path = data_dir / 'ziptestdata.zip' - self.zip_path = pathlib.Path(f'{uuid.uuid4()}.zip').absolute() - self.zip_path.write_bytes(self.source_zip_path.read_bytes()) - sys.path.append(str(self.zip_path)) - self.data = import_module('ziptestdata') - - def tearDown(self): - try: - sys.path.remove(str(self.zip_path)) - except ValueError: - pass - - try: - del sys.path_importer_cache[str(self.zip_path)] - del sys.modules[self.data.__name__] - except KeyError: - pass - - try: - unlink(self.zip_path) - except OSError: - # If the test fails, this will probably fail too - pass + temp_dir = self.fixtures.enter_context(os_helper.temp_dir()) + self.fixtures.enter_context(zip_on_path(temp_dir)) def test_iterdir_does_not_keep_open(self): - c = [item.name for item in resources.files('ziptestdata').iterdir()] - self.zip_path.unlink() - del c + [item.name for item in resources.files('ziptestdata').iterdir()] def test_is_file_does_not_keep_open(self): - c = resources.files('ziptestdata').joinpath('binary.file').is_file() - self.zip_path.unlink() - del c + resources.files('ziptestdata').joinpath('binary.file').is_file() def test_is_file_failure_does_not_keep_open(self): - c = resources.files('ziptestdata').joinpath('not-present').is_file() - self.zip_path.unlink() - del c + resources.files('ziptestdata').joinpath('not-present').is_file() @unittest.skip("Desired but not supported.") def test_as_file_does_not_keep_open(self): # pragma: no cover - c = resources.as_file(resources.files('ziptestdata') / 'binary.file') - self.zip_path.unlink() - del c + resources.as_file(resources.files('ziptestdata') / 'binary.file') def test_entered_path_does_not_keep_open(self): - # This is what certifi does on import to make its bundle - # available for the process 
duration. - c = resources.as_file( - resources.files('ziptestdata') / 'binary.file' - ).__enter__() - self.zip_path.unlink() - del c + """ + Mimic what certifi does on import to make its bundle + available for the process duration. + """ + resources.as_file(resources.files('ziptestdata') / 'binary.file').__enter__() def test_read_binary_does_not_keep_open(self): - c = resources.files('ziptestdata').joinpath('binary.file').read_bytes() - self.zip_path.unlink() - del c + resources.files('ziptestdata').joinpath('binary.file').read_bytes() def test_read_text_does_not_keep_open(self): - c = resources.files('ziptestdata').joinpath('utf-8.file').read_text() - self.zip_path.unlink() - del c + resources.files('ziptestdata').joinpath('utf-8.file').read_text( + encoding='utf-8' + ) class ResourceFromNamespaceTest01(unittest.TestCase): diff --git a/Lib/test/test_importlib/resources/util.py b/Lib/test/test_importlib/resources/util.py index 1e72b91ff6b3..dbe6ee814766 100644 --- a/Lib/test/test_importlib/resources/util.py +++ b/Lib/test/test_importlib/resources/util.py @@ -80,32 +80,44 @@ def execute(self, package, path): """ def test_package_name(self): - # Passing in the package name should succeed. + """ + Passing in the package name should succeed. + """ self.execute(data01.__name__, 'utf-8.file') def test_package_object(self): - # Passing in the package itself should succeed. + """ + Passing in the package itself should succeed. + """ self.execute(data01, 'utf-8.file') def test_string_path(self): - # Passing in a string for the path should succeed. + """ + Passing in a string for the path should succeed. + """ path = 'utf-8.file' self.execute(data01, path) def test_pathlib_path(self): - # Passing in a pathlib.PurePath object for the path should succeed. + """ + Passing in a pathlib.PurePath object for the path should succeed. + """ path = pathlib.PurePath('utf-8.file') self.execute(data01, path) def test_importing_module_as_side_effect(self): - # The anchor package can already be imported. + """ + The anchor package can already be imported. + """ del sys.modules[data01.__name__] self.execute(data01.__name__, 'utf-8.file') def test_missing_path(self): - # Attempting to open or read or request the path for a - # non-existent path should succeed if open_resource - # can return a viable data stream. + """ + Attempting to open or read or request the path for a + non-existent path should succeed if open_resource + can return a viable data stream. + """ bytes_data = io.BytesIO(b'Hello, world!') package = create_package(file=bytes_data, path=FileNotFoundError()) self.execute(package, 'utf-8.file') diff --git a/Misc/NEWS.d/next/Library/2023-02-17-19-00-58.gh-issue-97930.C_nQjb.rst b/Misc/NEWS.d/next/Library/2023-02-17-19-00-58.gh-issue-97930.C_nQjb.rst new file mode 100644 index 000000000000..967e13f752bc --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-17-19-00-58.gh-issue-97930.C_nQjb.rst @@ -0,0 +1,4 @@ +Apply changes from `importlib_resources 5.12 +`_, +including fix for ``MultiplexedPath`` to support directories in multiple +namespaces (python/importlib_resources#265). 
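To make the MultiplexedPath part of that change concrete, here is a small sketch (the package name and layout are invented for illustration): a namespace package company is supplied by two sys.path entries, and both contribute a templates subdirectory:

    site-a/company/templates/base.txt
    site-b/company/templates/extra.txt

    import importlib.resources as resources

    root = resources.files('company')        # a MultiplexedPath over site-a and site-b
    templates = root.joinpath('templates')   # with 5.12, still a combined view of both dirs
    names = {entry.name for entry in templates.iterdir()}
    # expected: {'base.txt', 'extra.txt'}
    text = templates.joinpath('base.txt').read_text(encoding='utf-8')

Previously, joinpath() on such a shared subdirectory bound to a single underlying directory, so resources contributed by the other location were not reachable through the returned traversable.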
From webhook-mailer at python.org Sat Feb 18 18:57:25 2023 From: webhook-mailer at python.org (gvanrossum) Date: Sat, 18 Feb 2023 23:57:25 -0000 Subject: [Python-checkins] [3.11] gh-85747: Active voice & suggested edits, 'running/stopping loop' & 'callbacks' subsections of asyncio-eventloop.rst (GH-100270) (#102006) Message-ID: https://github.com/python/cpython/commit/5220c2df9eb5bf0333b2bdfceea98ec80a140fd6 commit: 5220c2df9eb5bf0333b2bdfceea98ec80a140fd6 branch: 3.11 author: Brian Skinn committer: gvanrossum date: 2023-02-18T15:57:06-08:00 summary: [3.11] gh-85747: Active voice & suggested edits, 'running/stopping loop' & 'callbacks' subsections of asyncio-eventloop.rst (GH-100270) (#102006) Note: the `timeout` parameter was not added until 3.12. (cherry picked from commit c4de6b1d52304a0a9cdfafc1dad5098993710404) Co-authored-by: C.A.M. Gerlach Co-authored-by: Terry Jan Reedy . Co-authored-by: Brian Skinn files: M Doc/library/asyncio-eventloop.rst diff --git a/Doc/library/asyncio-eventloop.rst b/Doc/library/asyncio-eventloop.rst index c7343826be3f..020d8a0ca8a8 100644 --- a/Doc/library/asyncio-eventloop.rst +++ b/Doc/library/asyncio-eventloop.rst @@ -192,12 +192,15 @@ Running and stopping the loop .. coroutinemethod:: loop.shutdown_default_executor() Schedule the closure of the default executor and wait for it to join all of - the threads in the :class:`ThreadPoolExecutor`. After calling this method, a - :exc:`RuntimeError` will be raised if :meth:`loop.run_in_executor` is called - while using the default executor. + the threads in the :class:`~concurrent.futures.ThreadPoolExecutor`. + Once this method has been called, + using the default executor with :meth:`loop.run_in_executor` + will raise a :exc:`RuntimeError`. - Note that there is no need to call this function when - :func:`asyncio.run` is used. + .. note:: + + Do not call this method when using :func:`asyncio.run`, + as the latter handles default executor shutdown automatically. .. versionadded:: 3.9 @@ -210,22 +213,23 @@ Scheduling callbacks Schedule the *callback* :term:`callback` to be called with *args* arguments at the next iteration of the event loop. + Return an instance of :class:`asyncio.Handle`, + which can be used later to cancel the callback. + Callbacks are called in the order in which they are registered. Each callback will be called exactly once. - An optional keyword-only *context* argument allows specifying a + The optional keyword-only *context* argument specifies a custom :class:`contextvars.Context` for the *callback* to run in. - The current context is used when no *context* is provided. - - An instance of :class:`asyncio.Handle` is returned, which can be - used later to cancel the callback. + Callbacks use the current context when no *context* is provided. - This method is not thread-safe. + Unlike :meth:`call_soon_threadsafe`, this method is not thread-safe. .. method:: loop.call_soon_threadsafe(callback, *args, context=None) - A thread-safe variant of :meth:`call_soon`. Must be used to - schedule callbacks *from another thread*. + A thread-safe variant of :meth:`call_soon`. When scheduling callbacks from + another thread, this function *must* be used, since :meth:`call_soon` is not + thread-safe. Raises :exc:`RuntimeError` if called on a loop that's been closed. 
This can happen on a secondary thread when the main application is From webhook-mailer at python.org Sat Feb 18 18:57:48 2023 From: webhook-mailer at python.org (gvanrossum) Date: Sat, 18 Feb 2023 23:57:48 -0000 Subject: [Python-checkins] [3.10] gh-85747: Active voice & suggested edits, 'running/stopping loop' & 'callbacks' subsections of asyncio-eventloop.rst (gh-100270) (#102005) Message-ID: https://github.com/python/cpython/commit/2b1a62c6695e7b51e941d6b6be71f1fef676d360 commit: 2b1a62c6695e7b51e941d6b6be71f1fef676d360 branch: 3.10 author: Brian Skinn committer: gvanrossum date: 2023-02-18T15:57:41-08:00 summary: [3.10] gh-85747: Active voice & suggested edits, 'running/stopping loop' & 'callbacks' subsections of asyncio-eventloop.rst (gh-100270) (#102005) Note: The `timeout` parameter was not added until Python 3.12. (cherry picked from commit c4de6b1d52304a0a9cdfafc1dad5098993710404) Co-authored-by: C.A.M. Gerlach Co-authored-by: Terry Jan Reedy files: M Doc/library/asyncio-eventloop.rst diff --git a/Doc/library/asyncio-eventloop.rst b/Doc/library/asyncio-eventloop.rst index 80440da6be9b..759e30a73f27 100644 --- a/Doc/library/asyncio-eventloop.rst +++ b/Doc/library/asyncio-eventloop.rst @@ -192,12 +192,15 @@ Running and stopping the loop .. coroutinemethod:: loop.shutdown_default_executor() Schedule the closure of the default executor and wait for it to join all of - the threads in the :class:`ThreadPoolExecutor`. After calling this method, a - :exc:`RuntimeError` will be raised if :meth:`loop.run_in_executor` is called - while using the default executor. + the threads in the :class:`~concurrent.futures.ThreadPoolExecutor`. + Once this method has been called, + using the default executor with :meth:`loop.run_in_executor` + will raise a :exc:`RuntimeError`. - Note that there is no need to call this function when - :func:`asyncio.run` is used. + .. note:: + + Do not call this method when using :func:`asyncio.run`, + as the latter handles default executor shutdown automatically. .. versionadded:: 3.9 @@ -210,22 +213,23 @@ Scheduling callbacks Schedule the *callback* :term:`callback` to be called with *args* arguments at the next iteration of the event loop. + Return an instance of :class:`asyncio.Handle`, + which can be used later to cancel the callback. + Callbacks are called in the order in which they are registered. Each callback will be called exactly once. - An optional keyword-only *context* argument allows specifying a + The optional keyword-only *context* argument specifies a custom :class:`contextvars.Context` for the *callback* to run in. - The current context is used when no *context* is provided. - - An instance of :class:`asyncio.Handle` is returned, which can be - used later to cancel the callback. + Callbacks use the current context when no *context* is provided. - This method is not thread-safe. + Unlike :meth:`call_soon_threadsafe`, this method is not thread-safe. .. method:: loop.call_soon_threadsafe(callback, *args, context=None) - A thread-safe variant of :meth:`call_soon`. Must be used to - schedule callbacks *from another thread*. + A thread-safe variant of :meth:`call_soon`. When scheduling callbacks from + another thread, this function *must* be used, since :meth:`call_soon` is not + thread-safe. Raises :exc:`RuntimeError` if called on a loop that's been closed. 
This can happen on a secondary thread when the main application is From webhook-mailer at python.org Sat Feb 18 19:22:13 2023 From: webhook-mailer at python.org (rhettinger) Date: Sun, 19 Feb 2023 00:22:13 -0000 Subject: [Python-checkins] GH-84783: Make the slice object hashable (GH-101264) Message-ID: https://github.com/python/cpython/commit/61f1e67c6fcbf80eb9be2b75f7d62954e28c89e6 commit: 61f1e67c6fcbf80eb9be2b75f7d62954e28c89e6 branch: main author: Furkan Onder committer: rhettinger date: 2023-02-18T18:22:02-06:00 summary: GH-84783: Make the slice object hashable (GH-101264) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-11-23-14-06.gh-issue-84783._P5sMa.rst M Doc/library/functions.rst M Lib/test/test_capi/test_misc.py M Lib/test/test_doctest.py M Lib/test/test_slice.py M Objects/sliceobject.c M Objects/tupleobject.c diff --git a/Doc/library/functions.rst b/Doc/library/functions.rst index 658d6768457d..3ff288490251 100644 --- a/Doc/library/functions.rst +++ b/Doc/library/functions.rst @@ -1635,6 +1635,9 @@ are always available. They are listed here in alphabetical order. example: ``a[start:stop:step]`` or ``a[start:stop, i]``. See :func:`itertools.islice` for an alternate version that returns an iterator. + .. versionchanged:: 3.12 + Slice objects are now :term:`hashable` (provided :attr:`~slice.start`, + :attr:`~slice.stop`, and :attr:`~slice.step` are hashable). .. function:: sorted(iterable, /, *, key=None, reverse=False) diff --git a/Lib/test/test_capi/test_misc.py b/Lib/test/test_capi/test_misc.py index f26b4723d1e6..ad099c61463b 100644 --- a/Lib/test/test_capi/test_misc.py +++ b/Lib/test/test_capi/test_misc.py @@ -419,11 +419,6 @@ def __setitem__(self, index, value): with self.assertRaises(TypeError): _testcapi.sequence_set_slice(None, 1, 3, 'xy') - mapping = {1: 'a', 2: 'b', 3: 'c'} - with self.assertRaises(TypeError): - _testcapi.sequence_set_slice(mapping, 1, 3, 'xy') - self.assertEqual(mapping, {1: 'a', 2: 'b', 3: 'c'}) - def test_sequence_del_slice(self): # Correct case: data = [1, 2, 3, 4, 5] @@ -459,7 +454,7 @@ def __delitem__(self, index): _testcapi.sequence_del_slice(None, 1, 3) mapping = {1: 'a', 2: 'b', 3: 'c'} - with self.assertRaises(TypeError): + with self.assertRaises(KeyError): _testcapi.sequence_del_slice(mapping, 1, 3) self.assertEqual(mapping, {1: 'a', 2: 'b', 3: 'c'}) diff --git a/Lib/test/test_doctest.py b/Lib/test/test_doctest.py index 65e215f1cdda..3491d4cdb1c1 100644 --- a/Lib/test/test_doctest.py +++ b/Lib/test/test_doctest.py @@ -707,7 +707,7 @@ def non_Python_modules(): r""" >>> import builtins >>> tests = doctest.DocTestFinder().find(builtins) - >>> 825 < len(tests) < 845 # approximate number of objects with docstrings + >>> 830 < len(tests) < 850 # approximate number of objects with docstrings True >>> real_tests = [t for t in tests if len(t.examples) > 0] >>> len(real_tests) # objects that actually have doctests diff --git a/Lib/test/test_slice.py b/Lib/test/test_slice.py index 03fde3275e14..c35a2293f790 100644 --- a/Lib/test/test_slice.py +++ b/Lib/test/test_slice.py @@ -80,10 +80,16 @@ def test_repr(self): self.assertEqual(repr(slice(1, 2, 3)), "slice(1, 2, 3)") def test_hash(self): - # Verify clearing of SF bug #800796 - self.assertRaises(TypeError, hash, slice(5)) + self.assertEqual(hash(slice(5)), slice(5).__hash__()) + self.assertEqual(hash(slice(1, 2)), slice(1, 2).__hash__()) + self.assertEqual(hash(slice(1, 2, 3)), slice(1, 2, 3).__hash__()) + self.assertNotEqual(slice(5), slice(6)) + + with self.assertRaises(TypeError): + hash(slice(1, 
2, [])) + with self.assertRaises(TypeError): - slice(5).__hash__() + hash(slice(4, {})) def test_cmp(self): s1 = slice(1, 2, 3) diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-11-23-14-06.gh-issue-84783._P5sMa.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-11-23-14-06.gh-issue-84783._P5sMa.rst new file mode 100644 index 000000000000..e1c851a0825a --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-11-23-14-06.gh-issue-84783._P5sMa.rst @@ -0,0 +1 @@ +Make the slice object hashable. diff --git a/Objects/sliceobject.c b/Objects/sliceobject.c index 5694bd9c661f..5d2e6ad522bc 100644 --- a/Objects/sliceobject.c +++ b/Objects/sliceobject.c @@ -628,6 +628,42 @@ slice_traverse(PySliceObject *v, visitproc visit, void *arg) return 0; } +/* code based on tuplehash() of Objects/tupleobject.c */ +#if SIZEOF_PY_UHASH_T > 4 +#define _PyHASH_XXPRIME_1 ((Py_uhash_t)11400714785074694791ULL) +#define _PyHASH_XXPRIME_2 ((Py_uhash_t)14029467366897019727ULL) +#define _PyHASH_XXPRIME_5 ((Py_uhash_t)2870177450012600261ULL) +#define _PyHASH_XXROTATE(x) ((x << 31) | (x >> 33)) /* Rotate left 31 bits */ +#else +#define _PyHASH_XXPRIME_1 ((Py_uhash_t)2654435761UL) +#define _PyHASH_XXPRIME_2 ((Py_uhash_t)2246822519UL) +#define _PyHASH_XXPRIME_5 ((Py_uhash_t)374761393UL) +#define _PyHASH_XXROTATE(x) ((x << 13) | (x >> 19)) /* Rotate left 13 bits */ +#endif + +static Py_hash_t +slicehash(PySliceObject *v) +{ + Py_uhash_t acc = _PyHASH_XXPRIME_5; +#define _PyHASH_SLICE_PART(com) { \ + Py_uhash_t lane = PyObject_Hash(v->com); \ + if(lane == (Py_uhash_t)-1) { \ + return -1; \ + } \ + acc += lane * _PyHASH_XXPRIME_2; \ + acc = _PyHASH_XXROTATE(acc); \ + acc *= _PyHASH_XXPRIME_1; \ +} + _PyHASH_SLICE_PART(start); + _PyHASH_SLICE_PART(stop); + _PyHASH_SLICE_PART(step); +#undef _PyHASH_SLICE_PART + if(acc == (Py_uhash_t)-1) { + return 1546275796; + } + return acc; +} + PyTypeObject PySlice_Type = { PyVarObject_HEAD_INIT(&PyType_Type, 0) "slice", /* Name of this type */ @@ -642,7 +678,7 @@ PyTypeObject PySlice_Type = { 0, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ - PyObject_HashNotImplemented, /* tp_hash */ + (hashfunc)slicehash, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ PyObject_GenericGetAttr, /* tp_getattro */ diff --git a/Objects/tupleobject.c b/Objects/tupleobject.c index e1b9953226c0..7d6d0e1bea24 100644 --- a/Objects/tupleobject.c +++ b/Objects/tupleobject.c @@ -288,7 +288,7 @@ tuplerepr(PyTupleObject *v) /* Hash for tuples. This is a slightly simplified version of the xxHash non-cryptographic hash: - - we do not use any parallellism, there is only 1 accumulator. + - we do not use any parallelism, there is only 1 accumulator. - we drop the final mixing since this is just a permutation of the output space: it does not help against collisions. - at the end, we mangle the length with a single constant. 
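A minimal sketch of the behaviour the slice-hashing change above enables, assuming a CPython build that includes GH-84783 (3.12 or later); the dictionary-cache use case is illustrative only and not part of the commit:

    # Slices whose start/stop/step are themselves hashable can now be hashed,
    # so they can serve as dictionary keys (e.g. caching per-slice results).
    cache = {}
    key = slice(1, 10, 2)
    cache[key] = "rows 1:10:2"
    assert hash(slice(1, 10, 2)) == hash(key)
    print(cache[slice(1, 10, 2)])        # -> rows 1:10:2

    # An unhashable component still raises TypeError, matching the new tests.
    try:
        hash(slice(1, 2, []))
    except TypeError:
        print("unhashable step is still rejected")
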
From webhook-mailer at python.org Sat Feb 18 19:55:50 2023 From: webhook-mailer at python.org (hauntsaninja) Date: Sun, 19 Feb 2023 00:55:50 -0000 Subject: [Python-checkins] Fix incorrectly documented attribute in csv docs (#101250) Message-ID: https://github.com/python/cpython/commit/36b670908b3546f46283aae4dbf311e53289f3d1 commit: 36b670908b3546f46283aae4dbf311e53289f3d1 branch: main author: Reza Rastak committer: hauntsaninja <12621235+hauntsaninja at users.noreply.github.com> date: 2023-02-18T16:55:43-08:00 summary: Fix incorrectly documented attribute in csv docs (#101250) files: M Doc/library/csv.rst diff --git a/Doc/library/csv.rst b/Doc/library/csv.rst index 41f11505aa10..f1776554d8b9 100644 --- a/Doc/library/csv.rst +++ b/Doc/library/csv.rst @@ -458,7 +458,7 @@ Reader objects have the following public attributes: DictReader objects have the following public attribute: -.. attribute:: csvreader.fieldnames +.. attribute:: DictReader.fieldnames If not passed as a parameter when creating the object, this attribute is initialized upon first access or when the first record is read from the From webhook-mailer at python.org Sat Feb 18 20:02:25 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 19 Feb 2023 01:02:25 -0000 Subject: [Python-checkins] Fix incorrectly documented attribute in csv docs (GH-101250) Message-ID: https://github.com/python/cpython/commit/d104234f51bf1f88589ea705a6cfe1e674b4fe03 commit: d104234f51bf1f88589ea705a6cfe1e674b4fe03 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-18T17:02:18-08:00 summary: Fix incorrectly documented attribute in csv docs (GH-101250) (cherry picked from commit 36b670908b3546f46283aae4dbf311e53289f3d1) Co-authored-by: Reza Rastak files: M Doc/library/csv.rst diff --git a/Doc/library/csv.rst b/Doc/library/csv.rst index 3a41ae284f32..5e54f4a3e4fc 100644 --- a/Doc/library/csv.rst +++ b/Doc/library/csv.rst @@ -450,7 +450,7 @@ Reader objects have the following public attributes: DictReader objects have the following public attribute: -.. attribute:: csvreader.fieldnames +.. attribute:: DictReader.fieldnames If not passed as a parameter when creating the object, this attribute is initialized upon first access or when the first record is read from the From webhook-mailer at python.org Sat Feb 18 20:04:07 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 19 Feb 2023 01:04:07 -0000 Subject: [Python-checkins] Fix incorrectly documented attribute in csv docs (GH-101250) Message-ID: https://github.com/python/cpython/commit/62db23c9445b5fe6a37d70cfb5661e443002de9f commit: 62db23c9445b5fe6a37d70cfb5661e443002de9f branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-18T17:04:01-08:00 summary: Fix incorrectly documented attribute in csv docs (GH-101250) (cherry picked from commit 36b670908b3546f46283aae4dbf311e53289f3d1) Co-authored-by: Reza Rastak files: M Doc/library/csv.rst diff --git a/Doc/library/csv.rst b/Doc/library/csv.rst index f7e85f2baa73..0aad2a0d523c 100644 --- a/Doc/library/csv.rst +++ b/Doc/library/csv.rst @@ -454,7 +454,7 @@ Reader objects have the following public attributes: DictReader objects have the following public attribute: -.. attribute:: csvreader.fieldnames +.. 
attribute:: DictReader.fieldnames If not passed as a parameter when creating the object, this attribute is initialized upon first access or when the first record is read from the From webhook-mailer at python.org Sat Feb 18 20:06:10 2023 From: webhook-mailer at python.org (hauntsaninja) Date: Sun, 19 Feb 2023 01:06:10 -0000 Subject: [Python-checkins] gh-99735: Use required=True in argparse subparsers example (#100927) Message-ID: https://github.com/python/cpython/commit/6aab56f3c2ee331116eba242d2fcdca592577328 commit: 6aab56f3c2ee331116eba242d2fcdca592577328 branch: main author: Patricio Paez committer: hauntsaninja <12621235+hauntsaninja at users.noreply.github.com> date: 2023-02-18T17:06:03-08:00 summary: gh-99735: Use required=True in argparse subparsers example (#100927) files: M Doc/library/argparse.rst diff --git a/Doc/library/argparse.rst b/Doc/library/argparse.rst index dbaa5d0d9b99..34b4c61649b9 100644 --- a/Doc/library/argparse.rst +++ b/Doc/library/argparse.rst @@ -1867,7 +1867,7 @@ Sub-commands ... >>> # create the top-level parser >>> parser = argparse.ArgumentParser() - >>> subparsers = parser.add_subparsers() + >>> subparsers = parser.add_subparsers(required=True) >>> >>> # create the parser for the "foo" command >>> parser_foo = subparsers.add_parser('foo') From webhook-mailer at python.org Sat Feb 18 21:33:15 2023 From: webhook-mailer at python.org (jaraco) Date: Sun, 19 Feb 2023 02:33:15 -0000 Subject: [Python-checkins] gh-97930: Also include subdirectory in makefile. (#102030) Message-ID: https://github.com/python/cpython/commit/072935951f7cd44b40ee37fe561478b2e431c2fb commit: 072935951f7cd44b40ee37fe561478b2e431c2fb branch: main author: Jason R. Coombs committer: jaraco date: 2023-02-18T21:32:50-05:00 summary: gh-97930: Also include subdirectory in makefile. 
(#102030) files: M Makefile.pre.in diff --git a/Makefile.pre.in b/Makefile.pre.in index 490483a71201..b28c6067a535 100644 --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -2075,6 +2075,8 @@ TESTSUBDIRS= idlelib/idle_test \ test/test_importlib/resources/data01/subdirectory \ test/test_importlib/resources/data02 \ test/test_importlib/resources/data02/one \ + test/test_importlib/resources/data02/subdirectory \ + test/test_importlib/resources/data02/subdirectory/subsubdir \ test/test_importlib/resources/data02/two \ test/test_importlib/resources/data03 \ test/test_importlib/resources/data03/namespace \ From webhook-mailer at python.org Sun Feb 19 05:18:51 2023 From: webhook-mailer at python.org (ezio-melotti) Date: Sun, 19 Feb 2023 10:18:51 -0000 Subject: [Python-checkins] gh-100210: Correct the comment link for unescaping HTML (#100212) Message-ID: https://github.com/python/cpython/commit/9a07eff628c1cd88b7cdda88a8fd0db3fe7ea552 commit: 9a07eff628c1cd88b7cdda88a8fd0db3fe7ea552 branch: main author: Jean-Christophe Amiel committer: ezio-melotti date: 2023-02-19T11:18:12+01:00 summary: gh-100210: Correct the comment link for unescaping HTML (#100212) gh-100210: correct the comment link for unescaping HTML files: M Lib/html/__init__.py diff --git a/Lib/html/__init__.py b/Lib/html/__init__.py index da0a0a3ce70e..1543460ca33b 100644 --- a/Lib/html/__init__.py +++ b/Lib/html/__init__.py @@ -25,7 +25,7 @@ def escape(s, quote=True): return s -# see http://www.w3.org/TR/html5/syntax.html#tokenizing-character-references +# see https://html.spec.whatwg.org/multipage/parsing.html#numeric-character-reference-end-state _invalid_charrefs = { 0x00: '\ufffd', # REPLACEMENT CHARACTER From webhook-mailer at python.org Sun Feb 19 05:41:59 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 19 Feb 2023 10:41:59 -0000 Subject: [Python-checkins] gh-100210: Correct the comment link for unescaping HTML (GH-100212) Message-ID: https://github.com/python/cpython/commit/8520e6cbb28146f9091c7af18558911f6ce74ae9 commit: 8520e6cbb28146f9091c7af18558911f6ce74ae9 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-19T02:41:53-08:00 summary: gh-100210: Correct the comment link for unescaping HTML (GH-100212) (cherry picked from commit 9a07eff628c1cd88b7cdda88a8fd0db3fe7ea552) Co-authored-by: Jean-Christophe Amiel gh-100210: correct the comment link for unescaping HTML files: M Lib/html/__init__.py diff --git a/Lib/html/__init__.py b/Lib/html/__init__.py index da0a0a3ce70e..1543460ca33b 100644 --- a/Lib/html/__init__.py +++ b/Lib/html/__init__.py @@ -25,7 +25,7 @@ def escape(s, quote=True): return s -# see http://www.w3.org/TR/html5/syntax.html#tokenizing-character-references +# see https://html.spec.whatwg.org/multipage/parsing.html#numeric-character-reference-end-state _invalid_charrefs = { 0x00: '\ufffd', # REPLACEMENT CHARACTER From webhook-mailer at python.org Sun Feb 19 10:01:10 2023 From: webhook-mailer at python.org (mdickinson) Date: Sun, 19 Feb 2023 15:01:10 -0000 Subject: [Python-checkins] Add missing 'is' to `cmath.log()` docstring (#102049) Message-ID: https://github.com/python/cpython/commit/71f614ef2a3d66213b9cae807cbbc1ed03741221 commit: 71f614ef2a3d66213b9cae807cbbc1ed03741221 branch: main author: Owain Davies <116417456+OTheDev at users.noreply.github.com> committer: mdickinson date: 2023-02-19T15:00:59Z summary: Add missing 'is' to `cmath.log()` docstring 
(#102049) Fix missing 'is' in cmath.log() docstring files: M Modules/clinic/cmathmodule.c.h M Modules/cmathmodule.c diff --git a/Modules/clinic/cmathmodule.c.h b/Modules/clinic/cmathmodule.c.h index b1da9452c61d..941448e76e80 100644 --- a/Modules/clinic/cmathmodule.c.h +++ b/Modules/clinic/cmathmodule.c.h @@ -644,7 +644,7 @@ PyDoc_STRVAR(cmath_log__doc__, "\n" "log(z[, base]) -> the logarithm of z to the given base.\n" "\n" -"If the base not specified, returns the natural logarithm (base e) of z."); +"If the base is not specified, returns the natural logarithm (base e) of z."); #define CMATH_LOG_METHODDEF \ {"log", _PyCFunction_CAST(cmath_log), METH_FASTCALL, cmath_log__doc__}, @@ -982,4 +982,4 @@ cmath_isclose(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObjec exit: return return_value; } -/*[clinic end generated code: output=0146c656e67f5d5f input=a9049054013a1b77]*/ +/*[clinic end generated code: output=87f609786ef270cd input=a9049054013a1b77]*/ diff --git a/Modules/cmathmodule.c b/Modules/cmathmodule.c index 2038ac26e658..53e34061d537 100644 --- a/Modules/cmathmodule.c +++ b/Modules/cmathmodule.c @@ -957,12 +957,12 @@ cmath.log log(z[, base]) -> the logarithm of z to the given base. -If the base not specified, returns the natural logarithm (base e) of z. +If the base is not specified, returns the natural logarithm (base e) of z. [clinic start generated code]*/ static PyObject * cmath_log_impl(PyObject *module, Py_complex x, PyObject *y_obj) -/*[clinic end generated code: output=4effdb7d258e0d94 input=230ed3a71ecd000a]*/ +/*[clinic end generated code: output=4effdb7d258e0d94 input=e1f81d4fcfd26497]*/ { Py_complex y; From webhook-mailer at python.org Sun Feb 19 11:39:10 2023 From: webhook-mailer at python.org (rhettinger) Date: Sun, 19 Feb 2023 16:39:10 -0000 Subject: [Python-checkins] gh-100425: Update tutorial docs related to sum() accuracy (FH-101854) Message-ID: https://github.com/python/cpython/commit/32df540635cacce1053ee0ef98ee23f3f6a43c02 commit: 32df540635cacce1053ee0ef98ee23f3f6a43c02 branch: main author: neuralstring <107343209+neuralstring at users.noreply.github.com> committer: rhettinger date: 2023-02-19T10:39:03-06:00 summary: gh-100425: Update tutorial docs related to sum() accuracy (FH-101854) files: M Doc/tutorial/stdlib2.rst diff --git a/Doc/tutorial/stdlib2.rst b/Doc/tutorial/stdlib2.rst index 0c101c1f2072..33f311db3a24 100644 --- a/Doc/tutorial/stdlib2.rst +++ b/Doc/tutorial/stdlib2.rst @@ -394,7 +394,7 @@ point:: >>> sum([Decimal('0.1')]*10) == Decimal('1.0') True - >>> sum([0.1]*10) == 1.0 + >>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1.0 False The :mod:`decimal` module provides arithmetic with as much precision as needed:: From webhook-mailer at python.org Sun Feb 19 14:11:58 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 19 Feb 2023 19:11:58 -0000 Subject: [Python-checkins] gh-99735: Use required=True in argparse subparsers example (GH-100927) Message-ID: https://github.com/python/cpython/commit/4969903f7ea9393303dab0d48886b4d10095a063 commit: 4969903f7ea9393303dab0d48886b4d10095a063 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-19T11:11:40-08:00 summary: gh-99735: Use required=True in argparse subparsers example (GH-100927) (cherry picked from commit 6aab56f3c2ee331116eba242d2fcdca592577328) Co-authored-by: Patricio Paez files: M Doc/library/argparse.rst diff 
--git a/Doc/library/argparse.rst b/Doc/library/argparse.rst index eeb65160bbc8..7a36c5264aa0 100644 --- a/Doc/library/argparse.rst +++ b/Doc/library/argparse.rst @@ -1854,7 +1854,7 @@ Sub-commands ... >>> # create the top-level parser >>> parser = argparse.ArgumentParser() - >>> subparsers = parser.add_subparsers() + >>> subparsers = parser.add_subparsers(required=True) >>> >>> # create the parser for the "foo" command >>> parser_foo = subparsers.add_parser('foo') From webhook-mailer at python.org Sun Feb 19 14:15:51 2023 From: webhook-mailer at python.org (mdickinson) Date: Sun, 19 Feb 2023 19:15:51 -0000 Subject: [Python-checkins] gh-85417: Clarify behaviour on branch cuts in cmath module (#102046) Message-ID: https://github.com/python/cpython/commit/b513c46d998344dc07eb6d510782c2e23d2b859e commit: b513c46d998344dc07eb6d510782c2e23d2b859e branch: main author: Mark Dickinson committer: mdickinson date: 2023-02-19T19:15:44Z summary: gh-85417: Clarify behaviour on branch cuts in cmath module (#102046) This PR updates the cmath module documentation to reflect the reality that Python is almost always (and as far as I can tell, that "almost" can be omitted) running on a machine whose C double supports signed zeros. * Removes misleading references to functions being continuous from above / below / the left / the right at branch cuts * Expands the note on branch cuts at the top of the module documentation to explain the double-sided sign-of-zero-based behaviour files: A Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst M Doc/library/cmath.rst diff --git a/Doc/library/cmath.rst b/Doc/library/cmath.rst index 28cd96b0e12d..5ed7a09b3e9d 100644 --- a/Doc/library/cmath.rst +++ b/Doc/library/cmath.rst @@ -15,11 +15,27 @@ the function is then applied to the result of the conversion. .. note:: - On platforms with hardware and system-level support for signed - zeros, functions involving branch cuts are continuous on *both* - sides of the branch cut: the sign of the zero distinguishes one - side of the branch cut from the other. On platforms that do not - support signed zeros the continuity is as specified below. + For functions involving branch cuts, we have the problem of deciding how to + define those functions on the cut itself. Following Kahan's "Branch cuts for + complex elementary functions" paper, as well as Annex G of C99 and later C + standards, we use the sign of zero to distinguish one side of the branch cut + from the other: for a branch cut along (a portion of) the real axis we look + at the sign of the imaginary part, while for a branch cut along the + imaginary axis we look at the sign of the real part. + + For example, the :func:`cmath.sqrt` function has a branch cut along the + negative real axis. An argument of ``complex(-2.0, -0.0)`` is treated as + though it lies *below* the branch cut, and so gives a result on the negative + imaginary axis:: + + >>> cmath.sqrt(complex(-2.0, -0.0)) + -1.4142135623730951j + + But an argument of ``complex(-2.0, 0.0)`` is treated as though it lies above + the branch cut:: + + >>> cmath.sqrt(complex(-2.0, 0.0)) + 1.4142135623730951j Conversions to and from polar coordinates @@ -44,14 +60,11 @@ rectangular coordinates to polar coordinates and back. .. function:: phase(x) - Return the phase of *x* (also known as the *argument* of *x*), as a - float. ``phase(x)`` is equivalent to ``math.atan2(x.imag, - x.real)``. 
The result lies in the range [-\ *π*, *π*], and the branch - cut for this operation lies along the negative real axis, - continuous from above. On systems with support for signed zeros - (which includes most systems in current use), this means that the - sign of the result is the same as the sign of ``x.imag``, even when - ``x.imag`` is zero:: + Return the phase of *x* (also known as the *argument* of *x*), as a float. + ``phase(x)`` is equivalent to ``math.atan2(x.imag, x.real)``. The result + lies in the range [-\ *π*, *π*], and the branch cut for this operation lies + along the negative real axis. The sign of the result is the same as the + sign of ``x.imag``, even when ``x.imag`` is zero:: >>> phase(complex(-1.0, 0.0)) 3.141592653589793 @@ -92,8 +105,8 @@ Power and logarithmic functions .. function:: log(x[, base]) Returns the logarithm of *x* to the given *base*. If the *base* is not - specified, returns the natural logarithm of *x*. There is one branch cut, from 0 - along the negative real axis to -∞, continuous from above. + specified, returns the natural logarithm of *x*. There is one branch cut, + from 0 along the negative real axis to -∞. .. function:: log10(x) @@ -112,9 +125,9 @@ Trigonometric functions .. function:: acos(x) - Return the arc cosine of *x*. There are two branch cuts: One extends right from - 1 along the real axis to ∞, continuous from below. The other extends left from - -1 along the real axis to -∞, continuous from above. + Return the arc cosine of *x*. There are two branch cuts: One extends right + from 1 along the real axis to ∞. The other extends left from -1 along the + real axis to -∞. .. function:: asin(x) @@ -125,9 +138,8 @@ Trigonometric functions .. function:: atan(x) Return the arc tangent of *x*. There are two branch cuts: One extends from - ``1j`` along the imaginary axis to ``∞j``, continuous from the right. The - other extends from ``-1j`` along the imaginary axis to ``-∞j``, continuous - from the left. + ``1j`` along the imaginary axis to ``∞j``. The other extends from ``-1j`` + along the imaginary axis to ``-∞j``. .. function:: cos(x) @@ -151,23 +163,21 @@ Hyperbolic functions .. function:: acosh(x) Return the inverse hyperbolic cosine of *x*. There is one branch cut, - extending left from 1 along the real axis to -∞, continuous from above. + extending left from 1 along the real axis to -∞. .. function:: asinh(x) Return the inverse hyperbolic sine of *x*. There are two branch cuts: - One extends from ``1j`` along the imaginary axis to ``∞j``, - continuous from the right. The other extends from ``-1j`` along - the imaginary axis to ``-∞j``, continuous from the left. + One extends from ``1j`` along the imaginary axis to ``∞j``. The other + extends from ``-1j`` along the imaginary axis to ``-∞j``. .. function:: atanh(x) Return the inverse hyperbolic tangent of *x*. There are two branch cuts: One - extends from ``1`` along the real axis to ``∞``, continuous from below. The - other extends from ``-1`` along the real axis to ``-∞``, continuous from - above. + extends from ``1`` along the real axis to ``∞``. The other extends from + ``-1`` along the real axis to ``-∞``. ..
function:: cosh(x) diff --git a/Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst b/Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst new file mode 100644 index 000000000000..a5532df14795 --- /dev/null +++ b/Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst @@ -0,0 +1 @@ +Update :mod:`cmath` documentation to clarify behaviour on branch cuts. From webhook-mailer at python.org Sun Feb 19 14:21:43 2023 From: webhook-mailer at python.org (rhettinger) Date: Sun, 19 Feb 2023 19:21:43 -0000 Subject: [Python-checkins] Misc improvements to the float tutorial (GH-102052) Message-ID: https://github.com/python/cpython/commit/3b264df470c82d77f5b01c6f9d1d7173d1cb9597 commit: 3b264df470c82d77f5b01c6f9d1d7173d1cb9597 branch: main author: Raymond Hettinger committer: rhettinger date: 2023-02-19T13:21:37-06:00 summary: Misc improvements to the float tutorial (GH-102052) files: M Doc/tutorial/floatingpoint.rst diff --git a/Doc/tutorial/floatingpoint.rst b/Doc/tutorial/floatingpoint.rst index cedade6e3366..306b1eba3c45 100644 --- a/Doc/tutorial/floatingpoint.rst +++ b/Doc/tutorial/floatingpoint.rst @@ -1,6 +1,7 @@ .. testsetup:: import math + from fractions import Fraction .. _tut-fp-issues: @@ -9,12 +10,13 @@ Floating Point Arithmetic: Issues and Limitations ************************************************** .. sectionauthor:: Tim Peters +.. sectionauthor:: Raymond Hettinger Floating-point numbers are represented in computer hardware as base 2 (binary) -fractions. For example, the **decimal** fraction ``0.125`` -has value 1/10 + 2/100 + 5/1000, and in the same way the **binary** fraction ``0.001`` -has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only +fractions. For example, the **decimal** fraction ``0.625`` +has value 6/10 + 2/100 + 5/1000, and in the same way the **binary** fraction ``0.101`` +has value 1/2 + 0/4 + 1/8. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation, and the second in base 2. @@ -57,13 +59,15 @@ Many users are not aware of the approximation because of the way values are displayed. Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored -for 0.1, it would have to display :: +for 0.1, it would have to display:: >>> 0.1 0.1000000000000000055511151231257827021181583404541015625 That is more digits than most people find useful, so Python keeps the number -of digits manageable by displaying a rounded value instead :: +of digits manageable by displaying a rounded value instead: + +.. doctest:: >>> 1 / 10 0.1 @@ -90,7 +94,10 @@ thing in all languages that support your hardware's floating-point arithmetic (although some languages may not *display* the difference by default, or in all output modes). -For more pleasant output, you may wish to use string formatting to produce a limited number of significant digits:: +For more pleasant output, you may wish to use string formatting to produce a +limited number of significant digits: + +.. 
doctest:: >>> format(math.pi, '.12g') # give 12 significant digits '3.14159265359' @@ -101,33 +108,49 @@ For more pleasant output, you may wish to use string formatting to produce a lim >>> repr(math.pi) '3.141592653589793' - It's important to realize that this is, in a real sense, an illusion: you're simply rounding the *display* of the true machine value. One illusion may beget another. For example, since 0.1 is not exactly 1/10, -summing three values of 0.1 may not yield exactly 0.3, either:: +summing three values of 0.1 may not yield exactly 0.3, either: + +.. doctest:: - >>> .1 + .1 + .1 == .3 + >>> 0.1 + 0.1 + 0.1 == 0.3 False Also, since the 0.1 cannot get any closer to the exact value of 1/10 and 0.3 cannot get any closer to the exact value of 3/10, then pre-rounding with -:func:`round` function cannot help:: +:func:`round` function cannot help: - >>> round(.1, 1) + round(.1, 1) + round(.1, 1) == round(.3, 1) +.. doctest:: + + >>> round(0.1, 1) + round(0.1, 1) + round(0.1, 1) == round(0.3, 1) False Though the numbers cannot be made closer to their intended exact values, -the :func:`round` function can be useful for post-rounding so that results -with inexact values become comparable to one another:: +the :func:`math.isclose` function can be useful for comparing inexact values: - >>> round(.1 + .1 + .1, 10) == round(.3, 10) - True +.. doctest:: + + >>> math.isclose(0.1 + 0.1 + 0.1, 0.3) + True + +Alternatively, the :func:`round` function can be used to compare rough +approximations:: + +.. doctest:: + + >>> round(math.pi, ndigits=2) == round(22 / 7, ndigits=2) + True Binary floating-point arithmetic holds many surprises like this. The problem with "0.1" is explained in precise detail below, in the "Representation Error" -section. See `The Perils of Floating Point `_ +section. See `Examples of Floating Point Problems +`_ for +a pleasant summary of how binary floating point works and the kinds of +problems commonly encountered in practice. Also see +`The Perils of Floating Point `_ for a more complete account of other common surprises. As that says near the end, "there are no easy answers." Still, don't be unduly @@ -158,26 +181,34 @@ statistical operations supplied by the SciPy project. See . Python provides tools that may help on those rare occasions when you really *do* want to know the exact value of a float. The :meth:`float.as_integer_ratio` method expresses the value of a float as a -fraction:: +fraction: + +.. doctest:: >>> x = 3.14159 >>> x.as_integer_ratio() (3537115888337719, 1125899906842624) Since the ratio is exact, it can be used to losslessly recreate the -original value:: +original value: + +.. doctest:: >>> x == 3537115888337719 / 1125899906842624 True The :meth:`float.hex` method expresses a float in hexadecimal (base -16), again giving the exact value stored by your computer:: +16), again giving the exact value stored by your computer: + +.. doctest:: >>> x.hex() '0x1.921f9f01b866ep+1' This precise hexadecimal representation can be used to reconstruct -the float value exactly:: +the float value exactly: + +.. doctest:: >>> x == float.fromhex('0x1.921f9f01b866ep+1') True @@ -186,17 +217,43 @@ Since the representation is exact, it is useful for reliably porting values across different versions of Python (platform independence) and exchanging data with other languages that support the same format (such as Java and C99). -Another helpful tool is the :func:`math.fsum` function which helps mitigate -loss-of-precision during summation. 
It tracks "lost digits" as values are -added onto a running total. That can make a difference in overall accuracy -so that the errors do not accumulate to the point where they affect the -final total: +Another helpful tool is the :func:`sum` function which helps mitigate +loss-of-precision during summation. It uses extended precision for +intermediate rounding steps as values are added onto a running total. +That can make a difference in overall accuracy so that the errors do not +accumulate to the point where they affect the final total: + +.. doctest:: >>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1.0 False - >>> math.fsum([0.1] * 10) == 1.0 + >>> sum([0.1] * 10) == 1.0 True +The :func:`math.fsum()` goes further and tracks all of the "lost digits" +as values are added onto a running total so that the result has only a +single rounding. This is slower than :func:`sum` but will be more +accurate in uncommon cases where large magnitude inputs mostly cancel +each other out leaving a final sum near zero: + +.. doctest:: + + >>> arr = [-0.10430216751806065, -266310978.67179024, 143401161448607.16, + ... -143401161400469.7, 266262841.31058735, -0.003244936839808227] + >>> float(sum(map(Fraction, arr))) # Exact summation with single rounding + 8.042173697819788e-13 + >>> math.fsum(arr) # Single rounding + 8.042173697819788e-13 + >>> sum(arr) # Multiple roundings in extended precision + 8.042178034628478e-13 + >>> total = 0.0 + >>> for x in arr: + ... total += x # Multiple roundings in standard precision + ... + >>> total # Straight addition has no correct digits! + -0.0051575902860057365 + + .. _tut-fp-error: Representation Error @@ -225,20 +282,28 @@ as :: J ~= 2**N / 10 and recalling that *J* has exactly 53 bits (is ``>= 2**52`` but ``< 2**53``), -the best value for *N* is 56:: +the best value for *N* is 56: + +.. doctest:: >>> 2**52 <= 2**56 // 10 < 2**53 True That is, 56 is the only value for *N* that leaves *J* with exactly 53 bits. The -best possible value for *J* is then that quotient rounded:: +best possible value for *J* is then that quotient rounded: + +.. doctest:: >>> q, r = divmod(2**56, 10) >>> r 6 Since the remainder is more than half of 10, the best approximation is obtained -by rounding up:: +by rounding up: + +.. doctest:: + + >>> q+1 7205759403792794 @@ -256,13 +321,17 @@ if we had not rounded up, the quotient would have been a little bit smaller than 1/10. But in no case can it be *exactly* 1/10! So the computer never "sees" 1/10: what it sees is the exact fraction given -above, the best 754 double approximation it can get:: +above, the best 754 double approximation it can get: + +.. doctest:: >>> 0.1 * 2 ** 55 3602879701896397.0 If we multiply that fraction by 10\*\*55, we can see the value out to -55 decimal digits:: +55 decimal digits: + +.. doctest:: >>> 3602879701896397 * 10 ** 55 // 2 ** 55 1000000000000000055511151231257827021181583404541015625 @@ -270,13 +339,17 @@ If we multiply that fraction by 10\*\*55, we can see the value out to meaning that the exact number stored in the computer is equal to the decimal value 0.1000000000000000055511151231257827021181583404541015625. Instead of displaying the full decimal value, many languages (including -older versions of Python), round the result to 17 significant digits:: +older versions of Python), round the result to 17 significant digits: + +.. doctest:: >>> format(0.1, '.17f') '0.10000000000000001' The :mod:`fractions` and :mod:`decimal` modules make these calculations -easy:: +easy: + +.. 
doctest:: >>> from decimal import Decimal >>> from fractions import Fraction From webhook-mailer at python.org Sun Feb 19 14:35:16 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 19 Feb 2023 19:35:16 -0000 Subject: [Python-checkins] gh-99735: Use required=True in argparse subparsers example (GH-100927) Message-ID: https://github.com/python/cpython/commit/f193fad4b3f4a8892f6af000ae3801d58b8b66f8 commit: f193fad4b3f4a8892f6af000ae3801d58b8b66f8 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-19T11:35:09-08:00 summary: gh-99735: Use required=True in argparse subparsers example (GH-100927) (cherry picked from commit 6aab56f3c2ee331116eba242d2fcdca592577328) Co-authored-by: Patricio Paez files: M Doc/library/argparse.rst diff --git a/Doc/library/argparse.rst b/Doc/library/argparse.rst index 3971c000b2a0..b6ecf8a2542c 100644 --- a/Doc/library/argparse.rst +++ b/Doc/library/argparse.rst @@ -1771,7 +1771,7 @@ Sub-commands ... >>> # create the top-level parser >>> parser = argparse.ArgumentParser() - >>> subparsers = parser.add_subparsers() + >>> subparsers = parser.add_subparsers(required=True) >>> >>> # create the parser for the "foo" command >>> parser_foo = subparsers.add_parser('foo') From webhook-mailer at python.org Sun Feb 19 15:22:45 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Sun, 19 Feb 2023 20:22:45 -0000 Subject: [Python-checkins] gh-101578: Amend PyErr_{Set, Get}RaisedException docs (#101962) Message-ID: https://github.com/python/cpython/commit/60bbed7f174e481d3fc69984430a4667951678b3 commit: 60bbed7f174e481d3fc69984430a4667951678b3 branch: main author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-19T21:22:29+01:00 summary: gh-101578: Amend PyErr_{Set,Get}RaisedException docs (#101962) Co-authored-by: C.A.M. Gerlach files: M Doc/c-api/exceptions.rst M Doc/data/refcounts.dat diff --git a/Doc/c-api/exceptions.rst b/Doc/c-api/exceptions.rst index de9b15edd685..e735f8db4023 100644 --- a/Doc/c-api/exceptions.rst +++ b/Doc/c-api/exceptions.rst @@ -402,51 +402,36 @@ Querying the error indicator .. c:function:: PyObject *PyErr_GetRaisedException(void) - Returns the exception currently being raised, clearing the exception at - the same time. Do not confuse this with the exception currently being - handled which can be accessed with :c:func:`PyErr_GetHandledException`. + Return the exception currently being raised, clearing the error indicator at + the same time. - .. note:: + This function is used by code that needs to catch exceptions, + or code that needs to save and restore the error indicator temporarily. - This function is normally only used by code that needs to catch exceptions or - by code that needs to save and restore the error indicator temporarily, e.g.:: + For example:: - { - PyObject *exc = PyErr_GetRaisedException(); + { + PyObject *exc = PyErr_GetRaisedException(); - /* ... code that might produce other errors ... */ + /* ... code that might produce other errors ... */ - PyErr_SetRaisedException(exc); - } + PyErr_SetRaisedException(exc); + } + + .. seealso:: :c:func:`PyErr_GetHandledException`, + to save the exception currently being handled. .. versionadded:: 3.12 .. c:function:: void PyErr_SetRaisedException(PyObject *exc) - Sets the exception currently being raised ``exc``. - If the exception is already set, it is cleared first. - - ``exc`` must be a valid exception. 
- (Violating this rules will cause subtle problems later.) - This call consumes a reference to the ``exc`` object: you must own a - reference to that object before the call and after the call you no longer own - that reference. - (If you don't understand this, don't use this function. I warned you.) + Set *exc* as the exception currently being raised, + clearing the existing exception if one is set. - .. note:: - - This function is normally only used by code that needs to save and restore the - error indicator temporarily. Use :c:func:`PyErr_GetRaisedException` to save - the current exception, e.g.:: - - { - PyObject *exc = PyErr_GetRaisedException(); - - /* ... code that might produce other errors ... */ + .. warning:: - PyErr_SetRaisedException(exc); - } + This call steals a reference to *exc*, which must be a valid exception. .. versionadded:: 3.12 diff --git a/Doc/data/refcounts.dat b/Doc/data/refcounts.dat index 349c4dd5be3d..ed2958f8cd22 100644 --- a/Doc/data/refcounts.dat +++ b/Doc/data/refcounts.dat @@ -606,6 +606,9 @@ PyErr_GetExcInfo:PyObject**:ptype:+1: PyErr_GetExcInfo:PyObject**:pvalue:+1: PyErr_GetExcInfo:PyObject**:ptraceback:+1: +PyErr_GetRaisedException:PyObject*::+1: +PyErr_SetRaisedException:::: + PyErr_GivenExceptionMatches:int::: PyErr_GivenExceptionMatches:PyObject*:given:0: PyErr_GivenExceptionMatches:PyObject*:exc:0: From webhook-mailer at python.org Sun Feb 19 20:16:26 2023 From: webhook-mailer at python.org (gpshead) Date: Mon, 20 Feb 2023 01:16:26 -0000 Subject: [Python-checkins] gh-97786: Fix compiler warnings in pytime.c (#101826) Message-ID: https://github.com/python/cpython/commit/b1b375e2670a58fc37cb4c2629ed73b045159918 commit: b1b375e2670a58fc37cb4c2629ed73b045159918 branch: main author: Mark Dickinson committer: gpshead date: 2023-02-19T17:16:11-08:00 summary: gh-97786: Fix compiler warnings in pytime.c (#101826) Fixes compiler warnings in pytime.c. files: A Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst M Python/pytime.c diff --git a/Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst b/Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst new file mode 100644 index 000000000000..df194b67590d --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst @@ -0,0 +1,2 @@ +Fix potential undefined behaviour in corner cases of floating-point-to-time +conversions. diff --git a/Python/pytime.c b/Python/pytime.c index 01c07da07475..acd1842056af 100644 --- a/Python/pytime.c +++ b/Python/pytime.c @@ -1,5 +1,4 @@ #include "Python.h" -#include "pycore_pymath.h" // _Py_InIntegralTypeRange() #ifdef MS_WINDOWS # include // struct timeval #endif @@ -41,6 +40,14 @@ # error "unsupported time_t size" #endif +#if PY_TIME_T_MAX + PY_TIME_T_MIN != -1 +# error "time_t is not a two's complement integer type" +#endif + +#if _PyTime_MIN + _PyTime_MAX != -1 +# error "_PyTime_t is not a two's complement integer type" +#endif + static void pytime_time_t_overflow(void) @@ -294,7 +301,21 @@ pytime_double_to_denominator(double d, time_t *sec, long *numerator, } assert(0.0 <= floatpart && floatpart < denominator); - if (!_Py_InIntegralTypeRange(time_t, intpart)) { + /* + Conversion of an out-of-range value to time_t gives undefined behaviour + (C99 ?6.3.1.4p1), so we must guard against it. 
However, checking that + `intpart` is in range is delicate: the obvious expression `intpart <= + PY_TIME_T_MAX` will first convert the value `PY_TIME_T_MAX` to a double, + potentially changing its value and leading to us failing to catch some + UB-inducing values. The code below works correctly under the mild + assumption that time_t is a two's complement integer type with no trap + representation, and that `PY_TIME_T_MIN` is within the representable + range of a C double. + + Note: we want the `if` condition below to be true for NaNs; therefore, + resist any temptation to simplify by applying De Morgan's laws. + */ + if (!((double)PY_TIME_T_MIN <= intpart && intpart < -(double)PY_TIME_T_MIN)) { pytime_time_t_overflow(); return -1; } @@ -349,7 +370,8 @@ _PyTime_ObjectToTime_t(PyObject *obj, time_t *sec, _PyTime_round_t round) d = pytime_round(d, round); (void)modf(d, &intpart); - if (!_Py_InIntegralTypeRange(time_t, intpart)) { + /* See comments in pytime_double_to_denominator */ + if (!((double)PY_TIME_T_MIN <= intpart && intpart < -(double)PY_TIME_T_MIN)) { pytime_time_t_overflow(); return -1; } @@ -515,8 +537,9 @@ pytime_from_double(_PyTime_t *tp, double value, _PyTime_round_t round, d *= (double)unit_to_ns; d = pytime_round(d, round); - if (!_Py_InIntegralTypeRange(_PyTime_t, d)) { - pytime_overflow(); + /* See comments in pytime_double_to_denominator */ + if (!((double)_PyTime_MIN <= d && d < -(double)_PyTime_MIN)) { + pytime_time_t_overflow(); return -1; } _PyTime_t ns = (_PyTime_t)d; @@ -910,7 +933,7 @@ py_get_system_clock(_PyTime_t *tp, _Py_clock_info_t *info, int raise_exc) info->monotonic = 0; info->adjustable = 1; if (clock_getres(CLOCK_REALTIME, &res) == 0) { - info->resolution = res.tv_sec + res.tv_nsec * 1e-9; + info->resolution = (double)res.tv_sec + (double)res.tv_nsec * 1e-9; } else { info->resolution = 1e-9; From webhook-mailer at python.org Mon Feb 20 08:07:32 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 20 Feb 2023 13:07:32 -0000 Subject: [Python-checkins] gh-101981: Build macOS as recommended by the devguide (GH-102070) Message-ID: https://github.com/python/cpython/commit/27136310414965a3ea7f835e416cf74b91cefb48 commit: 27136310414965a3ea7f835e416cf74b91cefb48 branch: main author: Erlend E. 
Aasland committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-20T05:07:25-08:00 summary: gh-101981: Build macOS as recommended by the devguide (GH-102070) Automerge-Triggered-By: GH:erlend-aasland files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 97ea2d94598e..acc8d936774a 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -157,12 +157,19 @@ jobs: PYTHONSTRICTEXTENSIONBUILD: 1 steps: - uses: actions/checkout at v3 - - name: Prepare homebrew environment variables + - name: Install Homebrew dependencies + run: brew install pkg-config openssl at 1.1 xz gdbm tcl-tk + - name: Prepare Homebrew environment variables run: | - echo "LDFLAGS=-L$(brew --prefix tcl-tk)/lib" >> $GITHUB_ENV + echo "CFLAGS=-I$(brew --prefix gdbm)/include -I$(brew --prefix xz)/include" >> $GITHUB_ENV + echo "LDFLAGS=-L$(brew --prefix gdbm)/lib -I$(brew --prefix xz)/lib" >> $GITHUB_ENV echo "PKG_CONFIG_PATH=$(brew --prefix openssl at 1.1)/lib/pkgconfig:$(brew --prefix tcl-tk)/lib/pkgconfig" >> $GITHUB_ENV - name: Configure CPython - run: ./configure --with-pydebug --prefix=/opt/python-dev + run: | + ./configure \ + --with-pydebug \ + --prefix=/opt/python-dev \ + --with-openssl="$(brew --prefix openssl at 1.1)" - name: Build CPython run: make -j4 - name: Display build info From webhook-mailer at python.org Mon Feb 20 08:33:29 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 20 Feb 2023 13:33:29 -0000 Subject: [Python-checkins] gh-101981: Build macOS as recommended by the devguide (GH-102070) Message-ID: https://github.com/python/cpython/commit/95751b9707ec9985177f2fadc7e19f2f57bc5ad7 commit: 95751b9707ec9985177f2fadc7e19f2f57bc5ad7 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-20T05:33:21-08:00 summary: gh-101981: Build macOS as recommended by the devguide (GH-102070) (cherry picked from commit 27136310414965a3ea7f835e416cf74b91cefb48) Co-authored-by: Erlend E. 
Aasland Automerge-Triggered-By: GH:erlend-aasland files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 32535dc6a4e6..beb390eb7734 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -190,12 +190,19 @@ jobs: PYTHONSTRICTEXTENSIONBUILD: 1 steps: - uses: actions/checkout at v3 - - name: Prepare homebrew environment variables + - name: Install Homebrew dependencies + run: brew install pkg-config openssl at 1.1 xz gdbm tcl-tk + - name: Prepare Homebrew environment variables run: | - echo "LDFLAGS=-L$(brew --prefix tcl-tk)/lib" >> $GITHUB_ENV + echo "CFLAGS=-I$(brew --prefix gdbm)/include -I$(brew --prefix xz)/include" >> $GITHUB_ENV + echo "LDFLAGS=-L$(brew --prefix gdbm)/lib -I$(brew --prefix xz)/lib" >> $GITHUB_ENV echo "PKG_CONFIG_PATH=$(brew --prefix openssl at 1.1)/lib/pkgconfig:$(brew --prefix tcl-tk)/lib/pkgconfig" >> $GITHUB_ENV - name: Configure CPython - run: ./configure --with-pydebug --prefix=/opt/python-dev + run: | + ./configure \ + --with-pydebug \ + --prefix=/opt/python-dev \ + --with-openssl="$(brew --prefix openssl at 1.1)" - name: Build CPython run: make -j4 - name: Display build info From webhook-mailer at python.org Mon Feb 20 08:46:49 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 20 Feb 2023 13:46:49 -0000 Subject: [Python-checkins] gh-101819: Adapt _io types to heap types, batch 1 (GH-101949) Message-ID: https://github.com/python/cpython/commit/c00faf79438cc7f0d98af2679c695f747e4369a3 commit: c00faf79438cc7f0d98af2679c695f747e4369a3 branch: main author: Erlend E. Aasland committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-20T05:46:20-08:00 summary: gh-101819: Adapt _io types to heap types, batch 1 (GH-101949) Adapt StringIO, TextIOWrapper, FileIO, Buffered*, and BytesIO types. 
Automerge-Triggered-By: GH:erlend-aasland files: M Lib/test/test_io.py M Modules/_io/_iomodule.c M Modules/_io/_iomodule.h M Modules/_io/bufferedio.c M Modules/_io/bytesio.c M Modules/_io/clinic/bufferedio.c.h M Modules/_io/fileio.c M Modules/_io/stringio.c M Modules/_io/textio.c M Tools/c-analyzer/cpython/globals-to-fix.tsv diff --git a/Lib/test/test_io.py b/Lib/test/test_io.py index c5f2e5060a54..9bcc92a40c61 100644 --- a/Lib/test/test_io.py +++ b/Lib/test/test_io.py @@ -1042,6 +1042,95 @@ def close(self): support.gc_collect() self.assertIsNone(wr(), wr) + at support.cpython_only +class TestIOCTypes(unittest.TestCase): + def setUp(self): + _io = import_helper.import_module("_io") + self.types = [ + _io.BufferedRWPair, + _io.BufferedRandom, + _io.BufferedReader, + _io.BufferedWriter, + _io.BytesIO, + _io.FileIO, + _io.IncrementalNewlineDecoder, + _io.StringIO, + _io.TextIOWrapper, + _io._BufferedIOBase, + _io._BytesIOBuffer, + _io._IOBase, + _io._RawIOBase, + _io._TextIOBase, + ] + if sys.platform == "win32": + self.types.append(_io._WindowsConsoleIO) + self._io = _io + + def test_immutable_types(self): + for tp in self.types: + with self.subTest(tp=tp): + with self.assertRaisesRegex(TypeError, "immutable"): + tp.foo = "bar" + + def test_class_hierarchy(self): + def check_subs(types, base): + for tp in types: + with self.subTest(tp=tp, base=base): + self.assertTrue(issubclass(tp, base)) + + def recursive_check(d): + for k, v in d.items(): + if isinstance(v, dict): + recursive_check(v) + elif isinstance(v, set): + check_subs(v, k) + else: + self.fail("corrupt test dataset") + + _io = self._io + hierarchy = { + _io._IOBase: { + _io._BufferedIOBase: { + _io.BufferedRWPair, + _io.BufferedRandom, + _io.BufferedReader, + _io.BufferedWriter, + _io.BytesIO, + }, + _io._RawIOBase: { + _io.FileIO, + }, + _io._TextIOBase: { + _io.StringIO, + _io.TextIOWrapper, + }, + }, + } + if sys.platform == "win32": + hierarchy[_io._IOBase][_io._RawIOBase].add(_io._WindowsConsoleIO) + + recursive_check(hierarchy) + + def test_subclassing(self): + _io = self._io + dataset = {k: True for k in self.types} + dataset[_io._BytesIOBuffer] = False + + for tp, is_basetype in dataset.items(): + with self.subTest(tp=tp, is_basetype=is_basetype): + name = f"{tp.__name__}_subclass" + bases = (tp,) + if is_basetype: + _ = type(name, bases, {}) + else: + msg = "not an acceptable base type" + with self.assertRaisesRegex(TypeError, msg): + _ = type(name, bases, {}) + + def test_disallow_instantiation(self): + _io = self._io + support.check_disallow_instantiation(self, _io._BytesIOBuffer) + class PyIOTest(IOTest): pass @@ -4671,7 +4760,7 @@ def load_tests(loader, tests, pattern): CIncrementalNewlineDecoderTest, PyIncrementalNewlineDecoderTest, CTextIOWrapperTest, PyTextIOWrapperTest, CMiscIOTest, PyMiscIOTest, - CSignalsTest, PySignalsTest, + CSignalsTest, PySignalsTest, TestIOCTypes, ) # Put the namespaces of the IO module we are testing and some useful mock diff --git a/Modules/_io/_iomodule.c b/Modules/_io/_iomodule.c index 811b1d221a01..55b6535eb34b 100644 --- a/Modules/_io/_iomodule.c +++ b/Modules/_io/_iomodule.c @@ -10,7 +10,6 @@ #define PY_SSIZE_T_CLEAN #include "Python.h" #include "_iomodule.h" -#include "pycore_moduleobject.h" // _PyModule_GetState() #include "pycore_pystate.h" // _PyInterpreterState_GET() #ifdef HAVE_SYS_TYPES_H @@ -315,8 +314,9 @@ _io_open_impl(PyObject *module, PyObject *file, const char *mode, } /* Create the Raw file stream */ + _PyIO_State *state = get_io_state(module); { - PyObject *RawIO_class = 
(PyObject *)&PyFileIO_Type; + PyObject *RawIO_class = (PyObject *)state->PyFileIO_Type; #ifdef MS_WINDOWS const PyConfig *config = _Py_GetConfig(); if (!config->legacy_windows_stdio && _PyIO_get_console_type(path_or_fd) != '\0') { @@ -390,12 +390,15 @@ _io_open_impl(PyObject *module, PyObject *file, const char *mode, { PyObject *Buffered_class; - if (updating) - Buffered_class = (PyObject *)&PyBufferedRandom_Type; - else if (creating || writing || appending) - Buffered_class = (PyObject *)&PyBufferedWriter_Type; - else if (reading) - Buffered_class = (PyObject *)&PyBufferedReader_Type; + if (updating) { + Buffered_class = (PyObject *)state->PyBufferedRandom_Type; + } + else if (creating || writing || appending) { + Buffered_class = (PyObject *)state->PyBufferedWriter_Type; + } + else if (reading) { + Buffered_class = (PyObject *)state->PyBufferedReader_Type; + } else { PyErr_Format(PyExc_ValueError, "unknown mode: '%s'", mode); @@ -417,7 +420,7 @@ _io_open_impl(PyObject *module, PyObject *file, const char *mode, } /* wraps into a TextIOWrapper */ - wrapper = PyObject_CallFunction((PyObject *)&PyTextIOWrapper_Type, + wrapper = PyObject_CallFunction((PyObject *)state->PyTextIOWrapper_Type, "OsssO", buffer, encoding, errors, newline, @@ -558,14 +561,6 @@ PyNumber_AsOff_t(PyObject *item, PyObject *err) return result; } -static inline _PyIO_State* -get_io_state(PyObject *module) -{ - void *state = _PyModule_GetState(module); - assert(state != NULL); - return (_PyIO_State *)state; -} - _PyIO_State * _PyIO_get_module_state(void) { @@ -587,6 +582,15 @@ iomodule_traverse(PyObject *mod, visitproc visit, void *arg) { return 0; Py_VISIT(state->locale_module); Py_VISIT(state->unsupported_operation); + + Py_VISIT(state->PyBufferedRWPair_Type); + Py_VISIT(state->PyBufferedRandom_Type); + Py_VISIT(state->PyBufferedReader_Type); + Py_VISIT(state->PyBufferedWriter_Type); + Py_VISIT(state->PyBytesIO_Type); + Py_VISIT(state->PyFileIO_Type); + Py_VISIT(state->PyStringIO_Type); + Py_VISIT(state->PyTextIOWrapper_Type); return 0; } @@ -599,6 +603,15 @@ iomodule_clear(PyObject *mod) { if (state->locale_module != NULL) Py_CLEAR(state->locale_module); Py_CLEAR(state->unsupported_operation); + + Py_CLEAR(state->PyBufferedRWPair_Type); + Py_CLEAR(state->PyBufferedRandom_Type); + Py_CLEAR(state->PyBufferedReader_Type); + Py_CLEAR(state->PyBufferedWriter_Type); + Py_CLEAR(state->PyBytesIO_Type); + Py_CLEAR(state->PyFileIO_Type); + Py_CLEAR(state->PyStringIO_Type); + Py_CLEAR(state->PyTextIOWrapper_Type); return 0; } @@ -612,7 +625,9 @@ iomodule_free(PyObject *mod) { * Module definition */ +#define clinic_state() (get_io_state(module)) #include "clinic/_iomodule.c.h" +#undef clinic_state static PyMethodDef module_methods[] = { _IO_OPEN_METHODDEF @@ -644,23 +659,11 @@ static PyTypeObject* static_types[] = { &PyRawIOBase_Type, &PyTextIOBase_Type, - // PyBufferedIOBase_Type(PyIOBase_Type) subclasses - &PyBytesIO_Type, - &PyBufferedReader_Type, - &PyBufferedWriter_Type, - &PyBufferedRWPair_Type, - &PyBufferedRandom_Type, - // PyRawIOBase_Type(PyIOBase_Type) subclasses - &PyFileIO_Type, &_PyBytesIOBuffer_Type, #ifdef MS_WINDOWS &PyWindowsConsoleIO_Type, #endif - - // PyTextIOBase_Type(PyIOBase_Type) subclasses - &PyStringIO_Type, - &PyTextIOWrapper_Type, }; @@ -673,6 +676,17 @@ _PyIO_Fini(void) } } +#define ADD_TYPE(module, type, spec, base) \ +do { \ + type = (PyTypeObject *)PyType_FromModuleAndSpec(module, spec, \ + (PyObject *)base); \ + if (type == NULL) { \ + goto fail; \ + } \ + if (PyModule_AddType(module, type) < 0) { 
\ + goto fail; \ + } \ +} while (0) PyMODINIT_FUNC PyInit__io(void) @@ -705,17 +719,9 @@ PyInit__io(void) } // Set type base classes - PyFileIO_Type.tp_base = &PyRawIOBase_Type; - PyBytesIO_Type.tp_base = &PyBufferedIOBase_Type; - PyStringIO_Type.tp_base = &PyTextIOBase_Type; #ifdef MS_WINDOWS PyWindowsConsoleIO_Type.tp_base = &PyRawIOBase_Type; #endif - PyBufferedReader_Type.tp_base = &PyBufferedIOBase_Type; - PyBufferedWriter_Type.tp_base = &PyBufferedIOBase_Type; - PyBufferedRWPair_Type.tp_base = &PyBufferedIOBase_Type; - PyBufferedRandom_Type.tp_base = &PyBufferedIOBase_Type; - PyTextIOWrapper_Type.tp_base = &PyTextIOBase_Type; // Add types for (size_t i=0; i < Py_ARRAY_LENGTH(static_types); i++) { @@ -725,6 +731,25 @@ PyInit__io(void) } } + // PyBufferedIOBase_Type(PyIOBase_Type) subclasses + ADD_TYPE(m, state->PyBytesIO_Type, &bytesio_spec, &PyBufferedIOBase_Type); + ADD_TYPE(m, state->PyBufferedWriter_Type, &bufferedwriter_spec, + &PyBufferedIOBase_Type); + ADD_TYPE(m, state->PyBufferedReader_Type, &bufferedreader_spec, + &PyBufferedIOBase_Type); + ADD_TYPE(m, state->PyBufferedRWPair_Type, &bufferedrwpair_spec, + &PyBufferedIOBase_Type); + ADD_TYPE(m, state->PyBufferedRandom_Type, &bufferedrandom_spec, + &PyBufferedIOBase_Type); + + // PyRawIOBase_Type(PyIOBase_Type) subclasses + ADD_TYPE(m, state->PyFileIO_Type, &fileio_spec, &PyRawIOBase_Type); + + // PyTextIOBase_Type(PyIOBase_Type) subclasses + ADD_TYPE(m, state->PyStringIO_Type, &stringio_spec, &PyTextIOBase_Type); + ADD_TYPE(m, state->PyTextIOWrapper_Type, &textiowrapper_spec, + &PyTextIOBase_Type); + state->initialized = 1; return m; diff --git a/Modules/_io/_iomodule.h b/Modules/_io/_iomodule.h index 7617cb8fb70e..02daef9e8567 100644 --- a/Modules/_io/_iomodule.h +++ b/Modules/_io/_iomodule.h @@ -4,6 +4,9 @@ #include "exports.h" +#include "pycore_moduleobject.h" // _PyModule_GetState() +#include "structmember.h" + /* ABCs */ extern PyTypeObject PyIOBase_Type; extern PyTypeObject PyRawIOBase_Type; @@ -11,16 +14,18 @@ extern PyTypeObject PyBufferedIOBase_Type; extern PyTypeObject PyTextIOBase_Type; /* Concrete classes */ -extern PyTypeObject PyFileIO_Type; -extern PyTypeObject PyBytesIO_Type; -extern PyTypeObject PyStringIO_Type; -extern PyTypeObject PyBufferedReader_Type; -extern PyTypeObject PyBufferedWriter_Type; -extern PyTypeObject PyBufferedRWPair_Type; -extern PyTypeObject PyBufferedRandom_Type; -extern PyTypeObject PyTextIOWrapper_Type; extern PyTypeObject PyIncrementalNewlineDecoder_Type; +/* Type specs */ +extern PyType_Spec bufferedrandom_spec; +extern PyType_Spec bufferedreader_spec; +extern PyType_Spec bufferedrwpair_spec; +extern PyType_Spec bufferedwriter_spec; +extern PyType_Spec bytesio_spec; +extern PyType_Spec fileio_spec; +extern PyType_Spec stringio_spec; +extern PyType_Spec textiowrapper_spec; + #ifdef MS_WINDOWS extern PyTypeObject PyWindowsConsoleIO_Type; #endif /* MS_WINDOWS */ @@ -140,11 +145,37 @@ typedef struct { PyObject *locale_module; PyObject *unsupported_operation; + + /* Types */ + PyTypeObject *PyBufferedRWPair_Type; + PyTypeObject *PyBufferedRandom_Type; + PyTypeObject *PyBufferedReader_Type; + PyTypeObject *PyBufferedWriter_Type; + PyTypeObject *PyBytesIO_Type; + PyTypeObject *PyFileIO_Type; + PyTypeObject *PyStringIO_Type; + PyTypeObject *PyTextIOWrapper_Type; } _PyIO_State; #define IO_MOD_STATE(mod) ((_PyIO_State *)PyModule_GetState(mod)) #define IO_STATE() _PyIO_get_module_state() +static inline _PyIO_State * +get_io_state(PyObject *module) +{ + void *state = _PyModule_GetState(module); 
+ assert(state != NULL); + return (_PyIO_State *)state; +} + +static inline _PyIO_State * +find_io_state_by_def(PyTypeObject *type) +{ + PyObject *mod = PyType_GetModuleByDef(type, &_PyIO_Module); + assert(mod != NULL); + return get_io_state(mod); +} + extern _PyIO_State *_PyIO_get_module_state(void); #ifdef MS_WINDOWS diff --git a/Modules/_io/bufferedio.c b/Modules/_io/bufferedio.c index ba8969f0bcd1..56491f097100 100644 --- a/Modules/_io/bufferedio.c +++ b/Modules/_io/bufferedio.c @@ -18,12 +18,12 @@ module _io class _io._BufferedIOBase "PyObject *" "&PyBufferedIOBase_Type" class _io._Buffered "buffered *" "&PyBufferedIOBase_Type" -class _io.BufferedReader "buffered *" "&PyBufferedReader_Type" -class _io.BufferedWriter "buffered *" "&PyBufferedWriter_Type" -class _io.BufferedRWPair "rwpair *" "&PyBufferedRWPair_Type" -class _io.BufferedRandom "buffered *" "&PyBufferedRandom_Type" +class _io.BufferedReader "buffered *" "clinic_state()->PyBufferedReader_Type" +class _io.BufferedWriter "buffered *" "clinic_state()->PyBufferedWriter_Type" +class _io.BufferedRWPair "rwpair *" "clinic_state()->PyBufferedRWPair_Type" +class _io.BufferedRandom "buffered *" "clinic_state()->PyBufferedRandom_Type" [clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=59460b9c5639984d]*/ +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=abd685b9d94b9888]*/ /* * BufferedIOBase class, inherits from IOBase. @@ -366,6 +366,7 @@ _enter_buffered_busy(buffered *self) static void buffered_dealloc(buffered *self) { + PyTypeObject *tp = Py_TYPE(self); self->finalizing = 1; if (_PyIOBase_finalize((PyObject *) self) < 0) return; @@ -383,7 +384,8 @@ buffered_dealloc(buffered *self) self->lock = NULL; } Py_CLEAR(self->dict); - Py_TYPE(self)->tp_free((PyObject *)self); + tp->tp_free((PyObject *)self); + Py_DECREF(tp); } static PyObject * @@ -399,6 +401,7 @@ buffered_sizeof(buffered *self, PyObject *Py_UNUSED(ignored)) static int buffered_traverse(buffered *self, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(self)); Py_VISIT(self->raw); Py_VISIT(self->dict); return 0; @@ -1328,9 +1331,11 @@ buffered_iternext(buffered *self) CHECK_INITIALIZED(self); + _PyIO_State *state = find_io_state_by_def(Py_TYPE(self)); tp = Py_TYPE(self); - if (tp == &PyBufferedReader_Type || - tp == &PyBufferedRandom_Type) { + if (Py_IS_TYPE(tp, state->PyBufferedReader_Type) || + Py_IS_TYPE(tp, state->PyBufferedRandom_Type)) + { /* Skip method call overhead for speed */ line = _buffered_readline(self, -1); } @@ -1428,8 +1433,11 @@ _io_BufferedReader___init___impl(buffered *self, PyObject *raw, return -1; _bufferedreader_reset_buf(self); - self->fast_closed_checks = (Py_IS_TYPE(self, &PyBufferedReader_Type) && - Py_IS_TYPE(raw, &PyFileIO_Type)); + _PyIO_State *state = find_io_state_by_def(Py_TYPE(self)); + self->fast_closed_checks = ( + Py_IS_TYPE(self, state->PyBufferedReader_Type) && + Py_IS_TYPE(raw, state->PyFileIO_Type) + ); self->ok = 1; return 0; @@ -1783,8 +1791,11 @@ _io_BufferedWriter___init___impl(buffered *self, PyObject *raw, _bufferedwriter_reset_buf(self); self->pos = 0; - self->fast_closed_checks = (Py_IS_TYPE(self, &PyBufferedWriter_Type) && - Py_IS_TYPE(raw, &PyFileIO_Type)); + _PyIO_State *state = find_io_state_by_def(Py_TYPE(self)); + self->fast_closed_checks = ( + Py_IS_TYPE(self, state->PyBufferedWriter_Type) && + Py_IS_TYPE(raw, state->PyFileIO_Type) + ); self->ok = 1; return 0; @@ -2090,13 +2101,16 @@ _io_BufferedRWPair___init___impl(rwpair *self, PyObject *reader, if 
(_PyIOBase_check_writable(writer, Py_True) == NULL) return -1; + _PyIO_State *state = find_io_state_by_def(Py_TYPE(self)); self->reader = (buffered *) PyObject_CallFunction( - (PyObject *) &PyBufferedReader_Type, "On", reader, buffer_size); + (PyObject *)state->PyBufferedReader_Type, + "On", reader, buffer_size); if (self->reader == NULL) return -1; self->writer = (buffered *) PyObject_CallFunction( - (PyObject *) &PyBufferedWriter_Type, "On", writer, buffer_size); + (PyObject *)state->PyBufferedWriter_Type, + "On", writer, buffer_size); if (self->writer == NULL) { Py_CLEAR(self->reader); return -1; @@ -2108,6 +2122,7 @@ _io_BufferedRWPair___init___impl(rwpair *self, PyObject *reader, static int bufferedrwpair_traverse(rwpair *self, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(self)); Py_VISIT(self->dict); return 0; } @@ -2124,13 +2139,15 @@ bufferedrwpair_clear(rwpair *self) static void bufferedrwpair_dealloc(rwpair *self) { + PyTypeObject *tp = Py_TYPE(self); _PyObject_GC_UNTRACK(self); if (self->weakreflist != NULL) PyObject_ClearWeakRefs((PyObject *)self); Py_CLEAR(self->reader); Py_CLEAR(self->writer); Py_CLEAR(self->dict); - Py_TYPE(self)->tp_free((PyObject *) self); + tp->tp_free((PyObject *) self); + Py_DECREF(tp); } static PyObject * @@ -2295,14 +2312,17 @@ _io_BufferedRandom___init___impl(buffered *self, PyObject *raw, _bufferedwriter_reset_buf(self); self->pos = 0; - self->fast_closed_checks = (Py_IS_TYPE(self, &PyBufferedRandom_Type) && - Py_IS_TYPE(raw, &PyFileIO_Type)); + _PyIO_State *state = find_io_state_by_def(Py_TYPE(self)); + self->fast_closed_checks = (Py_IS_TYPE(self, state->PyBufferedRandom_Type) && + Py_IS_TYPE(raw, state->PyFileIO_Type)); self->ok = 1; return 0; } +#define clinic_state() (find_io_state_by_def(Py_TYPE(self))) #include "clinic/bufferedio.c.h" +#undef clinic_state static PyMethodDef bufferediobase_methods[] = { @@ -2394,6 +2414,8 @@ static PyMethodDef bufferedreader_methods[] = { static PyMemberDef bufferedreader_members[] = { {"raw", T_OBJECT, offsetof(buffered, raw), READONLY}, {"_finalizing", T_BOOL, offsetof(buffered, finalizing), 0}, + {"__weaklistoffset__", T_PYSSIZET, offsetof(buffered, weakreflist), READONLY}, + {"__dictoffset__", T_PYSSIZET, offsetof(buffered, dict), READONLY}, {NULL} }; @@ -2405,58 +2427,27 @@ static PyGetSetDef bufferedreader_getset[] = { }; -PyTypeObject PyBufferedReader_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "_io.BufferedReader", /*tp_name*/ - sizeof(buffered), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - (destructor)buffered_dealloc, /*tp_dealloc*/ - 0, /*tp_vectorcall_offset*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_as_async*/ - (reprfunc)buffered_repr, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash */ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE - | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - _io_BufferedReader___init____doc__, /* tp_doc */ - (traverseproc)buffered_traverse, /* tp_traverse */ - (inquiry)buffered_clear, /* tp_clear */ - 0, /* tp_richcompare */ - offsetof(buffered, weakreflist), /*tp_weaklistoffset*/ - 0, /* tp_iter */ - (iternextfunc)buffered_iternext, /* tp_iternext */ - bufferedreader_methods, /* tp_methods */ - bufferedreader_members, /* tp_members */ - bufferedreader_getset, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - offsetof(buffered, dict), /* tp_dictoffset */ - _io_BufferedReader___init__, 
/* tp_init */ - 0, /* tp_alloc */ - PyType_GenericNew, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ - 0, /* tp_version_tag */ - 0, /* tp_finalize */ +static PyType_Slot bufferedreader_slots[] = { + {Py_tp_dealloc, buffered_dealloc}, + {Py_tp_repr, buffered_repr}, + {Py_tp_doc, (void *)_io_BufferedReader___init____doc__}, + {Py_tp_traverse, buffered_traverse}, + {Py_tp_clear, buffered_clear}, + {Py_tp_iternext, buffered_iternext}, + {Py_tp_methods, bufferedreader_methods}, + {Py_tp_members, bufferedreader_members}, + {Py_tp_getset, bufferedreader_getset}, + {Py_tp_init, _io_BufferedReader___init__}, + {0, NULL}, }; +PyType_Spec bufferedreader_spec = { + .name = "_io.BufferedReader", + .basicsize = sizeof(buffered), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = bufferedreader_slots, +}; static PyMethodDef bufferedwriter_methods[] = { /* BufferedIOMixin methods */ @@ -2480,6 +2471,8 @@ static PyMethodDef bufferedwriter_methods[] = { static PyMemberDef bufferedwriter_members[] = { {"raw", T_OBJECT, offsetof(buffered, raw), READONLY}, {"_finalizing", T_BOOL, offsetof(buffered, finalizing), 0}, + {"__weaklistoffset__", T_PYSSIZET, offsetof(buffered, weakreflist), READONLY}, + {"__dictoffset__", T_PYSSIZET, offsetof(buffered, dict), READONLY}, {NULL} }; @@ -2491,58 +2484,26 @@ static PyGetSetDef bufferedwriter_getset[] = { }; -PyTypeObject PyBufferedWriter_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "_io.BufferedWriter", /*tp_name*/ - sizeof(buffered), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - (destructor)buffered_dealloc, /*tp_dealloc*/ - 0, /*tp_vectorcall_offset*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_as_async*/ - (reprfunc)buffered_repr, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash */ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE - | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - _io_BufferedWriter___init____doc__, /* tp_doc */ - (traverseproc)buffered_traverse, /* tp_traverse */ - (inquiry)buffered_clear, /* tp_clear */ - 0, /* tp_richcompare */ - offsetof(buffered, weakreflist), /*tp_weaklistoffset*/ - 0, /* tp_iter */ - 0, /* tp_iternext */ - bufferedwriter_methods, /* tp_methods */ - bufferedwriter_members, /* tp_members */ - bufferedwriter_getset, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - offsetof(buffered, dict), /* tp_dictoffset */ - _io_BufferedWriter___init__, /* tp_init */ - 0, /* tp_alloc */ - PyType_GenericNew, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ - 0, /* tp_version_tag */ - 0, /* tp_finalize */ +static PyType_Slot bufferedwriter_slots[] = { + {Py_tp_dealloc, buffered_dealloc}, + {Py_tp_repr, buffered_repr}, + {Py_tp_doc, (void *)_io_BufferedWriter___init____doc__}, + {Py_tp_traverse, buffered_traverse}, + {Py_tp_clear, buffered_clear}, + {Py_tp_methods, bufferedwriter_methods}, + {Py_tp_members, bufferedwriter_members}, + {Py_tp_getset, bufferedwriter_getset}, + {Py_tp_init, _io_BufferedWriter___init__}, + {0, NULL}, }; +PyType_Spec bufferedwriter_spec = { + .name = "_io.BufferedWriter", + .basicsize = sizeof(buffered), + .flags = 
(Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = bufferedwriter_slots, +}; static PyMethodDef bufferedrwpair_methods[] = { {"read", (PyCFunction)bufferedrwpair_read, METH_VARARGS}, @@ -2563,61 +2524,35 @@ static PyMethodDef bufferedrwpair_methods[] = { {NULL, NULL} }; +static PyMemberDef bufferedrwpair_members[] = { + {"__weaklistoffset__", T_PYSSIZET, offsetof(rwpair, weakreflist), READONLY}, + {"__dictoffset__", T_PYSSIZET, offsetof(rwpair, dict), READONLY}, + {NULL} +}; + static PyGetSetDef bufferedrwpair_getset[] = { {"closed", (getter)bufferedrwpair_closed_get, NULL, NULL}, {NULL} }; -PyTypeObject PyBufferedRWPair_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "_io.BufferedRWPair", /*tp_name*/ - sizeof(rwpair), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - (destructor)bufferedrwpair_dealloc, /*tp_dealloc*/ - 0, /*tp_vectorcall_offset*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_as_async*/ - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash */ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE - | Py_TPFLAGS_HAVE_GC, /* tp_flags */ - _io_BufferedRWPair___init____doc__, /* tp_doc */ - (traverseproc)bufferedrwpair_traverse, /* tp_traverse */ - (inquiry)bufferedrwpair_clear, /* tp_clear */ - 0, /* tp_richcompare */ - offsetof(rwpair, weakreflist), /*tp_weaklistoffset*/ - 0, /* tp_iter */ - 0, /* tp_iternext */ - bufferedrwpair_methods, /* tp_methods */ - 0, /* tp_members */ - bufferedrwpair_getset, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - offsetof(rwpair, dict), /* tp_dictoffset */ - _io_BufferedRWPair___init__, /* tp_init */ - 0, /* tp_alloc */ - PyType_GenericNew, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ - 0, /* tp_version_tag */ - 0, /* tp_finalize */ +static PyType_Slot bufferedrwpair_slots[] = { + {Py_tp_dealloc, bufferedrwpair_dealloc}, + {Py_tp_doc, (void *)_io_BufferedRWPair___init____doc__}, + {Py_tp_traverse, bufferedrwpair_traverse}, + {Py_tp_clear, bufferedrwpair_clear}, + {Py_tp_methods, bufferedrwpair_methods}, + {Py_tp_members, bufferedrwpair_members}, + {Py_tp_getset, bufferedrwpair_getset}, + {Py_tp_init, _io_BufferedRWPair___init__}, + {0, NULL}, +}; + +PyType_Spec bufferedrwpair_spec = { + .name = "_io.BufferedRWPair", + .basicsize = sizeof(rwpair), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = bufferedrwpair_slots, }; @@ -2651,6 +2586,8 @@ static PyMethodDef bufferedrandom_methods[] = { static PyMemberDef bufferedrandom_members[] = { {"raw", T_OBJECT, offsetof(buffered, raw), READONLY}, {"_finalizing", T_BOOL, offsetof(buffered, finalizing), 0}, + {"__weaklistoffset__", T_PYSSIZET, offsetof(buffered, weakreflist), READONLY}, + {"__dictoffset__", T_PYSSIZET, offsetof(buffered, dict), READONLY}, {NULL} }; @@ -2662,54 +2599,24 @@ static PyGetSetDef bufferedrandom_getset[] = { }; -PyTypeObject PyBufferedRandom_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "_io.BufferedRandom", /*tp_name*/ - sizeof(buffered), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - (destructor)buffered_dealloc, /*tp_dealloc*/ - 0, /*tp_vectorcall_offset*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_as_async*/ - (reprfunc)buffered_repr, /*tp_repr*/ - 0, 
/*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash */ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE - | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - _io_BufferedRandom___init____doc__, /* tp_doc */ - (traverseproc)buffered_traverse, /* tp_traverse */ - (inquiry)buffered_clear, /* tp_clear */ - 0, /* tp_richcompare */ - offsetof(buffered, weakreflist), /*tp_weaklistoffset*/ - 0, /* tp_iter */ - (iternextfunc)buffered_iternext, /* tp_iternext */ - bufferedrandom_methods, /* tp_methods */ - bufferedrandom_members, /* tp_members */ - bufferedrandom_getset, /* tp_getset */ - 0, /* tp_base */ - 0, /*tp_dict*/ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - offsetof(buffered, dict), /*tp_dictoffset*/ - _io_BufferedRandom___init__, /* tp_init */ - 0, /* tp_alloc */ - PyType_GenericNew, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ - 0, /* tp_version_tag */ - 0, /* tp_finalize */ +static PyType_Slot bufferedrandom_slots[] = { + {Py_tp_dealloc, buffered_dealloc}, + {Py_tp_repr, buffered_repr}, + {Py_tp_doc, (void *)_io_BufferedRandom___init____doc__}, + {Py_tp_traverse, buffered_traverse}, + {Py_tp_clear, buffered_clear}, + {Py_tp_iternext, buffered_iternext}, + {Py_tp_methods, bufferedrandom_methods}, + {Py_tp_members, bufferedrandom_members}, + {Py_tp_getset, bufferedrandom_getset}, + {Py_tp_init, _io_BufferedRandom___init__}, + {0, NULL}, +}; + +PyType_Spec bufferedrandom_spec = { + .name = "_io.BufferedRandom", + .basicsize = sizeof(buffered), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = bufferedrandom_slots, }; diff --git a/Modules/_io/bytesio.c b/Modules/_io/bytesio.c index 6698c60355fc..7e9d28b3b965 100644 --- a/Modules/_io/bytesio.c +++ b/Modules/_io/bytesio.c @@ -5,9 +5,9 @@ /*[clinic input] module _io -class _io.BytesIO "bytesio *" "&PyBytesIO_Type" +class _io.BytesIO "bytesio *" "clinic_state()->PyBytesIO_Type" [clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=7f50ec034f5c0b26]*/ +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=48ede2f330f847c3]*/ typedef struct { PyObject_HEAD @@ -881,6 +881,7 @@ bytesio_setstate(bytesio *self, PyObject *state) static void bytesio_dealloc(bytesio *self) { + PyTypeObject *tp = Py_TYPE(self); _PyObject_GC_UNTRACK(self); if (self->exports > 0) { PyErr_SetString(PyExc_SystemError, @@ -891,7 +892,8 @@ bytesio_dealloc(bytesio *self) Py_CLEAR(self->dict); if (self->weakreflist != NULL) PyObject_ClearWeakRefs((PyObject *) self); - Py_TYPE(self)->tp_free(self); + tp->tp_free(self); + Py_DECREF(tp); } static PyObject * @@ -971,6 +973,7 @@ bytesio_sizeof(bytesio *self, void *unused) static int bytesio_traverse(bytesio *self, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(self)); Py_VISIT(self->dict); return 0; } @@ -983,7 +986,9 @@ bytesio_clear(bytesio *self) } +#define clinic_state() (find_io_state_by_def(Py_TYPE(self))) #include "clinic/bytesio.c.h" +#undef clinic_state static PyGetSetDef bytesio_getsetlist[] = { {"closed", (getter)bytesio_get_closed, NULL, @@ -1016,48 +1021,34 @@ static struct PyMethodDef bytesio_methods[] = { {NULL, NULL} /* sentinel */ }; -PyTypeObject PyBytesIO_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "_io.BytesIO", /*tp_name*/ - sizeof(bytesio), /*tp_basicsize*/ - 0, 
/*tp_itemsize*/ - (destructor)bytesio_dealloc, /*tp_dealloc*/ - 0, /*tp_vectorcall_offset*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_as_async*/ - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | - Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - _io_BytesIO___init____doc__, /*tp_doc*/ - (traverseproc)bytesio_traverse, /*tp_traverse*/ - (inquiry)bytesio_clear, /*tp_clear*/ - 0, /*tp_richcompare*/ - offsetof(bytesio, weakreflist), /*tp_weaklistoffset*/ - PyObject_SelfIter, /*tp_iter*/ - (iternextfunc)bytesio_iternext, /*tp_iternext*/ - bytesio_methods, /*tp_methods*/ - 0, /*tp_members*/ - bytesio_getsetlist, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - offsetof(bytesio, dict), /*tp_dictoffset*/ - _io_BytesIO___init__, /*tp_init*/ - 0, /*tp_alloc*/ - bytesio_new, /*tp_new*/ +static PyMemberDef bytesio_members[] = { + {"__weaklistoffset__", T_PYSSIZET, offsetof(bytesio, weakreflist), READONLY}, + {"__dictoffset__", T_PYSSIZET, offsetof(bytesio, dict), READONLY}, + {NULL} }; +static PyType_Slot bytesio_slots[] = { + {Py_tp_dealloc, bytesio_dealloc}, + {Py_tp_doc, (void *)_io_BytesIO___init____doc__}, + {Py_tp_traverse, bytesio_traverse}, + {Py_tp_clear, bytesio_clear}, + {Py_tp_iter, PyObject_SelfIter}, + {Py_tp_iternext, bytesio_iternext}, + {Py_tp_methods, bytesio_methods}, + {Py_tp_members, bytesio_members}, + {Py_tp_getset, bytesio_getsetlist}, + {Py_tp_init, _io_BytesIO___init__}, + {Py_tp_new, bytesio_new}, + {0, NULL}, +}; + +PyType_Spec bytesio_spec = { + .name = "_io.BytesIO", + .basicsize = sizeof(bytesio), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = bytesio_slots, +}; /* * Implementation of the small intermediate object used by getbuffer(). 
diff --git a/Modules/_io/clinic/bufferedio.c.h b/Modules/_io/clinic/bufferedio.c.h index 38ea756879c1..d44321bb8b96 100644 --- a/Modules/_io/clinic/bufferedio.c.h +++ b/Modules/_io/clinic/bufferedio.c.h @@ -601,7 +601,7 @@ static int _io_BufferedRWPair___init__(PyObject *self, PyObject *args, PyObject *kwargs) { int return_value = -1; - PyTypeObject *base_tp = &PyBufferedRWPair_Type; + PyTypeObject *base_tp = clinic_state()->PyBufferedRWPair_Type; PyObject *reader; PyObject *writer; Py_ssize_t buffer_size = DEFAULT_BUFFER_SIZE; @@ -714,4 +714,4 @@ _io_BufferedRandom___init__(PyObject *self, PyObject *args, PyObject *kwargs) exit: return return_value; } -/*[clinic end generated code: output=953f1577e96e8d86 input=a9049054013a1b77]*/ +/*[clinic end generated code: output=8412b10c04259bb8 input=a9049054013a1b77]*/ diff --git a/Modules/_io/fileio.c b/Modules/_io/fileio.c index d1a183cedac5..f424fb8439d7 100644 --- a/Modules/_io/fileio.c +++ b/Modules/_io/fileio.c @@ -51,9 +51,9 @@ /*[clinic input] module _io -class _io.FileIO "fileio *" "&PyFileIO_Type" +class _io.FileIO "fileio *" "clinic_state()->PyFileIO_Type" [clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=1c77708b41fda70c]*/ +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=ac25ec278f4d6703]*/ typedef struct { PyObject_HEAD @@ -70,9 +70,7 @@ typedef struct { PyObject *dict; } fileio; -PyTypeObject PyFileIO_Type; - -#define PyFileIO_Check(op) (PyObject_TypeCheck((op), &PyFileIO_Type)) +#define PyFileIO_Check(state, op) (PyObject_TypeCheck((op), state->PyFileIO_Type)) /* Forward declarations */ static PyObject* portable_lseek(fileio *self, PyObject *posobj, int whence, bool suppress_pipe_error); @@ -242,7 +240,10 @@ _io_FileIO___init___impl(fileio *self, PyObject *nameobj, const char *mode, int fstat_result; int async_err = 0; - assert(PyFileIO_Check(self)); +#ifdef Py_DEBUG + _PyIO_State *state = find_io_state_by_def(Py_TYPE(self)); + assert(PyFileIO_Check(state, self)); +#endif if (self->fd >= 0) { if (self->closefd) { /* Have to close the existing file first. 
*/ @@ -503,6 +504,7 @@ _io_FileIO___init___impl(fileio *self, PyObject *nameobj, const char *mode, static int fileio_traverse(fileio *self, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(self)); Py_VISIT(self->dict); return 0; } @@ -517,6 +519,7 @@ fileio_clear(fileio *self) static void fileio_dealloc(fileio *self) { + PyTypeObject *tp = Py_TYPE(self); self->finalizing = 1; if (_PyIOBase_finalize((PyObject *) self) < 0) return; @@ -524,7 +527,8 @@ fileio_dealloc(fileio *self) if (self->weakreflist != NULL) PyObject_ClearWeakRefs((PyObject *) self); Py_CLEAR(self->dict); - Py_TYPE(self)->tp_free((PyObject *)self); + tp->tp_free((PyObject *)self); + Py_DECREF(tp); } static PyObject * @@ -1177,57 +1181,29 @@ static PyGetSetDef fileio_getsetlist[] = { static PyMemberDef fileio_members[] = { {"_blksize", T_UINT, offsetof(fileio, blksize), 0}, {"_finalizing", T_BOOL, offsetof(fileio, finalizing), 0}, + {"__weaklistoffset__", T_PYSSIZET, offsetof(fileio, weakreflist), READONLY}, + {"__dictoffset__", T_PYSSIZET, offsetof(fileio, dict), READONLY}, {NULL} }; -PyTypeObject PyFileIO_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "_io.FileIO", - sizeof(fileio), - 0, - (destructor)fileio_dealloc, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - (reprfunc)fileio_repr, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE - | Py_TPFLAGS_HAVE_GC, /* tp_flags */ - _io_FileIO___init____doc__, /* tp_doc */ - (traverseproc)fileio_traverse, /* tp_traverse */ - (inquiry)fileio_clear, /* tp_clear */ - 0, /* tp_richcompare */ - offsetof(fileio, weakreflist), /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - fileio_methods, /* tp_methods */ - fileio_members, /* tp_members */ - fileio_getsetlist, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - offsetof(fileio, dict), /* tp_dictoffset */ - _io_FileIO___init__, /* tp_init */ - PyType_GenericAlloc, /* tp_alloc */ - fileio_new, /* tp_new */ - PyObject_GC_Del, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ - 0, /* tp_version_tag */ - 0, /* tp_finalize */ +static PyType_Slot fileio_slots[] = { + {Py_tp_dealloc, fileio_dealloc}, + {Py_tp_repr, fileio_repr}, + {Py_tp_doc, (void *)_io_FileIO___init____doc__}, + {Py_tp_traverse, fileio_traverse}, + {Py_tp_clear, fileio_clear}, + {Py_tp_methods, fileio_methods}, + {Py_tp_members, fileio_members}, + {Py_tp_getset, fileio_getsetlist}, + {Py_tp_init, _io_FileIO___init__}, + {Py_tp_new, fileio_new}, + {0, NULL}, +}; + +PyType_Spec fileio_spec = { + .name = "_io.FileIO", + .basicsize = sizeof(fileio), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = fileio_slots, }; diff --git a/Modules/_io/stringio.c b/Modules/_io/stringio.c index ae6c3125a2d9..54c050f0be46 100644 --- a/Modules/_io/stringio.c +++ b/Modules/_io/stringio.c @@ -13,9 +13,9 @@ /*[clinic input] module _io -class _io.StringIO "stringio *" "&PyStringIO_Type" +class _io.StringIO "stringio *" "clinic_state()->PyStringIO_Type" [clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=c17bc0f42165cd7d]*/ 
+/*[clinic end generated code: output=da39a3ee5e6b4b0d input=2693eada0658d470]*/ typedef struct { PyObject_HEAD @@ -43,6 +43,7 @@ typedef struct { PyObject *dict; PyObject *weakreflist; + _PyIO_State *module_state; } stringio; static int _io_StringIO___init__(PyObject *self, PyObject *args, PyObject *kwargs); @@ -401,7 +402,7 @@ stringio_iternext(stringio *self) CHECK_CLOSED(self); ENSURE_REALIZED(self); - if (Py_IS_TYPE(self, &PyStringIO_Type)) { + if (Py_IS_TYPE(self, self->module_state->PyStringIO_Type)) { /* Skip method call overhead for speed */ line = _stringio_readline(self, -1); } @@ -581,6 +582,7 @@ _io_StringIO_close_impl(stringio *self) static int stringio_traverse(stringio *self, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(self)); Py_VISIT(self->dict); return 0; } @@ -595,6 +597,7 @@ stringio_clear(stringio *self) static void stringio_dealloc(stringio *self) { + PyTypeObject *tp = Py_TYPE(self); _PyObject_GC_UNTRACK(self); self->ok = 0; if (self->buf) { @@ -606,9 +609,11 @@ stringio_dealloc(stringio *self) Py_CLEAR(self->writenl); Py_CLEAR(self->decoder); Py_CLEAR(self->dict); - if (self->weakreflist != NULL) + if (self->weakreflist != NULL) { PyObject_ClearWeakRefs((PyObject *) self); - Py_TYPE(self)->tp_free(self); + } + tp->tp_free(self); + Py_DECREF(tp); } static PyObject * @@ -745,7 +750,7 @@ _io_StringIO___init___impl(stringio *self, PyObject *value, self->state = STATE_ACCUMULATING; } self->pos = 0; - + self->module_state = find_io_state_by_def(Py_TYPE(self)); self->closed = 0; self->ok = 1; return 0; @@ -963,7 +968,9 @@ stringio_newlines(stringio *self, void *context) return PyObject_GetAttr(self->decoder, &_Py_ID(newlines)); } +#define clinic_state() (find_io_state_by_def(Py_TYPE(self))) #include "clinic/stringio.c.h" +#undef clinic_state static struct PyMethodDef stringio_methods[] = { _IO_STRINGIO_CLOSE_METHODDEF @@ -997,44 +1004,30 @@ static PyGetSetDef stringio_getset[] = { {NULL} }; -PyTypeObject PyStringIO_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "_io.StringIO", /*tp_name*/ - sizeof(stringio), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - (destructor)stringio_dealloc, /*tp_dealloc*/ - 0, /*tp_vectorcall_offset*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_as_async*/ - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE - | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - _io_StringIO___init____doc__, /*tp_doc*/ - (traverseproc)stringio_traverse, /*tp_traverse*/ - (inquiry)stringio_clear, /*tp_clear*/ - 0, /*tp_richcompare*/ - offsetof(stringio, weakreflist), /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - (iternextfunc)stringio_iternext, /*tp_iternext*/ - stringio_methods, /*tp_methods*/ - 0, /*tp_members*/ - stringio_getset, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - offsetof(stringio, dict), /*tp_dictoffset*/ - _io_StringIO___init__, /*tp_init*/ - 0, /*tp_alloc*/ - stringio_new, /*tp_new*/ +static struct PyMemberDef stringio_members[] = { + {"__weaklistoffset__", T_PYSSIZET, offsetof(stringio, weakreflist), READONLY}, + {"__dictoffset__", T_PYSSIZET, offsetof(stringio, dict), READONLY}, + {NULL}, +}; + +static PyType_Slot stringio_slots[] = { + {Py_tp_dealloc, stringio_dealloc}, + {Py_tp_doc, (void *)_io_StringIO___init____doc__}, + {Py_tp_traverse, stringio_traverse}, + {Py_tp_clear, stringio_clear}, + {Py_tp_iternext, 
stringio_iternext}, + {Py_tp_methods, stringio_methods}, + {Py_tp_members, stringio_members}, + {Py_tp_getset, stringio_getset}, + {Py_tp_init, _io_StringIO___init__}, + {Py_tp_new, stringio_new}, + {0, NULL}, +}; + +PyType_Spec stringio_spec = { + .name = "_io.StringIO", + .basicsize = sizeof(stringio), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = stringio_slots, }; diff --git a/Modules/_io/textio.c b/Modules/_io/textio.c index ea2ea32c3369..fbf0bf468403 100644 --- a/Modules/_io/textio.c +++ b/Modules/_io/textio.c @@ -19,9 +19,9 @@ /*[clinic input] module _io class _io.IncrementalNewlineDecoder "nldecoder_object *" "&PyIncrementalNewlineDecoder_Type" -class _io.TextIOWrapper "textio *" "&TextIOWrapper_Type" +class _io.TextIOWrapper "textio *" "clinic_state()->TextIOWrapper_Type" [clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=ed072384f8aada2c]*/ +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=d3f032e90f74c8f2]*/ /* TextIOBase */ @@ -682,6 +682,8 @@ typedef struct PyObject *weakreflist; PyObject *dict; + + _PyIO_State *state; } textio; static void @@ -1175,15 +1177,16 @@ _io_TextIOWrapper___init___impl(textio *self, PyObject *buffer, /* Finished sorting out the codec details */ Py_CLEAR(codec_info); - if (Py_IS_TYPE(buffer, &PyBufferedReader_Type) || - Py_IS_TYPE(buffer, &PyBufferedWriter_Type) || - Py_IS_TYPE(buffer, &PyBufferedRandom_Type)) + _PyIO_State *state = find_io_state_by_def(Py_TYPE(self)); + if (Py_IS_TYPE(buffer, state->PyBufferedReader_Type) || + Py_IS_TYPE(buffer, state->PyBufferedWriter_Type) || + Py_IS_TYPE(buffer, state->PyBufferedRandom_Type)) { if (_PyObject_LookupAttr(buffer, &_Py_ID(raw), &raw) < 0) goto error; /* Cache the raw FileIO object to speed up 'closed' checks */ if (raw != NULL) { - if (Py_IS_TYPE(raw, &PyFileIO_Type)) + if (Py_IS_TYPE(raw, state->PyFileIO_Type)) self->raw = raw; else Py_DECREF(raw); @@ -1211,6 +1214,7 @@ _io_TextIOWrapper___init___impl(textio *self, PyObject *buffer, goto error; } + self->state = state; self->ok = 1; return 0; @@ -1387,6 +1391,7 @@ textiowrapper_clear(textio *self) static void textiowrapper_dealloc(textio *self) { + PyTypeObject *tp = Py_TYPE(self); self->finalizing = 1; if (_PyIOBase_finalize((PyObject *) self) < 0) return; @@ -1395,12 +1400,14 @@ textiowrapper_dealloc(textio *self) if (self->weakreflist != NULL) PyObject_ClearWeakRefs((PyObject *)self); textiowrapper_clear(self); - Py_TYPE(self)->tp_free((PyObject *)self); + tp->tp_free((PyObject *)self); + Py_DECREF(tp); } static int textiowrapper_traverse(textio *self, visitproc visit, void *arg) { + Py_VISIT(Py_TYPE(self)); Py_VISIT(self->buffer); Py_VISIT(self->encoding); Py_VISIT(self->encoder); @@ -1424,7 +1431,7 @@ textiowrapper_closed_get(textio *self, void *context); do { \ int r; \ PyObject *_res; \ - if (Py_IS_TYPE(self, &PyTextIOWrapper_Type)) { \ + if (Py_IS_TYPE(self, self->state->PyTextIOWrapper_Type)) { \ if (self->raw != NULL) \ r = _PyFileIO_closed(self->raw); \ else { \ @@ -3053,7 +3060,7 @@ textiowrapper_iternext(textio *self) CHECK_ATTACHED(self); self->telling = 0; - if (Py_IS_TYPE(self, &PyTextIOWrapper_Type)) { + if (Py_IS_TYPE(self, self->state->PyTextIOWrapper_Type)) { /* Skip method call overhead for speed */ line = _textiowrapper_readline(self, -1); } @@ -3145,7 +3152,9 @@ textiowrapper_chunk_size_set(textio *self, PyObject *arg, void *context) return 0; } +#define clinic_state() (find_io_state_by_def(Py_TYPE(self))) 
#include "clinic/textio.c.h" +#undef clinic_state static PyMethodDef incrementalnewlinedecoder_methods[] = { _IO_INCREMENTALNEWLINEDECODER_DECODE_METHODDEF @@ -3229,6 +3238,8 @@ static PyMemberDef textiowrapper_members[] = { {"line_buffering", T_BOOL, offsetof(textio, line_buffering), READONLY}, {"write_through", T_BOOL, offsetof(textio, write_through), READONLY}, {"_finalizing", T_BOOL, offsetof(textio, finalizing), 0}, + {"__weaklistoffset__", T_PYSSIZET, offsetof(textio, weakreflist), READONLY}, + {"__dictoffset__", T_PYSSIZET, offsetof(textio, dict), READONLY}, {NULL} }; @@ -3244,54 +3255,24 @@ static PyGetSetDef textiowrapper_getset[] = { {NULL} }; -PyTypeObject PyTextIOWrapper_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "_io.TextIOWrapper", /*tp_name*/ - sizeof(textio), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - (destructor)textiowrapper_dealloc, /*tp_dealloc*/ - 0, /*tp_vectorcall_offset*/ - 0, /*tp_getattr*/ - 0, /*tps_etattr*/ - 0, /*tp_as_async*/ - (reprfunc)textiowrapper_repr,/*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash */ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE - | Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - _io_TextIOWrapper___init____doc__, /* tp_doc */ - (traverseproc)textiowrapper_traverse, /* tp_traverse */ - (inquiry)textiowrapper_clear, /* tp_clear */ - 0, /* tp_richcompare */ - offsetof(textio, weakreflist), /*tp_weaklistoffset*/ - 0, /* tp_iter */ - (iternextfunc)textiowrapper_iternext, /* tp_iternext */ - textiowrapper_methods, /* tp_methods */ - textiowrapper_members, /* tp_members */ - textiowrapper_getset, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - offsetof(textio, dict), /*tp_dictoffset*/ - _io_TextIOWrapper___init__, /* tp_init */ - 0, /* tp_alloc */ - PyType_GenericNew, /* tp_new */ - 0, /* tp_free */ - 0, /* tp_is_gc */ - 0, /* tp_bases */ - 0, /* tp_mro */ - 0, /* tp_cache */ - 0, /* tp_subclasses */ - 0, /* tp_weaklist */ - 0, /* tp_del */ - 0, /* tp_version_tag */ - 0, /* tp_finalize */ +PyType_Slot textiowrapper_slots[] = { + {Py_tp_dealloc, textiowrapper_dealloc}, + {Py_tp_repr, textiowrapper_repr}, + {Py_tp_doc, (void *)_io_TextIOWrapper___init____doc__}, + {Py_tp_traverse, textiowrapper_traverse}, + {Py_tp_clear, textiowrapper_clear}, + {Py_tp_iternext, textiowrapper_iternext}, + {Py_tp_methods, textiowrapper_methods}, + {Py_tp_members, textiowrapper_members}, + {Py_tp_getset, textiowrapper_getset}, + {Py_tp_init, _io_TextIOWrapper___init__}, + {0, NULL}, +}; + +PyType_Spec textiowrapper_spec = { + .name = "_io.TextIOWrapper", + .basicsize = sizeof(textio), + .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC | + Py_TPFLAGS_IMMUTABLETYPE), + .slots = textiowrapper_slots, }; diff --git a/Tools/c-analyzer/cpython/globals-to-fix.tsv b/Tools/c-analyzer/cpython/globals-to-fix.tsv index 6011b1604508..2e28c50c6ff6 100644 --- a/Tools/c-analyzer/cpython/globals-to-fix.tsv +++ b/Tools/c-analyzer/cpython/globals-to-fix.tsv @@ -317,19 +317,11 @@ Modules/_collectionsmodule.c - dequeiter_type - Modules/_collectionsmodule.c - dequereviter_type - Modules/_collectionsmodule.c - tuplegetter_type - Modules/_io/bufferedio.c - PyBufferedIOBase_Type - -Modules/_io/bufferedio.c - PyBufferedRWPair_Type - -Modules/_io/bufferedio.c - PyBufferedRandom_Type - -Modules/_io/bufferedio.c - PyBufferedReader_Type - -Modules/_io/bufferedio.c - PyBufferedWriter_Type - 
-Modules/_io/bytesio.c - PyBytesIO_Type - Modules/_io/bytesio.c - _PyBytesIOBuffer_Type - -Modules/_io/fileio.c - PyFileIO_Type - Modules/_io/iobase.c - PyIOBase_Type - Modules/_io/iobase.c - PyRawIOBase_Type - -Modules/_io/stringio.c - PyStringIO_Type - Modules/_io/textio.c - PyIncrementalNewlineDecoder_Type - Modules/_io/textio.c - PyTextIOBase_Type - -Modules/_io/textio.c - PyTextIOWrapper_Type - Modules/_io/winconsoleio.c - PyWindowsConsoleIO_Type - Modules/_testcapi/vectorcall.c - MethodDescriptorBase_Type - Modules/_testcapi/vectorcall.c - MethodDescriptorDerived_Type - From webhook-mailer at python.org Mon Feb 20 09:52:32 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Mon, 20 Feb 2023 14:52:32 -0000 Subject: [Python-checkins] [3.10] gh-101981: Build macOS as recommended by the devguide (GH-102070) (#102073) Message-ID: https://github.com/python/cpython/commit/63877f697d678a7c8ac7c878d8225c0b2724b839 commit: 63877f697d678a7c8ac7c878d8225c0b2724b839 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: erlend-aasland date: 2023-02-20T15:52:25+01:00 summary: [3.10] gh-101981: Build macOS as recommended by the devguide (GH-102070) (#102073) gh-101981: Build macOS as recommended by the devguide (GH-102070) (cherry picked from commit 27136310414965a3ea7f835e416cf74b91cefb48) files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 9f039d71b3c2..0f3ad7b0bcf5 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -177,12 +177,19 @@ jobs: PYTHONSTRICTEXTENSIONBUILD: 1 steps: - uses: actions/checkout at v3 - - name: Prepare homebrew environment variables + - name: Install Homebrew dependencies + run: brew install pkg-config openssl at 1.1 xz gdbm tcl-tk + - name: Prepare Homebrew environment variables run: | - echo "LDFLAGS=-L$(brew --prefix tcl-tk)/lib" >> $GITHUB_ENV + echo "CFLAGS=-I$(brew --prefix gdbm)/include -I$(brew --prefix xz)/include" >> $GITHUB_ENV + echo "LDFLAGS=-L$(brew --prefix gdbm)/lib -I$(brew --prefix xz)/lib" >> $GITHUB_ENV echo "PKG_CONFIG_PATH=$(brew --prefix openssl at 1.1)/lib/pkgconfig:$(brew --prefix tcl-tk)/lib/pkgconfig" >> $GITHUB_ENV - name: Configure CPython - run: ./configure --with-pydebug --prefix=/opt/python-dev + run: | + ./configure \ + --with-pydebug \ + --prefix=/opt/python-dev \ + --with-openssl="$(brew --prefix openssl at 1.1)" - name: Build CPython run: make -j4 - name: Display build info From webhook-mailer at python.org Mon Feb 20 09:56:56 2023 From: webhook-mailer at python.org (markshannon) Date: Mon, 20 Feb 2023 14:56:56 -0000 Subject: [Python-checkins] gh-101907: Stop using `_Py_OPCODE` and `_Py_OPARG` macros (GH-101912) Message-ID: https://github.com/python/cpython/commit/a99eb5cd9947629a6745a4ad99cb07af1c287b5d commit: a99eb5cd9947629a6745a4ad99cb07af1c287b5d branch: main author: Steve Dower committer: markshannon date: 2023-02-20T14:56:48Z summary: gh-101907: Stop using `_Py_OPCODE` and `_Py_OPARG` macros (GH-101912) * gh-101907: Removes use of non-standard C++ extension from Include/cpython/code.h * Make cases_generator correct on Windows files: A Misc/NEWS.d/next/C API/2023-02-14-15-53-01.gh-issue-101907.HgF1N2.rst M Include/cpython/code.h M Objects/codeobject.c M Objects/frameobject.c M Objects/genobject.c M Objects/typeobject.c M Python/bytecodes.c M Python/ceval.c M Python/ceval_macros.h M Python/compile.c M Python/generated_cases.c.h M Python/specialize.c M 
Tools/cases_generator/generate_cases.py diff --git a/Include/cpython/code.h b/Include/cpython/code.h index 0cf49f06c877..fba9296aedc9 100644 --- a/Include/cpython/code.h +++ b/Include/cpython/code.h @@ -19,21 +19,35 @@ extern "C" { typedef union { uint16_t cache; struct { - uint8_t opcode; - uint8_t oparg; - }; + uint8_t code; + uint8_t arg; + } op; } _Py_CODEUNIT; -#define _Py_OPCODE(word) ((word).opcode) -#define _Py_OPARG(word) ((word).oparg) + +/* These macros only remain defined for compatibility. */ +#define _Py_OPCODE(word) ((word).op.code) +#define _Py_OPARG(word) ((word).op.arg) + +static inline _Py_CODEUNIT +_py_make_codeunit(uint8_t opcode, uint8_t oparg) +{ + // No designated initialisers because of C++ compat + _Py_CODEUNIT word; + word.op.code = opcode; + word.op.arg = oparg; + return word; +} static inline void _py_set_opcode(_Py_CODEUNIT *word, uint8_t opcode) { - word->opcode = opcode; + word->op.code = opcode; } -#define _Py_SET_OPCODE(word, opcode) _py_set_opocde(&(word), opcode) +#define _Py_MAKE_CODEUNIT(opcode, oparg) _py_make_codeunit((opcode), (oparg)) +#define _Py_SET_OPCODE(word, opcode) _py_set_opcode(&(word), (opcode)) + typedef struct { PyObject *_co_code; diff --git a/Misc/NEWS.d/next/C API/2023-02-14-15-53-01.gh-issue-101907.HgF1N2.rst b/Misc/NEWS.d/next/C API/2023-02-14-15-53-01.gh-issue-101907.HgF1N2.rst new file mode 100644 index 000000000000..cfc0d72cdbca --- /dev/null +++ b/Misc/NEWS.d/next/C API/2023-02-14-15-53-01.gh-issue-101907.HgF1N2.rst @@ -0,0 +1 @@ +Removes use of non-standard C++ extension in public header files. diff --git a/Objects/codeobject.c b/Objects/codeobject.c index ab31b6582cda..a03b14edea8d 100644 --- a/Objects/codeobject.c +++ b/Objects/codeobject.c @@ -413,7 +413,7 @@ init_code(PyCodeObject *co, struct _PyCodeConstructor *con) PyBytes_GET_SIZE(con->code)); int entry_point = 0; while (entry_point < Py_SIZE(co) && - _Py_OPCODE(_PyCode_CODE(co)[entry_point]) != RESUME) { + _PyCode_CODE(co)[entry_point].op.code != RESUME) { entry_point++; } co->_co_firsttraceable = entry_point; @@ -1505,12 +1505,12 @@ deopt_code(_Py_CODEUNIT *instructions, Py_ssize_t len) { for (int i = 0; i < len; i++) { _Py_CODEUNIT instruction = instructions[i]; - int opcode = _PyOpcode_Deopt[_Py_OPCODE(instruction)]; + int opcode = _PyOpcode_Deopt[instruction.op.code]; int caches = _PyOpcode_Caches[opcode]; - instructions[i].opcode = opcode; + instructions[i].op.code = opcode; while (caches--) { - instructions[++i].opcode = CACHE; - instructions[i].oparg = 0; + instructions[++i].op.code = CACHE; + instructions[i].op.arg = 0; } } } @@ -1763,13 +1763,13 @@ code_richcompare(PyObject *self, PyObject *other, int op) for (int i = 0; i < Py_SIZE(co); i++) { _Py_CODEUNIT co_instr = _PyCode_CODE(co)[i]; _Py_CODEUNIT cp_instr = _PyCode_CODE(cp)[i]; - co_instr.opcode = _PyOpcode_Deopt[_Py_OPCODE(co_instr)]; - cp_instr.opcode =_PyOpcode_Deopt[_Py_OPCODE(cp_instr)]; + co_instr.op.code = _PyOpcode_Deopt[co_instr.op.code]; + cp_instr.op.code = _PyOpcode_Deopt[cp_instr.op.code]; eq = co_instr.cache == cp_instr.cache; if (!eq) { goto unequal; } - i += _PyOpcode_Caches[_Py_OPCODE(co_instr)]; + i += _PyOpcode_Caches[co_instr.op.code]; } /* compare constants */ @@ -1848,9 +1848,9 @@ code_hash(PyCodeObject *co) SCRAMBLE_IN(co->co_firstlineno); SCRAMBLE_IN(Py_SIZE(co)); for (int i = 0; i < Py_SIZE(co); i++) { - int deop = _PyOpcode_Deopt[_Py_OPCODE(_PyCode_CODE(co)[i])]; + int deop = _PyOpcode_Deopt[_PyCode_CODE(co)[i].op.code]; SCRAMBLE_IN(deop); - 
SCRAMBLE_IN(_Py_OPARG(_PyCode_CODE(co)[i])); + SCRAMBLE_IN(_PyCode_CODE(co)[i].op.arg); i += _PyOpcode_Caches[deop]; } if ((Py_hash_t)uhash == -1) { diff --git a/Objects/frameobject.c b/Objects/frameobject.c index 581ed2d214c4..34143c9a40b2 100644 --- a/Objects/frameobject.c +++ b/Objects/frameobject.c @@ -111,13 +111,13 @@ static unsigned int get_arg(const _Py_CODEUNIT *codestr, Py_ssize_t i) { _Py_CODEUNIT word; - unsigned int oparg = _Py_OPARG(codestr[i]); - if (i >= 1 && _Py_OPCODE(word = codestr[i-1]) == EXTENDED_ARG) { - oparg |= _Py_OPARG(word) << 8; - if (i >= 2 && _Py_OPCODE(word = codestr[i-2]) == EXTENDED_ARG) { - oparg |= _Py_OPARG(word) << 16; - if (i >= 3 && _Py_OPCODE(word = codestr[i-3]) == EXTENDED_ARG) { - oparg |= _Py_OPARG(word) << 24; + unsigned int oparg = codestr[i].op.arg; + if (i >= 1 && (word = codestr[i-1]).op.code == EXTENDED_ARG) { + oparg |= word.op.arg << 8; + if (i >= 2 && (word = codestr[i-2]).op.code == EXTENDED_ARG) { + oparg |= word.op.arg << 16; + if (i >= 3 && (word = codestr[i-3]).op.code == EXTENDED_ARG) { + oparg |= word.op.arg << 24; } } } @@ -304,7 +304,7 @@ mark_stacks(PyCodeObject *code_obj, int len) if (next_stack == UNINITIALIZED) { continue; } - opcode = _Py_OPCODE(code[i]); + opcode = code[i].op.code; switch (opcode) { case JUMP_IF_FALSE_OR_POP: case JUMP_IF_TRUE_OR_POP: @@ -610,7 +610,7 @@ _PyFrame_GetState(PyFrameObject *frame) if (_PyInterpreterFrame_LASTI(frame->f_frame) < 0) { return FRAME_CREATED; } - switch (_Py_OPCODE(*frame->f_frame->prev_instr)) + switch (frame->f_frame->prev_instr->op.code) { case COPY_FREE_VARS: case MAKE_CELL: @@ -1092,8 +1092,8 @@ _PyFrame_OpAlreadyRan(_PyInterpreterFrame *frame, int opcode, int oparg) for (_Py_CODEUNIT *instruction = _PyCode_CODE(frame->f_code); instruction < frame->prev_instr; instruction++) { - int check_opcode = _PyOpcode_Deopt[_Py_OPCODE(*instruction)]; - check_oparg |= _Py_OPARG(*instruction); + int check_opcode = _PyOpcode_Deopt[instruction->op.code]; + check_oparg |= instruction->op.arg; if (check_opcode == opcode && check_oparg == oparg) { return 1; } @@ -1117,7 +1117,7 @@ frame_init_get_vars(_PyInterpreterFrame *frame) // here: PyCodeObject *co = frame->f_code; int lasti = _PyInterpreterFrame_LASTI(frame); - if (!(lasti < 0 && _Py_OPCODE(_PyCode_CODE(co)[0]) == COPY_FREE_VARS + if (!(lasti < 0 && _PyCode_CODE(co)->op.code == COPY_FREE_VARS && PyFunction_Check(frame->f_funcobj))) { /* Free vars are initialized */ diff --git a/Objects/genobject.c b/Objects/genobject.c index 35246653c453..aec64ca7004e 100644 --- a/Objects/genobject.c +++ b/Objects/genobject.c @@ -332,11 +332,11 @@ _PyGen_yf(PyGenObject *gen) /* Return immediately if the frame didn't start yet. SEND always come after LOAD_CONST: a code object should not start with SEND */ - assert(_Py_OPCODE(_PyCode_CODE(gen->gi_code)[0]) != SEND); + assert(_PyCode_CODE(gen->gi_code)[0].op.code != SEND); return NULL; } _Py_CODEUNIT next = frame->prev_instr[1]; - if (_Py_OPCODE(next) != RESUME || _Py_OPARG(next) < 2) + if (next.op.code != RESUME || next.op.arg < 2) { /* Not in a yield from */ return NULL; @@ -371,9 +371,9 @@ gen_close(PyGenObject *gen, PyObject *args) _PyInterpreterFrame *frame = (_PyInterpreterFrame *)gen->gi_iframe; /* It is possible for the previous instruction to not be a * YIELD_VALUE if the debugger has changed the lineno. 
*/ - if (err == 0 && frame->prev_instr->opcode == YIELD_VALUE) { - assert(frame->prev_instr[1].opcode == RESUME); - int exception_handler_depth = frame->prev_instr->oparg; + if (err == 0 && frame->prev_instr[0].op.code == YIELD_VALUE) { + assert(frame->prev_instr[1].op.code == RESUME); + int exception_handler_depth = frame->prev_instr[0].op.code; assert(exception_handler_depth > 0); /* We can safely ignore the outermost try block * as it automatically generated to handle diff --git a/Objects/typeobject.c b/Objects/typeobject.c index f2e8092aa37e..2d1220a06950 100644 --- a/Objects/typeobject.c +++ b/Objects/typeobject.c @@ -9507,8 +9507,8 @@ super_init_without_args(_PyInterpreterFrame *cframe, PyCodeObject *co, if (_PyInterpreterFrame_LASTI(cframe) >= 0) { // MAKE_CELL and COPY_FREE_VARS have no quickened forms, so no need // to use _PyOpcode_Deopt here: - assert(_Py_OPCODE(_PyCode_CODE(co)[0]) == MAKE_CELL || - _Py_OPCODE(_PyCode_CODE(co)[0]) == COPY_FREE_VARS); + assert(_PyCode_CODE(co)[0].op.code == MAKE_CELL || + _PyCode_CODE(co)[0].op.code == COPY_FREE_VARS); assert(PyCell_Check(firstarg)); firstarg = PyCell_GET(firstarg); } diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 84747f1758e0..c5959f2f994f 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -246,9 +246,9 @@ dummy_func( DEOPT_IF(!PyUnicode_CheckExact(left), BINARY_OP); DEOPT_IF(Py_TYPE(right) != Py_TYPE(left), BINARY_OP); _Py_CODEUNIT true_next = next_instr[INLINE_CACHE_ENTRIES_BINARY_OP]; - assert(_Py_OPCODE(true_next) == STORE_FAST || - _Py_OPCODE(true_next) == STORE_FAST__LOAD_FAST); - PyObject **target_local = &GETLOCAL(_Py_OPARG(true_next)); + assert(true_next.op.code == STORE_FAST || + true_next.op.code == STORE_FAST__LOAD_FAST); + PyObject **target_local = &GETLOCAL(true_next.op.arg); DEOPT_IF(*target_local != left, BINARY_OP); STAT_INC(BINARY_OP, hit); /* Handle `left = left + right` or `left += right` for str. 
@@ -1748,10 +1748,10 @@ dummy_func( Py_DECREF(left); Py_DECREF(right); ERROR_IF(cond == NULL, error); - assert(_Py_OPCODE(next_instr[1]) == POP_JUMP_IF_FALSE || - _Py_OPCODE(next_instr[1]) == POP_JUMP_IF_TRUE); - bool jump_on_true = _Py_OPCODE(next_instr[1]) == POP_JUMP_IF_TRUE; - int offset = _Py_OPARG(next_instr[1]); + assert(next_instr[1].op.code == POP_JUMP_IF_FALSE || + next_instr[1].op.code == POP_JUMP_IF_TRUE); + bool jump_on_true = next_instr[1].op.code == POP_JUMP_IF_TRUE; + int offset = next_instr[1].op.arg; int err = PyObject_IsTrue(cond); Py_DECREF(cond); if (err < 0) { @@ -1774,7 +1774,7 @@ dummy_func( _Py_DECREF_SPECIALIZED(left, _PyFloat_ExactDealloc); _Py_DECREF_SPECIALIZED(right, _PyFloat_ExactDealloc); if (sign_ish & oparg) { - int offset = _Py_OPARG(next_instr[1]); + int offset = next_instr[1].op.arg; JUMPBY(offset); } } @@ -1795,7 +1795,7 @@ dummy_func( _Py_DECREF_SPECIALIZED(left, (destructor)PyObject_Free); _Py_DECREF_SPECIALIZED(right, (destructor)PyObject_Free); if (sign_ish & oparg) { - int offset = _Py_OPARG(next_instr[1]); + int offset = next_instr[1].op.arg; JUMPBY(offset); } } @@ -1814,7 +1814,7 @@ dummy_func( assert((oparg & 0xf) == COMPARISON_NOT_EQUALS || (oparg & 0xf) == COMPARISON_EQUALS); assert(COMPARISON_NOT_EQUALS + 1 == COMPARISON_EQUALS); if ((res + COMPARISON_NOT_EQUALS) & oparg) { - int offset = _Py_OPARG(next_instr[1]); + int offset = next_instr[1].op.arg; JUMPBY(offset); } } @@ -2122,7 +2122,7 @@ dummy_func( _PyErr_Clear(tstate); } /* iterator ended normally */ - assert(_Py_OPCODE(next_instr[INLINE_CACHE_ENTRIES_FOR_ITER + oparg]) == END_FOR); + assert(next_instr[INLINE_CACHE_ENTRIES_FOR_ITER + oparg].op.code == END_FOR); Py_DECREF(iter); STACK_SHRINK(1); /* Jump forward oparg, then skip following END_FOR instruction */ @@ -2186,7 +2186,7 @@ dummy_func( DEOPT_IF(Py_TYPE(r) != &PyRangeIter_Type, FOR_ITER); STAT_INC(FOR_ITER, hit); _Py_CODEUNIT next = next_instr[INLINE_CACHE_ENTRIES_FOR_ITER]; - assert(_PyOpcode_Deopt[_Py_OPCODE(next)] == STORE_FAST); + assert(_PyOpcode_Deopt[next.op.code] == STORE_FAST); if (r->len <= 0) { STACK_SHRINK(1); Py_DECREF(r); @@ -2197,7 +2197,7 @@ dummy_func( long value = r->start; r->start = value + r->step; r->len--; - if (_PyLong_AssignValue(&GETLOCAL(_Py_OPARG(next)), value) < 0) { + if (_PyLong_AssignValue(&GETLOCAL(next.op.arg), value) < 0) { goto error; } // The STORE_FAST is already done. @@ -2220,7 +2220,7 @@ dummy_func( gen->gi_exc_state.previous_item = tstate->exc_info; tstate->exc_info = &gen->gi_exc_state; JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg); - assert(_Py_OPCODE(*next_instr) == END_FOR); + assert(next_instr->op.code == END_FOR); DISPATCH_INLINED(gen_frame); } @@ -2809,7 +2809,7 @@ dummy_func( STACK_SHRINK(3); // CALL + POP_TOP JUMPBY(INLINE_CACHE_ENTRIES_CALL + 1); - assert(_Py_OPCODE(next_instr[-1]) == POP_TOP); + assert(next_instr[-1].op.code == POP_TOP); DISPATCH(); } @@ -3118,8 +3118,8 @@ dummy_func( inst(EXTENDED_ARG, (--)) { assert(oparg); assert(cframe.use_tracing == 0); - opcode = _Py_OPCODE(*next_instr); - oparg = oparg << 8 | _Py_OPARG(*next_instr); + opcode = next_instr->op.code; + oparg = oparg << 8 | next_instr->op.arg; PRE_DISPATCH_GOTO(); DISPATCH_GOTO(); } diff --git a/Python/ceval.c b/Python/ceval.c index 308ef52259df..b85231a1df8a 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -132,8 +132,8 @@ lltrace_instruction(_PyInterpreterFrame *frame, objects enters the interpreter recursively. It is also slow. So you might want to comment it out. 
*/ dump_stack(frame, stack_pointer); - int oparg = _Py_OPARG(*next_instr); - int opcode = _Py_OPCODE(*next_instr); + int oparg = next_instr->op.arg; + int opcode = next_instr->op.code; const char *opname = _PyOpcode_OpName[opcode]; assert(opname != NULL); int offset = (int)(next_instr - _PyCode_CODE(frame->f_code)); @@ -920,8 +920,8 @@ _PyEval_EvalFrameDefault(PyThreadState *tstate, _PyInterpreterFrame *frame, int // CPython hasn't ever traced the instruction after an EXTENDED_ARG. // Inline the EXTENDED_ARG here, so we can avoid branching there: INSTRUCTION_START(EXTENDED_ARG); - opcode = _Py_OPCODE(*next_instr); - oparg = oparg << 8 | _Py_OPARG(*next_instr); + opcode = next_instr->op.code; + oparg = oparg << 8 | next_instr->op.arg; // Make sure the next instruction isn't a RESUME, since that needs // to trace properly (and shouldn't have an EXTENDED_ARG, anyways): assert(opcode != RESUME); @@ -946,7 +946,7 @@ _PyEval_EvalFrameDefault(PyThreadState *tstate, _PyInterpreterFrame *frame, int #endif /* Tell C compilers not to hold the opcode variable in the loop. next_instr points the current instruction without TARGET(). */ - opcode = _Py_OPCODE(*next_instr); + opcode = next_instr->op.code; _PyErr_Format(tstate, PyExc_SystemError, "%U:%d: unknown opcode %d", frame->f_code->co_filename, @@ -2196,7 +2196,7 @@ maybe_call_line_trace(Py_tracefunc func, PyObject *obj, (_PyInterpreterFrame_LASTI(frame) < instr_prev && // SEND has no quickened forms, so no need to use _PyOpcode_Deopt // here: - _Py_OPCODE(*frame->prev_instr) != SEND); + frame->prev_instr->op.code != SEND); if (trace) { result = call_trace(func, obj, tstate, frame, PyTrace_LINE, Py_None); } diff --git a/Python/ceval_macros.h b/Python/ceval_macros.h index 691bf8e1caae..ac1fec77daca 100644 --- a/Python/ceval_macros.h +++ b/Python/ceval_macros.h @@ -100,7 +100,7 @@ #define DISPATCH_SAME_OPARG() \ { \ - opcode = _Py_OPCODE(*next_instr); \ + opcode = next_instr->op.code; \ PRE_DISPATCH_GOTO(); \ opcode |= cframe.use_tracing OR_DTRACE_LINE; \ DISPATCH_GOTO(); \ @@ -143,8 +143,8 @@ GETITEM(PyObject *v, Py_ssize_t i) { #define INSTR_OFFSET() ((int)(next_instr - _PyCode_CODE(frame->f_code))) #define NEXTOPARG() do { \ _Py_CODEUNIT word = *next_instr; \ - opcode = _Py_OPCODE(word); \ - oparg = _Py_OPARG(word); \ + opcode = word.op.code; \ + oparg = word.op.arg; \ } while (0) #define JUMPTO(x) (next_instr = _PyCode_CODE(frame->f_code) + (x)) #define JUMPBY(x) (next_instr += (x)) @@ -180,14 +180,14 @@ GETITEM(PyObject *v, Py_ssize_t i) { #if USE_COMPUTED_GOTOS #define PREDICT(op) if (0) goto PREDICT_ID(op) #else -#define PREDICT(op) \ +#define PREDICT(next_op) \ do { \ _Py_CODEUNIT word = *next_instr; \ - opcode = _Py_OPCODE(word) | cframe.use_tracing OR_DTRACE_LINE; \ - if (opcode == op) { \ - oparg = _Py_OPARG(word); \ - INSTRUCTION_START(op); \ - goto PREDICT_ID(op); \ + opcode = word.op.code | cframe.use_tracing OR_DTRACE_LINE; \ + if (opcode == next_op) { \ + oparg = word.op.arg; \ + INSTRUCTION_START(next_op); \ + goto PREDICT_ID(next_op); \ } \ } while(0) #endif diff --git a/Python/compile.c b/Python/compile.c index c3b344c7af2a..3f620beb0d02 100644 --- a/Python/compile.c +++ b/Python/compile.c @@ -274,31 +274,31 @@ write_instr(_Py_CODEUNIT *codestr, struct instr *instruction, int ilen) int caches = _PyOpcode_Caches[opcode]; switch (ilen - caches) { case 4: - codestr->opcode = EXTENDED_ARG; - codestr->oparg = (oparg >> 24) & 0xFF; + codestr->op.code = EXTENDED_ARG; + codestr->op.arg = (oparg >> 24) & 0xFF; codestr++; /* fall through */ 
case 3: - codestr->opcode = EXTENDED_ARG; - codestr->oparg = (oparg >> 16) & 0xFF; + codestr->op.code = EXTENDED_ARG; + codestr->op.arg = (oparg >> 16) & 0xFF; codestr++; /* fall through */ case 2: - codestr->opcode = EXTENDED_ARG; - codestr->oparg = (oparg >> 8) & 0xFF; + codestr->op.code = EXTENDED_ARG; + codestr->op.arg = (oparg >> 8) & 0xFF; codestr++; /* fall through */ case 1: - codestr->opcode = opcode; - codestr->oparg = oparg & 0xFF; + codestr->op.code = opcode; + codestr->op.arg = oparg & 0xFF; codestr++; break; default: Py_UNREACHABLE(); } while (caches--) { - codestr->opcode = CACHE; - codestr->oparg = 0; + codestr->op.code = CACHE; + codestr->op.arg = 0; codestr++; } } diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 730dfb7426ac..487e63d855d1 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -339,9 +339,9 @@ DEOPT_IF(!PyUnicode_CheckExact(left), BINARY_OP); DEOPT_IF(Py_TYPE(right) != Py_TYPE(left), BINARY_OP); _Py_CODEUNIT true_next = next_instr[INLINE_CACHE_ENTRIES_BINARY_OP]; - assert(_Py_OPCODE(true_next) == STORE_FAST || - _Py_OPCODE(true_next) == STORE_FAST__LOAD_FAST); - PyObject **target_local = &GETLOCAL(_Py_OPARG(true_next)); + assert(true_next.op.code == STORE_FAST || + true_next.op.code == STORE_FAST__LOAD_FAST); + PyObject **target_local = &GETLOCAL(true_next.op.arg); DEOPT_IF(*target_local != left, BINARY_OP); STAT_INC(BINARY_OP, hit); /* Handle `left = left + right` or `left += right` for str. @@ -2199,10 +2199,10 @@ Py_DECREF(left); Py_DECREF(right); if (cond == NULL) goto pop_2_error; - assert(_Py_OPCODE(next_instr[1]) == POP_JUMP_IF_FALSE || - _Py_OPCODE(next_instr[1]) == POP_JUMP_IF_TRUE); - bool jump_on_true = _Py_OPCODE(next_instr[1]) == POP_JUMP_IF_TRUE; - int offset = _Py_OPARG(next_instr[1]); + assert(next_instr[1].op.code == POP_JUMP_IF_FALSE || + next_instr[1].op.code == POP_JUMP_IF_TRUE); + bool jump_on_true = next_instr[1].op.code == POP_JUMP_IF_TRUE; + int offset = next_instr[1].op.arg; int err = PyObject_IsTrue(cond); Py_DECREF(cond); if (err < 0) { @@ -2230,7 +2230,7 @@ _Py_DECREF_SPECIALIZED(left, _PyFloat_ExactDealloc); _Py_DECREF_SPECIALIZED(right, _PyFloat_ExactDealloc); if (sign_ish & oparg) { - int offset = _Py_OPARG(next_instr[1]); + int offset = next_instr[1].op.arg; JUMPBY(offset); } STACK_SHRINK(2); @@ -2255,7 +2255,7 @@ _Py_DECREF_SPECIALIZED(left, (destructor)PyObject_Free); _Py_DECREF_SPECIALIZED(right, (destructor)PyObject_Free); if (sign_ish & oparg) { - int offset = _Py_OPARG(next_instr[1]); + int offset = next_instr[1].op.arg; JUMPBY(offset); } STACK_SHRINK(2); @@ -2278,7 +2278,7 @@ assert((oparg & 0xf) == COMPARISON_NOT_EQUALS || (oparg & 0xf) == COMPARISON_EQUALS); assert(COMPARISON_NOT_EQUALS + 1 == COMPARISON_EQUALS); if ((res + COMPARISON_NOT_EQUALS) & oparg) { - int offset = _Py_OPARG(next_instr[1]); + int offset = next_instr[1].op.arg; JUMPBY(offset); } STACK_SHRINK(2); @@ -2682,7 +2682,7 @@ _PyErr_Clear(tstate); } /* iterator ended normally */ - assert(_Py_OPCODE(next_instr[INLINE_CACHE_ENTRIES_FOR_ITER + oparg]) == END_FOR); + assert(next_instr[INLINE_CACHE_ENTRIES_FOR_ITER + oparg].op.code == END_FOR); Py_DECREF(iter); STACK_SHRINK(1); /* Jump forward oparg, then skip following END_FOR instruction */ @@ -2761,7 +2761,7 @@ DEOPT_IF(Py_TYPE(r) != &PyRangeIter_Type, FOR_ITER); STAT_INC(FOR_ITER, hit); _Py_CODEUNIT next = next_instr[INLINE_CACHE_ENTRIES_FOR_ITER]; - assert(_PyOpcode_Deopt[_Py_OPCODE(next)] == STORE_FAST); + assert(_PyOpcode_Deopt[next.op.code] == 
STORE_FAST); if (r->len <= 0) { STACK_SHRINK(1); Py_DECREF(r); @@ -2772,7 +2772,7 @@ long value = r->start; r->start = value + r->step; r->len--; - if (_PyLong_AssignValue(&GETLOCAL(_Py_OPARG(next)), value) < 0) { + if (_PyLong_AssignValue(&GETLOCAL(next.op.arg), value) < 0) { goto error; } // The STORE_FAST is already done. @@ -2795,7 +2795,7 @@ gen->gi_exc_state.previous_item = tstate->exc_info; tstate->exc_info = &gen->gi_exc_state; JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg); - assert(_Py_OPCODE(*next_instr) == END_FOR); + assert(next_instr->op.code == END_FOR); DISPATCH_INLINED(gen_frame); } @@ -3516,7 +3516,7 @@ STACK_SHRINK(3); // CALL + POP_TOP JUMPBY(INLINE_CACHE_ENTRIES_CALL + 1); - assert(_Py_OPCODE(next_instr[-1]) == POP_TOP); + assert(next_instr[-1].op.code == POP_TOP); DISPATCH(); } @@ -3903,8 +3903,8 @@ TARGET(EXTENDED_ARG) { assert(oparg); assert(cframe.use_tracing == 0); - opcode = _Py_OPCODE(*next_instr); - oparg = oparg << 8 | _Py_OPARG(*next_instr); + opcode = next_instr->op.code; + oparg = oparg << 8 | next_instr->op.arg; PRE_DISPATCH_GOTO(); DISPATCH_GOTO(); } diff --git a/Python/specialize.c b/Python/specialize.c index 4ede3122d380..c9555f8ad4dc 100644 --- a/Python/specialize.c +++ b/Python/specialize.c @@ -282,7 +282,7 @@ _PyCode_Quicken(PyCodeObject *code) _Py_CODEUNIT *instructions = _PyCode_CODE(code); for (int i = 0; i < Py_SIZE(code); i++) { int previous_opcode = opcode; - opcode = _PyOpcode_Deopt[_Py_OPCODE(instructions[i])]; + opcode = _PyOpcode_Deopt[instructions[i].op.code]; int caches = _PyOpcode_Caches[opcode]; if (caches) { instructions[i + 1].cache = adaptive_counter_warmup(); @@ -291,31 +291,31 @@ _PyCode_Quicken(PyCodeObject *code) } switch (previous_opcode << 8 | opcode) { case LOAD_CONST << 8 | LOAD_FAST: - instructions[i - 1].opcode = LOAD_CONST__LOAD_FAST; + instructions[i - 1].op.code = LOAD_CONST__LOAD_FAST; break; case LOAD_FAST << 8 | LOAD_CONST: - instructions[i - 1].opcode = LOAD_FAST__LOAD_CONST; + instructions[i - 1].op.code = LOAD_FAST__LOAD_CONST; break; case LOAD_FAST << 8 | LOAD_FAST: - instructions[i - 1].opcode = LOAD_FAST__LOAD_FAST; + instructions[i - 1].op.code = LOAD_FAST__LOAD_FAST; break; case STORE_FAST << 8 | LOAD_FAST: - instructions[i - 1].opcode = STORE_FAST__LOAD_FAST; + instructions[i - 1].op.code = STORE_FAST__LOAD_FAST; break; case STORE_FAST << 8 | STORE_FAST: - instructions[i - 1].opcode = STORE_FAST__STORE_FAST; + instructions[i - 1].op.code = STORE_FAST__STORE_FAST; break; case COMPARE_OP << 8 | POP_JUMP_IF_TRUE: case COMPARE_OP << 8 | POP_JUMP_IF_FALSE: { - int oparg = instructions[i - 1 - INLINE_CACHE_ENTRIES_COMPARE_OP].oparg; + int oparg = instructions[i - 1 - INLINE_CACHE_ENTRIES_COMPARE_OP].op.arg; assert((oparg >> 4) <= Py_GE); int mask = compare_masks[oparg >> 4]; if (opcode == POP_JUMP_IF_FALSE) { mask = mask ^ 0xf; } - instructions[i - 1 - INLINE_CACHE_ENTRIES_COMPARE_OP].opcode = COMPARE_AND_BRANCH; - instructions[i - 1 - INLINE_CACHE_ENTRIES_COMPARE_OP].oparg = (oparg & 0xf0) | mask; + instructions[i - 1 - INLINE_CACHE_ENTRIES_COMPARE_OP].op.code = COMPARE_AND_BRANCH; + instructions[i - 1 - INLINE_CACHE_ENTRIES_COMPARE_OP].op.arg = (oparg & 0xf0) | mask; break; } } @@ -519,7 +519,7 @@ specialize_module_load_attr( } write_u32(cache->version, keys_version); cache->index = (uint16_t)index; - _py_set_opcode(instr, LOAD_ATTR_MODULE); + instr->op.code = LOAD_ATTR_MODULE; return 0; } @@ -674,7 +674,7 @@ specialize_dict_access( } write_u32(cache->version, type->tp_version_tag); cache->index = (uint16_t)index; 
- _py_set_opcode(instr, values_op); + instr->op.code = values_op; } else { PyDictObject *dict = (PyDictObject *)_PyDictOrValues_GetDict(dorv); @@ -694,7 +694,7 @@ specialize_dict_access( } cache->index = (uint16_t)index; write_u32(cache->version, type->tp_version_tag); - _py_set_opcode(instr, hint_op); + instr->op.code = hint_op; } return 1; } @@ -739,7 +739,7 @@ _Py_Specialize_LoadAttr(PyObject *owner, _Py_CODEUNIT *instr, PyObject *name) goto fail; case METHOD: { - int oparg = _Py_OPARG(*instr); + int oparg = instr->op.arg; if (oparg & 1) { if (specialize_attr_loadmethod(owner, instr, name, descr, kind)) { goto success; @@ -775,7 +775,7 @@ _Py_Specialize_LoadAttr(PyObject *owner, _Py_CODEUNIT *instr, PyObject *name) write_u32(lm_cache->type_version, type->tp_version_tag); /* borrowed */ write_obj(lm_cache->descr, fget); - _py_set_opcode(instr, LOAD_ATTR_PROPERTY); + instr->op.code = LOAD_ATTR_PROPERTY; goto success; } case OBJECT_SLOT: @@ -799,7 +799,7 @@ _Py_Specialize_LoadAttr(PyObject *owner, _Py_CODEUNIT *instr, PyObject *name) assert(offset > 0); cache->index = (uint16_t)offset; write_u32(cache->version, type->tp_version_tag); - _py_set_opcode(instr, LOAD_ATTR_SLOT); + instr->op.code = LOAD_ATTR_SLOT; goto success; } case DUNDER_CLASS: @@ -808,7 +808,7 @@ _Py_Specialize_LoadAttr(PyObject *owner, _Py_CODEUNIT *instr, PyObject *name) assert(offset == (uint16_t)offset); cache->index = (uint16_t)offset; write_u32(cache->version, type->tp_version_tag); - _py_set_opcode(instr, LOAD_ATTR_SLOT); + instr->op.code = LOAD_ATTR_SLOT; goto success; } case OTHER_SLOT: @@ -836,7 +836,7 @@ _Py_Specialize_LoadAttr(PyObject *owner, _Py_CODEUNIT *instr, PyObject *name) /* borrowed */ write_obj(lm_cache->descr, descr); write_u32(lm_cache->type_version, type->tp_version_tag); - _py_set_opcode(instr, LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN); + instr->op.code = LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN; goto success; } case BUILTIN_CLASSMETHOD: @@ -867,7 +867,7 @@ _Py_Specialize_LoadAttr(PyObject *owner, _Py_CODEUNIT *instr, PyObject *name) fail: STAT_INC(LOAD_ATTR, failure); assert(!PyErr_Occurred()); - _py_set_opcode(instr, LOAD_ATTR); + instr->op.code = LOAD_ATTR; cache->counter = adaptive_counter_backoff(cache->counter); return; success: @@ -927,7 +927,7 @@ _Py_Specialize_StoreAttr(PyObject *owner, _Py_CODEUNIT *instr, PyObject *name) assert(offset > 0); cache->index = (uint16_t)offset; write_u32(cache->version, type->tp_version_tag); - _py_set_opcode(instr, STORE_ATTR_SLOT); + instr->op.code = STORE_ATTR_SLOT; goto success; } case DUNDER_CLASS: @@ -963,7 +963,7 @@ _Py_Specialize_StoreAttr(PyObject *owner, _Py_CODEUNIT *instr, PyObject *name) fail: STAT_INC(STORE_ATTR, failure); assert(!PyErr_Occurred()); - _py_set_opcode(instr, STORE_ATTR); + instr->op.code = STORE_ATTR; cache->counter = adaptive_counter_backoff(cache->counter); return; success: @@ -1027,7 +1027,7 @@ specialize_class_load_attr(PyObject *owner, _Py_CODEUNIT *instr, case NON_DESCRIPTOR: write_u32(cache->type_version, ((PyTypeObject *)owner)->tp_version_tag); write_obj(cache->descr, descr); - _py_set_opcode(instr, LOAD_ATTR_CLASS); + instr->op.code = LOAD_ATTR_CLASS; return 0; #ifdef Py_STATS case ABSENT: @@ -1069,7 +1069,7 @@ PyObject *descr, DescriptorClassification kind) return 0; } write_u32(cache->keys_version, keys_version); - _py_set_opcode(instr, LOAD_ATTR_METHOD_WITH_VALUES); + instr->op.code = LOAD_ATTR_METHOD_WITH_VALUES; } else { Py_ssize_t dictoffset = owner_cls->tp_dictoffset; @@ -1078,7 +1078,7 @@ PyObject *descr, 
DescriptorClassification kind) return 0; } if (dictoffset == 0) { - _py_set_opcode(instr, LOAD_ATTR_METHOD_NO_DICT); + instr->op.code = LOAD_ATTR_METHOD_NO_DICT; } else { PyObject *dict = *(PyObject **) ((char *)owner + dictoffset); @@ -1088,7 +1088,7 @@ PyObject *descr, DescriptorClassification kind) } assert(owner_cls->tp_dictoffset > 0); assert(owner_cls->tp_dictoffset <= INT16_MAX); - _py_set_opcode(instr, LOAD_ATTR_METHOD_LAZY_DICT); + instr->op.code = LOAD_ATTR_METHOD_LAZY_DICT; } } /* `descr` is borrowed. This is safe for methods (even inherited ones from @@ -1146,7 +1146,7 @@ _Py_Specialize_LoadGlobal( } cache->index = (uint16_t)index; write_u32(cache->module_keys_version, keys_version); - _py_set_opcode(instr, LOAD_GLOBAL_MODULE); + instr->op.code = LOAD_GLOBAL_MODULE; goto success; } if (!PyDict_CheckExact(builtins)) { @@ -1184,12 +1184,12 @@ _Py_Specialize_LoadGlobal( cache->index = (uint16_t)index; write_u32(cache->module_keys_version, globals_version); cache->builtin_keys_version = (uint16_t)builtins_version; - _py_set_opcode(instr, LOAD_GLOBAL_BUILTIN); + instr->op.code = LOAD_GLOBAL_BUILTIN; goto success; fail: STAT_INC(LOAD_GLOBAL, failure); assert(!PyErr_Occurred()); - _py_set_opcode(instr, LOAD_GLOBAL); + instr->op.code = LOAD_GLOBAL; cache->counter = adaptive_counter_backoff(cache->counter); return; success: @@ -1295,7 +1295,7 @@ _Py_Specialize_BinarySubscr( if (container_type == &PyList_Type) { if (PyLong_CheckExact(sub)) { if (Py_SIZE(sub) == 0 || Py_SIZE(sub) == 1) { - _py_set_opcode(instr, BINARY_SUBSCR_LIST_INT); + instr->op.code = BINARY_SUBSCR_LIST_INT; goto success; } SPECIALIZATION_FAIL(BINARY_SUBSCR, SPEC_FAIL_OUT_OF_RANGE); @@ -1308,7 +1308,7 @@ _Py_Specialize_BinarySubscr( if (container_type == &PyTuple_Type) { if (PyLong_CheckExact(sub)) { if (Py_SIZE(sub) == 0 || Py_SIZE(sub) == 1) { - _py_set_opcode(instr, BINARY_SUBSCR_TUPLE_INT); + instr->op.code = BINARY_SUBSCR_TUPLE_INT; goto success; } SPECIALIZATION_FAIL(BINARY_SUBSCR, SPEC_FAIL_OUT_OF_RANGE); @@ -1319,7 +1319,7 @@ _Py_Specialize_BinarySubscr( goto fail; } if (container_type == &PyDict_Type) { - _py_set_opcode(instr, BINARY_SUBSCR_DICT); + instr->op.code = BINARY_SUBSCR_DICT; goto success; } PyTypeObject *cls = Py_TYPE(container); @@ -1350,7 +1350,7 @@ _Py_Specialize_BinarySubscr( } cache->func_version = version; ((PyHeapTypeObject *)container_type)->_spec_cache.getitem = descriptor; - _py_set_opcode(instr, BINARY_SUBSCR_GETITEM); + instr->op.code = BINARY_SUBSCR_GETITEM; goto success; } SPECIALIZATION_FAIL(BINARY_SUBSCR, @@ -1358,7 +1358,7 @@ _Py_Specialize_BinarySubscr( fail: STAT_INC(BINARY_SUBSCR, failure); assert(!PyErr_Occurred()); - _py_set_opcode(instr, BINARY_SUBSCR); + instr->op.code = BINARY_SUBSCR; cache->counter = adaptive_counter_backoff(cache->counter); return; success: @@ -1378,7 +1378,7 @@ _Py_Specialize_StoreSubscr(PyObject *container, PyObject *sub, _Py_CODEUNIT *ins if ((Py_SIZE(sub) == 0 || Py_SIZE(sub) == 1) && ((PyLongObject *)sub)->long_value.ob_digit[0] < (size_t)PyList_GET_SIZE(container)) { - _py_set_opcode(instr, STORE_SUBSCR_LIST_INT); + instr->op.code = STORE_SUBSCR_LIST_INT; goto success; } else { @@ -1396,8 +1396,8 @@ _Py_Specialize_StoreSubscr(PyObject *container, PyObject *sub, _Py_CODEUNIT *ins } } if (container_type == &PyDict_Type) { - _py_set_opcode(instr, STORE_SUBSCR_DICT); - goto success; + instr->op.code = STORE_SUBSCR_DICT; + goto success; } #ifdef Py_STATS PyMappingMethods *as_mapping = container_type->tp_as_mapping; @@ -1463,7 +1463,7 @@ 
_Py_Specialize_StoreSubscr(PyObject *container, PyObject *sub, _Py_CODEUNIT *ins fail: STAT_INC(STORE_SUBSCR, failure); assert(!PyErr_Occurred()); - _py_set_opcode(instr, STORE_SUBSCR); + instr->op.code = STORE_SUBSCR; cache->counter = adaptive_counter_backoff(cache->counter); return; success: @@ -1482,23 +1482,23 @@ specialize_class_call(PyObject *callable, _Py_CODEUNIT *instr, int nargs, return -1; } if (tp->tp_flags & Py_TPFLAGS_IMMUTABLETYPE) { - int oparg = _Py_OPARG(*instr); + int oparg = instr->op.arg; if (nargs == 1 && kwnames == NULL && oparg == 1) { if (tp == &PyUnicode_Type) { - _py_set_opcode(instr, CALL_NO_KW_STR_1); + instr->op.code = CALL_NO_KW_STR_1; return 0; } else if (tp == &PyType_Type) { - _py_set_opcode(instr, CALL_NO_KW_TYPE_1); + instr->op.code = CALL_NO_KW_TYPE_1; return 0; } else if (tp == &PyTuple_Type) { - _py_set_opcode(instr, CALL_NO_KW_TUPLE_1); + instr->op.code = CALL_NO_KW_TUPLE_1; return 0; } } if (tp->tp_vectorcall != NULL) { - _py_set_opcode(instr, CALL_BUILTIN_CLASS); + instr->op.code = CALL_BUILTIN_CLASS; return 0; } SPECIALIZATION_FAIL(CALL, tp == &PyUnicode_Type ? @@ -1573,7 +1573,7 @@ specialize_method_descriptor(PyMethodDescrObject *descr, _Py_CODEUNIT *instr, SPECIALIZATION_FAIL(CALL, SPEC_FAIL_WRONG_NUMBER_ARGUMENTS); return -1; } - _py_set_opcode(instr, CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS); + instr->op.code = CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS; return 0; } case METH_O: { @@ -1584,21 +1584,21 @@ specialize_method_descriptor(PyMethodDescrObject *descr, _Py_CODEUNIT *instr, PyInterpreterState *interp = _PyInterpreterState_GET(); PyObject *list_append = interp->callable_cache.list_append; _Py_CODEUNIT next = instr[INLINE_CACHE_ENTRIES_CALL + 1]; - bool pop = (_Py_OPCODE(next) == POP_TOP); - int oparg = _Py_OPARG(*instr); + bool pop = (next.op.code == POP_TOP); + int oparg = instr->op.arg; if ((PyObject *)descr == list_append && oparg == 1 && pop) { - _py_set_opcode(instr, CALL_NO_KW_LIST_APPEND); + instr->op.code = CALL_NO_KW_LIST_APPEND; return 0; } - _py_set_opcode(instr, CALL_NO_KW_METHOD_DESCRIPTOR_O); + instr->op.code = CALL_NO_KW_METHOD_DESCRIPTOR_O; return 0; } case METH_FASTCALL: { - _py_set_opcode(instr, CALL_NO_KW_METHOD_DESCRIPTOR_FAST); + instr->op.code = CALL_NO_KW_METHOD_DESCRIPTOR_FAST; return 0; } case METH_FASTCALL | METH_KEYWORDS: { - _py_set_opcode(instr, CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS); + instr->op.code = CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS; return 0; } } @@ -1649,14 +1649,14 @@ specialize_py_call(PyFunctionObject *func, _Py_CODEUNIT *instr, int nargs, write_u32(cache->func_version, version); cache->min_args = min_args; if (argcount == nargs) { - _py_set_opcode(instr, bound_method ? CALL_BOUND_METHOD_EXACT_ARGS : CALL_PY_EXACT_ARGS); + instr->op.code = bound_method ? 
CALL_BOUND_METHOD_EXACT_ARGS : CALL_PY_EXACT_ARGS; } else if (bound_method) { SPECIALIZATION_FAIL(CALL, SPEC_FAIL_CALL_BOUND_METHOD); return -1; } else { - _py_set_opcode(instr, CALL_PY_WITH_DEFAULTS); + instr->op.code = CALL_PY_WITH_DEFAULTS; } return 0; } @@ -1683,10 +1683,10 @@ specialize_c_call(PyObject *callable, _Py_CODEUNIT *instr, int nargs, /* len(o) */ PyInterpreterState *interp = _PyInterpreterState_GET(); if (callable == interp->callable_cache.len) { - _py_set_opcode(instr, CALL_NO_KW_LEN); + instr->op.code = CALL_NO_KW_LEN; return 0; } - _py_set_opcode(instr, CALL_NO_KW_BUILTIN_O); + instr->op.code = CALL_NO_KW_BUILTIN_O; return 0; } case METH_FASTCALL: { @@ -1698,15 +1698,15 @@ specialize_c_call(PyObject *callable, _Py_CODEUNIT *instr, int nargs, /* isinstance(o1, o2) */ PyInterpreterState *interp = _PyInterpreterState_GET(); if (callable == interp->callable_cache.isinstance) { - _py_set_opcode(instr, CALL_NO_KW_ISINSTANCE); + instr->op.code = CALL_NO_KW_ISINSTANCE; return 0; } } - _py_set_opcode(instr, CALL_NO_KW_BUILTIN_FAST); + instr->op.code = CALL_NO_KW_BUILTIN_FAST; return 0; } case METH_FASTCALL | METH_KEYWORDS: { - _py_set_opcode(instr, CALL_BUILTIN_FAST_WITH_KEYWORDS); + instr->op.code = CALL_BUILTIN_FAST_WITH_KEYWORDS; return 0; } default: @@ -1785,7 +1785,7 @@ _Py_Specialize_Call(PyObject *callable, _Py_CODEUNIT *instr, int nargs, if (fail) { STAT_INC(CALL, failure); assert(!PyErr_Occurred()); - _py_set_opcode(instr, CALL); + instr->op.code = CALL; cache->counter = adaptive_counter_backoff(cache->counter); } else { @@ -1880,21 +1880,21 @@ _Py_Specialize_BinaryOp(PyObject *lhs, PyObject *rhs, _Py_CODEUNIT *instr, } if (PyUnicode_CheckExact(lhs)) { _Py_CODEUNIT next = instr[INLINE_CACHE_ENTRIES_BINARY_OP + 1]; - bool to_store = (_Py_OPCODE(next) == STORE_FAST || - _Py_OPCODE(next) == STORE_FAST__LOAD_FAST); - if (to_store && locals[_Py_OPARG(next)] == lhs) { - _py_set_opcode(instr, BINARY_OP_INPLACE_ADD_UNICODE); + bool to_store = (next.op.code == STORE_FAST || + next.op.code == STORE_FAST__LOAD_FAST); + if (to_store && locals[next.op.arg] == lhs) { + instr->op.code = BINARY_OP_INPLACE_ADD_UNICODE; goto success; } - _py_set_opcode(instr, BINARY_OP_ADD_UNICODE); + instr->op.code = BINARY_OP_ADD_UNICODE; goto success; } if (PyLong_CheckExact(lhs)) { - _py_set_opcode(instr, BINARY_OP_ADD_INT); + instr->op.code = BINARY_OP_ADD_INT; goto success; } if (PyFloat_CheckExact(lhs)) { - _py_set_opcode(instr, BINARY_OP_ADD_FLOAT); + instr->op.code = BINARY_OP_ADD_FLOAT; goto success; } break; @@ -1904,11 +1904,11 @@ _Py_Specialize_BinaryOp(PyObject *lhs, PyObject *rhs, _Py_CODEUNIT *instr, break; } if (PyLong_CheckExact(lhs)) { - _py_set_opcode(instr, BINARY_OP_MULTIPLY_INT); + instr->op.code = BINARY_OP_MULTIPLY_INT; goto success; } if (PyFloat_CheckExact(lhs)) { - _py_set_opcode(instr, BINARY_OP_MULTIPLY_FLOAT); + instr->op.code = BINARY_OP_MULTIPLY_FLOAT; goto success; } break; @@ -1918,18 +1918,18 @@ _Py_Specialize_BinaryOp(PyObject *lhs, PyObject *rhs, _Py_CODEUNIT *instr, break; } if (PyLong_CheckExact(lhs)) { - _py_set_opcode(instr, BINARY_OP_SUBTRACT_INT); + instr->op.code = BINARY_OP_SUBTRACT_INT; goto success; } if (PyFloat_CheckExact(lhs)) { - _py_set_opcode(instr, BINARY_OP_SUBTRACT_FLOAT); + instr->op.code = BINARY_OP_SUBTRACT_FLOAT; goto success; } break; } SPECIALIZATION_FAIL(BINARY_OP, binary_op_fail_kind(oparg, lhs, rhs)); STAT_INC(BINARY_OP, failure); - _py_set_opcode(instr, BINARY_OP); + instr->op.code = BINARY_OP; cache->counter = 
adaptive_counter_backoff(cache->counter); return; success: @@ -1981,7 +1981,7 @@ _Py_Specialize_CompareAndBranch(PyObject *lhs, PyObject *rhs, _Py_CODEUNIT *inst assert(_PyOpcode_Caches[COMPARE_AND_BRANCH] == INLINE_CACHE_ENTRIES_COMPARE_OP); _PyCompareOpCache *cache = (_PyCompareOpCache *)(instr + 1); #ifndef NDEBUG - int next_opcode = _Py_OPCODE(instr[INLINE_CACHE_ENTRIES_COMPARE_OP + 1]); + int next_opcode = instr[INLINE_CACHE_ENTRIES_COMPARE_OP + 1].op.code; assert(next_opcode == POP_JUMP_IF_FALSE || next_opcode == POP_JUMP_IF_TRUE); #endif if (Py_TYPE(lhs) != Py_TYPE(rhs)) { @@ -1989,12 +1989,12 @@ _Py_Specialize_CompareAndBranch(PyObject *lhs, PyObject *rhs, _Py_CODEUNIT *inst goto failure; } if (PyFloat_CheckExact(lhs)) { - _py_set_opcode(instr, COMPARE_AND_BRANCH_FLOAT); + instr->op.code = COMPARE_AND_BRANCH_FLOAT; goto success; } if (PyLong_CheckExact(lhs)) { if (Py_ABS(Py_SIZE(lhs)) <= 1 && Py_ABS(Py_SIZE(rhs)) <= 1) { - _py_set_opcode(instr, COMPARE_AND_BRANCH_INT); + instr->op.code = COMPARE_AND_BRANCH_INT; goto success; } else { @@ -2009,14 +2009,14 @@ _Py_Specialize_CompareAndBranch(PyObject *lhs, PyObject *rhs, _Py_CODEUNIT *inst goto failure; } else { - _py_set_opcode(instr, COMPARE_AND_BRANCH_STR); + instr->op.code = COMPARE_AND_BRANCH_STR; goto success; } } SPECIALIZATION_FAIL(COMPARE_AND_BRANCH, compare_op_fail_kind(lhs, rhs)); failure: STAT_INC(COMPARE_AND_BRANCH, failure); - _py_set_opcode(instr, COMPARE_AND_BRANCH); + instr->op.code = COMPARE_AND_BRANCH; cache->counter = adaptive_counter_backoff(cache->counter); return; success: @@ -2051,10 +2051,10 @@ _Py_Specialize_UnpackSequence(PyObject *seq, _Py_CODEUNIT *instr, int oparg) goto failure; } if (PyTuple_GET_SIZE(seq) == 2) { - _py_set_opcode(instr, UNPACK_SEQUENCE_TWO_TUPLE); + instr->op.code = UNPACK_SEQUENCE_TWO_TUPLE; goto success; } - _py_set_opcode(instr, UNPACK_SEQUENCE_TUPLE); + instr->op.code = UNPACK_SEQUENCE_TUPLE; goto success; } if (PyList_CheckExact(seq)) { @@ -2062,13 +2062,13 @@ _Py_Specialize_UnpackSequence(PyObject *seq, _Py_CODEUNIT *instr, int oparg) SPECIALIZATION_FAIL(UNPACK_SEQUENCE, SPEC_FAIL_EXPECTED_ERROR); goto failure; } - _py_set_opcode(instr, UNPACK_SEQUENCE_LIST); + instr->op.code = UNPACK_SEQUENCE_LIST; goto success; } SPECIALIZATION_FAIL(UNPACK_SEQUENCE, unpack_sequence_fail_kind(seq)); failure: STAT_INC(UNPACK_SEQUENCE, failure); - _py_set_opcode(instr, UNPACK_SEQUENCE); + instr->op.code = UNPACK_SEQUENCE; cache->counter = adaptive_counter_backoff(cache->counter); return; success: @@ -2156,28 +2156,28 @@ _Py_Specialize_ForIter(PyObject *iter, _Py_CODEUNIT *instr, int oparg) _PyForIterCache *cache = (_PyForIterCache *)(instr + 1); PyTypeObject *tp = Py_TYPE(iter); _Py_CODEUNIT next = instr[1+INLINE_CACHE_ENTRIES_FOR_ITER]; - int next_op = _PyOpcode_Deopt[_Py_OPCODE(next)]; + int next_op = _PyOpcode_Deopt[next.op.code]; if (tp == &PyListIter_Type) { - _py_set_opcode(instr, FOR_ITER_LIST); + instr->op.code = FOR_ITER_LIST; goto success; } else if (tp == &PyTupleIter_Type) { - _py_set_opcode(instr, FOR_ITER_TUPLE); + instr->op.code = FOR_ITER_TUPLE; goto success; } else if (tp == &PyRangeIter_Type && next_op == STORE_FAST) { - _py_set_opcode(instr, FOR_ITER_RANGE); + instr->op.code = FOR_ITER_RANGE; goto success; } else if (tp == &PyGen_Type && oparg <= SHRT_MAX) { - assert(_Py_OPCODE(instr[oparg + INLINE_CACHE_ENTRIES_FOR_ITER + 1]) == END_FOR); - _py_set_opcode(instr, FOR_ITER_GEN); + assert(instr[oparg + INLINE_CACHE_ENTRIES_FOR_ITER + 1].op.code == END_FOR); + instr->op.code = 
FOR_ITER_GEN; goto success; } SPECIALIZATION_FAIL(FOR_ITER, _PySpecialization_ClassifyIterator(iter)); STAT_INC(FOR_ITER, failure); - _py_set_opcode(instr, FOR_ITER); + instr->op.code = FOR_ITER; cache->counter = adaptive_counter_backoff(cache->counter); return; success: @@ -2193,13 +2193,13 @@ _Py_Specialize_Send(PyObject *receiver, _Py_CODEUNIT *instr) _PySendCache *cache = (_PySendCache *)(instr + 1); PyTypeObject *tp = Py_TYPE(receiver); if (tp == &PyGen_Type || tp == &PyCoro_Type) { - _py_set_opcode(instr, SEND_GEN); + instr->op.code = SEND_GEN; goto success; } SPECIALIZATION_FAIL(SEND, _PySpecialization_ClassifyIterator(receiver)); STAT_INC(SEND, failure); - _py_set_opcode(instr, SEND); + instr->op.code = SEND; cache->counter = adaptive_counter_backoff(cache->counter); return; success: diff --git a/Tools/cases_generator/generate_cases.py b/Tools/cases_generator/generate_cases.py index aa8e14075c87..c7f52b55a5eb 100644 --- a/Tools/cases_generator/generate_cases.py +++ b/Tools/cases_generator/generate_cases.py @@ -8,6 +8,7 @@ import contextlib import dataclasses import os +import posixpath import re import sys import typing @@ -17,7 +18,7 @@ HERE = os.path.dirname(__file__) ROOT = os.path.join(HERE, "../..") -THIS = os.path.relpath(__file__, ROOT) +THIS = os.path.relpath(__file__, ROOT).replace(os.path.sep, posixpath.sep) DEFAULT_INPUT = os.path.relpath(os.path.join(ROOT, "Python/bytecodes.c")) DEFAULT_OUTPUT = os.path.relpath(os.path.join(ROOT, "Python/generated_cases.c.h")) @@ -930,7 +931,7 @@ def write_metadata(self) -> None: with open(self.metadata_filename, "w") as f: # Write provenance header f.write(f"// This file is generated by {THIS} --metadata\n") - f.write(f"// from {os.path.relpath(self.filename, ROOT)}\n") + f.write(f"// from {os.path.relpath(self.filename, ROOT).replace(os.path.sep, posixpath.sep)}\n") f.write(f"// Do not edit!\n") # Create formatter; the rest of the code uses this @@ -1009,7 +1010,7 @@ def write_instructions(self) -> None: with open(self.output_filename, "w") as f: # Write provenance header f.write(f"// This file is generated by {THIS}\n") - f.write(f"// from {os.path.relpath(self.filename, ROOT)}\n") + f.write(f"// from {os.path.relpath(self.filename, ROOT).replace(os.path.sep, posixpath.sep)}\n") f.write(f"// Do not edit!\n") # Create formatter; the rest of the code uses this From webhook-mailer at python.org Mon Feb 20 10:20:26 2023 From: webhook-mailer at python.org (corona10) Date: Mon, 20 Feb 2023 15:20:26 -0000 Subject: [Python-checkins] gh-101981: Apply HOMEBREW related environment variables (gh-102074) Message-ID: https://github.com/python/cpython/commit/ed01addb59a554804995303ad3e7bf0c6067737b commit: ed01addb59a554804995303ad3e7bf0c6067737b branch: main author: Dong-hee Na committer: corona10 date: 2023-02-21T00:20:18+09:00 summary: gh-101981: Apply HOMEBREW related environment variables (gh-102074) files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index acc8d936774a..eec11e25a7c7 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -154,6 +154,9 @@ jobs: needs: check_source if: needs.check_source.outputs.run_tests == 'true' env: + HOMEBREW_NO_ANALYTICS: 1 + HOMEBREW_NO_AUTO_UPDATE: 1 + HOMEBREW_NO_INSTALL_CLEANUP: 1 PYTHONSTRICTEXTENSIONBUILD: 1 steps: - uses: actions/checkout at v3 From webhook-mailer at python.org Mon Feb 20 10:43:40 2023 From: webhook-mailer at python.org (corona10) Date: Mon, 20 Feb 2023 15:43:40 -0000 Subject: [Python-checkins] [3.10] 
gh-101981: Fix Ubuntu SSL tests with OpenSSL (3.1.0-beta1) CI issue (gh-102079) Message-ID: https://github.com/python/cpython/commit/c218132f935ba046663056f30fbcfd86a757a593 commit: c218132f935ba046663056f30fbcfd86a757a593 branch: 3.10 author: Dong-hee Na committer: corona10 date: 2023-02-21T00:43:33+09:00 summary: [3.10] gh-101981: Fix Ubuntu SSL tests with OpenSSL (3.1.0-beta1) CI issue (gh-102079) files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 0f3ad7b0bcf5..07261dc6bdb6 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -266,7 +266,7 @@ jobs: echo "LD_LIBRARY_PATH=${GITHUB_WORKSPACE}/multissl/openssl/${OPENSSL_VER}/lib" >> $GITHUB_ENV - name: 'Restore OpenSSL build' id: cache-openssl - uses: actions/cache at v3.0.2 + uses: actions/cache at v3 with: path: ./multissl/openssl/${{ env.OPENSSL_VER }} key: ${{ runner.os }}-multissl-openssl-${{ env.OPENSSL_VER }} @@ -277,7 +277,7 @@ jobs: run: | echo "PATH=/usr/lib/ccache:$PATH" >> $GITHUB_ENV - name: Configure ccache action - uses: hendrikmuhs/ccache-action at v1 + uses: hendrikmuhs/ccache-action at v1.2 - name: Configure CPython run: ./configure --with-pydebug --with-openssl=$OPENSSL_DIR - name: Build CPython From webhook-mailer at python.org Mon Feb 20 11:22:21 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 20 Feb 2023 16:22:21 -0000 Subject: [Python-checkins] gh-101981: Apply HOMEBREW related environment variables (gh-102074) Message-ID: https://github.com/python/cpython/commit/95f4e2ca03efbe020ae590108c022219008bbaf3 commit: 95f4e2ca03efbe020ae590108c022219008bbaf3 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-20T08:22:13-08:00 summary: gh-101981: Apply HOMEBREW related environment variables (gh-102074) (cherry picked from commit ed01addb59a554804995303ad3e7bf0c6067737b) Co-authored-by: Dong-hee Na files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 07261dc6bdb6..1686c172e2a3 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -174,6 +174,9 @@ jobs: needs: check_source if: needs.check_source.outputs.run_tests == 'true' env: + HOMEBREW_NO_ANALYTICS: 1 + HOMEBREW_NO_AUTO_UPDATE: 1 + HOMEBREW_NO_INSTALL_CLEANUP: 1 PYTHONSTRICTEXTENSIONBUILD: 1 steps: - uses: actions/checkout at v3 From webhook-mailer at python.org Mon Feb 20 11:25:10 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 20 Feb 2023 16:25:10 -0000 Subject: [Python-checkins] gh-101981: Apply HOMEBREW related environment variables (gh-102074) Message-ID: https://github.com/python/cpython/commit/1747be464197eb82c486fadbd7e6a19e011bcf19 commit: 1747be464197eb82c486fadbd7e6a19e011bcf19 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-20T08:24:49-08:00 summary: gh-101981: Apply HOMEBREW related environment variables (gh-102074) (cherry picked from commit ed01addb59a554804995303ad3e7bf0c6067737b) Co-authored-by: Dong-hee Na files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index beb390eb7734..5dfc5e5bae50 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -187,6 +187,9 @@ jobs: needs: check_source if: 
needs.check_source.outputs.run_tests == 'true' env: + HOMEBREW_NO_ANALYTICS: 1 + HOMEBREW_NO_AUTO_UPDATE: 1 + HOMEBREW_NO_INSTALL_CLEANUP: 1 PYTHONSTRICTEXTENSIONBUILD: 1 steps: - uses: actions/checkout at v3 From webhook-mailer at python.org Mon Feb 20 12:07:10 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 20 Feb 2023 17:07:10 -0000 Subject: [Python-checkins] gh-88233: zipfile: handle extras after a zip64 extra (GH-96161) Message-ID: https://github.com/python/cpython/commit/59e86caca812fc993c5eb7dc8ccd1508ffccba86 commit: 59e86caca812fc993c5eb7dc8ccd1508ffccba86 branch: main author: Tim Hatch committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-20T09:07:03-08:00 summary: gh-88233: zipfile: handle extras after a zip64 extra (GH-96161) Previously, any data _after_ the zip64 extra would be removed. With many new tests. Fixes #88233 Automerge-Triggered-By: GH:jaraco files: A Misc/NEWS.d/next/Library/2022-09-05-12-17-34.gh-issue-88233.gff9qJ.rst M Lib/test/test_zipfile/test_core.py M Lib/zipfile/__init__.py diff --git a/Lib/test/test_zipfile/test_core.py b/Lib/test/test_zipfile/test_core.py index cf41d0e8cb8d..e23f5c2a8556 100644 --- a/Lib/test/test_zipfile/test_core.py +++ b/Lib/test/test_zipfile/test_core.py @@ -3010,5 +3010,67 @@ def test_cli_with_metadata_encoding_extract(self): self.assertIn(name, listing) +class StripExtraTests(unittest.TestCase): + # Note: all of the "z" characters are technically invalid, but up + # to 3 bytes at the end of the extra will be passed through as they + # are too short to encode a valid extra. + + ZIP64_EXTRA = 1 + + def test_no_data(self): + s = struct.Struct(" https://github.com/python/cpython/commit/62c0327487ce39ec6504919b7712e0f197075ef9 commit: 62c0327487ce39ec6504919b7712e0f197075ef9 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-20T09:33:00-08:00 summary: gh-88233: zipfile: handle extras after a zip64 extra (GH-96161) Previously, any data _after_ the zip64 extra would be removed. With many new tests. Fixes GH-88233 (cherry picked from commit 59e86caca812fc993c5eb7dc8ccd1508ffccba86) Co-authored-by: Tim Hatch Automerge-Triggered-By: GH:jaraco files: A Misc/NEWS.d/next/Library/2022-09-05-12-17-34.gh-issue-88233.gff9qJ.rst M Lib/test/test_zipfile.py M Lib/zipfile.py diff --git a/Lib/test/test_zipfile.py b/Lib/test/test_zipfile.py index 35d78d96a68e..da4ddb916f8c 100644 --- a/Lib/test/test_zipfile.py +++ b/Lib/test/test_zipfile.py @@ -3485,5 +3485,67 @@ def test_cli_with_metadata_encoding_extract(self): self.assertIn(name, listing) +class StripExtraTests(unittest.TestCase): + # Note: all of the "z" characters are technically invalid, but up + # to 3 bytes at the end of the extra will be passed through as they + # are too short to encode a valid extra. 
+ + ZIP64_EXTRA = 1 + + def test_no_data(self): + s = struct.Struct(" https://github.com/python/cpython/commit/84181c14040ed2befe7f1a55b4f560c80fa61154 commit: 84181c14040ed2befe7f1a55b4f560c80fa61154 branch: main author: Filipe La?ns committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-20T11:21:10-08:00 summary: GH-99818: improve the documentation for zipfile.Path and Traversable (GH-101589) Automerge-Triggered-By: GH:FFY00 files: M Doc/library/importlib.resources.abc.rst M Doc/library/zipfile.rst diff --git a/Doc/library/importlib.resources.abc.rst b/Doc/library/importlib.resources.abc.rst index 7747e89a833c..2d0f137ffc79 100644 --- a/Doc/library/importlib.resources.abc.rst +++ b/Doc/library/importlib.resources.abc.rst @@ -89,9 +89,12 @@ .. class:: Traversable - An object with a subset of pathlib.Path methods suitable for + An object with a subset of :class:`pathlib.Path` methods suitable for traversing directories and opening files. + For a representation of the object on the file-system, use + :meth:`importlib.resources.as_file`. + .. versionadded:: 3.9 .. deprecated-removed:: 3.12 3.14 diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst index 1c516723f04a..0195abc3a992 100644 --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -55,8 +55,9 @@ The module defines the following items: .. class:: Path :noindex: - A pathlib-compatible wrapper for zip files. See section - :ref:`path-objects` for details. + Class that implements a subset of the interface provided by + :class:`pathlib.Path`, including the full + :class:`importlib.resources.abc.Traversable` interface. .. versionadded:: 3.8 From webhook-mailer at python.org Mon Feb 20 16:02:27 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 20 Feb 2023 21:02:27 -0000 Subject: [Python-checkins] gh-101566: Sync with zipp 3.14. (GH-102018) Message-ID: https://github.com/python/cpython/commit/36854bbb240e417c0df6f0014924fcc899388186 commit: 36854bbb240e417c0df6f0014924fcc899388186 branch: main author: Jason R. Coombs committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-20T13:01:58-08:00 summary: gh-101566: Sync with zipp 3.14. (GH-102018) files: A Lib/test/test_zipfile/_context.py A Lib/test/test_zipfile/_func_timeout_compat.py A Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst M Lib/test/test_zipfile/_itertools.py M Lib/test/test_zipfile/test_path.py M Lib/zipfile/_path.py diff --git a/Lib/test/test_zipfile/_context.py b/Lib/test/test_zipfile/_context.py new file mode 100644 index 000000000000..348798aa08d4 --- /dev/null +++ b/Lib/test/test_zipfile/_context.py @@ -0,0 +1,30 @@ +import contextlib +import time + + +class DeadlineExceeded(Exception): + pass + + +class TimedContext(contextlib.ContextDecorator): + """ + A context that will raise DeadlineExceeded if the + max duration is reached during the execution. + + >>> TimedContext(1)(time.sleep)(.1) + >>> TimedContext(0)(time.sleep)(.1) + Traceback (most recent call last): + ... 
+ tests._context.DeadlineExceeded: (..., 0) + """ + + def __init__(self, max_duration: int): + self.max_duration = max_duration + + def __enter__(self): + self.start = time.monotonic() + + def __exit__(self, *err): + duration = time.monotonic() - self.start + if duration > self.max_duration: + raise DeadlineExceeded(duration, self.max_duration) diff --git a/Lib/test/test_zipfile/_func_timeout_compat.py b/Lib/test/test_zipfile/_func_timeout_compat.py new file mode 100644 index 000000000000..b1f2b2698a85 --- /dev/null +++ b/Lib/test/test_zipfile/_func_timeout_compat.py @@ -0,0 +1,8 @@ +try: + from func_timeout import func_set_timeout as set_timeout +except ImportError: # pragma: no cover + # provide a fallback that doesn't actually time out + from ._context import TimedContext as set_timeout + + +__all__ = ['set_timeout'] diff --git a/Lib/test/test_zipfile/_itertools.py b/Lib/test/test_zipfile/_itertools.py index 559f3f111b88..74f01fe5ba3d 100644 --- a/Lib/test/test_zipfile/_itertools.py +++ b/Lib/test/test_zipfile/_itertools.py @@ -1,3 +1,32 @@ +import itertools + + +# from jaraco.itertools 6.3.0 +class Counter: + """ + Wrap an iterable in an object that stores the count of items + that pass through it. + + >>> items = Counter(range(20)) + >>> items.count + 0 + >>> values = list(items) + >>> items.count + 20 + """ + + def __init__(self, i): + self.count = 0 + self.iter = zip(itertools.count(1), i) + + def __iter__(self): + return self + + def __next__(self): + self.count, result = next(self.iter) + return result + + # from more_itertools v8.13.0 def always_iterable(obj, base_type=(str, bytes)): if obj is None: diff --git a/Lib/test/test_zipfile/test_path.py b/Lib/test/test_zipfile/test_path.py index 3086fd2080a9..92fda690b2fc 100644 --- a/Lib/test/test_zipfile/test_path.py +++ b/Lib/test/test_zipfile/test_path.py @@ -4,36 +4,25 @@ import pathlib import pickle import string -from test.support.script_helper import assert_python_ok +import sys import unittest import zipfile -from ._test_params import parameterize, Invoked from ._functools import compose +from ._itertools import Counter +from ._test_params import parameterize, Invoked +from ._func_timeout_compat import set_timeout from test.support.os_helper import temp_dir -# Poor man's technique to consume a (smallish) iterable. -consume = tuple - - -# from jaraco.itertools 5.0 class jaraco: class itertools: - class Counter: - def __init__(self, i): - self.count = 0 - self._orig_iter = iter(i) + Counter = Counter - def __iter__(self): - return self - def __next__(self): - result = next(self._orig_iter) - self.count += 1 - return result +consume = tuple def add_dirs(zf): @@ -161,10 +150,10 @@ def test_open_encoding_utf16(self): u16 = path.joinpath("16.txt") with u16.open('r', "utf-16") as strm: data = strm.read() - self.assertEqual(data, "This was utf-16") + assert data == "This was utf-16" with u16.open(encoding="utf-16") as strm: data = strm.read() - self.assertEqual(data, "This was utf-16") + assert data == "This was utf-16" def test_open_encoding_errors(self): in_memory_file = io.BytesIO() @@ -177,9 +166,9 @@ def test_open_encoding_errors(self): # encoding= as a positional argument for gh-101144. data = u16.read_text("utf-8", errors="ignore") - self.assertEqual(data, "invalid utf-8: .") + assert data == "invalid utf-8: ." with u16.open("r", "utf-8", errors="surrogateescape") as f: - self.assertEqual(f.read(), "invalid utf-8: \udcff\udcff.") + assert f.read() == "invalid utf-8: \udcff\udcff." 
# encoding= both positional and keyword is an error; gh-101144. with self.assertRaisesRegex(TypeError, "encoding"): @@ -191,24 +180,21 @@ def test_open_encoding_errors(self): with self.assertRaises(UnicodeDecodeError): f.read() - def test_encoding_warnings(self): + @unittest.skipIf( + not getattr(sys.flags, 'warn_default_encoding', 0), + "Requires warn_default_encoding", + ) + @pass_alpharep + def test_encoding_warnings(self, alpharep): """EncodingWarning must blame the read_text and open calls.""" - code = '''\ -import io, zipfile -with zipfile.ZipFile(io.BytesIO(), "w") as zf: - zf.filename = '' - zf.writestr("path/file.txt", b"Spanish Inquisition") - root = zipfile.Path(zf) - (path,) = root.iterdir() - file_path = path.joinpath("file.txt") - unused = file_path.read_text() # should warn - file_path.open("r").close() # should warn -''' - proc = assert_python_ok('-X', 'warn_default_encoding', '-c', code) - warnings = proc.err.splitlines() - self.assertEqual(len(warnings), 2, proc.err) - self.assertRegex(warnings[0], rb"^:8: EncodingWarning:") - self.assertRegex(warnings[1], rb"^:9: EncodingWarning:") + assert sys.flags.warn_default_encoding + root = zipfile.Path(alpharep) + with self.assertWarns(EncodingWarning) as wc: + root.joinpath("a.txt").read_text() + assert __file__ == wc.filename + with self.assertWarns(EncodingWarning) as wc: + root.joinpath("a.txt").open("r").close() + assert __file__ == wc.filename def test_open_write(self): """ @@ -250,7 +236,8 @@ def test_read(self, alpharep): root = zipfile.Path(alpharep) a, b, g = root.iterdir() assert a.read_text(encoding="utf-8") == "content of a" - a.read_text("utf-8") # No positional arg TypeError per gh-101144. + # Also check positional encoding arg (gh-101144). + assert a.read_text("utf-8") == "content of a" assert a.read_bytes() == b"content of a" @pass_alpharep @@ -275,19 +262,6 @@ def test_traverse_truediv(self, alpharep): e = root / "b" / "d" / "e.txt" assert e.read_text(encoding="utf-8") == "content of e" - @pass_alpharep - def test_traverse_simplediv(self, alpharep): - """ - Disable the __future__.division when testing traversal. 
- """ - code = compile( - source="zipfile.Path(alpharep) / 'a'", - filename="(test)", - mode="eval", - dont_inherit=True, - ) - eval(code) - @pass_alpharep def test_pathlike_construction(self, alpharep): """ @@ -356,7 +330,7 @@ def test_joinpath_constant_time(self): # Check the file iterated all items assert entries.count == self.HUGE_ZIPFILE_NUM_ENTRIES - # @func_timeout.func_set_timeout(3) + @set_timeout(3) def test_implied_dirs_performance(self): data = ['/'.join(string.ascii_lowercase + str(n)) for n in range(10000)] zipfile.CompleteDirs._implied_dirs(data) @@ -472,6 +446,52 @@ def test_root_unnamed(self, alpharep): assert sub.name == "b" assert sub.parent + @pass_alpharep + def test_match_and_glob(self, alpharep): + root = zipfile.Path(alpharep) + assert not root.match("*.txt") + + assert list(root.glob("b/c.*")) == [zipfile.Path(alpharep, "b/c.txt")] + + files = root.glob("**/*.txt") + assert all(each.match("*.txt") for each in files) + + assert list(root.glob("**/*.txt")) == list(root.rglob("*.txt")) + + def test_glob_empty(self): + root = zipfile.Path(zipfile.ZipFile(io.BytesIO(), 'w')) + with self.assertRaises(ValueError): + root.glob('') + + @pass_alpharep + def test_eq_hash(self, alpharep): + root = zipfile.Path(alpharep) + assert root == zipfile.Path(alpharep) + + assert root != (root / "a.txt") + assert (root / "a.txt") == (root / "a.txt") + + root = zipfile.Path(alpharep) + assert root in {root} + + @pass_alpharep + def test_is_symlink(self, alpharep): + """ + See python/cpython#82102 for symlink support beyond this object. + """ + + root = zipfile.Path(alpharep) + assert not root.is_symlink() + + @pass_alpharep + def test_relative_to(self, alpharep): + root = zipfile.Path(alpharep) + relative = root.joinpath("b", "c.txt").relative_to(root / "b") + assert str(relative) == "c.txt" + + relative = root.joinpath("b", "d", "e.txt").relative_to(root / "b") + assert str(relative) == "d/e.txt" + @pass_alpharep def test_inheritance(self, alpharep): cls = type('PathChild', (zipfile.Path,), {}) @@ -493,3 +513,14 @@ def test_pickle(self, alpharep, path_type, subpath): restored_1 = pickle.loads(saved_1) first, *rest = restored_1.iterdir() assert first.read_text().startswith('content of ') + + @pass_alpharep + def test_extract_orig_with_implied_dirs(self, alpharep): + """ + A zip file wrapped in a Path should extract even with implied dirs. + """ + source_path = self.zipfile_ondisk(alpharep) + zf = zipfile.ZipFile(source_path) + # wrap the zipfile for its side effect + zipfile.Path(zf) + zf.extractall(source_path.parent) diff --git a/Lib/zipfile/_path.py b/Lib/zipfile/_path.py index 7c7a6a0e2c0d..c2c804e96d6b 100644 --- a/Lib/zipfile/_path.py +++ b/Lib/zipfile/_path.py @@ -4,6 +4,8 @@ import itertools import contextlib import pathlib +import re +import fnmatch __all__ = ['Path'] @@ -93,7 +95,7 @@ def _implied_dirs(names): return _dedupe(_difference(as_dirs, names)) def namelist(self): - names = super(CompleteDirs, self).namelist() + names = super().namelist() return names + list(self._implied_dirs(names)) def _name_set(self): @@ -109,6 +111,17 @@ def resolve_dir(self, name): dir_match = name not in names and dirname in names return dirname if dir_match else name + def getinfo(self, name): + """ + Supplement getinfo for implied dirs. 
+ """ + try: + return super().getinfo(name) + except KeyError: + if not name.endswith('/') or name not in self._name_set(): + raise + return zipfile.ZipInfo(filename=name) + @classmethod def make(cls, source): """ @@ -138,13 +151,13 @@ class FastLookup(CompleteDirs): def namelist(self): with contextlib.suppress(AttributeError): return self.__names - self.__names = super(FastLookup, self).namelist() + self.__names = super().namelist() return self.__names def _name_set(self): with contextlib.suppress(AttributeError): return self.__lookup - self.__lookup = super(FastLookup, self)._name_set() + self.__lookup = super()._name_set() return self.__lookup @@ -246,6 +259,18 @@ def __init__(self, root, at=""): self.root = FastLookup.make(root) self.at = at + def __eq__(self, other): + """ + >>> Path(zipfile.ZipFile(io.BytesIO(), 'w')) == 'foo' + False + """ + if self.__class__ is not other.__class__: + return NotImplemented + return (self.root, self.at) == (other.root, other.at) + + def __hash__(self): + return hash((self.root, self.at)) + def open(self, mode='r', *args, pwd=None, **kwargs): """ Open this entry as text or binary following the semantics @@ -316,6 +341,38 @@ def iterdir(self): subs = map(self._next, self.root.namelist()) return filter(self._is_child, subs) + def match(self, path_pattern): + return pathlib.Path(self.at).match(path_pattern) + + def is_symlink(self): + """ + Return whether this path is a symlink. Always false (python/cpython#82102). + """ + return False + + def _descendants(self): + for child in self.iterdir(): + yield child + if child.is_dir(): + yield from child._descendants() + + def glob(self, pattern): + if not pattern: + raise ValueError(f"Unacceptable pattern: {pattern!r}") + + matches = re.compile(fnmatch.translate(pattern)).fullmatch + return ( + child + for child in self._descendants() + if matches(str(child.relative_to(self))) + ) + + def rglob(self, pattern): + return self.glob(f'**/{pattern}') + + def relative_to(self, other, *extra): + return posixpath.relpath(str(self), str(other.joinpath(*extra))) + def __str__(self): return posixpath.join(self.root.filename, self.at) diff --git a/Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst b/Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst new file mode 100644 index 000000000000..5fc1a0288a82 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst @@ -0,0 +1,4 @@ +In zipfile, sync Path with `zipp 3.14 +`_, including +fix for extractall on the underlying zipfile after being wrapped in +``Path``. 
From webhook-mailer at python.org Mon Feb 20 16:54:28 2023 From: webhook-mailer at python.org (iritkatriel) Date: Mon, 20 Feb 2023 21:54:28 -0000 Subject: [Python-checkins] gh-102011: use sys.exception() instead of sys.exc_info() in docs where possible (#102012) Message-ID: https://github.com/python/cpython/commit/4d3bc89a3f54c4f09756a9b644b3912bf54191a7 commit: 4d3bc89a3f54c4f09756a9b644b3912bf54191a7 branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-20T21:54:19Z summary: gh-102011: use sys.exception() instead of sys.exc_info() in docs where possible (#102012) files: M Doc/library/exceptions.rst M Doc/library/traceback.rst M Doc/library/types.rst M Doc/library/wsgiref.rst M Doc/reference/compound_stmts.rst M Doc/reference/datamodel.rst diff --git a/Doc/library/exceptions.rst b/Doc/library/exceptions.rst index 1217b817b4e8..4a57e9c87993 100644 --- a/Doc/library/exceptions.rst +++ b/Doc/library/exceptions.rst @@ -123,7 +123,7 @@ The following exceptions are used mostly as base classes for other exceptions. try: ... except SomeException: - tb = sys.exc_info()[2] + tb = sys.exception().__traceback__ raise OtherException(...).with_traceback(tb) .. method:: add_note(note) diff --git a/Doc/library/traceback.rst b/Doc/library/traceback.rst index 69818baf184d..561c85290463 100644 --- a/Doc/library/traceback.rst +++ b/Doc/library/traceback.rst @@ -16,9 +16,8 @@ interpreter. .. index:: object: traceback -The module uses traceback objects --- this is the object type that is stored in -the :data:`sys.last_traceback` variable and returned as the third item from -:func:`sys.exc_info`. +The module uses traceback objects --- these are objects of type :class:`types.TracebackType`, +which are assigned to the ``__traceback__`` field of :class:`BaseException` instances. .. seealso:: @@ -81,7 +80,7 @@ The module defines the following functions: .. function:: print_exc(limit=None, file=None, chain=True) - This is a shorthand for ``print_exception(*sys.exc_info(), limit, file, + This is a shorthand for ``print_exception(sys.exception(), limit, file, chain)``. 
@@ -444,11 +443,11 @@ exception and traceback: try: lumberjack() except IndexError: - exc_type, exc_value, exc_traceback = sys.exc_info() + exc = sys.exception() print("*** print_tb:") - traceback.print_tb(exc_traceback, limit=1, file=sys.stdout) + traceback.print_tb(exc.__traceback__, limit=1, file=sys.stdout) print("*** print_exception:") - traceback.print_exception(exc_value, limit=2, file=sys.stdout) + traceback.print_exception(exc, limit=2, file=sys.stdout) print("*** print_exc:") traceback.print_exc(limit=2, file=sys.stdout) print("*** format_exc, first and last line:") @@ -456,12 +455,12 @@ exception and traceback: print(formatted_lines[0]) print(formatted_lines[-1]) print("*** format_exception:") - print(repr(traceback.format_exception(exc_value))) + print(repr(traceback.format_exception(exc))) print("*** extract_tb:") - print(repr(traceback.extract_tb(exc_traceback))) + print(repr(traceback.extract_tb(exc.__traceback__))) print("*** format_tb:") - print(repr(traceback.format_tb(exc_traceback))) - print("*** tb_lineno:", exc_traceback.tb_lineno) + print(repr(traceback.format_tb(exc.__traceback__))) + print("*** tb_lineno:", exc.__traceback__.tb_lineno) The output for the example would look similar to this: diff --git a/Doc/library/types.rst b/Doc/library/types.rst index cce0ad960edf..415413c4cd97 100644 --- a/Doc/library/types.rst +++ b/Doc/library/types.rst @@ -320,7 +320,7 @@ Standard names are defined for the following types: .. class:: TracebackType(tb_next, tb_frame, tb_lasti, tb_lineno) - The type of traceback objects such as found in ``sys.exc_info()[2]``. + The type of traceback objects such as found in ``sys.exception().__traceback__``. See :ref:`the language reference ` for details of the available attributes and operations, and guidance on creating tracebacks diff --git a/Doc/library/wsgiref.rst b/Doc/library/wsgiref.rst index 75dea4663351..39a4c1ba1423 100644 --- a/Doc/library/wsgiref.rst +++ b/Doc/library/wsgiref.rst @@ -674,7 +674,7 @@ input, output, and error streams. This method is a WSGI application to generate an error page for the user. It is only invoked if an error occurs before headers are sent to the client. - This method can access the current error information using ``sys.exc_info()``, + This method can access the current error using ``sys.exception()``, and should pass that information to *start_response* when calling it (as described in the "Error Handling" section of :pep:`3333`). diff --git a/Doc/reference/compound_stmts.rst b/Doc/reference/compound_stmts.rst index d5cb2899c231..52bb0b22b042 100644 --- a/Doc/reference/compound_stmts.rst +++ b/Doc/reference/compound_stmts.rst @@ -301,31 +301,28 @@ keeping all locals in that frame alive until the next garbage collection occurs. object: traceback Before an :keyword:`!except` clause's suite is executed, -details about the exception are -stored in the :mod:`sys` module and can be accessed via :func:`sys.exc_info`. -:func:`sys.exc_info` returns a 3-tuple consisting of the exception class, the -exception instance and a traceback object (see section :ref:`types`) identifying -the point in the program where the exception occurred. The details about the -exception accessed via :func:`sys.exc_info` are restored to their previous values -when leaving an exception handler:: - - >>> print(sys.exc_info()) - (None, None, None) +the exception is stored in the :mod:`sys` module, where it can be accessed +from within the body of the :keyword:`!except` clause by calling +:func:`sys.exception`. 
When leaving an exception handler, the exception +stored in the :mod:`sys` module is reset to its previous value:: + + >>> print(sys.exception()) + None >>> try: ... raise TypeError ... except: - ... print(sys.exc_info()) + ... print(repr(sys.exception())) ... try: ... raise ValueError ... except: - ... print(sys.exc_info()) - ... print(sys.exc_info()) + ... print(repr(sys.exception())) + ... print(repr(sys.exception())) ... - (, TypeError(), ) - (, ValueError(), ) - (, TypeError(), ) - >>> print(sys.exc_info()) - (None, None, None) + TypeError() + ValueError() + TypeError() + >>> print(sys.exception()) + None .. index:: diff --git a/Doc/reference/datamodel.rst b/Doc/reference/datamodel.rst index e4e471e50079..f447bbb1216d 100644 --- a/Doc/reference/datamodel.rst +++ b/Doc/reference/datamodel.rst @@ -1122,6 +1122,7 @@ Internal types single: exc_info (in module sys) single: last_traceback (in module sys) single: sys.exc_info + single: sys.exception single: sys.last_traceback Traceback objects represent a stack trace of an exception. A traceback object From webhook-mailer at python.org Mon Feb 20 17:16:26 2023 From: webhook-mailer at python.org (iritkatriel) Date: Mon, 20 Feb 2023 22:16:26 -0000 Subject: [Python-checkins] gh-102056: Fix a few bugs in error handling of exception printing code (#102078) Message-ID: https://github.com/python/cpython/commit/022b44f2546c44183e4df7b67e3e64502fae9143 commit: 022b44f2546c44183e4df7b67e3e64502fae9143 branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-20T22:16:09Z summary: gh-102056: Fix a few bugs in error handling of exception printing code (#102078) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-20-15-18-33.gh-issue-102056.uHKuwH.rst M Lib/test/test_threading.py M Python/pythonrun.c diff --git a/Lib/test/test_threading.py b/Lib/test/test_threading.py index 7fea2d38673e..a39a267b403d 100644 --- a/Lib/test/test_threading.py +++ b/Lib/test/test_threading.py @@ -1523,6 +1523,37 @@ def run(): self.assertEqual(out, b'') self.assertNotIn("Unhandled exception", err.decode()) + def test_print_exception_gh_102056(self): + # This used to crash. See gh-102056. + script = r"""if True: + import time + import threading + import _thread + + def f(): + try: + f() + except RecursionError: + f() + + def g(): + try: + raise ValueError() + except* ValueError: + f() + + def h(): + time.sleep(1) + _thread.interrupt_main() + + t = threading.Thread(target=h) + t.start() + g() + t.join() + """ + + assert_python_failure("-c", script) + def test_bare_raise_in_brand_new_thread(self): def bare_raise(): raise diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-20-15-18-33.gh-issue-102056.uHKuwH.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-20-15-18-33.gh-issue-102056.uHKuwH.rst new file mode 100644 index 000000000000..78cd525b365f --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-20-15-18-33.gh-issue-102056.uHKuwH.rst @@ -0,0 +1 @@ +Fix error handling bugs in interpreter's exception printing code, which could cause a crash on infinite recursion. 
diff --git a/Python/pythonrun.c b/Python/pythonrun.c index ce993ea8796c..34a44dd92847 100644 --- a/Python/pythonrun.c +++ b/Python/pythonrun.c @@ -1246,8 +1246,7 @@ print_chained(struct exception_print_context* ctx, PyObject *value, const char * message, const char *tag) { PyObject *f = ctx->file; - - if (_Py_EnterRecursiveCall(" in print_chained") < 0) { + if (_Py_EnterRecursiveCall(" in print_chained")) { return -1; } bool need_close = ctx->need_close; @@ -1374,7 +1373,9 @@ print_exception_group(struct exception_print_context *ctx, PyObject *value) if (ctx->exception_group_depth == 0) { ctx->exception_group_depth += 1; } - print_exception(ctx, value); + if (print_exception(ctx, value) < 0) { + return -1; + } PyObject *excs = ((PyBaseExceptionGroupObject *)value)->excs; assert(excs && PyTuple_Check(excs)); @@ -1424,7 +1425,7 @@ print_exception_group(struct exception_print_context *ctx, PyObject *value) PyObject *exc = PyTuple_GET_ITEM(excs, i); if (!truncated) { - if (_Py_EnterRecursiveCall(" in print_exception_group") != 0) { + if (_Py_EnterRecursiveCall(" in print_exception_group")) { return -1; } int res = print_exception_recursive(ctx, exc); @@ -1477,22 +1478,30 @@ print_exception_group(struct exception_print_context *ctx, PyObject *value) static int print_exception_recursive(struct exception_print_context *ctx, PyObject *value) { + if (_Py_EnterRecursiveCall(" in print_exception_recursive")) { + return -1; + } if (ctx->seen != NULL) { /* Exception chaining */ if (print_exception_cause_and_context(ctx, value) < 0) { - return -1; + goto error; } } if (!_PyBaseExceptionGroup_Check(value)) { if (print_exception(ctx, value) < 0) { - return -1; + goto error; } } else if (print_exception_group(ctx, value) < 0) { - return -1; + goto error; } assert(!PyErr_Occurred()); + + _Py_LeaveRecursiveCall(); return 0; +error: + _Py_LeaveRecursiveCall(); + return -1; } #define PyErr_MAX_GROUP_WIDTH 15 From webhook-mailer at python.org Mon Feb 20 18:21:56 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 20 Feb 2023 23:21:56 -0000 Subject: [Python-checkins] [3.10] gh-101566: Sync with zipp 3.14. (GH-102018). (GH-102091) Message-ID: https://github.com/python/cpython/commit/7bb41d9d5d2f0be30fd367fd085bb307db1a1ca3 commit: 7bb41d9d5d2f0be30fd367fd085bb307db1a1ca3 branch: 3.10 author: Jason R. Coombs committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-20T15:21:48-08:00 summary: [3.10] gh-101566: Sync with zipp 3.14. (GH-102018). (GH-102091) (cherry picked from commit 36854bbb240e417c0df6f0014924fcc899388186) Includes the bugfix only. Automerge-Triggered-By: GH:jaraco files: A Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst M Lib/test/test_zipfile.py M Lib/zipfile.py diff --git a/Lib/test/test_zipfile.py b/Lib/test/test_zipfile.py index cf40412f8526..c7371f59859e 100644 --- a/Lib/test/test_zipfile.py +++ b/Lib/test/test_zipfile.py @@ -3209,6 +3209,17 @@ def test_inheritance(self, alpharep): file = cls(alpharep).joinpath('some dir').parent assert isinstance(file, cls) + @pass_alpharep + def test_extract_orig_with_implied_dirs(self, alpharep): + """ + A zip file wrapped in a Path should extract even with implied dirs. 
+ """ + source_path = self.zipfile_ondisk(alpharep) + zf = zipfile.ZipFile(source_path) + # wrap the zipfile for its side effect + zipfile.Path(zf) + zf.extractall(source_path.parent) + if __name__ == "__main__": unittest.main() diff --git a/Lib/zipfile.py b/Lib/zipfile.py index dc4eb38e3a90..23e4ee42d4d9 100644 --- a/Lib/zipfile.py +++ b/Lib/zipfile.py @@ -2197,6 +2197,17 @@ def resolve_dir(self, name): dir_match = name not in names and dirname in names return dirname if dir_match else name + def getinfo(self, name): + """ + Supplement getinfo for implied dirs. + """ + try: + return super().getinfo(name) + except KeyError: + if not name.endswith('/') or name not in self._name_set(): + raise + return ZipInfo(filename=name) + @classmethod def make(cls, source): """ diff --git a/Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst b/Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst new file mode 100644 index 000000000000..f0c0c249a54b --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst @@ -0,0 +1,3 @@ +In zipfile, apply +fix for extractall on the underlying zipfile after being wrapped in +``Path``. From webhook-mailer at python.org Mon Feb 20 18:22:09 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 20 Feb 2023 23:22:09 -0000 Subject: [Python-checkins] [3.11] gh-101566: Sync with zipp 3.14. (GH-102018). (GH-102090) Message-ID: https://github.com/python/cpython/commit/d15e9589f33557b43c868ff709f2533ff1cd7e5c commit: d15e9589f33557b43c868ff709f2533ff1cd7e5c branch: 3.11 author: Jason R. Coombs committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-20T15:22:03-08:00 summary: [3.11] gh-101566: Sync with zipp 3.14. (GH-102018). (GH-102090) (cherry picked from commit 36854bbb240e417c0df6f0014924fcc899388186) Backport of bugfix only. Automerge-Triggered-By: GH:jaraco files: A Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst M Lib/test/test_zipfile.py M Lib/zipfile.py diff --git a/Lib/test/test_zipfile.py b/Lib/test/test_zipfile.py index da4ddb916f8c..ffb954659e42 100644 --- a/Lib/test/test_zipfile.py +++ b/Lib/test/test_zipfile.py @@ -3339,6 +3339,17 @@ def test_inheritance(self, alpharep): file = cls(alpharep).joinpath('some dir').parent assert isinstance(file, cls) + @pass_alpharep + def test_extract_orig_with_implied_dirs(self, alpharep): + """ + A zip file wrapped in a Path should extract even with implied dirs. + """ + source_path = self.zipfile_ondisk(alpharep) + zf = zipfile.ZipFile(source_path) + # wrap the zipfile for its side effect + zipfile.Path(zf) + zf.extractall(source_path.parent) + class EncodedMetadataTests(unittest.TestCase): file_names = ['\u4e00', '\u4e8c', '\u4e09'] # Han 'one', 'two', 'three' diff --git a/Lib/zipfile.py b/Lib/zipfile.py index a0b29e25a3cc..f14a638d40c5 100644 --- a/Lib/zipfile.py +++ b/Lib/zipfile.py @@ -2250,6 +2250,17 @@ def resolve_dir(self, name): dir_match = name not in names and dirname in names return dirname if dir_match else name + def getinfo(self, name): + """ + Supplement getinfo for implied dirs. 
+ """ + try: + return super().getinfo(name) + except KeyError: + if not name.endswith('/') or name not in self._name_set(): + raise + return ZipInfo(filename=name) + @classmethod def make(cls, source): """ diff --git a/Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst b/Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst new file mode 100644 index 000000000000..f0c0c249a54b --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-17-20-24-15.gh-issue-101566.FjgWBt.rst @@ -0,0 +1,3 @@ +In zipfile, apply +fix for extractall on the underlying zipfile after being wrapped in +``Path``. From webhook-mailer at python.org Mon Feb 20 18:52:09 2023 From: webhook-mailer at python.org (ezio-melotti) Date: Mon, 20 Feb 2023 23:52:09 -0000 Subject: [Python-checkins] [3.11] gh-100210: Correct the comment link for unescaping HTML (GH-100212) (#102044) Message-ID: https://github.com/python/cpython/commit/e9103e69cd46c54276c75c0d2d0fe52e28d68809 commit: e9103e69cd46c54276c75c0d2d0fe52e28d68809 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: ezio-melotti date: 2023-02-21T07:52:02+08:00 summary: [3.11] gh-100210: Correct the comment link for unescaping HTML (GH-100212) (#102044) gh-100210: Correct the comment link for unescaping HTML (GH-100212) (cherry picked from commit 9a07eff628c1cd88b7cdda88a8fd0db3fe7ea552) gh-100210: correct the comment link for unescaping HTML Co-authored-by: Jean-Christophe Amiel Co-authored-by: Ezio Melotti files: M Lib/html/__init__.py diff --git a/Lib/html/__init__.py b/Lib/html/__init__.py index da0a0a3ce70e..1543460ca33b 100644 --- a/Lib/html/__init__.py +++ b/Lib/html/__init__.py @@ -25,7 +25,7 @@ def escape(s, quote=True): return s -# see http://www.w3.org/TR/html5/syntax.html#tokenizing-character-references +# see https://html.spec.whatwg.org/multipage/parsing.html#numeric-character-reference-end-state _invalid_charrefs = { 0x00: '\ufffd', # REPLACEMENT CHARACTER From webhook-mailer at python.org Mon Feb 20 22:10:45 2023 From: webhook-mailer at python.org (corona10) Date: Tue, 21 Feb 2023 03:10:45 -0000 Subject: [Python-checkins] gh-101961 fileinput.hookcompressed should not set the encoding value for the binary mode (gh-102068) Message-ID: https://github.com/python/cpython/commit/6f25657b83d7a680a97849490f6e973b3a695e1a commit: 6f25657b83d7a680a97849490f6e973b3a695e1a branch: main author: Gihwan Kim committer: corona10 date: 2023-02-21T12:10:29+09:00 summary: gh-101961 fileinput.hookcompressed should not set the encoding value for the binary mode (gh-102068) files: A Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst M Lib/fileinput.py M Lib/test/test_fileinput.py M Misc/ACKS diff --git a/Lib/fileinput.py b/Lib/fileinput.py index e234dc9ea65f..1b25f28f3d34 100644 --- a/Lib/fileinput.py +++ b/Lib/fileinput.py @@ -399,7 +399,7 @@ def isstdin(self): def hook_compressed(filename, mode, *, encoding=None, errors=None): - if encoding is None: # EncodingWarning is emitted in FileInput() already. + if encoding is None and "b" not in mode: # EncodingWarning is emitted in FileInput() already. 
encoding = "locale" ext = os.path.splitext(filename)[1] if ext == '.gz': diff --git a/Lib/test/test_fileinput.py b/Lib/test/test_fileinput.py index ac20c74baa09..786d91866343 100644 --- a/Lib/test/test_fileinput.py +++ b/Lib/test/test_fileinput.py @@ -855,29 +855,29 @@ def setUp(self): self.fake_open = InvocationRecorder() def test_empty_string(self): - self.do_test_use_builtin_open("", 1) + self.do_test_use_builtin_open_text("", "r") def test_no_ext(self): - self.do_test_use_builtin_open("abcd", 2) + self.do_test_use_builtin_open_text("abcd", "r") @unittest.skipUnless(gzip, "Requires gzip and zlib") def test_gz_ext_fake(self): original_open = gzip.open gzip.open = self.fake_open try: - result = fileinput.hook_compressed("test.gz", "3") + result = fileinput.hook_compressed("test.gz", "r") finally: gzip.open = original_open self.assertEqual(self.fake_open.invocation_count, 1) - self.assertEqual(self.fake_open.last_invocation, (("test.gz", "3"), {})) + self.assertEqual(self.fake_open.last_invocation, (("test.gz", "r"), {})) @unittest.skipUnless(gzip, "Requires gzip and zlib") def test_gz_with_encoding_fake(self): original_open = gzip.open gzip.open = lambda filename, mode: io.BytesIO(b'Ex-binary string') try: - result = fileinput.hook_compressed("test.gz", "3", encoding="utf-8") + result = fileinput.hook_compressed("test.gz", "r", encoding="utf-8") finally: gzip.open = original_open self.assertEqual(list(result), ['Ex-binary string']) @@ -887,23 +887,40 @@ def test_bz2_ext_fake(self): original_open = bz2.BZ2File bz2.BZ2File = self.fake_open try: - result = fileinput.hook_compressed("test.bz2", "4") + result = fileinput.hook_compressed("test.bz2", "r") finally: bz2.BZ2File = original_open self.assertEqual(self.fake_open.invocation_count, 1) - self.assertEqual(self.fake_open.last_invocation, (("test.bz2", "4"), {})) + self.assertEqual(self.fake_open.last_invocation, (("test.bz2", "r"), {})) def test_blah_ext(self): - self.do_test_use_builtin_open("abcd.blah", "5") + self.do_test_use_builtin_open_binary("abcd.blah", "rb") def test_gz_ext_builtin(self): - self.do_test_use_builtin_open("abcd.Gz", "6") + self.do_test_use_builtin_open_binary("abcd.Gz", "rb") def test_bz2_ext_builtin(self): - self.do_test_use_builtin_open("abcd.Bz2", "7") + self.do_test_use_builtin_open_binary("abcd.Bz2", "rb") - def do_test_use_builtin_open(self, filename, mode): + def test_binary_mode_encoding(self): + self.do_test_use_builtin_open_binary("abcd", "rb") + + def test_text_mode_encoding(self): + self.do_test_use_builtin_open_text("abcd", "r") + + def do_test_use_builtin_open_binary(self, filename, mode): + original_open = self.replace_builtin_open(self.fake_open) + try: + result = fileinput.hook_compressed(filename, mode) + finally: + self.replace_builtin_open(original_open) + + self.assertEqual(self.fake_open.invocation_count, 1) + self.assertEqual(self.fake_open.last_invocation, + ((filename, mode), {'encoding': None, 'errors': None})) + + def do_test_use_builtin_open_text(self, filename, mode): original_open = self.replace_builtin_open(self.fake_open) try: result = fileinput.hook_compressed(filename, mode) diff --git a/Misc/ACKS b/Misc/ACKS index ca92608868f2..a9d4e453c7b5 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -927,6 +927,7 @@ Tyler Kieft Mads Kiilerich Jason Killen Derek D. 
Kim +Gihwan Kim Jan Kim Taek Joo Kim Sam Kimbrel diff --git a/Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst b/Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst new file mode 100644 index 000000000000..a3d4119e7cbd --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst @@ -0,0 +1,2 @@ +For the binary mode, :func:`fileinput.hookcompressed` doesn't set the ``encoding`` value +even if the value is ``None``. Patch by Gihwan Kim. From webhook-mailer at python.org Mon Feb 20 22:39:39 2023 From: webhook-mailer at python.org (corona10) Date: Tue, 21 Feb 2023 03:39:39 -0000 Subject: [Python-checkins] [3.11] gh-101961 fileinput.hookcompressed should not set the encoding value for the binary mode (gh-102068) (#102098) Message-ID: https://github.com/python/cpython/commit/5bc6927c68804c679583a17c13145619ebc44df5 commit: 5bc6927c68804c679583a17c13145619ebc44df5 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: corona10 date: 2023-02-21T12:39:16+09:00 summary: [3.11] gh-101961 fileinput.hookcompressed should not set the encoding value for the binary mode (gh-102068) (#102098) gh-101961 fileinput.hookcompressed should not set the encoding value for the binary mode (gh-102068) (cherry picked from commit 6f25657b83d7a680a97849490f6e973b3a695e1a) Co-authored-by: Gihwan Kim files: A Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst M Lib/fileinput.py M Lib/test/test_fileinput.py M Misc/ACKS diff --git a/Lib/fileinput.py b/Lib/fileinput.py index e234dc9ea65f..1b25f28f3d34 100644 --- a/Lib/fileinput.py +++ b/Lib/fileinput.py @@ -399,7 +399,7 @@ def isstdin(self): def hook_compressed(filename, mode, *, encoding=None, errors=None): - if encoding is None: # EncodingWarning is emitted in FileInput() already. + if encoding is None and "b" not in mode: # EncodingWarning is emitted in FileInput() already. 
encoding = "locale" ext = os.path.splitext(filename)[1] if ext == '.gz': diff --git a/Lib/test/test_fileinput.py b/Lib/test/test_fileinput.py index ac20c74baa09..786d91866343 100644 --- a/Lib/test/test_fileinput.py +++ b/Lib/test/test_fileinput.py @@ -855,29 +855,29 @@ def setUp(self): self.fake_open = InvocationRecorder() def test_empty_string(self): - self.do_test_use_builtin_open("", 1) + self.do_test_use_builtin_open_text("", "r") def test_no_ext(self): - self.do_test_use_builtin_open("abcd", 2) + self.do_test_use_builtin_open_text("abcd", "r") @unittest.skipUnless(gzip, "Requires gzip and zlib") def test_gz_ext_fake(self): original_open = gzip.open gzip.open = self.fake_open try: - result = fileinput.hook_compressed("test.gz", "3") + result = fileinput.hook_compressed("test.gz", "r") finally: gzip.open = original_open self.assertEqual(self.fake_open.invocation_count, 1) - self.assertEqual(self.fake_open.last_invocation, (("test.gz", "3"), {})) + self.assertEqual(self.fake_open.last_invocation, (("test.gz", "r"), {})) @unittest.skipUnless(gzip, "Requires gzip and zlib") def test_gz_with_encoding_fake(self): original_open = gzip.open gzip.open = lambda filename, mode: io.BytesIO(b'Ex-binary string') try: - result = fileinput.hook_compressed("test.gz", "3", encoding="utf-8") + result = fileinput.hook_compressed("test.gz", "r", encoding="utf-8") finally: gzip.open = original_open self.assertEqual(list(result), ['Ex-binary string']) @@ -887,23 +887,40 @@ def test_bz2_ext_fake(self): original_open = bz2.BZ2File bz2.BZ2File = self.fake_open try: - result = fileinput.hook_compressed("test.bz2", "4") + result = fileinput.hook_compressed("test.bz2", "r") finally: bz2.BZ2File = original_open self.assertEqual(self.fake_open.invocation_count, 1) - self.assertEqual(self.fake_open.last_invocation, (("test.bz2", "4"), {})) + self.assertEqual(self.fake_open.last_invocation, (("test.bz2", "r"), {})) def test_blah_ext(self): - self.do_test_use_builtin_open("abcd.blah", "5") + self.do_test_use_builtin_open_binary("abcd.blah", "rb") def test_gz_ext_builtin(self): - self.do_test_use_builtin_open("abcd.Gz", "6") + self.do_test_use_builtin_open_binary("abcd.Gz", "rb") def test_bz2_ext_builtin(self): - self.do_test_use_builtin_open("abcd.Bz2", "7") + self.do_test_use_builtin_open_binary("abcd.Bz2", "rb") - def do_test_use_builtin_open(self, filename, mode): + def test_binary_mode_encoding(self): + self.do_test_use_builtin_open_binary("abcd", "rb") + + def test_text_mode_encoding(self): + self.do_test_use_builtin_open_text("abcd", "r") + + def do_test_use_builtin_open_binary(self, filename, mode): + original_open = self.replace_builtin_open(self.fake_open) + try: + result = fileinput.hook_compressed(filename, mode) + finally: + self.replace_builtin_open(original_open) + + self.assertEqual(self.fake_open.invocation_count, 1) + self.assertEqual(self.fake_open.last_invocation, + ((filename, mode), {'encoding': None, 'errors': None})) + + def do_test_use_builtin_open_text(self, filename, mode): original_open = self.replace_builtin_open(self.fake_open) try: result = fileinput.hook_compressed(filename, mode) diff --git a/Misc/ACKS b/Misc/ACKS index f674332751c3..523b9e8380d5 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -919,6 +919,7 @@ Tyler Kieft Mads Kiilerich Jason Killen Derek D. 
Kim +Gihwan Kim Jan Kim Taek Joo Kim Sam Kimbrel diff --git a/Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst b/Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst new file mode 100644 index 000000000000..a3d4119e7cbd --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst @@ -0,0 +1,2 @@ +For the binary mode, :func:`fileinput.hookcompressed` doesn't set the ``encoding`` value +even if the value is ``None``. Patch by Gihwan Kim. From webhook-mailer at python.org Mon Feb 20 22:39:39 2023 From: webhook-mailer at python.org (corona10) Date: Tue, 21 Feb 2023 03:39:39 -0000 Subject: [Python-checkins] [3.10] gh-101961 fileinput.hookcompressed should not set the encoding value for the binary mode (gh-102068) (#102099) Message-ID: https://github.com/python/cpython/commit/d9dce23643c319ff4f721ad65a7f9811e8adfc53 commit: d9dce23643c319ff4f721ad65a7f9811e8adfc53 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: corona10 date: 2023-02-21T12:39:30+09:00 summary: [3.10] gh-101961 fileinput.hookcompressed should not set the encoding value for the binary mode (gh-102068) (#102099) gh-101961 fileinput.hookcompressed should not set the encoding value for the binary mode (gh-102068) (cherry picked from commit 6f25657b83d7a680a97849490f6e973b3a695e1a) Co-authored-by: Gihwan Kim files: A Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst M Lib/fileinput.py M Lib/test/test_fileinput.py M Misc/ACKS diff --git a/Lib/fileinput.py b/Lib/fileinput.py index 2ce2f911435b..3bd19906dcf5 100644 --- a/Lib/fileinput.py +++ b/Lib/fileinput.py @@ -419,7 +419,7 @@ def isstdin(self): def hook_compressed(filename, mode, *, encoding=None, errors=None): - if encoding is None: # EncodingWarning is emitted in FileInput() already. + if encoding is None and "b" not in mode: # EncodingWarning is emitted in FileInput() already. 
encoding = "locale" ext = os.path.splitext(filename)[1] if ext == '.gz': diff --git a/Lib/test/test_fileinput.py b/Lib/test/test_fileinput.py index 21924ff41a72..6c4b5fe5d129 100644 --- a/Lib/test/test_fileinput.py +++ b/Lib/test/test_fileinput.py @@ -903,29 +903,29 @@ def setUp(self): self.fake_open = InvocationRecorder() def test_empty_string(self): - self.do_test_use_builtin_open("", 1) + self.do_test_use_builtin_open_text("", "r") def test_no_ext(self): - self.do_test_use_builtin_open("abcd", 2) + self.do_test_use_builtin_open_text("abcd", "r") @unittest.skipUnless(gzip, "Requires gzip and zlib") def test_gz_ext_fake(self): original_open = gzip.open gzip.open = self.fake_open try: - result = fileinput.hook_compressed("test.gz", "3") + result = fileinput.hook_compressed("test.gz", "r") finally: gzip.open = original_open self.assertEqual(self.fake_open.invocation_count, 1) - self.assertEqual(self.fake_open.last_invocation, (("test.gz", "3"), {})) + self.assertEqual(self.fake_open.last_invocation, (("test.gz", "r"), {})) @unittest.skipUnless(gzip, "Requires gzip and zlib") def test_gz_with_encoding_fake(self): original_open = gzip.open gzip.open = lambda filename, mode: io.BytesIO(b'Ex-binary string') try: - result = fileinput.hook_compressed("test.gz", "3", encoding="utf-8") + result = fileinput.hook_compressed("test.gz", "r", encoding="utf-8") finally: gzip.open = original_open self.assertEqual(list(result), ['Ex-binary string']) @@ -935,23 +935,40 @@ def test_bz2_ext_fake(self): original_open = bz2.BZ2File bz2.BZ2File = self.fake_open try: - result = fileinput.hook_compressed("test.bz2", "4") + result = fileinput.hook_compressed("test.bz2", "r") finally: bz2.BZ2File = original_open self.assertEqual(self.fake_open.invocation_count, 1) - self.assertEqual(self.fake_open.last_invocation, (("test.bz2", "4"), {})) + self.assertEqual(self.fake_open.last_invocation, (("test.bz2", "r"), {})) def test_blah_ext(self): - self.do_test_use_builtin_open("abcd.blah", "5") + self.do_test_use_builtin_open_binary("abcd.blah", "rb") def test_gz_ext_builtin(self): - self.do_test_use_builtin_open("abcd.Gz", "6") + self.do_test_use_builtin_open_binary("abcd.Gz", "rb") def test_bz2_ext_builtin(self): - self.do_test_use_builtin_open("abcd.Bz2", "7") + self.do_test_use_builtin_open_binary("abcd.Bz2", "rb") - def do_test_use_builtin_open(self, filename, mode): + def test_binary_mode_encoding(self): + self.do_test_use_builtin_open_binary("abcd", "rb") + + def test_text_mode_encoding(self): + self.do_test_use_builtin_open_text("abcd", "r") + + def do_test_use_builtin_open_binary(self, filename, mode): + original_open = self.replace_builtin_open(self.fake_open) + try: + result = fileinput.hook_compressed(filename, mode) + finally: + self.replace_builtin_open(original_open) + + self.assertEqual(self.fake_open.invocation_count, 1) + self.assertEqual(self.fake_open.last_invocation, + ((filename, mode), {'encoding': None, 'errors': None})) + + def do_test_use_builtin_open_text(self, filename, mode): original_open = self.replace_builtin_open(self.fake_open) try: result = fileinput.hook_compressed(filename, mode) diff --git a/Misc/ACKS b/Misc/ACKS index a78c086e16c6..b9aa307ae087 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -912,6 +912,7 @@ Tyler Kieft Mads Kiilerich Jason Killen Derek D. 
Kim +Gihwan Kim Jan Kim Taek Joo Kim Sam Kimbrel diff --git a/Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst b/Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst new file mode 100644 index 000000000000..a3d4119e7cbd --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-21-10-05-33.gh-issue-101961.7e56jh.rst @@ -0,0 +1,2 @@ +For the binary mode, :func:`fileinput.hookcompressed` doesn't set the ``encoding`` value +even if the value is ``None``. Patch by Gihwan Kim. From webhook-mailer at python.org Tue Feb 21 03:16:20 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Tue, 21 Feb 2023 08:16:20 -0000 Subject: [Python-checkins] gh-101578: Amend exception docs (#102057) Message-ID: https://github.com/python/cpython/commit/02d9f1504beebd98dea807f4b8761d3675a500d0 commit: 02d9f1504beebd98dea807f4b8761d3675a500d0 branch: main author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-21T09:15:49+01:00 summary: gh-101578: Amend exception docs (#102057) Co-authored-by: C.A.M. Gerlach files: M Doc/c-api/exceptions.rst M Doc/data/refcounts.dat diff --git a/Doc/c-api/exceptions.rst b/Doc/c-api/exceptions.rst index e735f8db4023..8836c973f1f5 100644 --- a/Doc/c-api/exceptions.rst +++ b/Doc/c-api/exceptions.rst @@ -438,7 +438,9 @@ Querying the error indicator .. c:function:: void PyErr_Fetch(PyObject **ptype, PyObject **pvalue, PyObject **ptraceback) - As of 3.12, this function is deprecated. Use :c:func:`PyErr_GetRaisedException` instead. + .. deprecated:: 3.12 + + Use :c:func:`PyErr_GetRaisedException` instead. Retrieve the error indicator into three variables whose addresses are passed. If the error indicator is not set, set all three variables to ``NULL``. If it is @@ -447,8 +449,10 @@ Querying the error indicator .. note:: - This function is normally only used by code that needs to catch exceptions or - by code that needs to save and restore the error indicator temporarily, e.g.:: + This function is normally only used by legacy code that needs to catch + exceptions or save and restore the error indicator temporarily. + + For example:: { PyObject *type, *value, *traceback; @@ -459,15 +463,17 @@ Querying the error indicator PyErr_Restore(type, value, traceback); } - .. deprecated:: 3.12 - .. c:function:: void PyErr_Restore(PyObject *type, PyObject *value, PyObject *traceback) - As of 3.12, this function is deprecated. Use :c:func:`PyErr_SetRaisedException` instead. + .. deprecated:: 3.12 + + Use :c:func:`PyErr_SetRaisedException` instead. - Set the error indicator from the three objects. If the error indicator is - already set, it is cleared first. If the objects are ``NULL``, the error + Set the error indicator from the three objects, + *type*, *value*, and *traceback*, + clearing the existing exception if one is set. + If the objects are ``NULL``, the error indicator is cleared. Do not pass a ``NULL`` type and non-``NULL`` value or traceback. The exception type should be a class. Do not pass an invalid exception type or value. (Violating these rules will cause subtle problems @@ -478,18 +484,17 @@ Querying the error indicator .. note:: - This function is normally only used by code that needs to save and restore the - error indicator temporarily. Use :c:func:`PyErr_Fetch` to save the current - error indicator. - - .. deprecated:: 3.12 + This function is normally only used by legacy code that needs to + save and restore the error indicator temporarily. + Use :c:func:`PyErr_Fetch` to save the current error indicator. .. 
c:function:: void PyErr_NormalizeException(PyObject **exc, PyObject **val, PyObject **tb) - As of 3.12, this function is deprecated. - Use :c:func:`PyErr_GetRaisedException` instead of :c:func:`PyErr_Fetch` to avoid - any possible de-normalization. + .. deprecated:: 3.12 + + Use :c:func:`PyErr_GetRaisedException` instead, + to avoid any possible de-normalization. Under certain circumstances, the values returned by :c:func:`PyErr_Fetch` below can be "unnormalized", meaning that ``*exc`` is a class object but ``*val`` is @@ -507,8 +512,6 @@ Querying the error indicator PyException_SetTraceback(val, tb); } - .. deprecated:: 3.12 - .. c:function:: PyObject* PyErr_GetHandledException(void) @@ -756,14 +759,12 @@ Exception Objects .. c:function:: PyObject* PyException_GetArgs(PyObject *ex) - Return args of the given exception as a new reference, - as accessible from Python through :attr:`args`. + Return :attr:`~BaseException.args` of exception *ex*. .. c:function:: void PyException_SetArgs(PyObject *ex, PyObject *args) - Set the args of the given exception, - as accessible from Python through :attr:`args`. + Set :attr:`~BaseException.args` of exception *ex* to *args*. .. _unicodeexceptions: diff --git a/Doc/data/refcounts.dat b/Doc/data/refcounts.dat index ed2958f8cd22..ee64ffdc9166 100644 --- a/Doc/data/refcounts.dat +++ b/Doc/data/refcounts.dat @@ -839,6 +839,8 @@ PyEval_EvalFrameEx:int:throwflag:: PyEval_MergeCompilerFlags:int::: PyEval_MergeCompilerFlags:PyCompilerFlags*:cf:: +PyException_GetArgs:PyObject*::+1: + PyException_GetCause:PyObject*::+1: PyException_GetCause:PyObject*:ex:0: From webhook-mailer at python.org Tue Feb 21 05:03:58 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 21 Feb 2023 10:03:58 -0000 Subject: [Python-checkins] gh-101965: Fix usage of Py_EnterRecursiveCall return value in _bisectmodule.c (GH-101966) Message-ID: https://github.com/python/cpython/commit/0f7a9725300e750947159409abbf5722a4a79f8b commit: 0f7a9725300e750947159409abbf5722a4a79f8b branch: main author: Owain Davies <116417456+OTheDev at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-21T02:03:50-08:00 summary: gh-101965: Fix usage of Py_EnterRecursiveCall return value in _bisectmodule.c (GH-101966) Closes #101965 Automerge-Triggered-By: GH:erlend-aasland files: M Modules/_bisectmodule.c diff --git a/Modules/_bisectmodule.c b/Modules/_bisectmodule.c index 9ceb3ae46fe5..d3bec535ee51 100644 --- a/Modules/_bisectmodule.c +++ b/Modules/_bisectmodule.c @@ -66,7 +66,7 @@ internal_bisect_right(PyObject *list, PyObject *item, Py_ssize_t lo, Py_ssize_t if (sq_item == NULL) { return -1; } - if (Py_EnterRecursiveCall("in _bisect.bisect_right") < 0) { + if (Py_EnterRecursiveCall("in _bisect.bisect_right")) { return -1; } PyTypeObject *tp = Py_TYPE(item); @@ -246,7 +246,7 @@ internal_bisect_left(PyObject *list, PyObject *item, Py_ssize_t lo, Py_ssize_t h if (sq_item == NULL) { return -1; } - if (Py_EnterRecursiveCall("in _bisect.bisect_left") < 0) { + if (Py_EnterRecursiveCall("in _bisect.bisect_left")) { return -1; } PyTypeObject *tp = Py_TYPE(item); From webhook-mailer at python.org Tue Feb 21 05:24:40 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Tue, 21 Feb 2023 10:24:40 -0000 Subject: [Python-checkins] gh-101777: Make `PriorityQueue` docs slightly clearer (#102026) Message-ID: https://github.com/python/cpython/commit/350ba7c07f8983537883e093c5c623287a2a22e5 commit: 350ba7c07f8983537883e093c5c623287a2a22e5 
branch: main author: Owain Davies <116417456+OTheDev at users.noreply.github.com> committer: erlend-aasland date: 2023-02-21T11:24:33+01:00 summary: gh-101777: Make `PriorityQueue` docs slightly clearer (#102026) Adjust wording slightly, and use min(entries) instead of sorted(list(entries))[0] as an example. files: M Doc/library/queue.rst diff --git a/Doc/library/queue.rst b/Doc/library/queue.rst index c67f15e953bc..46b8e9b18a3c 100644 --- a/Doc/library/queue.rst +++ b/Doc/library/queue.rst @@ -57,8 +57,8 @@ The :mod:`queue` module defines the following classes and exceptions: *maxsize* is less than or equal to zero, the queue size is infinite. The lowest valued entries are retrieved first (the lowest valued entry is the - one returned by ``sorted(list(entries))[0]``). A typical pattern for entries - is a tuple in the form: ``(priority_number, data)``. + one that would be returned by ``min(entries)``). A typical pattern for + entries is a tuple in the form: ``(priority_number, data)``. If the *data* elements are not comparable, the data can be wrapped in a class that ignores the data item and only compares the priority number:: From webhook-mailer at python.org Tue Feb 21 05:31:58 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 21 Feb 2023 10:31:58 -0000 Subject: [Python-checkins] gh-101777: Make `PriorityQueue` docs slightly clearer (GH-102026) Message-ID: https://github.com/python/cpython/commit/2d9f8d4d3882968dbc8fb2bceeb51d4c31fcd7b1 commit: 2d9f8d4d3882968dbc8fb2bceeb51d4c31fcd7b1 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-21T02:31:51-08:00 summary: gh-101777: Make `PriorityQueue` docs slightly clearer (GH-102026) Adjust wording slightly, and use min(entries) instead of sorted(list(entries))[0] as an example. (cherry picked from commit 350ba7c07f8983537883e093c5c623287a2a22e5) Co-authored-by: Owain Davies <116417456+OTheDev at users.noreply.github.com> files: M Doc/library/queue.rst diff --git a/Doc/library/queue.rst b/Doc/library/queue.rst index c67f15e953bc..46b8e9b18a3c 100644 --- a/Doc/library/queue.rst +++ b/Doc/library/queue.rst @@ -57,8 +57,8 @@ The :mod:`queue` module defines the following classes and exceptions: *maxsize* is less than or equal to zero, the queue size is infinite. The lowest valued entries are retrieved first (the lowest valued entry is the - one returned by ``sorted(list(entries))[0]``). A typical pattern for entries - is a tuple in the form: ``(priority_number, data)``. + one that would be returned by ``min(entries)``). A typical pattern for + entries is a tuple in the form: ``(priority_number, data)``. 
If the *data* elements are not comparable, the data can be wrapped in a class that ignores the data item and only compares the priority number:: From webhook-mailer at python.org Tue Feb 21 05:33:05 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 21 Feb 2023 10:33:05 -0000 Subject: [Python-checkins] gh-101777: Make `PriorityQueue` docs slightly clearer (GH-102026) Message-ID: https://github.com/python/cpython/commit/f94ffcf6182bbd70b458769e00bd66147dac4e41 commit: f94ffcf6182bbd70b458769e00bd66147dac4e41 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-21T02:32:58-08:00 summary: gh-101777: Make `PriorityQueue` docs slightly clearer (GH-102026) Adjust wording slightly, and use min(entries) instead of sorted(list(entries))[0] as an example. (cherry picked from commit 350ba7c07f8983537883e093c5c623287a2a22e5) Co-authored-by: Owain Davies <116417456+OTheDev at users.noreply.github.com> files: M Doc/library/queue.rst diff --git a/Doc/library/queue.rst b/Doc/library/queue.rst index c67f15e953bc..46b8e9b18a3c 100644 --- a/Doc/library/queue.rst +++ b/Doc/library/queue.rst @@ -57,8 +57,8 @@ The :mod:`queue` module defines the following classes and exceptions: *maxsize* is less than or equal to zero, the queue size is infinite. The lowest valued entries are retrieved first (the lowest valued entry is the - one returned by ``sorted(list(entries))[0]``). A typical pattern for entries - is a tuple in the form: ``(priority_number, data)``. + one that would be returned by ``min(entries)``). A typical pattern for + entries is a tuple in the form: ``(priority_number, data)``. If the *data* elements are not comparable, the data can be wrapped in a class that ignores the data item and only compares the priority number:: From webhook-mailer at python.org Tue Feb 21 05:35:04 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Tue, 21 Feb 2023 10:35:04 -0000 Subject: [Python-checkins] gh-100556: Improve clarity of `or` docs (#100589) Message-ID: https://github.com/python/cpython/commit/b40dd71241f092d90c192ebff5d58cbd7e84dc52 commit: b40dd71241f092d90c192ebff5d58cbd7e84dc52 branch: main author: ram vikram singh committer: erlend-aasland date: 2023-02-21T11:34:56+01:00 summary: gh-100556: Improve clarity of `or` docs (#100589) Co-authored-by: Hugo van Kemenade files: M Doc/library/stdtypes.rst diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst index 98bda6ded30f..41947d66c151 100644 --- a/Doc/library/stdtypes.rst +++ b/Doc/library/stdtypes.rst @@ -84,8 +84,8 @@ These are the Boolean operations, ordered by ascending priority: +-------------+---------------------------------+-------+ | Operation | Result | Notes | +=============+=================================+=======+ -| ``x or y`` | if *x* is false, then *y*, else | \(1) | -| | *x* | | +| ``x or y`` | if *x* is true, then *x*, else | \(1) | +| | *y* | | +-------------+---------------------------------+-------+ | ``x and y`` | if *x* is false, then *x*, else | \(2) | | | *y* | | From webhook-mailer at python.org Tue Feb 21 05:43:56 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 21 Feb 2023 10:43:56 -0000 Subject: [Python-checkins] gh-100556: Improve clarity of `or` docs (GH-100589) Message-ID: https://github.com/python/cpython/commit/444ec749452b67318404ae0d6007e62298b012fc commit: 444ec749452b67318404ae0d6007e62298b012fc branch: 3.10 author: Miss Islington 
(bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-21T02:43:48-08:00 summary: gh-100556: Improve clarity of `or` docs (GH-100589) (cherry picked from commit b40dd71241f092d90c192ebff5d58cbd7e84dc52) Co-authored-by: ram vikram singh Co-authored-by: Hugo van Kemenade files: M Doc/library/stdtypes.rst diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst index 7cdde5334e24..59d64457edcd 100644 --- a/Doc/library/stdtypes.rst +++ b/Doc/library/stdtypes.rst @@ -84,8 +84,8 @@ These are the Boolean operations, ordered by ascending priority: +-------------+---------------------------------+-------+ | Operation | Result | Notes | +=============+=================================+=======+ -| ``x or y`` | if *x* is false, then *y*, else | \(1) | -| | *x* | | +| ``x or y`` | if *x* is true, then *x*, else | \(1) | +| | *y* | | +-------------+---------------------------------+-------+ | ``x and y`` | if *x* is false, then *x*, else | \(2) | | | *y* | | From webhook-mailer at python.org Tue Feb 21 05:45:55 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 21 Feb 2023 10:45:55 -0000 Subject: [Python-checkins] gh-100556: Improve clarity of `or` docs (GH-100589) Message-ID: https://github.com/python/cpython/commit/a94f3ad10b8846d435cadd7c96d9b9e32dcc11e4 commit: a94f3ad10b8846d435cadd7c96d9b9e32dcc11e4 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-21T02:45:49-08:00 summary: gh-100556: Improve clarity of `or` docs (GH-100589) (cherry picked from commit b40dd71241f092d90c192ebff5d58cbd7e84dc52) Co-authored-by: ram vikram singh Co-authored-by: Hugo van Kemenade files: M Doc/library/stdtypes.rst diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst index 1563c4dff138..608e15453be8 100644 --- a/Doc/library/stdtypes.rst +++ b/Doc/library/stdtypes.rst @@ -84,8 +84,8 @@ These are the Boolean operations, ordered by ascending priority: +-------------+---------------------------------+-------+ | Operation | Result | Notes | +=============+=================================+=======+ -| ``x or y`` | if *x* is false, then *y*, else | \(1) | -| | *x* | | +| ``x or y`` | if *x* is true, then *x*, else | \(1) | +| | *y* | | +-------------+---------------------------------+-------+ | ``x and y`` | if *x* is false, then *x*, else | \(2) | | | *y* | | From webhook-mailer at python.org Tue Feb 21 06:06:27 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 21 Feb 2023 11:06:27 -0000 Subject: [Python-checkins] gh-102011: use sys.exception() instead of sys.exc_info() in docs where possible (GH-102012) Message-ID: https://github.com/python/cpython/commit/78eee7618534d09121c7bb4e47b43ef5867a2a16 commit: 78eee7618534d09121c7bb4e47b43ef5867a2a16 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-21T03:06:20-08:00 summary: gh-102011: use sys.exception() instead of sys.exc_info() in docs where possible (GH-102012) (cherry picked from commit 4d3bc89a3f54c4f09756a9b644b3912bf54191a7) Co-authored-by: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> files: M Doc/library/exceptions.rst M Doc/library/traceback.rst M Doc/library/types.rst M Doc/library/wsgiref.rst M 
Doc/reference/compound_stmts.rst M Doc/reference/datamodel.rst diff --git a/Doc/library/exceptions.rst b/Doc/library/exceptions.rst index 1217b817b4e8..4a57e9c87993 100644 --- a/Doc/library/exceptions.rst +++ b/Doc/library/exceptions.rst @@ -123,7 +123,7 @@ The following exceptions are used mostly as base classes for other exceptions. try: ... except SomeException: - tb = sys.exc_info()[2] + tb = sys.exception().__traceback__ raise OtherException(...).with_traceback(tb) .. method:: add_note(note) diff --git a/Doc/library/traceback.rst b/Doc/library/traceback.rst index a7a015f17193..053e97325e31 100644 --- a/Doc/library/traceback.rst +++ b/Doc/library/traceback.rst @@ -16,9 +16,8 @@ interpreter. .. index:: object: traceback -The module uses traceback objects --- this is the object type that is stored in -the :data:`sys.last_traceback` variable and returned as the third item from -:func:`sys.exc_info`. +The module uses traceback objects --- these are objects of type :class:`types.TracebackType`, +which are assigned to the ``__traceback__`` field of :class:`BaseException` instances. .. seealso:: @@ -81,7 +80,7 @@ The module defines the following functions: .. function:: print_exc(limit=None, file=None, chain=True) - This is a shorthand for ``print_exception(*sys.exc_info(), limit, file, + This is a shorthand for ``print_exception(sys.exception(), limit, file, chain)``. @@ -440,11 +439,11 @@ exception and traceback: try: lumberjack() except IndexError: - exc_type, exc_value, exc_traceback = sys.exc_info() + exc = sys.exception() print("*** print_tb:") - traceback.print_tb(exc_traceback, limit=1, file=sys.stdout) + traceback.print_tb(exc.__traceback__, limit=1, file=sys.stdout) print("*** print_exception:") - traceback.print_exception(exc_value, limit=2, file=sys.stdout) + traceback.print_exception(exc, limit=2, file=sys.stdout) print("*** print_exc:") traceback.print_exc(limit=2, file=sys.stdout) print("*** format_exc, first and last line:") @@ -452,12 +451,12 @@ exception and traceback: print(formatted_lines[0]) print(formatted_lines[-1]) print("*** format_exception:") - print(repr(traceback.format_exception(exc_value))) + print(repr(traceback.format_exception(exc))) print("*** extract_tb:") - print(repr(traceback.extract_tb(exc_traceback))) + print(repr(traceback.extract_tb(exc.__traceback__))) print("*** format_tb:") - print(repr(traceback.format_tb(exc_traceback))) - print("*** tb_lineno:", exc_traceback.tb_lineno) + print(repr(traceback.format_tb(exc.__traceback__))) + print("*** tb_lineno:", exc.__traceback__.tb_lineno) The output for the example would look similar to this: diff --git a/Doc/library/types.rst b/Doc/library/types.rst index e0e77dfbfe7e..aafba29b4cf6 100644 --- a/Doc/library/types.rst +++ b/Doc/library/types.rst @@ -320,7 +320,7 @@ Standard names are defined for the following types: .. class:: TracebackType(tb_next, tb_frame, tb_lasti, tb_lineno) - The type of traceback objects such as found in ``sys.exc_info()[2]``. + The type of traceback objects such as found in ``sys.exception().__traceback__``. See :ref:`the language reference ` for details of the available attributes and operations, and guidance on creating tracebacks diff --git a/Doc/library/wsgiref.rst b/Doc/library/wsgiref.rst index 75dea4663351..39a4c1ba1423 100644 --- a/Doc/library/wsgiref.rst +++ b/Doc/library/wsgiref.rst @@ -674,7 +674,7 @@ input, output, and error streams. This method is a WSGI application to generate an error page for the user. 
It is only invoked if an error occurs before headers are sent to the client. - This method can access the current error information using ``sys.exc_info()``, + This method can access the current error using ``sys.exception()``, and should pass that information to *start_response* when calling it (as described in the "Error Handling" section of :pep:`3333`). diff --git a/Doc/reference/compound_stmts.rst b/Doc/reference/compound_stmts.rst index d5cb2899c231..52bb0b22b042 100644 --- a/Doc/reference/compound_stmts.rst +++ b/Doc/reference/compound_stmts.rst @@ -301,31 +301,28 @@ keeping all locals in that frame alive until the next garbage collection occurs. object: traceback Before an :keyword:`!except` clause's suite is executed, -details about the exception are -stored in the :mod:`sys` module and can be accessed via :func:`sys.exc_info`. -:func:`sys.exc_info` returns a 3-tuple consisting of the exception class, the -exception instance and a traceback object (see section :ref:`types`) identifying -the point in the program where the exception occurred. The details about the -exception accessed via :func:`sys.exc_info` are restored to their previous values -when leaving an exception handler:: - - >>> print(sys.exc_info()) - (None, None, None) +the exception is stored in the :mod:`sys` module, where it can be accessed +from within the body of the :keyword:`!except` clause by calling +:func:`sys.exception`. When leaving an exception handler, the exception +stored in the :mod:`sys` module is reset to its previous value:: + + >>> print(sys.exception()) + None >>> try: ... raise TypeError ... except: - ... print(sys.exc_info()) + ... print(repr(sys.exception())) ... try: ... raise ValueError ... except: - ... print(sys.exc_info()) - ... print(sys.exc_info()) + ... print(repr(sys.exception())) + ... print(repr(sys.exception())) ... - (, TypeError(), ) - (, ValueError(), ) - (, TypeError(), ) - >>> print(sys.exc_info()) - (None, None, None) + TypeError() + ValueError() + TypeError() + >>> print(sys.exception()) + None .. index:: diff --git a/Doc/reference/datamodel.rst b/Doc/reference/datamodel.rst index dce48a466301..e94b535901c5 100644 --- a/Doc/reference/datamodel.rst +++ b/Doc/reference/datamodel.rst @@ -1122,6 +1122,7 @@ Internal types single: exc_info (in module sys) single: last_traceback (in module sys) single: sys.exc_info + single: sys.exception single: sys.last_traceback Traceback objects represent a stack trace of an exception. 
A traceback object From webhook-mailer at python.org Tue Feb 21 06:58:56 2023 From: webhook-mailer at python.org (iritkatriel) Date: Tue, 21 Feb 2023 11:58:56 -0000 Subject: [Python-checkins] gh-101903: Remove obsolete undefs for previously removed macros Py_EnterRecursiveCall and Py_LeaveRecursiveCall (#101923) Message-ID: https://github.com/python/cpython/commit/7346a381be1bc5911924fcf298408e6909f3949f commit: 7346a381be1bc5911924fcf298408e6909f3949f branch: main author: Owain Davies <116417456+OTheDev at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-21T11:58:47Z summary: gh-101903: Remove obsolete undefs for previously removed macros Py_EnterRecursiveCall and Py_LeaveRecursiveCall (#101923) files: M Python/ceval.c diff --git a/Python/ceval.c b/Python/ceval.c index b85231a1df8a..001bdb15c0f7 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -3068,15 +3068,11 @@ maybe_dtrace_line(_PyInterpreterFrame *frame, /* Implement Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() as functions for the limited API. */ -#undef Py_EnterRecursiveCall - int Py_EnterRecursiveCall(const char *where) { return _Py_EnterRecursiveCall(where); } -#undef Py_LeaveRecursiveCall - void Py_LeaveRecursiveCall(void) { _Py_LeaveRecursiveCall(); From webhook-mailer at python.org Tue Feb 21 08:02:01 2023 From: webhook-mailer at python.org (pablogsal) Date: Tue, 21 Feb 2023 13:02:01 -0000 Subject: [Python-checkins] [3.11] gh-101967: add a missing error check (GH-101968) (#102015) Message-ID: https://github.com/python/cpython/commit/1633aea0e430f4c0d115b1ea5baac5daaecf81ff commit: 1633aea0e430f4c0d115b1ea5baac5daaecf81ff branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: pablogsal date: 2023-02-21T13:01:15Z summary: [3.11] gh-101967: add a missing error check (GH-101968) (#102015) gh-101967: add a missing error check (GH-101968) (cherry picked from commit 89413bbccb9261b72190e275eefe4b0d49671477) Co-authored-by: Eclips4 <80244920+Eclips4 at users.noreply.github.com> Co-authored-by: Shantanu <12621235+hauntsaninja at users.noreply.github.com> files: A Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst M Python/ceval.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst new file mode 100644 index 000000000000..6e681f910f53 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-16-23-19-01.gh-issue-101967.Kqr1dz.rst @@ -0,0 +1 @@ +Fix possible segfault in ``positional_only_passed_as_keyword`` function, when new list created. 
diff --git a/Python/ceval.c b/Python/ceval.c index 01ac2618df0d..96d215aba800 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -6006,7 +6006,9 @@ positional_only_passed_as_keyword(PyThreadState *tstate, PyCodeObject *co, { int posonly_conflicts = 0; PyObject* posonly_names = PyList_New(0); - + if (posonly_names == NULL) { + goto fail; + } for(int k=0; k < co->co_posonlyargcount; k++){ PyObject* posonly_name = PyTuple_GET_ITEM(co->co_localsplusnames, k); From webhook-mailer at python.org Tue Feb 21 10:11:39 2023 From: webhook-mailer at python.org (iritkatriel) Date: Tue, 21 Feb 2023 15:11:39 -0000 Subject: [Python-checkins] gh-102008: simplify test_except_star by using sys.exception() instead of sys.exc_info() (#102009) Message-ID: https://github.com/python/cpython/commit/c2b85a95a50687a2e5d1873e17266370876e77e9 commit: c2b85a95a50687a2e5d1873e17266370876e77e9 branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-21T15:11:31Z summary: gh-102008: simplify test_except_star by using sys.exception() instead of sys.exc_info() (#102009) files: M Lib/test/test_except_star.py diff --git a/Lib/test/test_except_star.py b/Lib/test/test_except_star.py index 9de72dbd5a32..c5167c5bba38 100644 --- a/Lib/test/test_except_star.py +++ b/Lib/test/test_except_star.py @@ -208,44 +208,38 @@ def assertMetadataNotEqual(self, e1, e2): class TestExceptStarSplitSemantics(ExceptStarTest): def doSplitTestNamed(self, exc, T, match_template, rest_template): - initial_exc_info = sys.exc_info() - exc_info = match = rest = None + initial_sys_exception = sys.exception() + sys_exception = match = rest = None try: try: raise exc except* T as e: - exc_info = sys.exc_info() + sys_exception = sys.exception() match = e except BaseException as e: rest = e - if match_template: - self.assertEqual(exc_info[1], match) - else: - self.assertIsNone(exc_info) + self.assertEqual(sys_exception, match) self.assertExceptionIsLike(match, match_template) self.assertExceptionIsLike(rest, rest_template) - self.assertEqual(sys.exc_info(), initial_exc_info) + self.assertEqual(sys.exception(), initial_sys_exception) def doSplitTestUnnamed(self, exc, T, match_template, rest_template): - initial_exc_info = sys.exc_info() - exc_info = match = rest = None + initial_sys_exception = sys.exception() + sys_exception = match = rest = None try: try: raise exc except* T: - exc_info = sys.exc_info() - match = sys.exc_info()[1] + sys_exception = match = sys.exception() else: if rest_template: self.fail("Exception not raised") except BaseException as e: rest = e self.assertExceptionIsLike(match, match_template) - if match_template: - self.assertEqual(exc_info[0], type(match_template)) self.assertExceptionIsLike(rest, rest_template) - self.assertEqual(sys.exc_info(), initial_exc_info) + self.assertEqual(sys.exception(), initial_sys_exception) def doSplitTestInExceptHandler(self, exc, T, match_template, rest_template): try: @@ -409,11 +403,11 @@ def test_multiple_matches_unnamed(self): try: raise ExceptionGroup("mmu", [OSError("os"), BlockingIOError("io")]) except* BlockingIOError: - e = sys.exc_info()[1] + e = sys.exception() self.assertExceptionIsLike(e, ExceptionGroup("mmu", [BlockingIOError("io")])) except* OSError: - e = sys.exc_info()[1] + e = sys.exception() self.assertExceptionIsLike(e, ExceptionGroup("mmu", [OSError("os")])) else: @@ -434,7 +428,7 @@ def test_first_match_wins_unnamed(self): try: raise ExceptionGroup("fstu", 
[BlockingIOError("io")]) except* OSError: - e = sys.exc_info()[1] + e = sys.exception() self.assertExceptionIsLike(e, ExceptionGroup("fstu", [BlockingIOError("io")])) except* BlockingIOError: @@ -452,7 +446,7 @@ def test_nested_except_stars(self): pass else: self.fail("Exception not raised") - e = sys.exc_info()[1] + e = sys.exception() self.assertExceptionIsLike(e, ExceptionGroup("n", [BlockingIOError("io")])) else: @@ -766,7 +760,7 @@ def test_raise_unnamed(self): try: raise orig except* OSError: - e = sys.exc_info()[1] + e = sys.exception() raise TypeError(3) from e except ExceptionGroup as e: exc = e @@ -821,7 +815,7 @@ def test_raise_handle_all_raise_one_unnamed(self): try: raise orig except* (TypeError, ValueError) as e: - e = sys.exc_info()[1] + e = sys.exception() raise SyntaxError(3) from e except ExceptionGroup as e: exc = e @@ -882,10 +876,10 @@ def test_raise_handle_all_raise_two_unnamed(self): try: raise orig except* TypeError: - e = sys.exc_info()[1] + e = sys.exception() raise SyntaxError(3) from e except* ValueError: - e = sys.exc_info()[1] + e = sys.exception() raise SyntaxError(4) from e except ExceptionGroup as e: exc = e @@ -982,7 +976,7 @@ def derive(self, excs): class TestExceptStarCleanup(ExceptStarTest): - def test_exc_info_restored(self): + def test_sys_exception_restored(self): try: try: raise ValueError(42) @@ -997,7 +991,7 @@ def test_exc_info_restored(self): self.assertExceptionIsLike(exc, ZeroDivisionError('division by zero')) self.assertExceptionIsLike(exc.__context__, ValueError(42)) - self.assertEqual(sys.exc_info(), (None, None, None)) + self.assertEqual(sys.exception(), None) class TestExceptStar_WeirdLeafExceptions(ExceptStarTest): From webhook-mailer at python.org Tue Feb 21 10:14:55 2023 From: webhook-mailer at python.org (corona10) Date: Tue, 21 Feb 2023 15:14:55 -0000 Subject: [Python-checkins] gh-101936: Update the default value of fp from io.StringIO to io.BytesIO (gh-102100) Message-ID: https://github.com/python/cpython/commit/0d4c7fcd4f078708a5ac6499af378ce5ee8eb211 commit: 0d4c7fcd4f078708a5ac6499af378ce5ee8eb211 branch: main author: Vo Hoang Long committer: corona10 date: 2023-02-22T00:14:41+09:00 summary: gh-101936: Update the default value of fp from io.StringIO to io.BytesIO (gh-102100) Co-authored-by: Long Vo files: A Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst M Lib/test/test_urllib2.py M Lib/urllib/error.py M Misc/ACKS diff --git a/Lib/test/test_urllib2.py b/Lib/test/test_urllib2.py index 498c0382d213..633d596ac3de 100644 --- a/Lib/test/test_urllib2.py +++ b/Lib/test/test_urllib2.py @@ -1827,6 +1827,7 @@ def test_HTTPError_interface(self): def test_gh_98778(self): x = urllib.error.HTTPError("url", 405, "METHOD NOT ALLOWED", None, None) self.assertEqual(getattr(x, "__notes__", ()), ()) + self.assertIsInstance(x.fp.read(), bytes) def test_parse_proxy(self): parse_proxy_test_cases = [ diff --git a/Lib/urllib/error.py b/Lib/urllib/error.py index feec0e7f848e..a9cd1ecadd63 100644 --- a/Lib/urllib/error.py +++ b/Lib/urllib/error.py @@ -43,7 +43,7 @@ def __init__(self, url, code, msg, hdrs, fp): self.fp = fp self.filename = url if fp is None: - fp = io.StringIO() + fp = io.BytesIO() self.__super_init(fp, hdrs, url, code) def __str__(self): diff --git a/Misc/ACKS b/Misc/ACKS index a9d4e453c7b5..ab19d69b0133 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1898,6 +1898,7 @@ Kurt Vile Norman Vine Pauli Virtanen Frank Visser +Long Vo Johannes Vogel Michael Vogt Radu Voicilas diff --git 
a/Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst b/Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst new file mode 100644 index 000000000000..55841da44b11 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst @@ -0,0 +1,2 @@ +The default value of ``fp`` becomes :class:`io.BytesIO` if :exc:`~urllib.error.HTTPError` +is initialized without a designated ``fp`` parameter. Patch by Long Vo. From webhook-mailer at python.org Tue Feb 21 10:38:41 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 21 Feb 2023 15:38:41 -0000 Subject: [Python-checkins] gh-102008: simplify test_except_star by using sys.exception() instead of sys.exc_info() (GH-102009) Message-ID: https://github.com/python/cpython/commit/3bfa608cbe0d740be7628f8155b73d6f02eec797 commit: 3bfa608cbe0d740be7628f8155b73d6f02eec797 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-21T07:38:27-08:00 summary: gh-102008: simplify test_except_star by using sys.exception() instead of sys.exc_info() (GH-102009) (cherry picked from commit c2b85a95a50687a2e5d1873e17266370876e77e9) Co-authored-by: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> files: M Lib/test/test_except_star.py diff --git a/Lib/test/test_except_star.py b/Lib/test/test_except_star.py index 9de72dbd5a32..c5167c5bba38 100644 --- a/Lib/test/test_except_star.py +++ b/Lib/test/test_except_star.py @@ -208,44 +208,38 @@ def assertMetadataNotEqual(self, e1, e2): class TestExceptStarSplitSemantics(ExceptStarTest): def doSplitTestNamed(self, exc, T, match_template, rest_template): - initial_exc_info = sys.exc_info() - exc_info = match = rest = None + initial_sys_exception = sys.exception() + sys_exception = match = rest = None try: try: raise exc except* T as e: - exc_info = sys.exc_info() + sys_exception = sys.exception() match = e except BaseException as e: rest = e - if match_template: - self.assertEqual(exc_info[1], match) - else: - self.assertIsNone(exc_info) + self.assertEqual(sys_exception, match) self.assertExceptionIsLike(match, match_template) self.assertExceptionIsLike(rest, rest_template) - self.assertEqual(sys.exc_info(), initial_exc_info) + self.assertEqual(sys.exception(), initial_sys_exception) def doSplitTestUnnamed(self, exc, T, match_template, rest_template): - initial_exc_info = sys.exc_info() - exc_info = match = rest = None + initial_sys_exception = sys.exception() + sys_exception = match = rest = None try: try: raise exc except* T: - exc_info = sys.exc_info() - match = sys.exc_info()[1] + sys_exception = match = sys.exception() else: if rest_template: self.fail("Exception not raised") except BaseException as e: rest = e self.assertExceptionIsLike(match, match_template) - if match_template: - self.assertEqual(exc_info[0], type(match_template)) self.assertExceptionIsLike(rest, rest_template) - self.assertEqual(sys.exc_info(), initial_exc_info) + self.assertEqual(sys.exception(), initial_sys_exception) def doSplitTestInExceptHandler(self, exc, T, match_template, rest_template): try: @@ -409,11 +403,11 @@ def test_multiple_matches_unnamed(self): try: raise ExceptionGroup("mmu", [OSError("os"), BlockingIOError("io")]) except* BlockingIOError: - e = sys.exc_info()[1] + e = sys.exception() self.assertExceptionIsLike(e, ExceptionGroup("mmu", [BlockingIOError("io")])) except* OSError: - e = sys.exc_info()[1] + e = 
sys.exception() self.assertExceptionIsLike(e, ExceptionGroup("mmu", [OSError("os")])) else: @@ -434,7 +428,7 @@ def test_first_match_wins_unnamed(self): try: raise ExceptionGroup("fstu", [BlockingIOError("io")]) except* OSError: - e = sys.exc_info()[1] + e = sys.exception() self.assertExceptionIsLike(e, ExceptionGroup("fstu", [BlockingIOError("io")])) except* BlockingIOError: @@ -452,7 +446,7 @@ def test_nested_except_stars(self): pass else: self.fail("Exception not raised") - e = sys.exc_info()[1] + e = sys.exception() self.assertExceptionIsLike(e, ExceptionGroup("n", [BlockingIOError("io")])) else: @@ -766,7 +760,7 @@ def test_raise_unnamed(self): try: raise orig except* OSError: - e = sys.exc_info()[1] + e = sys.exception() raise TypeError(3) from e except ExceptionGroup as e: exc = e @@ -821,7 +815,7 @@ def test_raise_handle_all_raise_one_unnamed(self): try: raise orig except* (TypeError, ValueError) as e: - e = sys.exc_info()[1] + e = sys.exception() raise SyntaxError(3) from e except ExceptionGroup as e: exc = e @@ -882,10 +876,10 @@ def test_raise_handle_all_raise_two_unnamed(self): try: raise orig except* TypeError: - e = sys.exc_info()[1] + e = sys.exception() raise SyntaxError(3) from e except* ValueError: - e = sys.exc_info()[1] + e = sys.exception() raise SyntaxError(4) from e except ExceptionGroup as e: exc = e @@ -982,7 +976,7 @@ def derive(self, excs): class TestExceptStarCleanup(ExceptStarTest): - def test_exc_info_restored(self): + def test_sys_exception_restored(self): try: try: raise ValueError(42) @@ -997,7 +991,7 @@ def test_exc_info_restored(self): self.assertExceptionIsLike(exc, ZeroDivisionError('division by zero')) self.assertExceptionIsLike(exc.__context__, ValueError(42)) - self.assertEqual(sys.exc_info(), (None, None, None)) + self.assertEqual(sys.exception(), None) class TestExceptStar_WeirdLeafExceptions(ExceptStarTest): From webhook-mailer at python.org Tue Feb 21 11:33:50 2023 From: webhook-mailer at python.org (ambv) Date: Tue, 21 Feb 2023 16:33:50 -0000 Subject: [Python-checkins] =?utf-8?q?=5B3=2E9=5D_gh-101981=3A_Fix_Ubuntu_?= =?utf-8?q?SSL_tests_with_OpenSSL_=283=2E1=2E0-beta1=29_CI_i=E2=80=A6_=28?= =?utf-8?q?=23102094=29?= Message-ID: https://github.com/python/cpython/commit/c25b484e82cace132a5f3c89b69888eaf7b1d40e commit: c25b484e82cace132a5f3c89b69888eaf7b1d40e branch: 3.9 author: Dong-hee Na committer: ambv date: 2023-02-21T17:33:23+01:00 summary: [3.9] gh-101981: Fix Ubuntu SSL tests with OpenSSL (3.1.0-beta1) CI i? 
(#102094) [3.9] gh-101981: Fix Ubuntu SSL tests with OpenSSL (3.1.0-beta1) CI issue (gh-102079) files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 274a1df4384a..5a9659cede69 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -224,7 +224,7 @@ jobs: echo "LD_LIBRARY_PATH=${GITHUB_WORKSPACE}/multissl/openssl/${OPENSSL_VER}/lib" >> $GITHUB_ENV - name: 'Restore OpenSSL build' id: cache-openssl - uses: actions/cache at v3.0.2 + uses: actions/cache at v3 with: path: ./multissl/openssl/${{ env.OPENSSL_VER }} key: ${{ runner.os }}-multissl-openssl-${{ env.OPENSSL_VER }} @@ -235,7 +235,7 @@ jobs: run: | echo "PATH=/usr/lib/ccache:$PATH" >> $GITHUB_ENV - name: Configure ccache action - uses: hendrikmuhs/ccache-action at v1 + uses: hendrikmuhs/ccache-action at v1.2 - name: Configure CPython run: ./configure --with-pydebug --with-openssl=$OPENSSL_DIR - name: Build CPython From webhook-mailer at python.org Tue Feb 21 11:33:51 2023 From: webhook-mailer at python.org (ambv) Date: Tue, 21 Feb 2023 16:33:51 -0000 Subject: [Python-checkins] =?utf-8?q?=5B3=2E8=5D_gh-101981=3A_Fix_Ubuntu_?= =?utf-8?q?SSL_tests_with_OpenSSL_=283=2E1=2E0-beta1=29_CI_i=E2=80=A6_=28?= =?utf-8?q?=23102095=29?= Message-ID: https://github.com/python/cpython/commit/7a3db0cf514f7a3bfb24d5e441cb62d785645b01 commit: 7a3db0cf514f7a3bfb24d5e441cb62d785645b01 branch: 3.8 author: Dong-hee Na committer: ambv date: 2023-02-21T17:33:12+01:00 summary: [3.8] gh-101981: Fix Ubuntu SSL tests with OpenSSL (3.1.0-beta1) CI i? (#102095) [3.8] gh-101981: Fix Ubuntu SSL tests with OpenSSL (3.1.0-beta1) CI issue (gh-102079) files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 5c8f6f27b7d8..e019e8992887 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -201,7 +201,7 @@ jobs: echo "LD_LIBRARY_PATH=${GITHUB_WORKSPACE}/multissl/openssl/${OPENSSL_VER}/lib" >> $GITHUB_ENV - name: 'Restore OpenSSL build' id: cache-openssl - uses: actions/cache at v2.1.4 + uses: actions/cache at v3 with: path: ./multissl/openssl/${{ env.OPENSSL_VER }} key: ${{ runner.os }}-multissl-openssl-${{ env.OPENSSL_VER }} @@ -212,7 +212,7 @@ jobs: run: | echo "PATH=/usr/lib/ccache:$PATH" >> $GITHUB_ENV - name: Configure ccache action - uses: hendrikmuhs/ccache-action at v1 + uses: hendrikmuhs/ccache-action at v1.2 - name: Configure CPython run: ./configure --with-pydebug --with-openssl=$OPENSSL_DIR - name: Build CPython From webhook-mailer at python.org Tue Feb 21 12:39:35 2023 From: webhook-mailer at python.org (corona10) Date: Tue, 21 Feb 2023 17:39:35 -0000 Subject: [Python-checkins] gh-95672 fix typo SkitTest to SkipTest (gh-102119) Message-ID: https://github.com/python/cpython/commit/d5c7954d0c3ff874d2d27d33dcc207bb7356f328 commit: d5c7954d0c3ff874d2d27d33dcc207bb7356f328 branch: main author: Hyunkyun Moon committer: corona10 date: 2023-02-22T02:39:00+09:00 summary: gh-95672 fix typo SkitTest to SkipTest (gh-102119) Co-authored-by: HyunKyun Moon files: M Lib/test/test_fcntl.py M Lib/test/test_subprocess.py M Misc/ACKS diff --git a/Lib/test/test_fcntl.py b/Lib/test/test_fcntl.py index 113c7802821d..26de67d92427 100644 --- a/Lib/test/test_fcntl.py +++ b/Lib/test/test_fcntl.py @@ -202,7 +202,7 @@ def test_fcntl_f_pipesize(self): pipesize_default = fcntl.fcntl(test_pipe_w, fcntl.F_GETPIPE_SZ) pipesize = pipesize_default // 2 # A new value to detect change. 
if pipesize < 512: # the POSIX minimum - raise unittest.SkitTest( + raise unittest.SkipTest( 'default pipesize too small to perform test.') fcntl.fcntl(test_pipe_w, fcntl.F_SETPIPE_SZ, pipesize) self.assertEqual(fcntl.fcntl(test_pipe_w, fcntl.F_GETPIPE_SZ), diff --git a/Lib/test/test_subprocess.py b/Lib/test/test_subprocess.py index abd0dd8b2569..727b0e6dc578 100644 --- a/Lib/test/test_subprocess.py +++ b/Lib/test/test_subprocess.py @@ -718,7 +718,7 @@ def test_pipesizes(self): os.close(test_pipe_w) pipesize = pipesize_default // 2 if pipesize < 512: # the POSIX minimum - raise unittest.SkitTest( + raise unittest.SkipTest( 'default pipesize too small to perform test.') p = subprocess.Popen( [sys.executable, "-c", diff --git a/Misc/ACKS b/Misc/ACKS index ab19d69b0133..33dbf4e989d9 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1229,6 +1229,7 @@ The Dragon De Monsyne Bastien Montagne Skip Montanaro Peter Moody +HyunKyun Moon Alan D. Moore Nicolai Moore Paul Moore From webhook-mailer at python.org Tue Feb 21 20:21:31 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 22 Feb 2023 01:21:31 -0000 Subject: [Python-checkins] gh-99942: python.pc on android/cygwin should link to libpython per configure.ac (GH-100356) Message-ID: https://github.com/python/cpython/commit/3ba7743b0696202e9caa283a0be253fd26a5cfbd commit: 3ba7743b0696202e9caa283a0be253fd26a5cfbd branch: main author: Eli Schwartz committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-21T17:21:24-08:00 summary: gh-99942: python.pc on android/cygwin should link to libpython per configure.ac (GH-100356) In commit 254b309c801f82509597e3d7d4be56885ef94c11 a previous change to avoid linking to libpython was partially reverted for Android (and later Cygwin as well), to add back the link flags. This was applied to distutils and to python-config.sh, but not to python.pc. Add it back to python.pc as well. Automerge-Triggered-By: GH:gpshead files: A Misc/NEWS.d/next/Build/2022-12-20-01-06-17.gh-issue-99942.lbmzYj.rst M Misc/python.pc.in diff --git a/Misc/NEWS.d/next/Build/2022-12-20-01-06-17.gh-issue-99942.lbmzYj.rst b/Misc/NEWS.d/next/Build/2022-12-20-01-06-17.gh-issue-99942.lbmzYj.rst new file mode 100644 index 000000000000..63a640a9cdf7 --- /dev/null +++ b/Misc/NEWS.d/next/Build/2022-12-20-01-06-17.gh-issue-99942.lbmzYj.rst @@ -0,0 +1,2 @@ +On Android, python.pc now correctly reports the library to link to, the same +as python-config.sh. 
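The python.pc.in hunk below is what actually restores the link flags. The practical effect on Android and Cygwin is that pkg-config --libs for the python .pc file again reports -L<libdir> plus -lpython3.x (the exact .pc file name and library suffix depend on the installed version), so an extension module built against those flags links to libpython as configure.ac intends. A minimal extension built that way might look like the following sketch; the module and function names are illustrative, not part of any real project.

    /* spam.c - minimal extension module sketch.
     * On most platforms an extension links without -lpython, but on
     * Android and Cygwin the shared module must link against libpython;
     * with this change the Libs line of python.pc supplies that flag,
     * matching python-config.sh. */
    #include <Python.h>

    static PyObject *
    spam_ping(PyObject *self, PyObject *args)
    {
        return PyUnicode_FromString("pong");
    }

    static PyMethodDef spam_methods[] = {
        {"ping", spam_ping, METH_NOARGS, "Return 'pong'."},
        {NULL, NULL, 0, NULL}
    };

    static struct PyModuleDef spam_module = {
        PyModuleDef_HEAD_INIT, "spam", NULL, -1, spam_methods
    };

    PyMODINIT_FUNC
    PyInit_spam(void)
    {
        return PyModule_Create(&spam_module);
    }

A typical build on such a platform would be roughly cc -shared -fPIC spam.c $(pkg-config --cflags --libs python-3.12) -o spam.so; this invocation is an assumption for illustration, not a documented build recipe.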
diff --git a/Misc/python.pc.in b/Misc/python.pc.in index 87e04decc2a2..027dba38585a 100644 --- a/Misc/python.pc.in +++ b/Misc/python.pc.in @@ -9,5 +9,5 @@ Description: Build a C extension for Python Requires: Version: @VERSION@ Libs.private: @LIBS@ -Libs: +Libs: -L${libdir} @LIBPYTHON@ Cflags: -I${includedir}/python at VERSION@@ABIFLAGS@ From webhook-mailer at python.org Tue Feb 21 20:34:58 2023 From: webhook-mailer at python.org (corona10) Date: Wed, 22 Feb 2023 01:34:58 -0000 Subject: [Python-checkins] [3.11] gh-95672 fix typo SkitTest to SkipTest (gh-102119) (#102121) Message-ID: https://github.com/python/cpython/commit/5d47150eacbca622b351d48870d21381f74c57c5 commit: 5d47150eacbca622b351d48870d21381f74c57c5 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: corona10 date: 2023-02-22T10:34:51+09:00 summary: [3.11] gh-95672 fix typo SkitTest to SkipTest (gh-102119) (#102121) gh-95672 fix typo SkitTest to SkipTest (gh-102119) (cherry picked from commit d5c7954d0c3ff874d2d27d33dcc207bb7356f328) Co-authored-by: HyunKyun Moon files: M Lib/test/test_fcntl.py M Lib/test/test_subprocess.py M Misc/ACKS diff --git a/Lib/test/test_fcntl.py b/Lib/test/test_fcntl.py index fc8c39365f12..8e98256a62c6 100644 --- a/Lib/test/test_fcntl.py +++ b/Lib/test/test_fcntl.py @@ -200,7 +200,7 @@ def test_fcntl_f_pipesize(self): pipesize_default = fcntl.fcntl(test_pipe_w, fcntl.F_GETPIPE_SZ) pipesize = pipesize_default // 2 # A new value to detect change. if pipesize < 512: # the POSIX minimum - raise unittest.SkitTest( + raise unittest.SkipTest( 'default pipesize too small to perform test.') fcntl.fcntl(test_pipe_w, fcntl.F_SETPIPE_SZ, pipesize) self.assertEqual(fcntl.fcntl(test_pipe_w, fcntl.F_GETPIPE_SZ), diff --git a/Lib/test/test_subprocess.py b/Lib/test/test_subprocess.py index abd0dd8b2569..727b0e6dc578 100644 --- a/Lib/test/test_subprocess.py +++ b/Lib/test/test_subprocess.py @@ -718,7 +718,7 @@ def test_pipesizes(self): os.close(test_pipe_w) pipesize = pipesize_default // 2 if pipesize < 512: # the POSIX minimum - raise unittest.SkitTest( + raise unittest.SkipTest( 'default pipesize too small to perform test.') p = subprocess.Popen( [sys.executable, "-c", diff --git a/Misc/ACKS b/Misc/ACKS index 523b9e8380d5..cf6657f9559a 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1220,6 +1220,7 @@ The Dragon De Monsyne Bastien Montagne Skip Montanaro Peter Moody +HyunKyun Moon Alan D. 
Moore Nicolai Moore Paul Moore From webhook-mailer at python.org Tue Feb 21 20:36:07 2023 From: webhook-mailer at python.org (corona10) Date: Wed, 22 Feb 2023 01:36:07 -0000 Subject: [Python-checkins] [3.10] gh-95672 fix typo SkitTest to SkipTest (gh-102119) (gh-102122) Message-ID: https://github.com/python/cpython/commit/bac3fe76dfadca20c5d5a435b268fe4f799981bf commit: bac3fe76dfadca20c5d5a435b268fe4f799981bf branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: corona10 date: 2023-02-22T10:36:00+09:00 summary: [3.10] gh-95672 fix typo SkitTest to SkipTest (gh-102119) (gh-102122) gh-95672 fix typo SkitTest to SkipTest (gh-102119) (cherry picked from commit d5c7954d0c3ff874d2d27d33dcc207bb7356f328) Co-authored-by: HyunKyun Moon files: M Lib/test/test_fcntl.py M Lib/test/test_subprocess.py M Misc/ACKS diff --git a/Lib/test/test_fcntl.py b/Lib/test/test_fcntl.py index fc8c39365f12..8e98256a62c6 100644 --- a/Lib/test/test_fcntl.py +++ b/Lib/test/test_fcntl.py @@ -200,7 +200,7 @@ def test_fcntl_f_pipesize(self): pipesize_default = fcntl.fcntl(test_pipe_w, fcntl.F_GETPIPE_SZ) pipesize = pipesize_default // 2 # A new value to detect change. if pipesize < 512: # the POSIX minimum - raise unittest.SkitTest( + raise unittest.SkipTest( 'default pipesize too small to perform test.') fcntl.fcntl(test_pipe_w, fcntl.F_SETPIPE_SZ, pipesize) self.assertEqual(fcntl.fcntl(test_pipe_w, fcntl.F_GETPIPE_SZ), diff --git a/Lib/test/test_subprocess.py b/Lib/test/test_subprocess.py index ea02a9ecb67a..52540acf7e6d 100644 --- a/Lib/test/test_subprocess.py +++ b/Lib/test/test_subprocess.py @@ -707,7 +707,7 @@ def test_pipesizes(self): os.close(test_pipe_w) pipesize = pipesize_default // 2 if pipesize < 512: # the POSIX minimum - raise unittest.SkitTest( + raise unittest.SkipTest( 'default pipesize too small to perform test.') p = subprocess.Popen( [sys.executable, "-c", diff --git a/Misc/ACKS b/Misc/ACKS index b9aa307ae087..c96c4876e6cb 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1209,6 +1209,7 @@ The Dragon De Monsyne Bastien Montagne Skip Montanaro Peter Moody +HyunKyun Moon Alan D. 
Moore Nicolai Moore Paul Moore From webhook-mailer at python.org Wed Feb 22 05:11:48 2023 From: webhook-mailer at python.org (hugovk) Date: Wed, 22 Feb 2023 10:11:48 -0000 Subject: [Python-checkins] gh-102135: Update turtle docs to rename wikipedia demo to rosette (#102137) Message-ID: https://github.com/python/cpython/commit/8d46c7ed5e83e22d55fe4f4e6e873d87f340c1dc commit: 8d46c7ed5e83e22d55fe4f4e6e873d87f340c1dc branch: main author: somebody <98094921+UndoneStudios at users.noreply.github.com> committer: hugovk date: 2023-02-22T12:11:30+02:00 summary: gh-102135: Update turtle docs to rename wikipedia demo to rosette (#102137) files: M Doc/library/turtle.rst diff --git a/Doc/library/turtle.rst b/Doc/library/turtle.rst index 5add61c759ea..b7eb569c18a6 100644 --- a/Doc/library/turtle.rst +++ b/Doc/library/turtle.rst @@ -2444,6 +2444,9 @@ The demo scripts are: | planet_and_moon| simulation of | compound shapes, | | | gravitational system | :class:`Vec2D` | +----------------+------------------------------+-----------------------+ +| rosette | a pattern from the wikipedia | :func:`clone`, | +| | article on turtle graphics | :func:`undo` | ++----------------+------------------------------+-----------------------+ | round_dance | dancing turtles rotating | compound shapes, clone| | | pairwise in opposite | shapesize, tilt, | | | direction | get_shapepoly, update | @@ -2457,9 +2460,6 @@ The demo scripts are: | two_canvases | simple design | turtles on two | | | | canvases | +----------------+------------------------------+-----------------------+ -| wikipedia | a pattern from the wikipedia | :func:`clone`, | -| | article on turtle graphics | :func:`undo` | -+----------------+------------------------------+-----------------------+ | yinyang | another elementary example | :func:`circle` | +----------------+------------------------------+-----------------------+ From webhook-mailer at python.org Wed Feb 22 05:24:27 2023 From: webhook-mailer at python.org (hugovk) Date: Wed, 22 Feb 2023 10:24:27 -0000 Subject: [Python-checkins] [3.10] gh-102135: Update turtle docs to rename wikipedia demo to rosette (GH-102137) (#102139) Message-ID: https://github.com/python/cpython/commit/3e9fb1a8390f84a3db56a398eb289fec487bc7da commit: 3e9fb1a8390f84a3db56a398eb289fec487bc7da branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: hugovk date: 2023-02-22T12:24:18+02:00 summary: [3.10] gh-102135: Update turtle docs to rename wikipedia demo to rosette (GH-102137) (#102139) Co-authored-by: somebody <98094921+UndoneStudios at users.noreply.github.com> files: M Doc/library/turtle.rst diff --git a/Doc/library/turtle.rst b/Doc/library/turtle.rst index 228cf1e01e74..d48db6c6e753 100644 --- a/Doc/library/turtle.rst +++ b/Doc/library/turtle.rst @@ -2444,6 +2444,9 @@ The demo scripts are: | planet_and_moon| simulation of | compound shapes, | | | gravitational system | :class:`Vec2D` | +----------------+------------------------------+-----------------------+ +| rosette | a pattern from the wikipedia | :func:`clone`, | +| | article on turtle graphics | :func:`undo` | ++----------------+------------------------------+-----------------------+ | round_dance | dancing turtles rotating | compound shapes, clone| | | pairwise in opposite | shapesize, tilt, | | | direction | get_shapepoly, update | @@ -2457,9 +2460,6 @@ The demo scripts are: | two_canvases | simple design | turtles on two | | | | canvases | +----------------+------------------------------+-----------------------+ -| 
wikipedia | a pattern from the wikipedia | :func:`clone`, | -| | article on turtle graphics | :func:`undo` | -+----------------+------------------------------+-----------------------+ | yinyang | another elementary example | :func:`circle` | +----------------+------------------------------+-----------------------+ From webhook-mailer at python.org Wed Feb 22 05:25:25 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 22 Feb 2023 10:25:25 -0000 Subject: [Python-checkins] [3.11] gh-102135: Update turtle docs to rename wikipedia demo to rosette (GH-102137) (GH-102138) Message-ID: https://github.com/python/cpython/commit/528e91c067610eb9f2af240248dde70969982b04 commit: 528e91c067610eb9f2af240248dde70969982b04 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-22T02:25:17-08:00 summary: [3.11] gh-102135: Update turtle docs to rename wikipedia demo to rosette (GH-102137) (GH-102138) (cherry picked from commit 8d46c7ed5e83e22d55fe4f4e6e873d87f340c1dc) Co-authored-by: somebody <98094921+UndoneStudios at users.noreply.github.com> Automerge-Triggered-By: GH:hugovk files: M Doc/library/turtle.rst diff --git a/Doc/library/turtle.rst b/Doc/library/turtle.rst index 17bf8829a9fe..62acfff5a793 100644 --- a/Doc/library/turtle.rst +++ b/Doc/library/turtle.rst @@ -2444,6 +2444,9 @@ The demo scripts are: | planet_and_moon| simulation of | compound shapes, | | | gravitational system | :class:`Vec2D` | +----------------+------------------------------+-----------------------+ +| rosette | a pattern from the wikipedia | :func:`clone`, | +| | article on turtle graphics | :func:`undo` | ++----------------+------------------------------+-----------------------+ | round_dance | dancing turtles rotating | compound shapes, clone| | | pairwise in opposite | shapesize, tilt, | | | direction | get_shapepoly, update | @@ -2457,9 +2460,6 @@ The demo scripts are: | two_canvases | simple design | turtles on two | | | | canvases | +----------------+------------------------------+-----------------------+ -| wikipedia | a pattern from the wikipedia | :func:`clone`, | -| | article on turtle graphics | :func:`undo` | -+----------------+------------------------------+-----------------------+ | yinyang | another elementary example | :func:`circle` | +----------------+------------------------------+-----------------------+ From webhook-mailer at python.org Wed Feb 22 06:12:17 2023 From: webhook-mailer at python.org (markshannon) Date: Wed, 22 Feb 2023 11:12:17 -0000 Subject: [Python-checkins] GH-100982: Restrict `FOR_ITER_RANGE` to a single instruction to allow instrumentation. (GH-101985) Message-ID: https://github.com/python/cpython/commit/7c106a443f8cf1111947a425eed11ecf9e615ce3 commit: 7c106a443f8cf1111947a425eed11ecf9e615ce3 branch: main author: Mark Shannon committer: markshannon date: 2023-02-22T11:11:57Z summary: GH-100982: Restrict `FOR_ITER_RANGE` to a single instruction to allow instrumentation. 
(GH-101985) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-17-10-12-13.gh-issue-100982.mJGJQw.rst M Python/bytecodes.c M Python/generated_cases.c.h M Python/specialize.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-17-10-12-13.gh-issue-100982.mJGJQw.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-17-10-12-13.gh-issue-100982.mJGJQw.rst new file mode 100644 index 000000000000..53bbc860c53f --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-17-10-12-13.gh-issue-100982.mJGJQw.rst @@ -0,0 +1,2 @@ +Restrict the scope of the :opcode:`FOR_ITER_RANGE` instruction to the scope of the +original :opcode:`FOR_ITER` instruction, to allow instrumentation. diff --git a/Python/bytecodes.c b/Python/bytecodes.c index c5959f2f994f..ad68c794fe7a 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -2178,35 +2178,27 @@ dummy_func( // Common case: no jump, leave it to the code generator } - // This is slightly different, when the loop isn't terminated we - // jump over the immediately following STORE_FAST instruction. - inst(FOR_ITER_RANGE, (unused/1, iter -- iter, unused)) { + inst(FOR_ITER_RANGE, (unused/1, iter -- iter, next)) { assert(cframe.use_tracing == 0); _PyRangeIterObject *r = (_PyRangeIterObject *)iter; DEOPT_IF(Py_TYPE(r) != &PyRangeIter_Type, FOR_ITER); STAT_INC(FOR_ITER, hit); - _Py_CODEUNIT next = next_instr[INLINE_CACHE_ENTRIES_FOR_ITER]; - assert(_PyOpcode_Deopt[next.op.code] == STORE_FAST); if (r->len <= 0) { STACK_SHRINK(1); Py_DECREF(r); // Jump over END_FOR instruction. JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg + 1); + DISPATCH(); } - else { - long value = r->start; - r->start = value + r->step; - r->len--; - if (_PyLong_AssignValue(&GETLOCAL(next.op.arg), value) < 0) { - goto error; - } - // The STORE_FAST is already done. - JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + 1); + long value = r->start; + r->start = value + r->step; + r->len--; + next = PyLong_FromLong(value); + if (next == NULL) { + goto error; } - DISPATCH(); } - // This is *not* a super-instruction, unique in the family. inst(FOR_ITER_GEN, (unused/1, iter -- iter, unused)) { assert(cframe.use_tracing == 0); PyGenObject *gen = (PyGenObject *)iter; diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 487e63d855d1..2987adc3bba5 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -2756,28 +2756,28 @@ TARGET(FOR_ITER_RANGE) { PyObject *iter = PEEK(1); + PyObject *next; assert(cframe.use_tracing == 0); _PyRangeIterObject *r = (_PyRangeIterObject *)iter; DEOPT_IF(Py_TYPE(r) != &PyRangeIter_Type, FOR_ITER); STAT_INC(FOR_ITER, hit); - _Py_CODEUNIT next = next_instr[INLINE_CACHE_ENTRIES_FOR_ITER]; - assert(_PyOpcode_Deopt[next.op.code] == STORE_FAST); if (r->len <= 0) { STACK_SHRINK(1); Py_DECREF(r); // Jump over END_FOR instruction. JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + oparg + 1); + DISPATCH(); } - else { - long value = r->start; - r->start = value + r->step; - r->len--; - if (_PyLong_AssignValue(&GETLOCAL(next.op.arg), value) < 0) { - goto error; - } - // The STORE_FAST is already done. 
- JUMPBY(INLINE_CACHE_ENTRIES_FOR_ITER + 1); + long value = r->start; + r->start = value + r->step; + r->len--; + next = PyLong_FromLong(value); + if (next == NULL) { + goto error; } + STACK_GROW(1); + POKE(1, next); + JUMPBY(1); DISPATCH(); } diff --git a/Python/specialize.c b/Python/specialize.c index c9555f8ad4dc..3405d2b0ab06 100644 --- a/Python/specialize.c +++ b/Python/specialize.c @@ -2155,8 +2155,6 @@ _Py_Specialize_ForIter(PyObject *iter, _Py_CODEUNIT *instr, int oparg) assert(_PyOpcode_Caches[FOR_ITER] == INLINE_CACHE_ENTRIES_FOR_ITER); _PyForIterCache *cache = (_PyForIterCache *)(instr + 1); PyTypeObject *tp = Py_TYPE(iter); - _Py_CODEUNIT next = instr[1+INLINE_CACHE_ENTRIES_FOR_ITER]; - int next_op = _PyOpcode_Deopt[next.op.code]; if (tp == &PyListIter_Type) { instr->op.code = FOR_ITER_LIST; goto success; @@ -2165,7 +2163,7 @@ _Py_Specialize_ForIter(PyObject *iter, _Py_CODEUNIT *instr, int oparg) instr->op.code = FOR_ITER_TUPLE; goto success; } - else if (tp == &PyRangeIter_Type && next_op == STORE_FAST) { + else if (tp == &PyRangeIter_Type) { instr->op.code = FOR_ITER_RANGE; goto success; } From webhook-mailer at python.org Wed Feb 22 06:42:34 2023 From: webhook-mailer at python.org (corona10) Date: Wed, 22 Feb 2023 11:42:34 -0000 Subject: [Python-checkins] [3.11] gh-101936: Update the default value of fp from io.StringIO to io.BytesIO (gh-102100) (gh-102117) Message-ID: https://github.com/python/cpython/commit/edbde8feb7496e0f7898a49b80fb703c2d6012a8 commit: edbde8feb7496e0f7898a49b80fb703c2d6012a8 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: corona10 date: 2023-02-22T20:42:04+09:00 summary: [3.11] gh-101936: Update the default value of fp from io.StringIO to io.BytesIO (gh-102100) (gh-102117) gh-101936: Update the default value of fp from io.StringIO to io.BytesIO (gh-102100) (cherry picked from commit 0d4c7fcd4f078708a5ac6499af378ce5ee8eb211) Co-authored-by: Long Vo files: A Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst M Lib/test/test_urllib2.py M Lib/urllib/error.py M Misc/ACKS diff --git a/Lib/test/test_urllib2.py b/Lib/test/test_urllib2.py index db92ba56c34f..230b937645b5 100644 --- a/Lib/test/test_urllib2.py +++ b/Lib/test/test_urllib2.py @@ -1828,6 +1828,7 @@ def test_HTTPError_interface(self): def test_gh_98778(self): x = urllib.error.HTTPError("url", 405, "METHOD NOT ALLOWED", None, None) self.assertEqual(getattr(x, "__notes__", ()), ()) + self.assertIsInstance(x.fp.read(), bytes) def test_parse_proxy(self): parse_proxy_test_cases = [ diff --git a/Lib/urllib/error.py b/Lib/urllib/error.py index feec0e7f848e..a9cd1ecadd63 100644 --- a/Lib/urllib/error.py +++ b/Lib/urllib/error.py @@ -43,7 +43,7 @@ def __init__(self, url, code, msg, hdrs, fp): self.fp = fp self.filename = url if fp is None: - fp = io.StringIO() + fp = io.BytesIO() self.__super_init(fp, hdrs, url, code) def __str__(self): diff --git a/Misc/ACKS b/Misc/ACKS index cf6657f9559a..95e0589076c6 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1882,6 +1882,7 @@ Kurt Vile Norman Vine Pauli Virtanen Frank Visser +Long Vo Johannes Vogel Michael Vogt Radu Voicilas diff --git a/Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst b/Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst new file mode 100644 index 000000000000..55841da44b11 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst @@ -0,0 +1,2 @@ +The default value of ``fp`` becomes 
:class:`io.BytesIO` if :exc:`~urllib.error.HTTPError` +is initialized without a designated ``fp`` parameter. Patch by Long Vo. From webhook-mailer at python.org Wed Feb 22 06:42:35 2023 From: webhook-mailer at python.org (corona10) Date: Wed, 22 Feb 2023 11:42:35 -0000 Subject: [Python-checkins] [3.10] gh-101936: Update the default value of fp from io.StringIO to io.BytesIO (gh-102100) (#102118) Message-ID: https://github.com/python/cpython/commit/0f28af589bd23a1daceedfe03d920a619b03cb6c commit: 0f28af589bd23a1daceedfe03d920a619b03cb6c branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: corona10 date: 2023-02-22T20:42:28+09:00 summary: [3.10] gh-101936: Update the default value of fp from io.StringIO to io.BytesIO (gh-102100) (#102118) gh-101936: Update the default value of fp from io.StringIO to io.BytesIO (gh-102100) (cherry picked from commit 0d4c7fcd4f078708a5ac6499af378ce5ee8eb211) Co-authored-by: Long Vo files: A Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst M Lib/test/test_urllib2.py M Lib/urllib/error.py M Misc/ACKS diff --git a/Lib/test/test_urllib2.py b/Lib/test/test_urllib2.py index 4ef391c4540c..55f45db7288b 100644 --- a/Lib/test/test_urllib2.py +++ b/Lib/test/test_urllib2.py @@ -1826,6 +1826,7 @@ def test_HTTPError_interface(self): def test_gh_98778(self): x = urllib.error.HTTPError("url", 405, "METHOD NOT ALLOWED", None, None) self.assertEqual(getattr(x, "__notes__", ()), ()) + self.assertIsInstance(x.fp.read(), bytes) def test_parse_proxy(self): parse_proxy_test_cases = [ diff --git a/Lib/urllib/error.py b/Lib/urllib/error.py index feec0e7f848e..a9cd1ecadd63 100644 --- a/Lib/urllib/error.py +++ b/Lib/urllib/error.py @@ -43,7 +43,7 @@ def __init__(self, url, code, msg, hdrs, fp): self.fp = fp self.filename = url if fp is None: - fp = io.StringIO() + fp = io.BytesIO() self.__super_init(fp, hdrs, url, code) def __str__(self): diff --git a/Misc/ACKS b/Misc/ACKS index c96c4876e6cb..10a35fd64abf 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1867,6 +1867,7 @@ Kurt Vile Norman Vine Pauli Virtanen Frank Visser +Long Vo Johannes Vogel Michael Vogt Radu Voicilas diff --git a/Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst b/Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst new file mode 100644 index 000000000000..55841da44b11 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-21-07-15-41.gh-issue-101936.QVOxHH.rst @@ -0,0 +1,2 @@ +The default value of ``fp`` becomes :class:`io.BytesIO` if :exc:`~urllib.error.HTTPError` +is initialized without a designated ``fp`` parameter. Patch by Long Vo. From webhook-mailer at python.org Wed Feb 22 14:10:11 2023 From: webhook-mailer at python.org (mdickinson) Date: Wed, 22 Feb 2023 19:10:11 -0000 Subject: [Python-checkins] Few coverage nitpicks for the cmath module (#102067) Message-ID: https://github.com/python/cpython/commit/592f65fdb551f64a2db7849500e5df2291637f25 commit: 592f65fdb551f64a2db7849500e5df2291637f25 branch: main author: Sergey B Kirpichev committer: mdickinson date: 2023-02-22T19:10:01Z summary: Few coverage nitpicks for the cmath module (#102067) - partial tests for cosh/sinh overflows (L535 and L771). I doubt both ||-ed conditions could be tested. - removed inaccessible case in sqrt (L832): ax=ay=0 is handled above (L823) because fabs() is exact. Also added test (checked with mpmath and gmpy2) for second condition on that line. 
- some trivial tests for isclose (cover all conditions on L1217-1218) - add comment for uncovered L1018 Co-authored-by: Mark Dickinson files: M Lib/test/cmath_testcases.txt M Lib/test/test_cmath.py M Modules/cmathmodule.c diff --git a/Lib/test/cmath_testcases.txt b/Lib/test/cmath_testcases.txt index dd7e458ddcb7..0165e17634f4 100644 --- a/Lib/test/cmath_testcases.txt +++ b/Lib/test/cmath_testcases.txt @@ -1536,6 +1536,7 @@ sqrt0141 sqrt -1.797e+308 -9.9999999999999999e+306 -> 3.7284476432057307e+152 -1 sqrt0150 sqrt 1.7976931348623157e+308 0.0 -> 1.3407807929942596355e+154 0.0 sqrt0151 sqrt 2.2250738585072014e-308 0.0 -> 1.4916681462400413487e-154 0.0 sqrt0152 sqrt 5e-324 0.0 -> 2.2227587494850774834e-162 0.0 +sqrt0153 sqrt 5e-324 1.0 -> 0.7071067811865476 0.7071067811865476 -- special values sqrt1000 sqrt 0.0 0.0 -> 0.0 0.0 @@ -1744,6 +1745,7 @@ cosh0023 cosh 2.218885944363501 2.0015727395883687 -> -1.94294321081968 4.129026 -- large real part cosh0030 cosh 710.5 2.3519999999999999 -> -1.2967465239355998e+308 1.3076707908857333e+308 cosh0031 cosh -710.5 0.69999999999999996 -> 1.4085466381392499e+308 -1.1864024666450239e+308 +cosh0032 cosh 720.0 0.0 -> inf 0.0 overflow -- Additional real values (mpmath) cosh0050 cosh 1e-150 0.0 -> 1.0 0.0 @@ -1853,6 +1855,7 @@ sinh0023 sinh 0.043713693678420068 0.22512549887532657 -> 0.042624198673416713 0 -- large real part sinh0030 sinh 710.5 -2.3999999999999999 -> -1.3579970564885919e+308 -1.24394470907798e+308 sinh0031 sinh -710.5 0.80000000000000004 -> -1.2830671601735164e+308 1.3210954193997678e+308 +sinh0032 sinh 720.0 0.0 -> inf 0.0 overflow -- Additional real values (mpmath) sinh0050 sinh 1e-100 0.0 -> 1.00000000000000002e-100 0.0 diff --git a/Lib/test/test_cmath.py b/Lib/test/test_cmath.py index 9fa08dc4ff3f..cd2c6939105d 100644 --- a/Lib/test/test_cmath.py +++ b/Lib/test/test_cmath.py @@ -607,6 +607,14 @@ def test_complex_near_zero(self): self.assertIsClose(0.001-0.001j, 0.001+0.001j, abs_tol=2e-03) self.assertIsNotClose(0.001-0.001j, 0.001+0.001j, abs_tol=1e-03) + def test_complex_special(self): + self.assertIsNotClose(INF, INF*1j) + self.assertIsNotClose(INF*1j, INF) + self.assertIsNotClose(INF, -INF) + self.assertIsNotClose(-INF, INF) + self.assertIsNotClose(0, INF) + self.assertIsNotClose(0, INF*1j) + if __name__ == "__main__": unittest.main() diff --git a/Modules/cmathmodule.c b/Modules/cmathmodule.c index 53e34061d537..b4f7e5424b4c 100644 --- a/Modules/cmathmodule.c +++ b/Modules/cmathmodule.c @@ -829,7 +829,7 @@ cmath_sqrt_impl(PyObject *module, Py_complex z) ax = fabs(z.real); ay = fabs(z.imag); - if (ax < DBL_MIN && ay < DBL_MIN && (ax > 0. 
|| ay > 0.)) { + if (ax < DBL_MIN && ay < DBL_MIN) { /* here we catch cases where hypot(ax, ay) is subnormal */ ax = ldexp(ax, CM_SCALE_UP); s = ldexp(sqrt(ax + hypot(ax, ldexp(ay, CM_SCALE_UP))), @@ -1013,7 +1013,7 @@ cmath_phase_impl(PyObject *module, Py_complex z) double phi; errno = 0; - phi = c_atan2(z); + phi = c_atan2(z); /* should not cause any exception */ if (errno != 0) return math_error(); else From webhook-mailer at python.org Wed Feb 22 14:20:07 2023 From: webhook-mailer at python.org (mdickinson) Date: Wed, 22 Feb 2023 19:20:07 -0000 Subject: [Python-checkins] [3.11] gh-97786: Fix compiler warnings in pytime.c (GH-101826) (#102062) Message-ID: https://github.com/python/cpython/commit/ddb65c47b1066fec8ce7a674ebb88b167f417d84 commit: ddb65c47b1066fec8ce7a674ebb88b167f417d84 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: mdickinson date: 2023-02-22T19:11:59Z summary: [3.11] gh-97786: Fix compiler warnings in pytime.c (GH-101826) (#102062) gh-97786: Fix compiler warnings in pytime.c (GH-101826) Fixes compiler warnings in pytime.c. (cherry picked from commit b1b375e2670a58fc37cb4c2629ed73b045159918) Co-authored-by: Mark Dickinson Co-authored-by: Shantanu <12621235+hauntsaninja at users.noreply.github.com> files: A Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst M Python/pytime.c diff --git a/Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst b/Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst new file mode 100644 index 000000000000..df194b67590d --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst @@ -0,0 +1,2 @@ +Fix potential undefined behaviour in corner cases of floating-point-to-time +conversions. diff --git a/Python/pytime.c b/Python/pytime.c index f49a25bf7bce..f4f112feae47 100644 --- a/Python/pytime.c +++ b/Python/pytime.c @@ -1,5 +1,4 @@ #include "Python.h" -#include "pycore_pymath.h" // _Py_InIntegralTypeRange() #ifdef MS_WINDOWS # include // struct timeval #endif @@ -41,6 +40,14 @@ # error "unsupported time_t size" #endif +#if PY_TIME_T_MAX + PY_TIME_T_MIN != -1 +# error "time_t is not a two's complement integer type" +#endif + +#if _PyTime_MIN + _PyTime_MAX != -1 +# error "_PyTime_t is not a two's complement integer type" +#endif + static void pytime_time_t_overflow(void) @@ -294,7 +301,21 @@ pytime_double_to_denominator(double d, time_t *sec, long *numerator, } assert(0.0 <= floatpart && floatpart < denominator); - if (!_Py_InIntegralTypeRange(time_t, intpart)) { + /* + Conversion of an out-of-range value to time_t gives undefined behaviour + (C99 ?6.3.1.4p1), so we must guard against it. However, checking that + `intpart` is in range is delicate: the obvious expression `intpart <= + PY_TIME_T_MAX` will first convert the value `PY_TIME_T_MAX` to a double, + potentially changing its value and leading to us failing to catch some + UB-inducing values. The code below works correctly under the mild + assumption that time_t is a two's complement integer type with no trap + representation, and that `PY_TIME_T_MIN` is within the representable + range of a C double. + + Note: we want the `if` condition below to be true for NaNs; therefore, + resist any temptation to simplify by applying De Morgan's laws. 
+ */ + if (!((double)PY_TIME_T_MIN <= intpart && intpart < -(double)PY_TIME_T_MIN)) { pytime_time_t_overflow(); return -1; } @@ -349,7 +370,8 @@ _PyTime_ObjectToTime_t(PyObject *obj, time_t *sec, _PyTime_round_t round) d = pytime_round(d, round); (void)modf(d, &intpart); - if (!_Py_InIntegralTypeRange(time_t, intpart)) { + /* See comments in pytime_double_to_denominator */ + if (!((double)PY_TIME_T_MIN <= intpart && intpart < -(double)PY_TIME_T_MIN)) { pytime_time_t_overflow(); return -1; } @@ -507,8 +529,9 @@ pytime_from_double(_PyTime_t *tp, double value, _PyTime_round_t round, d *= (double)unit_to_ns; d = pytime_round(d, round); - if (!_Py_InIntegralTypeRange(_PyTime_t, d)) { - pytime_overflow(); + /* See comments in pytime_double_to_denominator */ + if (!((double)_PyTime_MIN <= d && d < -(double)_PyTime_MIN)) { + pytime_time_t_overflow(); return -1; } _PyTime_t ns = (_PyTime_t)d; @@ -902,7 +925,7 @@ py_get_system_clock(_PyTime_t *tp, _Py_clock_info_t *info, int raise_exc) info->monotonic = 0; info->adjustable = 1; if (clock_getres(CLOCK_REALTIME, &res) == 0) { - info->resolution = res.tv_sec + res.tv_nsec * 1e-9; + info->resolution = (double)res.tv_sec + (double)res.tv_nsec * 1e-9; } else { info->resolution = 1e-9; From webhook-mailer at python.org Wed Feb 22 15:21:47 2023 From: webhook-mailer at python.org (rhettinger) Date: Wed, 22 Feb 2023 20:21:47 -0000 Subject: [Python-checkins] GH-101777: `queue.rst`: use 2 spaces after a period to be consistent. (#102143) Message-ID: https://github.com/python/cpython/commit/96bf24380e44dfa1516d65480250995e737c0cb9 commit: 96bf24380e44dfa1516d65480250995e737c0cb9 branch: main author: Owain Davies <116417456+OTheDev at users.noreply.github.com> committer: rhettinger date: 2023-02-22T14:21:38-06:00 summary: GH-101777: `queue.rst`: use 2 spaces after a period to be consistent. (#102143) files: M Doc/library/queue.rst diff --git a/Doc/library/queue.rst b/Doc/library/queue.rst index 46b8e9b18a3c..b2b787c5a826 100644 --- a/Doc/library/queue.rst +++ b/Doc/library/queue.rst @@ -15,7 +15,7 @@ module implements all the required locking semantics. The module implements three types of queue, which differ only in the order in which the entries are retrieved. In a :abbr:`FIFO (first-in, first-out)` -queue, the first tasks added are the first retrieved. In a +queue, the first tasks added are the first retrieved. In a :abbr:`LIFO (last-in, first-out)` queue, the most recently added entry is the first retrieved (operating like a stack). With a priority queue, the entries are kept sorted (using the :mod:`heapq` module) and the @@ -57,7 +57,7 @@ The :mod:`queue` module defines the following classes and exceptions: *maxsize* is less than or equal to zero, the queue size is infinite. The lowest valued entries are retrieved first (the lowest valued entry is the - one that would be returned by ``min(entries)``). A typical pattern for + one that would be returned by ``min(entries)``). A typical pattern for entries is a tuple in the form: ``(priority_number, data)``. If the *data* elements are not comparable, the data can be wrapped in a class @@ -127,8 +127,8 @@ provide the public methods described below. .. method:: Queue.put(item, block=True, timeout=None) - Put *item* into the queue. If optional args *block* is true and *timeout* is - ``None`` (the default), block if necessary until a free slot is available. If + Put *item* into the queue. If optional args *block* is true and *timeout* is + ``None`` (the default), block if necessary until a free slot is available. 
If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the :exc:`Full` exception if no free slot was available within that time. Otherwise (*block* is false), put an item on the queue if a free slot is @@ -143,7 +143,7 @@ provide the public methods described below. .. method:: Queue.get(block=True, timeout=None) - Remove and return an item from the queue. If optional args *block* is true and + Remove and return an item from the queue. If optional args *block* is true and *timeout* is ``None`` (the default), block if necessary until an item is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the :exc:`Empty` exception if no item was available within that time. @@ -152,7 +152,7 @@ provide the public methods described below. Prior to 3.0 on POSIX systems, and for all versions on Windows, if *block* is true and *timeout* is ``None``, this operation goes into - an uninterruptible wait on an underlying lock. This means that no exceptions + an uninterruptible wait on an underlying lock. This means that no exceptions can occur, and in particular a SIGINT will not trigger a :exc:`KeyboardInterrupt`. @@ -184,7 +184,7 @@ fully processed by daemon consumer threads. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls :meth:`task_done` to - indicate that the item was retrieved and all work on it is complete. When the + indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, :meth:`join` unblocks. @@ -227,7 +227,7 @@ SimpleQueue Objects .. method:: SimpleQueue.empty() - Return ``True`` if the queue is empty, ``False`` otherwise. If empty() + Return ``True`` if the queue is empty, ``False`` otherwise. If empty() returns ``False`` it doesn't guarantee that a subsequent call to get() will not block. From webhook-mailer at python.org Wed Feb 22 16:18:51 2023 From: webhook-mailer at python.org (gpshead) Date: Wed, 22 Feb 2023 21:18:51 -0000 Subject: [Python-checkins] gh-99108: Import MD5 and SHA1 from HACL* (#102089) Message-ID: https://github.com/python/cpython/commit/fcadc7e405141847ab10daf5cff16be880083a24 commit: fcadc7e405141847ab10daf5cff16be880083a24 branch: main author: Jonathan Protzenko committer: gpshead date: 2023-02-22T13:18:43-08:00 summary: gh-99108: Import MD5 and SHA1 from HACL* (#102089) Replaces our fallback non-OpenSSL MD5 and SHA1 implementations with those from HACL* as we've already done with SHA2. 
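A minimal sanity check (illustrative sketch, not part of this commit): the swap is meant to be invisible at the Python level, so hashlib digests should match the published MD5/SHA-1 test vectors whether the builtin _md5/_sha1 modules are backed by the old fallback code or by the HACL* port.

    # Illustrative only: well-known "abc" test vectors for MD5 (RFC 1321) and
    # SHA-1 (FIPS 180) must still match after the HACL* swap.
    import hashlib

    assert hashlib.md5(b"abc").hexdigest() == "900150983cd24fb0d6963f7d28e17f72"
    assert hashlib.sha1(b"abc").hexdigest() == "a9993e364706816aba3e25717850c26c9cd0d89d"
    print("MD5/SHA-1 digests match the published test vectors")
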
files: A Misc/NEWS.d/next/Security/2023-02-17-10-42-48.gh-issue-99108.MKA8-f.rst A Modules/_hacl/Hacl_Hash_MD5.c A Modules/_hacl/Hacl_Hash_MD5.h A Modules/_hacl/Hacl_Hash_SHA1.c A Modules/_hacl/Hacl_Hash_SHA1.h A Modules/_hacl/Hacl_Streaming_Types.h A Modules/_hacl/internal/Hacl_Hash_MD5.h A Modules/_hacl/internal/Hacl_Hash_SHA1.h M Makefile.pre.in M Modules/Setup M Modules/Setup.stdlib.in M Modules/_hacl/Hacl_Streaming_SHA2.c M Modules/_hacl/Hacl_Streaming_SHA2.h M Modules/_hacl/include/krml/FStar_UInt128_Verified.h M Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h M Modules/_hacl/include/krml/internal/target.h M Modules/_hacl/python_hacl_namespaces.h M Modules/_hacl/refresh.sh M Modules/md5module.c M Modules/sha1module.c M PCbuild/pythoncore.vcxproj M PCbuild/pythoncore.vcxproj.filters M configure M configure.ac diff --git a/Makefile.pre.in b/Makefile.pre.in index b28c6067a535..b12a1bc060af 100644 --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -578,17 +578,21 @@ LIBEXPAT_HEADERS= \ LIBHACL_SHA2_OBJS= \ Modules/_hacl/Hacl_Streaming_SHA2.o -LIBHACL_SHA2_HEADERS= \ - Modules/_hacl/Hacl_Streaming_SHA2.h \ +LIBHACL_HEADERS= \ Modules/_hacl/include/krml/FStar_UInt128_Verified.h \ Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h \ Modules/_hacl/include/krml/fstar_uint128_struct_endianness.h \ Modules/_hacl/include/krml/internal/target.h \ Modules/_hacl/include/krml/lowstar_endianness.h \ Modules/_hacl/include/krml/types.h \ - Modules/_hacl/internal/Hacl_SHA2_Generic.h \ + Modules/_hacl/Hacl_Streaming_Types.h \ Modules/_hacl/python_hacl_namespaces.h +LIBHACL_SHA2_HEADERS= \ + Modules/_hacl/Hacl_Streaming_SHA2.h \ + Modules/_hacl/internal/Hacl_SHA2_Generic.h \ + $(LIBHACL_HEADERS) + ######################################################################### # Rules @@ -2635,8 +2639,8 @@ MODULE__DECIMAL_DEPS=$(srcdir)/Modules/_decimal/docstrings.h @LIBMPDEC_INTERNAL@ MODULE__ELEMENTTREE_DEPS=$(srcdir)/Modules/pyexpat.c @LIBEXPAT_INTERNAL@ MODULE__HASHLIB_DEPS=$(srcdir)/Modules/hashlib.h MODULE__IO_DEPS=$(srcdir)/Modules/_io/_iomodule.h -MODULE__MD5_DEPS=$(srcdir)/Modules/hashlib.h -MODULE__SHA1_DEPS=$(srcdir)/Modules/hashlib.h +MODULE__MD5_DEPS=$(srcdir)/Modules/hashlib.h $(LIBHACL_HEADERS) Modules/_hacl/Hacl_Hash_MD5.h Modules/_hacl/Hacl_Hash_MD5.c +MODULE__SHA1_DEPS=$(srcdir)/Modules/hashlib.h $(LIBHACL_HEADERS) Modules/_hacl/Hacl_Hash_SHA1.h Modules/_hacl/Hacl_Hash_SHA1.c MODULE__SHA2_DEPS=$(srcdir)/Modules/hashlib.h $(LIBHACL_SHA2_HEADERS) $(LIBHACL_SHA2_A) MODULE__SHA3_DEPS=$(srcdir)/Modules/_sha3/sha3.c $(srcdir)/Modules/_sha3/sha3.h $(srcdir)/Modules/hashlib.h MODULE__SOCKET_DEPS=$(srcdir)/Modules/socketmodule.h $(srcdir)/Modules/addrinfo.h $(srcdir)/Modules/getaddrinfo.c $(srcdir)/Modules/getnameinfo.c diff --git a/Misc/NEWS.d/next/Security/2023-02-17-10-42-48.gh-issue-99108.MKA8-f.rst b/Misc/NEWS.d/next/Security/2023-02-17-10-42-48.gh-issue-99108.MKA8-f.rst new file mode 100644 index 000000000000..723d8a43a09f --- /dev/null +++ b/Misc/NEWS.d/next/Security/2023-02-17-10-42-48.gh-issue-99108.MKA8-f.rst @@ -0,0 +1,2 @@ +Replace builtin hashlib implementations of MD5 and SHA1 with verified ones +from the HACL* project. 
diff --git a/Modules/Setup b/Modules/Setup index 1d5183bc2df1..f9fa26cac9e2 100644 --- a/Modules/Setup +++ b/Modules/Setup @@ -163,8 +163,8 @@ PYTHONPATH=$(COREPYTHONPATH) # hashing builtins #_blake2 _blake2/blake2module.c _blake2/blake2b_impl.c _blake2/blake2s_impl.c -#_md5 md5module.c -#_sha1 sha1module.c +#_md5 md5module.c -I$(srcdir)/Modules/_hacl/include _hacl/libHacl_Hash_MD5.c +#_sha1 sha1module.c -I$(srcdir)/Modules/_hacl/include _hacl/libHacl_Hash_SHA1.c #_sha2 sha2module.c -I$(srcdir)/Modules/_hacl/include Modules/_hacl/libHacl_Streaming_SHA2.a #_sha3 _sha3/sha3module.c diff --git a/Modules/Setup.stdlib.in b/Modules/Setup.stdlib.in index 8f5e14a4e80e..d33cd8223999 100644 --- a/Modules/Setup.stdlib.in +++ b/Modules/Setup.stdlib.in @@ -77,8 +77,8 @@ @MODULE_READLINE_TRUE at readline readline.c # hashing builtins, can be disabled with --without-builtin-hashlib-hashes - at MODULE__MD5_TRUE@_md5 md5module.c - at MODULE__SHA1_TRUE@_sha1 sha1module.c + at MODULE__MD5_TRUE@_md5 md5module.c -I$(srcdir)/Modules/_hacl/include _hacl/Hacl_Hash_MD5.c -D_BSD_SOURCE -D_DEFAULT_SOURCE + at MODULE__SHA1_TRUE@_sha1 sha1module.c -I$(srcdir)/Modules/_hacl/include _hacl/Hacl_Hash_SHA1.c -D_BSD_SOURCE -D_DEFAULT_SOURCE @MODULE__SHA2_TRUE at _sha2 sha2module.c -I$(srcdir)/Modules/_hacl/include Modules/_hacl/libHacl_Streaming_SHA2.a @MODULE__SHA3_TRUE at _sha3 _sha3/sha3module.c @MODULE__BLAKE2_TRUE at _blake2 _blake2/blake2module.c _blake2/blake2b_impl.c _blake2/blake2s_impl.c diff --git a/Modules/_hacl/Hacl_Hash_MD5.c b/Modules/_hacl/Hacl_Hash_MD5.c new file mode 100644 index 000000000000..2c613066d9f6 --- /dev/null +++ b/Modules/_hacl/Hacl_Hash_MD5.c @@ -0,0 +1,1472 @@ +/* MIT License + * + * Copyright (c) 2016-2022 INRIA, CMU and Microsoft Corporation + * Copyright (c) 2022-2023 HACL* Contributors + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. 
+ */ + + +#include "internal/Hacl_Hash_MD5.h" + +static uint32_t +_h0[4U] = + { (uint32_t)0x67452301U, (uint32_t)0xefcdab89U, (uint32_t)0x98badcfeU, (uint32_t)0x10325476U }; + +static uint32_t +_t[64U] = + { + (uint32_t)0xd76aa478U, (uint32_t)0xe8c7b756U, (uint32_t)0x242070dbU, (uint32_t)0xc1bdceeeU, + (uint32_t)0xf57c0fafU, (uint32_t)0x4787c62aU, (uint32_t)0xa8304613U, (uint32_t)0xfd469501U, + (uint32_t)0x698098d8U, (uint32_t)0x8b44f7afU, (uint32_t)0xffff5bb1U, (uint32_t)0x895cd7beU, + (uint32_t)0x6b901122U, (uint32_t)0xfd987193U, (uint32_t)0xa679438eU, (uint32_t)0x49b40821U, + (uint32_t)0xf61e2562U, (uint32_t)0xc040b340U, (uint32_t)0x265e5a51U, (uint32_t)0xe9b6c7aaU, + (uint32_t)0xd62f105dU, (uint32_t)0x02441453U, (uint32_t)0xd8a1e681U, (uint32_t)0xe7d3fbc8U, + (uint32_t)0x21e1cde6U, (uint32_t)0xc33707d6U, (uint32_t)0xf4d50d87U, (uint32_t)0x455a14edU, + (uint32_t)0xa9e3e905U, (uint32_t)0xfcefa3f8U, (uint32_t)0x676f02d9U, (uint32_t)0x8d2a4c8aU, + (uint32_t)0xfffa3942U, (uint32_t)0x8771f681U, (uint32_t)0x6d9d6122U, (uint32_t)0xfde5380cU, + (uint32_t)0xa4beea44U, (uint32_t)0x4bdecfa9U, (uint32_t)0xf6bb4b60U, (uint32_t)0xbebfbc70U, + (uint32_t)0x289b7ec6U, (uint32_t)0xeaa127faU, (uint32_t)0xd4ef3085U, (uint32_t)0x4881d05U, + (uint32_t)0xd9d4d039U, (uint32_t)0xe6db99e5U, (uint32_t)0x1fa27cf8U, (uint32_t)0xc4ac5665U, + (uint32_t)0xf4292244U, (uint32_t)0x432aff97U, (uint32_t)0xab9423a7U, (uint32_t)0xfc93a039U, + (uint32_t)0x655b59c3U, (uint32_t)0x8f0ccc92U, (uint32_t)0xffeff47dU, (uint32_t)0x85845dd1U, + (uint32_t)0x6fa87e4fU, (uint32_t)0xfe2ce6e0U, (uint32_t)0xa3014314U, (uint32_t)0x4e0811a1U, + (uint32_t)0xf7537e82U, (uint32_t)0xbd3af235U, (uint32_t)0x2ad7d2bbU, (uint32_t)0xeb86d391U + }; + +void Hacl_Hash_Core_MD5_legacy_init(uint32_t *s) +{ + KRML_MAYBE_FOR4(i, (uint32_t)0U, (uint32_t)4U, (uint32_t)1U, s[i] = _h0[i];); +} + +static void legacy_update(uint32_t *abcd, uint8_t *x) +{ + uint32_t aa = abcd[0U]; + uint32_t bb = abcd[1U]; + uint32_t cc = abcd[2U]; + uint32_t dd = abcd[3U]; + uint32_t va = abcd[0U]; + uint32_t vb0 = abcd[1U]; + uint32_t vc0 = abcd[2U]; + uint32_t vd0 = abcd[3U]; + uint8_t *b0 = x; + uint32_t u = load32_le(b0); + uint32_t xk = u; + uint32_t ti0 = _t[0U]; + uint32_t + v = + vb0 + + + ((va + ((vb0 & vc0) | (~vb0 & vd0)) + xk + ti0) + << (uint32_t)7U + | (va + ((vb0 & vc0) | (~vb0 & vd0)) + xk + ti0) >> (uint32_t)25U); + abcd[0U] = v; + uint32_t va0 = abcd[3U]; + uint32_t vb1 = abcd[0U]; + uint32_t vc1 = abcd[1U]; + uint32_t vd1 = abcd[2U]; + uint8_t *b1 = x + (uint32_t)4U; + uint32_t u0 = load32_le(b1); + uint32_t xk0 = u0; + uint32_t ti1 = _t[1U]; + uint32_t + v0 = + vb1 + + + ((va0 + ((vb1 & vc1) | (~vb1 & vd1)) + xk0 + ti1) + << (uint32_t)12U + | (va0 + ((vb1 & vc1) | (~vb1 & vd1)) + xk0 + ti1) >> (uint32_t)20U); + abcd[3U] = v0; + uint32_t va1 = abcd[2U]; + uint32_t vb2 = abcd[3U]; + uint32_t vc2 = abcd[0U]; + uint32_t vd2 = abcd[1U]; + uint8_t *b2 = x + (uint32_t)8U; + uint32_t u1 = load32_le(b2); + uint32_t xk1 = u1; + uint32_t ti2 = _t[2U]; + uint32_t + v1 = + vb2 + + + ((va1 + ((vb2 & vc2) | (~vb2 & vd2)) + xk1 + ti2) + << (uint32_t)17U + | (va1 + ((vb2 & vc2) | (~vb2 & vd2)) + xk1 + ti2) >> (uint32_t)15U); + abcd[2U] = v1; + uint32_t va2 = abcd[1U]; + uint32_t vb3 = abcd[2U]; + uint32_t vc3 = abcd[3U]; + uint32_t vd3 = abcd[0U]; + uint8_t *b3 = x + (uint32_t)12U; + uint32_t u2 = load32_le(b3); + uint32_t xk2 = u2; + uint32_t ti3 = _t[3U]; + uint32_t + v2 = + vb3 + + + ((va2 + ((vb3 & vc3) | (~vb3 & vd3)) + xk2 + ti3) + << (uint32_t)22U + | (va2 + ((vb3 & vc3) 
| (~vb3 & vd3)) + xk2 + ti3) >> (uint32_t)10U); + abcd[1U] = v2; + uint32_t va3 = abcd[0U]; + uint32_t vb4 = abcd[1U]; + uint32_t vc4 = abcd[2U]; + uint32_t vd4 = abcd[3U]; + uint8_t *b4 = x + (uint32_t)16U; + uint32_t u3 = load32_le(b4); + uint32_t xk3 = u3; + uint32_t ti4 = _t[4U]; + uint32_t + v3 = + vb4 + + + ((va3 + ((vb4 & vc4) | (~vb4 & vd4)) + xk3 + ti4) + << (uint32_t)7U + | (va3 + ((vb4 & vc4) | (~vb4 & vd4)) + xk3 + ti4) >> (uint32_t)25U); + abcd[0U] = v3; + uint32_t va4 = abcd[3U]; + uint32_t vb5 = abcd[0U]; + uint32_t vc5 = abcd[1U]; + uint32_t vd5 = abcd[2U]; + uint8_t *b5 = x + (uint32_t)20U; + uint32_t u4 = load32_le(b5); + uint32_t xk4 = u4; + uint32_t ti5 = _t[5U]; + uint32_t + v4 = + vb5 + + + ((va4 + ((vb5 & vc5) | (~vb5 & vd5)) + xk4 + ti5) + << (uint32_t)12U + | (va4 + ((vb5 & vc5) | (~vb5 & vd5)) + xk4 + ti5) >> (uint32_t)20U); + abcd[3U] = v4; + uint32_t va5 = abcd[2U]; + uint32_t vb6 = abcd[3U]; + uint32_t vc6 = abcd[0U]; + uint32_t vd6 = abcd[1U]; + uint8_t *b6 = x + (uint32_t)24U; + uint32_t u5 = load32_le(b6); + uint32_t xk5 = u5; + uint32_t ti6 = _t[6U]; + uint32_t + v5 = + vb6 + + + ((va5 + ((vb6 & vc6) | (~vb6 & vd6)) + xk5 + ti6) + << (uint32_t)17U + | (va5 + ((vb6 & vc6) | (~vb6 & vd6)) + xk5 + ti6) >> (uint32_t)15U); + abcd[2U] = v5; + uint32_t va6 = abcd[1U]; + uint32_t vb7 = abcd[2U]; + uint32_t vc7 = abcd[3U]; + uint32_t vd7 = abcd[0U]; + uint8_t *b7 = x + (uint32_t)28U; + uint32_t u6 = load32_le(b7); + uint32_t xk6 = u6; + uint32_t ti7 = _t[7U]; + uint32_t + v6 = + vb7 + + + ((va6 + ((vb7 & vc7) | (~vb7 & vd7)) + xk6 + ti7) + << (uint32_t)22U + | (va6 + ((vb7 & vc7) | (~vb7 & vd7)) + xk6 + ti7) >> (uint32_t)10U); + abcd[1U] = v6; + uint32_t va7 = abcd[0U]; + uint32_t vb8 = abcd[1U]; + uint32_t vc8 = abcd[2U]; + uint32_t vd8 = abcd[3U]; + uint8_t *b8 = x + (uint32_t)32U; + uint32_t u7 = load32_le(b8); + uint32_t xk7 = u7; + uint32_t ti8 = _t[8U]; + uint32_t + v7 = + vb8 + + + ((va7 + ((vb8 & vc8) | (~vb8 & vd8)) + xk7 + ti8) + << (uint32_t)7U + | (va7 + ((vb8 & vc8) | (~vb8 & vd8)) + xk7 + ti8) >> (uint32_t)25U); + abcd[0U] = v7; + uint32_t va8 = abcd[3U]; + uint32_t vb9 = abcd[0U]; + uint32_t vc9 = abcd[1U]; + uint32_t vd9 = abcd[2U]; + uint8_t *b9 = x + (uint32_t)36U; + uint32_t u8 = load32_le(b9); + uint32_t xk8 = u8; + uint32_t ti9 = _t[9U]; + uint32_t + v8 = + vb9 + + + ((va8 + ((vb9 & vc9) | (~vb9 & vd9)) + xk8 + ti9) + << (uint32_t)12U + | (va8 + ((vb9 & vc9) | (~vb9 & vd9)) + xk8 + ti9) >> (uint32_t)20U); + abcd[3U] = v8; + uint32_t va9 = abcd[2U]; + uint32_t vb10 = abcd[3U]; + uint32_t vc10 = abcd[0U]; + uint32_t vd10 = abcd[1U]; + uint8_t *b10 = x + (uint32_t)40U; + uint32_t u9 = load32_le(b10); + uint32_t xk9 = u9; + uint32_t ti10 = _t[10U]; + uint32_t + v9 = + vb10 + + + ((va9 + ((vb10 & vc10) | (~vb10 & vd10)) + xk9 + ti10) + << (uint32_t)17U + | (va9 + ((vb10 & vc10) | (~vb10 & vd10)) + xk9 + ti10) >> (uint32_t)15U); + abcd[2U] = v9; + uint32_t va10 = abcd[1U]; + uint32_t vb11 = abcd[2U]; + uint32_t vc11 = abcd[3U]; + uint32_t vd11 = abcd[0U]; + uint8_t *b11 = x + (uint32_t)44U; + uint32_t u10 = load32_le(b11); + uint32_t xk10 = u10; + uint32_t ti11 = _t[11U]; + uint32_t + v10 = + vb11 + + + ((va10 + ((vb11 & vc11) | (~vb11 & vd11)) + xk10 + ti11) + << (uint32_t)22U + | (va10 + ((vb11 & vc11) | (~vb11 & vd11)) + xk10 + ti11) >> (uint32_t)10U); + abcd[1U] = v10; + uint32_t va11 = abcd[0U]; + uint32_t vb12 = abcd[1U]; + uint32_t vc12 = abcd[2U]; + uint32_t vd12 = abcd[3U]; + uint8_t *b12 = x + (uint32_t)48U; + uint32_t u11 = load32_le(b12); 
+ uint32_t xk11 = u11; + uint32_t ti12 = _t[12U]; + uint32_t + v11 = + vb12 + + + ((va11 + ((vb12 & vc12) | (~vb12 & vd12)) + xk11 + ti12) + << (uint32_t)7U + | (va11 + ((vb12 & vc12) | (~vb12 & vd12)) + xk11 + ti12) >> (uint32_t)25U); + abcd[0U] = v11; + uint32_t va12 = abcd[3U]; + uint32_t vb13 = abcd[0U]; + uint32_t vc13 = abcd[1U]; + uint32_t vd13 = abcd[2U]; + uint8_t *b13 = x + (uint32_t)52U; + uint32_t u12 = load32_le(b13); + uint32_t xk12 = u12; + uint32_t ti13 = _t[13U]; + uint32_t + v12 = + vb13 + + + ((va12 + ((vb13 & vc13) | (~vb13 & vd13)) + xk12 + ti13) + << (uint32_t)12U + | (va12 + ((vb13 & vc13) | (~vb13 & vd13)) + xk12 + ti13) >> (uint32_t)20U); + abcd[3U] = v12; + uint32_t va13 = abcd[2U]; + uint32_t vb14 = abcd[3U]; + uint32_t vc14 = abcd[0U]; + uint32_t vd14 = abcd[1U]; + uint8_t *b14 = x + (uint32_t)56U; + uint32_t u13 = load32_le(b14); + uint32_t xk13 = u13; + uint32_t ti14 = _t[14U]; + uint32_t + v13 = + vb14 + + + ((va13 + ((vb14 & vc14) | (~vb14 & vd14)) + xk13 + ti14) + << (uint32_t)17U + | (va13 + ((vb14 & vc14) | (~vb14 & vd14)) + xk13 + ti14) >> (uint32_t)15U); + abcd[2U] = v13; + uint32_t va14 = abcd[1U]; + uint32_t vb15 = abcd[2U]; + uint32_t vc15 = abcd[3U]; + uint32_t vd15 = abcd[0U]; + uint8_t *b15 = x + (uint32_t)60U; + uint32_t u14 = load32_le(b15); + uint32_t xk14 = u14; + uint32_t ti15 = _t[15U]; + uint32_t + v14 = + vb15 + + + ((va14 + ((vb15 & vc15) | (~vb15 & vd15)) + xk14 + ti15) + << (uint32_t)22U + | (va14 + ((vb15 & vc15) | (~vb15 & vd15)) + xk14 + ti15) >> (uint32_t)10U); + abcd[1U] = v14; + uint32_t va15 = abcd[0U]; + uint32_t vb16 = abcd[1U]; + uint32_t vc16 = abcd[2U]; + uint32_t vd16 = abcd[3U]; + uint8_t *b16 = x + (uint32_t)4U; + uint32_t u15 = load32_le(b16); + uint32_t xk15 = u15; + uint32_t ti16 = _t[16U]; + uint32_t + v15 = + vb16 + + + ((va15 + ((vb16 & vd16) | (vc16 & ~vd16)) + xk15 + ti16) + << (uint32_t)5U + | (va15 + ((vb16 & vd16) | (vc16 & ~vd16)) + xk15 + ti16) >> (uint32_t)27U); + abcd[0U] = v15; + uint32_t va16 = abcd[3U]; + uint32_t vb17 = abcd[0U]; + uint32_t vc17 = abcd[1U]; + uint32_t vd17 = abcd[2U]; + uint8_t *b17 = x + (uint32_t)24U; + uint32_t u16 = load32_le(b17); + uint32_t xk16 = u16; + uint32_t ti17 = _t[17U]; + uint32_t + v16 = + vb17 + + + ((va16 + ((vb17 & vd17) | (vc17 & ~vd17)) + xk16 + ti17) + << (uint32_t)9U + | (va16 + ((vb17 & vd17) | (vc17 & ~vd17)) + xk16 + ti17) >> (uint32_t)23U); + abcd[3U] = v16; + uint32_t va17 = abcd[2U]; + uint32_t vb18 = abcd[3U]; + uint32_t vc18 = abcd[0U]; + uint32_t vd18 = abcd[1U]; + uint8_t *b18 = x + (uint32_t)44U; + uint32_t u17 = load32_le(b18); + uint32_t xk17 = u17; + uint32_t ti18 = _t[18U]; + uint32_t + v17 = + vb18 + + + ((va17 + ((vb18 & vd18) | (vc18 & ~vd18)) + xk17 + ti18) + << (uint32_t)14U + | (va17 + ((vb18 & vd18) | (vc18 & ~vd18)) + xk17 + ti18) >> (uint32_t)18U); + abcd[2U] = v17; + uint32_t va18 = abcd[1U]; + uint32_t vb19 = abcd[2U]; + uint32_t vc19 = abcd[3U]; + uint32_t vd19 = abcd[0U]; + uint8_t *b19 = x; + uint32_t u18 = load32_le(b19); + uint32_t xk18 = u18; + uint32_t ti19 = _t[19U]; + uint32_t + v18 = + vb19 + + + ((va18 + ((vb19 & vd19) | (vc19 & ~vd19)) + xk18 + ti19) + << (uint32_t)20U + | (va18 + ((vb19 & vd19) | (vc19 & ~vd19)) + xk18 + ti19) >> (uint32_t)12U); + abcd[1U] = v18; + uint32_t va19 = abcd[0U]; + uint32_t vb20 = abcd[1U]; + uint32_t vc20 = abcd[2U]; + uint32_t vd20 = abcd[3U]; + uint8_t *b20 = x + (uint32_t)20U; + uint32_t u19 = load32_le(b20); + uint32_t xk19 = u19; + uint32_t ti20 = _t[20U]; + uint32_t + v19 = + vb20 + + + 
((va19 + ((vb20 & vd20) | (vc20 & ~vd20)) + xk19 + ti20) + << (uint32_t)5U + | (va19 + ((vb20 & vd20) | (vc20 & ~vd20)) + xk19 + ti20) >> (uint32_t)27U); + abcd[0U] = v19; + uint32_t va20 = abcd[3U]; + uint32_t vb21 = abcd[0U]; + uint32_t vc21 = abcd[1U]; + uint32_t vd21 = abcd[2U]; + uint8_t *b21 = x + (uint32_t)40U; + uint32_t u20 = load32_le(b21); + uint32_t xk20 = u20; + uint32_t ti21 = _t[21U]; + uint32_t + v20 = + vb21 + + + ((va20 + ((vb21 & vd21) | (vc21 & ~vd21)) + xk20 + ti21) + << (uint32_t)9U + | (va20 + ((vb21 & vd21) | (vc21 & ~vd21)) + xk20 + ti21) >> (uint32_t)23U); + abcd[3U] = v20; + uint32_t va21 = abcd[2U]; + uint32_t vb22 = abcd[3U]; + uint32_t vc22 = abcd[0U]; + uint32_t vd22 = abcd[1U]; + uint8_t *b22 = x + (uint32_t)60U; + uint32_t u21 = load32_le(b22); + uint32_t xk21 = u21; + uint32_t ti22 = _t[22U]; + uint32_t + v21 = + vb22 + + + ((va21 + ((vb22 & vd22) | (vc22 & ~vd22)) + xk21 + ti22) + << (uint32_t)14U + | (va21 + ((vb22 & vd22) | (vc22 & ~vd22)) + xk21 + ti22) >> (uint32_t)18U); + abcd[2U] = v21; + uint32_t va22 = abcd[1U]; + uint32_t vb23 = abcd[2U]; + uint32_t vc23 = abcd[3U]; + uint32_t vd23 = abcd[0U]; + uint8_t *b23 = x + (uint32_t)16U; + uint32_t u22 = load32_le(b23); + uint32_t xk22 = u22; + uint32_t ti23 = _t[23U]; + uint32_t + v22 = + vb23 + + + ((va22 + ((vb23 & vd23) | (vc23 & ~vd23)) + xk22 + ti23) + << (uint32_t)20U + | (va22 + ((vb23 & vd23) | (vc23 & ~vd23)) + xk22 + ti23) >> (uint32_t)12U); + abcd[1U] = v22; + uint32_t va23 = abcd[0U]; + uint32_t vb24 = abcd[1U]; + uint32_t vc24 = abcd[2U]; + uint32_t vd24 = abcd[3U]; + uint8_t *b24 = x + (uint32_t)36U; + uint32_t u23 = load32_le(b24); + uint32_t xk23 = u23; + uint32_t ti24 = _t[24U]; + uint32_t + v23 = + vb24 + + + ((va23 + ((vb24 & vd24) | (vc24 & ~vd24)) + xk23 + ti24) + << (uint32_t)5U + | (va23 + ((vb24 & vd24) | (vc24 & ~vd24)) + xk23 + ti24) >> (uint32_t)27U); + abcd[0U] = v23; + uint32_t va24 = abcd[3U]; + uint32_t vb25 = abcd[0U]; + uint32_t vc25 = abcd[1U]; + uint32_t vd25 = abcd[2U]; + uint8_t *b25 = x + (uint32_t)56U; + uint32_t u24 = load32_le(b25); + uint32_t xk24 = u24; + uint32_t ti25 = _t[25U]; + uint32_t + v24 = + vb25 + + + ((va24 + ((vb25 & vd25) | (vc25 & ~vd25)) + xk24 + ti25) + << (uint32_t)9U + | (va24 + ((vb25 & vd25) | (vc25 & ~vd25)) + xk24 + ti25) >> (uint32_t)23U); + abcd[3U] = v24; + uint32_t va25 = abcd[2U]; + uint32_t vb26 = abcd[3U]; + uint32_t vc26 = abcd[0U]; + uint32_t vd26 = abcd[1U]; + uint8_t *b26 = x + (uint32_t)12U; + uint32_t u25 = load32_le(b26); + uint32_t xk25 = u25; + uint32_t ti26 = _t[26U]; + uint32_t + v25 = + vb26 + + + ((va25 + ((vb26 & vd26) | (vc26 & ~vd26)) + xk25 + ti26) + << (uint32_t)14U + | (va25 + ((vb26 & vd26) | (vc26 & ~vd26)) + xk25 + ti26) >> (uint32_t)18U); + abcd[2U] = v25; + uint32_t va26 = abcd[1U]; + uint32_t vb27 = abcd[2U]; + uint32_t vc27 = abcd[3U]; + uint32_t vd27 = abcd[0U]; + uint8_t *b27 = x + (uint32_t)32U; + uint32_t u26 = load32_le(b27); + uint32_t xk26 = u26; + uint32_t ti27 = _t[27U]; + uint32_t + v26 = + vb27 + + + ((va26 + ((vb27 & vd27) | (vc27 & ~vd27)) + xk26 + ti27) + << (uint32_t)20U + | (va26 + ((vb27 & vd27) | (vc27 & ~vd27)) + xk26 + ti27) >> (uint32_t)12U); + abcd[1U] = v26; + uint32_t va27 = abcd[0U]; + uint32_t vb28 = abcd[1U]; + uint32_t vc28 = abcd[2U]; + uint32_t vd28 = abcd[3U]; + uint8_t *b28 = x + (uint32_t)52U; + uint32_t u27 = load32_le(b28); + uint32_t xk27 = u27; + uint32_t ti28 = _t[28U]; + uint32_t + v27 = + vb28 + + + ((va27 + ((vb28 & vd28) | (vc28 & ~vd28)) + xk27 + ti28) + << 
(uint32_t)5U + | (va27 + ((vb28 & vd28) | (vc28 & ~vd28)) + xk27 + ti28) >> (uint32_t)27U); + abcd[0U] = v27; + uint32_t va28 = abcd[3U]; + uint32_t vb29 = abcd[0U]; + uint32_t vc29 = abcd[1U]; + uint32_t vd29 = abcd[2U]; + uint8_t *b29 = x + (uint32_t)8U; + uint32_t u28 = load32_le(b29); + uint32_t xk28 = u28; + uint32_t ti29 = _t[29U]; + uint32_t + v28 = + vb29 + + + ((va28 + ((vb29 & vd29) | (vc29 & ~vd29)) + xk28 + ti29) + << (uint32_t)9U + | (va28 + ((vb29 & vd29) | (vc29 & ~vd29)) + xk28 + ti29) >> (uint32_t)23U); + abcd[3U] = v28; + uint32_t va29 = abcd[2U]; + uint32_t vb30 = abcd[3U]; + uint32_t vc30 = abcd[0U]; + uint32_t vd30 = abcd[1U]; + uint8_t *b30 = x + (uint32_t)28U; + uint32_t u29 = load32_le(b30); + uint32_t xk29 = u29; + uint32_t ti30 = _t[30U]; + uint32_t + v29 = + vb30 + + + ((va29 + ((vb30 & vd30) | (vc30 & ~vd30)) + xk29 + ti30) + << (uint32_t)14U + | (va29 + ((vb30 & vd30) | (vc30 & ~vd30)) + xk29 + ti30) >> (uint32_t)18U); + abcd[2U] = v29; + uint32_t va30 = abcd[1U]; + uint32_t vb31 = abcd[2U]; + uint32_t vc31 = abcd[3U]; + uint32_t vd31 = abcd[0U]; + uint8_t *b31 = x + (uint32_t)48U; + uint32_t u30 = load32_le(b31); + uint32_t xk30 = u30; + uint32_t ti31 = _t[31U]; + uint32_t + v30 = + vb31 + + + ((va30 + ((vb31 & vd31) | (vc31 & ~vd31)) + xk30 + ti31) + << (uint32_t)20U + | (va30 + ((vb31 & vd31) | (vc31 & ~vd31)) + xk30 + ti31) >> (uint32_t)12U); + abcd[1U] = v30; + uint32_t va31 = abcd[0U]; + uint32_t vb32 = abcd[1U]; + uint32_t vc32 = abcd[2U]; + uint32_t vd32 = abcd[3U]; + uint8_t *b32 = x + (uint32_t)20U; + uint32_t u31 = load32_le(b32); + uint32_t xk31 = u31; + uint32_t ti32 = _t[32U]; + uint32_t + v31 = + vb32 + + + ((va31 + (vb32 ^ (vc32 ^ vd32)) + xk31 + ti32) + << (uint32_t)4U + | (va31 + (vb32 ^ (vc32 ^ vd32)) + xk31 + ti32) >> (uint32_t)28U); + abcd[0U] = v31; + uint32_t va32 = abcd[3U]; + uint32_t vb33 = abcd[0U]; + uint32_t vc33 = abcd[1U]; + uint32_t vd33 = abcd[2U]; + uint8_t *b33 = x + (uint32_t)32U; + uint32_t u32 = load32_le(b33); + uint32_t xk32 = u32; + uint32_t ti33 = _t[33U]; + uint32_t + v32 = + vb33 + + + ((va32 + (vb33 ^ (vc33 ^ vd33)) + xk32 + ti33) + << (uint32_t)11U + | (va32 + (vb33 ^ (vc33 ^ vd33)) + xk32 + ti33) >> (uint32_t)21U); + abcd[3U] = v32; + uint32_t va33 = abcd[2U]; + uint32_t vb34 = abcd[3U]; + uint32_t vc34 = abcd[0U]; + uint32_t vd34 = abcd[1U]; + uint8_t *b34 = x + (uint32_t)44U; + uint32_t u33 = load32_le(b34); + uint32_t xk33 = u33; + uint32_t ti34 = _t[34U]; + uint32_t + v33 = + vb34 + + + ((va33 + (vb34 ^ (vc34 ^ vd34)) + xk33 + ti34) + << (uint32_t)16U + | (va33 + (vb34 ^ (vc34 ^ vd34)) + xk33 + ti34) >> (uint32_t)16U); + abcd[2U] = v33; + uint32_t va34 = abcd[1U]; + uint32_t vb35 = abcd[2U]; + uint32_t vc35 = abcd[3U]; + uint32_t vd35 = abcd[0U]; + uint8_t *b35 = x + (uint32_t)56U; + uint32_t u34 = load32_le(b35); + uint32_t xk34 = u34; + uint32_t ti35 = _t[35U]; + uint32_t + v34 = + vb35 + + + ((va34 + (vb35 ^ (vc35 ^ vd35)) + xk34 + ti35) + << (uint32_t)23U + | (va34 + (vb35 ^ (vc35 ^ vd35)) + xk34 + ti35) >> (uint32_t)9U); + abcd[1U] = v34; + uint32_t va35 = abcd[0U]; + uint32_t vb36 = abcd[1U]; + uint32_t vc36 = abcd[2U]; + uint32_t vd36 = abcd[3U]; + uint8_t *b36 = x + (uint32_t)4U; + uint32_t u35 = load32_le(b36); + uint32_t xk35 = u35; + uint32_t ti36 = _t[36U]; + uint32_t + v35 = + vb36 + + + ((va35 + (vb36 ^ (vc36 ^ vd36)) + xk35 + ti36) + << (uint32_t)4U + | (va35 + (vb36 ^ (vc36 ^ vd36)) + xk35 + ti36) >> (uint32_t)28U); + abcd[0U] = v35; + uint32_t va36 = abcd[3U]; + uint32_t vb37 = abcd[0U]; + 
uint32_t vc37 = abcd[1U]; + uint32_t vd37 = abcd[2U]; + uint8_t *b37 = x + (uint32_t)16U; + uint32_t u36 = load32_le(b37); + uint32_t xk36 = u36; + uint32_t ti37 = _t[37U]; + uint32_t + v36 = + vb37 + + + ((va36 + (vb37 ^ (vc37 ^ vd37)) + xk36 + ti37) + << (uint32_t)11U + | (va36 + (vb37 ^ (vc37 ^ vd37)) + xk36 + ti37) >> (uint32_t)21U); + abcd[3U] = v36; + uint32_t va37 = abcd[2U]; + uint32_t vb38 = abcd[3U]; + uint32_t vc38 = abcd[0U]; + uint32_t vd38 = abcd[1U]; + uint8_t *b38 = x + (uint32_t)28U; + uint32_t u37 = load32_le(b38); + uint32_t xk37 = u37; + uint32_t ti38 = _t[38U]; + uint32_t + v37 = + vb38 + + + ((va37 + (vb38 ^ (vc38 ^ vd38)) + xk37 + ti38) + << (uint32_t)16U + | (va37 + (vb38 ^ (vc38 ^ vd38)) + xk37 + ti38) >> (uint32_t)16U); + abcd[2U] = v37; + uint32_t va38 = abcd[1U]; + uint32_t vb39 = abcd[2U]; + uint32_t vc39 = abcd[3U]; + uint32_t vd39 = abcd[0U]; + uint8_t *b39 = x + (uint32_t)40U; + uint32_t u38 = load32_le(b39); + uint32_t xk38 = u38; + uint32_t ti39 = _t[39U]; + uint32_t + v38 = + vb39 + + + ((va38 + (vb39 ^ (vc39 ^ vd39)) + xk38 + ti39) + << (uint32_t)23U + | (va38 + (vb39 ^ (vc39 ^ vd39)) + xk38 + ti39) >> (uint32_t)9U); + abcd[1U] = v38; + uint32_t va39 = abcd[0U]; + uint32_t vb40 = abcd[1U]; + uint32_t vc40 = abcd[2U]; + uint32_t vd40 = abcd[3U]; + uint8_t *b40 = x + (uint32_t)52U; + uint32_t u39 = load32_le(b40); + uint32_t xk39 = u39; + uint32_t ti40 = _t[40U]; + uint32_t + v39 = + vb40 + + + ((va39 + (vb40 ^ (vc40 ^ vd40)) + xk39 + ti40) + << (uint32_t)4U + | (va39 + (vb40 ^ (vc40 ^ vd40)) + xk39 + ti40) >> (uint32_t)28U); + abcd[0U] = v39; + uint32_t va40 = abcd[3U]; + uint32_t vb41 = abcd[0U]; + uint32_t vc41 = abcd[1U]; + uint32_t vd41 = abcd[2U]; + uint8_t *b41 = x; + uint32_t u40 = load32_le(b41); + uint32_t xk40 = u40; + uint32_t ti41 = _t[41U]; + uint32_t + v40 = + vb41 + + + ((va40 + (vb41 ^ (vc41 ^ vd41)) + xk40 + ti41) + << (uint32_t)11U + | (va40 + (vb41 ^ (vc41 ^ vd41)) + xk40 + ti41) >> (uint32_t)21U); + abcd[3U] = v40; + uint32_t va41 = abcd[2U]; + uint32_t vb42 = abcd[3U]; + uint32_t vc42 = abcd[0U]; + uint32_t vd42 = abcd[1U]; + uint8_t *b42 = x + (uint32_t)12U; + uint32_t u41 = load32_le(b42); + uint32_t xk41 = u41; + uint32_t ti42 = _t[42U]; + uint32_t + v41 = + vb42 + + + ((va41 + (vb42 ^ (vc42 ^ vd42)) + xk41 + ti42) + << (uint32_t)16U + | (va41 + (vb42 ^ (vc42 ^ vd42)) + xk41 + ti42) >> (uint32_t)16U); + abcd[2U] = v41; + uint32_t va42 = abcd[1U]; + uint32_t vb43 = abcd[2U]; + uint32_t vc43 = abcd[3U]; + uint32_t vd43 = abcd[0U]; + uint8_t *b43 = x + (uint32_t)24U; + uint32_t u42 = load32_le(b43); + uint32_t xk42 = u42; + uint32_t ti43 = _t[43U]; + uint32_t + v42 = + vb43 + + + ((va42 + (vb43 ^ (vc43 ^ vd43)) + xk42 + ti43) + << (uint32_t)23U + | (va42 + (vb43 ^ (vc43 ^ vd43)) + xk42 + ti43) >> (uint32_t)9U); + abcd[1U] = v42; + uint32_t va43 = abcd[0U]; + uint32_t vb44 = abcd[1U]; + uint32_t vc44 = abcd[2U]; + uint32_t vd44 = abcd[3U]; + uint8_t *b44 = x + (uint32_t)36U; + uint32_t u43 = load32_le(b44); + uint32_t xk43 = u43; + uint32_t ti44 = _t[44U]; + uint32_t + v43 = + vb44 + + + ((va43 + (vb44 ^ (vc44 ^ vd44)) + xk43 + ti44) + << (uint32_t)4U + | (va43 + (vb44 ^ (vc44 ^ vd44)) + xk43 + ti44) >> (uint32_t)28U); + abcd[0U] = v43; + uint32_t va44 = abcd[3U]; + uint32_t vb45 = abcd[0U]; + uint32_t vc45 = abcd[1U]; + uint32_t vd45 = abcd[2U]; + uint8_t *b45 = x + (uint32_t)48U; + uint32_t u44 = load32_le(b45); + uint32_t xk44 = u44; + uint32_t ti45 = _t[45U]; + uint32_t + v44 = + vb45 + + + ((va44 + (vb45 ^ (vc45 ^ vd45)) + xk44 + 
ti45) + << (uint32_t)11U + | (va44 + (vb45 ^ (vc45 ^ vd45)) + xk44 + ti45) >> (uint32_t)21U); + abcd[3U] = v44; + uint32_t va45 = abcd[2U]; + uint32_t vb46 = abcd[3U]; + uint32_t vc46 = abcd[0U]; + uint32_t vd46 = abcd[1U]; + uint8_t *b46 = x + (uint32_t)60U; + uint32_t u45 = load32_le(b46); + uint32_t xk45 = u45; + uint32_t ti46 = _t[46U]; + uint32_t + v45 = + vb46 + + + ((va45 + (vb46 ^ (vc46 ^ vd46)) + xk45 + ti46) + << (uint32_t)16U + | (va45 + (vb46 ^ (vc46 ^ vd46)) + xk45 + ti46) >> (uint32_t)16U); + abcd[2U] = v45; + uint32_t va46 = abcd[1U]; + uint32_t vb47 = abcd[2U]; + uint32_t vc47 = abcd[3U]; + uint32_t vd47 = abcd[0U]; + uint8_t *b47 = x + (uint32_t)8U; + uint32_t u46 = load32_le(b47); + uint32_t xk46 = u46; + uint32_t ti47 = _t[47U]; + uint32_t + v46 = + vb47 + + + ((va46 + (vb47 ^ (vc47 ^ vd47)) + xk46 + ti47) + << (uint32_t)23U + | (va46 + (vb47 ^ (vc47 ^ vd47)) + xk46 + ti47) >> (uint32_t)9U); + abcd[1U] = v46; + uint32_t va47 = abcd[0U]; + uint32_t vb48 = abcd[1U]; + uint32_t vc48 = abcd[2U]; + uint32_t vd48 = abcd[3U]; + uint8_t *b48 = x; + uint32_t u47 = load32_le(b48); + uint32_t xk47 = u47; + uint32_t ti48 = _t[48U]; + uint32_t + v47 = + vb48 + + + ((va47 + (vc48 ^ (vb48 | ~vd48)) + xk47 + ti48) + << (uint32_t)6U + | (va47 + (vc48 ^ (vb48 | ~vd48)) + xk47 + ti48) >> (uint32_t)26U); + abcd[0U] = v47; + uint32_t va48 = abcd[3U]; + uint32_t vb49 = abcd[0U]; + uint32_t vc49 = abcd[1U]; + uint32_t vd49 = abcd[2U]; + uint8_t *b49 = x + (uint32_t)28U; + uint32_t u48 = load32_le(b49); + uint32_t xk48 = u48; + uint32_t ti49 = _t[49U]; + uint32_t + v48 = + vb49 + + + ((va48 + (vc49 ^ (vb49 | ~vd49)) + xk48 + ti49) + << (uint32_t)10U + | (va48 + (vc49 ^ (vb49 | ~vd49)) + xk48 + ti49) >> (uint32_t)22U); + abcd[3U] = v48; + uint32_t va49 = abcd[2U]; + uint32_t vb50 = abcd[3U]; + uint32_t vc50 = abcd[0U]; + uint32_t vd50 = abcd[1U]; + uint8_t *b50 = x + (uint32_t)56U; + uint32_t u49 = load32_le(b50); + uint32_t xk49 = u49; + uint32_t ti50 = _t[50U]; + uint32_t + v49 = + vb50 + + + ((va49 + (vc50 ^ (vb50 | ~vd50)) + xk49 + ti50) + << (uint32_t)15U + | (va49 + (vc50 ^ (vb50 | ~vd50)) + xk49 + ti50) >> (uint32_t)17U); + abcd[2U] = v49; + uint32_t va50 = abcd[1U]; + uint32_t vb51 = abcd[2U]; + uint32_t vc51 = abcd[3U]; + uint32_t vd51 = abcd[0U]; + uint8_t *b51 = x + (uint32_t)20U; + uint32_t u50 = load32_le(b51); + uint32_t xk50 = u50; + uint32_t ti51 = _t[51U]; + uint32_t + v50 = + vb51 + + + ((va50 + (vc51 ^ (vb51 | ~vd51)) + xk50 + ti51) + << (uint32_t)21U + | (va50 + (vc51 ^ (vb51 | ~vd51)) + xk50 + ti51) >> (uint32_t)11U); + abcd[1U] = v50; + uint32_t va51 = abcd[0U]; + uint32_t vb52 = abcd[1U]; + uint32_t vc52 = abcd[2U]; + uint32_t vd52 = abcd[3U]; + uint8_t *b52 = x + (uint32_t)48U; + uint32_t u51 = load32_le(b52); + uint32_t xk51 = u51; + uint32_t ti52 = _t[52U]; + uint32_t + v51 = + vb52 + + + ((va51 + (vc52 ^ (vb52 | ~vd52)) + xk51 + ti52) + << (uint32_t)6U + | (va51 + (vc52 ^ (vb52 | ~vd52)) + xk51 + ti52) >> (uint32_t)26U); + abcd[0U] = v51; + uint32_t va52 = abcd[3U]; + uint32_t vb53 = abcd[0U]; + uint32_t vc53 = abcd[1U]; + uint32_t vd53 = abcd[2U]; + uint8_t *b53 = x + (uint32_t)12U; + uint32_t u52 = load32_le(b53); + uint32_t xk52 = u52; + uint32_t ti53 = _t[53U]; + uint32_t + v52 = + vb53 + + + ((va52 + (vc53 ^ (vb53 | ~vd53)) + xk52 + ti53) + << (uint32_t)10U + | (va52 + (vc53 ^ (vb53 | ~vd53)) + xk52 + ti53) >> (uint32_t)22U); + abcd[3U] = v52; + uint32_t va53 = abcd[2U]; + uint32_t vb54 = abcd[3U]; + uint32_t vc54 = abcd[0U]; + uint32_t vd54 = abcd[1U]; + uint8_t 
*b54 = x + (uint32_t)40U; + uint32_t u53 = load32_le(b54); + uint32_t xk53 = u53; + uint32_t ti54 = _t[54U]; + uint32_t + v53 = + vb54 + + + ((va53 + (vc54 ^ (vb54 | ~vd54)) + xk53 + ti54) + << (uint32_t)15U + | (va53 + (vc54 ^ (vb54 | ~vd54)) + xk53 + ti54) >> (uint32_t)17U); + abcd[2U] = v53; + uint32_t va54 = abcd[1U]; + uint32_t vb55 = abcd[2U]; + uint32_t vc55 = abcd[3U]; + uint32_t vd55 = abcd[0U]; + uint8_t *b55 = x + (uint32_t)4U; + uint32_t u54 = load32_le(b55); + uint32_t xk54 = u54; + uint32_t ti55 = _t[55U]; + uint32_t + v54 = + vb55 + + + ((va54 + (vc55 ^ (vb55 | ~vd55)) + xk54 + ti55) + << (uint32_t)21U + | (va54 + (vc55 ^ (vb55 | ~vd55)) + xk54 + ti55) >> (uint32_t)11U); + abcd[1U] = v54; + uint32_t va55 = abcd[0U]; + uint32_t vb56 = abcd[1U]; + uint32_t vc56 = abcd[2U]; + uint32_t vd56 = abcd[3U]; + uint8_t *b56 = x + (uint32_t)32U; + uint32_t u55 = load32_le(b56); + uint32_t xk55 = u55; + uint32_t ti56 = _t[56U]; + uint32_t + v55 = + vb56 + + + ((va55 + (vc56 ^ (vb56 | ~vd56)) + xk55 + ti56) + << (uint32_t)6U + | (va55 + (vc56 ^ (vb56 | ~vd56)) + xk55 + ti56) >> (uint32_t)26U); + abcd[0U] = v55; + uint32_t va56 = abcd[3U]; + uint32_t vb57 = abcd[0U]; + uint32_t vc57 = abcd[1U]; + uint32_t vd57 = abcd[2U]; + uint8_t *b57 = x + (uint32_t)60U; + uint32_t u56 = load32_le(b57); + uint32_t xk56 = u56; + uint32_t ti57 = _t[57U]; + uint32_t + v56 = + vb57 + + + ((va56 + (vc57 ^ (vb57 | ~vd57)) + xk56 + ti57) + << (uint32_t)10U + | (va56 + (vc57 ^ (vb57 | ~vd57)) + xk56 + ti57) >> (uint32_t)22U); + abcd[3U] = v56; + uint32_t va57 = abcd[2U]; + uint32_t vb58 = abcd[3U]; + uint32_t vc58 = abcd[0U]; + uint32_t vd58 = abcd[1U]; + uint8_t *b58 = x + (uint32_t)24U; + uint32_t u57 = load32_le(b58); + uint32_t xk57 = u57; + uint32_t ti58 = _t[58U]; + uint32_t + v57 = + vb58 + + + ((va57 + (vc58 ^ (vb58 | ~vd58)) + xk57 + ti58) + << (uint32_t)15U + | (va57 + (vc58 ^ (vb58 | ~vd58)) + xk57 + ti58) >> (uint32_t)17U); + abcd[2U] = v57; + uint32_t va58 = abcd[1U]; + uint32_t vb59 = abcd[2U]; + uint32_t vc59 = abcd[3U]; + uint32_t vd59 = abcd[0U]; + uint8_t *b59 = x + (uint32_t)52U; + uint32_t u58 = load32_le(b59); + uint32_t xk58 = u58; + uint32_t ti59 = _t[59U]; + uint32_t + v58 = + vb59 + + + ((va58 + (vc59 ^ (vb59 | ~vd59)) + xk58 + ti59) + << (uint32_t)21U + | (va58 + (vc59 ^ (vb59 | ~vd59)) + xk58 + ti59) >> (uint32_t)11U); + abcd[1U] = v58; + uint32_t va59 = abcd[0U]; + uint32_t vb60 = abcd[1U]; + uint32_t vc60 = abcd[2U]; + uint32_t vd60 = abcd[3U]; + uint8_t *b60 = x + (uint32_t)16U; + uint32_t u59 = load32_le(b60); + uint32_t xk59 = u59; + uint32_t ti60 = _t[60U]; + uint32_t + v59 = + vb60 + + + ((va59 + (vc60 ^ (vb60 | ~vd60)) + xk59 + ti60) + << (uint32_t)6U + | (va59 + (vc60 ^ (vb60 | ~vd60)) + xk59 + ti60) >> (uint32_t)26U); + abcd[0U] = v59; + uint32_t va60 = abcd[3U]; + uint32_t vb61 = abcd[0U]; + uint32_t vc61 = abcd[1U]; + uint32_t vd61 = abcd[2U]; + uint8_t *b61 = x + (uint32_t)44U; + uint32_t u60 = load32_le(b61); + uint32_t xk60 = u60; + uint32_t ti61 = _t[61U]; + uint32_t + v60 = + vb61 + + + ((va60 + (vc61 ^ (vb61 | ~vd61)) + xk60 + ti61) + << (uint32_t)10U + | (va60 + (vc61 ^ (vb61 | ~vd61)) + xk60 + ti61) >> (uint32_t)22U); + abcd[3U] = v60; + uint32_t va61 = abcd[2U]; + uint32_t vb62 = abcd[3U]; + uint32_t vc62 = abcd[0U]; + uint32_t vd62 = abcd[1U]; + uint8_t *b62 = x + (uint32_t)8U; + uint32_t u61 = load32_le(b62); + uint32_t xk61 = u61; + uint32_t ti62 = _t[62U]; + uint32_t + v61 = + vb62 + + + ((va61 + (vc62 ^ (vb62 | ~vd62)) + xk61 + ti62) + << (uint32_t)15U + | 
(va61 + (vc62 ^ (vb62 | ~vd62)) + xk61 + ti62) >> (uint32_t)17U); + abcd[2U] = v61; + uint32_t va62 = abcd[1U]; + uint32_t vb = abcd[2U]; + uint32_t vc = abcd[3U]; + uint32_t vd = abcd[0U]; + uint8_t *b63 = x + (uint32_t)36U; + uint32_t u62 = load32_le(b63); + uint32_t xk62 = u62; + uint32_t ti = _t[63U]; + uint32_t + v62 = + vb + + + ((va62 + (vc ^ (vb | ~vd)) + xk62 + ti) + << (uint32_t)21U + | (va62 + (vc ^ (vb | ~vd)) + xk62 + ti) >> (uint32_t)11U); + abcd[1U] = v62; + uint32_t a = abcd[0U]; + uint32_t b = abcd[1U]; + uint32_t c = abcd[2U]; + uint32_t d = abcd[3U]; + abcd[0U] = a + aa; + abcd[1U] = b + bb; + abcd[2U] = c + cc; + abcd[3U] = d + dd; +} + +static void legacy_pad(uint64_t len, uint8_t *dst) +{ + uint8_t *dst1 = dst; + dst1[0U] = (uint8_t)0x80U; + uint8_t *dst2 = dst + (uint32_t)1U; + for + (uint32_t + i = (uint32_t)0U; + i + < ((uint32_t)128U - ((uint32_t)9U + (uint32_t)(len % (uint64_t)(uint32_t)64U))) % (uint32_t)64U; + i++) + { + dst2[i] = (uint8_t)0U; + } + uint8_t + *dst3 = + dst + + + (uint32_t)1U + + + ((uint32_t)128U - ((uint32_t)9U + (uint32_t)(len % (uint64_t)(uint32_t)64U))) + % (uint32_t)64U; + store64_le(dst3, len << (uint32_t)3U); +} + +void Hacl_Hash_Core_MD5_legacy_finish(uint32_t *s, uint8_t *dst) +{ + KRML_MAYBE_FOR4(i, + (uint32_t)0U, + (uint32_t)4U, + (uint32_t)1U, + store32_le(dst + i * (uint32_t)4U, s[i]);); +} + +void Hacl_Hash_MD5_legacy_update_multi(uint32_t *s, uint8_t *blocks, uint32_t n_blocks) +{ + for (uint32_t i = (uint32_t)0U; i < n_blocks; i++) + { + uint32_t sz = (uint32_t)64U; + uint8_t *block = blocks + sz * i; + legacy_update(s, block); + } +} + +void +Hacl_Hash_MD5_legacy_update_last( + uint32_t *s, + uint64_t prev_len, + uint8_t *input, + uint32_t input_len +) +{ + uint32_t blocks_n = input_len / (uint32_t)64U; + uint32_t blocks_len = blocks_n * (uint32_t)64U; + uint8_t *blocks = input; + uint32_t rest_len = input_len - blocks_len; + uint8_t *rest = input + blocks_len; + Hacl_Hash_MD5_legacy_update_multi(s, blocks, blocks_n); + uint64_t total_input_len = prev_len + (uint64_t)input_len; + uint32_t + pad_len = + (uint32_t)1U + + + ((uint32_t)128U - ((uint32_t)9U + (uint32_t)(total_input_len % (uint64_t)(uint32_t)64U))) + % (uint32_t)64U + + (uint32_t)8U; + uint32_t tmp_len = rest_len + pad_len; + uint8_t tmp_twoblocks[128U] = { 0U }; + uint8_t *tmp = tmp_twoblocks; + uint8_t *tmp_rest = tmp; + uint8_t *tmp_pad = tmp + rest_len; + memcpy(tmp_rest, rest, rest_len * sizeof (uint8_t)); + legacy_pad(total_input_len, tmp_pad); + Hacl_Hash_MD5_legacy_update_multi(s, tmp, tmp_len / (uint32_t)64U); +} + +void Hacl_Hash_MD5_legacy_hash(uint8_t *input, uint32_t input_len, uint8_t *dst) +{ + uint32_t + s[4U] = + { (uint32_t)0x67452301U, (uint32_t)0xefcdab89U, (uint32_t)0x98badcfeU, (uint32_t)0x10325476U }; + uint32_t blocks_n0 = input_len / (uint32_t)64U; + uint32_t blocks_n1; + if (input_len % (uint32_t)64U == (uint32_t)0U && blocks_n0 > (uint32_t)0U) + { + blocks_n1 = blocks_n0 - (uint32_t)1U; + } + else + { + blocks_n1 = blocks_n0; + } + uint32_t blocks_len0 = blocks_n1 * (uint32_t)64U; + uint8_t *blocks0 = input; + uint32_t rest_len0 = input_len - blocks_len0; + uint8_t *rest0 = input + blocks_len0; + uint32_t blocks_n = blocks_n1; + uint32_t blocks_len = blocks_len0; + uint8_t *blocks = blocks0; + uint32_t rest_len = rest_len0; + uint8_t *rest = rest0; + Hacl_Hash_MD5_legacy_update_multi(s, blocks, blocks_n); + Hacl_Hash_MD5_legacy_update_last(s, (uint64_t)blocks_len, rest, rest_len); + Hacl_Hash_Core_MD5_legacy_finish(s, dst); +} + 
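As a rough illustration (not part of the committed file), the one-shot entry point defined above, Hacl_Hash_MD5_legacy_hash, takes an input buffer, its length, and a 16-byte output buffer. A minimal caller might look like the following sketch; the demo function, the sample message, and the assumption that the module's internal header is on the include path are all illustrative:

    #include <stdint.h>
    #include <stdio.h>
    #include "internal/Hacl_Hash_MD5.h"   /* assumed location of the declaration */

    /* Hash a short message with the one-shot MD5 API and print the hex digest. */
    static void demo_md5_oneshot(void)
    {
      uint8_t msg[3U] = { (uint8_t)'a', (uint8_t)'b', (uint8_t)'c' };
      uint8_t digest[16U] = { 0U };   /* MD5 digest is 4 x 32-bit words = 16 bytes */
      Hacl_Hash_MD5_legacy_hash(msg, (uint32_t)3U, digest);
      for (uint32_t i = (uint32_t)0U; i < (uint32_t)16U; i++)
      {
        printf("%02x", digest[i]);
      }
      printf("\n");
    }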
+Hacl_Streaming_MD_state_32 *Hacl_Streaming_MD5_legacy_create_in(void) +{ + uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)64U, sizeof (uint8_t)); + uint32_t *block_state = (uint32_t *)KRML_HOST_CALLOC((uint32_t)4U, sizeof (uint32_t)); + Hacl_Streaming_MD_state_32 + s = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + Hacl_Streaming_MD_state_32 + *p = (Hacl_Streaming_MD_state_32 *)KRML_HOST_MALLOC(sizeof (Hacl_Streaming_MD_state_32)); + p[0U] = s; + Hacl_Hash_Core_MD5_legacy_init(block_state); + return p; +} + +void Hacl_Streaming_MD5_legacy_init(Hacl_Streaming_MD_state_32 *s) +{ + Hacl_Streaming_MD_state_32 scrut = *s; + uint8_t *buf = scrut.buf; + uint32_t *block_state = scrut.block_state; + Hacl_Hash_Core_MD5_legacy_init(block_state); + Hacl_Streaming_MD_state_32 + tmp = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + s[0U] = tmp; +} + +/** +0 = success, 1 = max length exceeded +*/ +uint32_t +Hacl_Streaming_MD5_legacy_update(Hacl_Streaming_MD_state_32 *p, uint8_t *data, uint32_t len) +{ + Hacl_Streaming_MD_state_32 s = *p; + uint64_t total_len = s.total_len; + if ((uint64_t)len > (uint64_t)2305843009213693951U - total_len) + { + return (uint32_t)1U; + } + uint32_t sz; + if (total_len % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len > (uint64_t)0U) + { + sz = (uint32_t)64U; + } + else + { + sz = (uint32_t)(total_len % (uint64_t)(uint32_t)64U); + } + if (len <= (uint32_t)64U - sz) + { + Hacl_Streaming_MD_state_32 s1 = *p; + uint32_t *block_state1 = s1.block_state; + uint8_t *buf = s1.buf; + uint64_t total_len1 = s1.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)64U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)64U); + } + uint8_t *buf2 = buf + sz1; + memcpy(buf2, data, len * sizeof (uint8_t)); + uint64_t total_len2 = total_len1 + (uint64_t)len; + *p + = + ( + (Hacl_Streaming_MD_state_32){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len2 + } + ); + } + else if (sz == (uint32_t)0U) + { + Hacl_Streaming_MD_state_32 s1 = *p; + uint32_t *block_state1 = s1.block_state; + uint8_t *buf = s1.buf; + uint64_t total_len1 = s1.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)64U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)64U); + } + if (!(sz1 == (uint32_t)0U)) + { + Hacl_Hash_MD5_legacy_update_multi(block_state1, buf, (uint32_t)1U); + } + uint32_t ite; + if ((uint64_t)len % (uint64_t)(uint32_t)64U == (uint64_t)0U && (uint64_t)len > (uint64_t)0U) + { + ite = (uint32_t)64U; + } + else + { + ite = (uint32_t)((uint64_t)len % (uint64_t)(uint32_t)64U); + } + uint32_t n_blocks = (len - ite) / (uint32_t)64U; + uint32_t data1_len = n_blocks * (uint32_t)64U; + uint32_t data2_len = len - data1_len; + uint8_t *data1 = data; + uint8_t *data2 = data + data1_len; + Hacl_Hash_MD5_legacy_update_multi(block_state1, data1, data1_len / (uint32_t)64U); + uint8_t *dst = buf; + memcpy(dst, data2, data2_len * sizeof (uint8_t)); + *p + = + ( + (Hacl_Streaming_MD_state_32){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len1 + (uint64_t)len + } + ); + } + else + { + uint32_t diff = (uint32_t)64U - sz; + uint8_t *data1 = data; + uint8_t *data2 = data + diff; + Hacl_Streaming_MD_state_32 s1 = *p; + uint32_t *block_state10 = s1.block_state; + uint8_t *buf0 = s1.buf; + uint64_t 
total_len10 = s1.total_len; + uint32_t sz10; + if (total_len10 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len10 > (uint64_t)0U) + { + sz10 = (uint32_t)64U; + } + else + { + sz10 = (uint32_t)(total_len10 % (uint64_t)(uint32_t)64U); + } + uint8_t *buf2 = buf0 + sz10; + memcpy(buf2, data1, diff * sizeof (uint8_t)); + uint64_t total_len2 = total_len10 + (uint64_t)diff; + *p + = + ( + (Hacl_Streaming_MD_state_32){ + .block_state = block_state10, + .buf = buf0, + .total_len = total_len2 + } + ); + Hacl_Streaming_MD_state_32 s10 = *p; + uint32_t *block_state1 = s10.block_state; + uint8_t *buf = s10.buf; + uint64_t total_len1 = s10.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)64U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)64U); + } + if (!(sz1 == (uint32_t)0U)) + { + Hacl_Hash_MD5_legacy_update_multi(block_state1, buf, (uint32_t)1U); + } + uint32_t ite; + if + ( + (uint64_t)(len - diff) + % (uint64_t)(uint32_t)64U + == (uint64_t)0U + && (uint64_t)(len - diff) > (uint64_t)0U + ) + { + ite = (uint32_t)64U; + } + else + { + ite = (uint32_t)((uint64_t)(len - diff) % (uint64_t)(uint32_t)64U); + } + uint32_t n_blocks = (len - diff - ite) / (uint32_t)64U; + uint32_t data1_len = n_blocks * (uint32_t)64U; + uint32_t data2_len = len - diff - data1_len; + uint8_t *data11 = data2; + uint8_t *data21 = data2 + data1_len; + Hacl_Hash_MD5_legacy_update_multi(block_state1, data11, data1_len / (uint32_t)64U); + uint8_t *dst = buf; + memcpy(dst, data21, data2_len * sizeof (uint8_t)); + *p + = + ( + (Hacl_Streaming_MD_state_32){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len1 + (uint64_t)(len - diff) + } + ); + } + return (uint32_t)0U; +} + +void Hacl_Streaming_MD5_legacy_finish(Hacl_Streaming_MD_state_32 *p, uint8_t *dst) +{ + Hacl_Streaming_MD_state_32 scrut = *p; + uint32_t *block_state = scrut.block_state; + uint8_t *buf_ = scrut.buf; + uint64_t total_len = scrut.total_len; + uint32_t r; + if (total_len % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len > (uint64_t)0U) + { + r = (uint32_t)64U; + } + else + { + r = (uint32_t)(total_len % (uint64_t)(uint32_t)64U); + } + uint8_t *buf_1 = buf_; + uint32_t tmp_block_state[4U] = { 0U }; + memcpy(tmp_block_state, block_state, (uint32_t)4U * sizeof (uint32_t)); + uint32_t ite; + if (r % (uint32_t)64U == (uint32_t)0U && r > (uint32_t)0U) + { + ite = (uint32_t)64U; + } + else + { + ite = r % (uint32_t)64U; + } + uint8_t *buf_last = buf_1 + r - ite; + uint8_t *buf_multi = buf_1; + Hacl_Hash_MD5_legacy_update_multi(tmp_block_state, buf_multi, (uint32_t)0U); + uint64_t prev_len_last = total_len - (uint64_t)r; + Hacl_Hash_MD5_legacy_update_last(tmp_block_state, prev_len_last, buf_last, r); + Hacl_Hash_Core_MD5_legacy_finish(tmp_block_state, dst); +} + +void Hacl_Streaming_MD5_legacy_free(Hacl_Streaming_MD_state_32 *s) +{ + Hacl_Streaming_MD_state_32 scrut = *s; + uint8_t *buf = scrut.buf; + uint32_t *block_state = scrut.block_state; + KRML_HOST_FREE(block_state); + KRML_HOST_FREE(buf); + KRML_HOST_FREE(s); +} + +Hacl_Streaming_MD_state_32 *Hacl_Streaming_MD5_legacy_copy(Hacl_Streaming_MD_state_32 *s0) +{ + Hacl_Streaming_MD_state_32 scrut = *s0; + uint32_t *block_state0 = scrut.block_state; + uint8_t *buf0 = scrut.buf; + uint64_t total_len0 = scrut.total_len; + uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)64U, sizeof (uint8_t)); + memcpy(buf, buf0, (uint32_t)64U * sizeof (uint8_t)); + uint32_t *block_state = (uint32_t 
*)KRML_HOST_CALLOC((uint32_t)4U, sizeof (uint32_t)); + memcpy(block_state, block_state0, (uint32_t)4U * sizeof (uint32_t)); + Hacl_Streaming_MD_state_32 + s = { .block_state = block_state, .buf = buf, .total_len = total_len0 }; + Hacl_Streaming_MD_state_32 + *p = (Hacl_Streaming_MD_state_32 *)KRML_HOST_MALLOC(sizeof (Hacl_Streaming_MD_state_32)); + p[0U] = s; + return p; +} + +void Hacl_Streaming_MD5_legacy_hash(uint8_t *input, uint32_t input_len, uint8_t *dst) +{ + Hacl_Hash_MD5_legacy_hash(input, input_len, dst); +} + diff --git a/Modules/_hacl/Hacl_Hash_MD5.h b/Modules/_hacl/Hacl_Hash_MD5.h new file mode 100644 index 000000000000..015e3668751b --- /dev/null +++ b/Modules/_hacl/Hacl_Hash_MD5.h @@ -0,0 +1,65 @@ +/* MIT License + * + * Copyright (c) 2016-2022 INRIA, CMU and Microsoft Corporation + * Copyright (c) 2022-2023 HACL* Contributors + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. 
+ */
+
+
+#ifndef __Hacl_Hash_MD5_H
+#define __Hacl_Hash_MD5_H
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+#include <string.h>
+#include "krml/types.h"
+#include "krml/lowstar_endianness.h"
+#include "krml/internal/target.h"
+
+#include "Hacl_Streaming_Types.h"
+
+typedef Hacl_Streaming_MD_state_32 Hacl_Streaming_MD5_state;
+
+Hacl_Streaming_MD_state_32 *Hacl_Streaming_MD5_legacy_create_in(void);
+
+void Hacl_Streaming_MD5_legacy_init(Hacl_Streaming_MD_state_32 *s);
+
+/**
+0 = success, 1 = max length exceeded
+*/
+uint32_t
+Hacl_Streaming_MD5_legacy_update(Hacl_Streaming_MD_state_32 *p, uint8_t *data, uint32_t len);
+
+void Hacl_Streaming_MD5_legacy_finish(Hacl_Streaming_MD_state_32 *p, uint8_t *dst);
+
+void Hacl_Streaming_MD5_legacy_free(Hacl_Streaming_MD_state_32 *s);
+
+Hacl_Streaming_MD_state_32 *Hacl_Streaming_MD5_legacy_copy(Hacl_Streaming_MD_state_32 *s0);
+
+void Hacl_Streaming_MD5_legacy_hash(uint8_t *input, uint32_t input_len, uint8_t *dst);
+
+#if defined(__cplusplus)
+}
+#endif
+
+#define __Hacl_Hash_MD5_H_DEFINED
+#endif
diff --git a/Modules/_hacl/Hacl_Hash_SHA1.c b/Modules/_hacl/Hacl_Hash_SHA1.c
new file mode 100644
index 000000000000..e155e338271c
--- /dev/null
+++ b/Modules/_hacl/Hacl_Hash_SHA1.c
@@ -0,0 +1,508 @@
+/* MIT License
+ *
+ * Copyright (c) 2016-2022 INRIA, CMU and Microsoft Corporation
+ * Copyright (c) 2022-2023 HACL* Contributors
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in all
+ * copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
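The public Hacl_Hash_MD5.h interface shown above follows the usual streaming life cycle: allocate a state with create_in, feed data with update (which returns 0 on success and 1 once the maximum input length is exceeded), write the 16-byte digest with finish, and release the state with free. A rough caller sketch, separate from the committed files and with buffer names invented for the example:

    #include "Hacl_Hash_MD5.h"

    /* Feed two chunks into a streaming MD5 state and finalize into a 16-byte digest. */
    static uint32_t demo_md5_streaming(uint8_t *part1, uint32_t len1,
                                       uint8_t *part2, uint32_t len2,
                                       uint8_t *digest16)
    {
      Hacl_Streaming_MD_state_32 *st = Hacl_Streaming_MD5_legacy_create_in();
      uint32_t rc = Hacl_Streaming_MD5_legacy_update(st, part1, len1);
      if (rc == (uint32_t)0U)
      {
        rc = Hacl_Streaming_MD5_legacy_update(st, part2, len2);
      }
      if (rc == (uint32_t)0U)
      {
        /* finish works on a temporary copy of the block state, so st stays usable */
        Hacl_Streaming_MD5_legacy_finish(st, digest16);
      }
      Hacl_Streaming_MD5_legacy_free(st);
      return rc;
    }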
+ */ + + +#include "internal/Hacl_Hash_SHA1.h" + +static uint32_t +_h0[5U] = + { + (uint32_t)0x67452301U, (uint32_t)0xefcdab89U, (uint32_t)0x98badcfeU, (uint32_t)0x10325476U, + (uint32_t)0xc3d2e1f0U + }; + +void Hacl_Hash_Core_SHA1_legacy_init(uint32_t *s) +{ + KRML_MAYBE_FOR5(i, (uint32_t)0U, (uint32_t)5U, (uint32_t)1U, s[i] = _h0[i];); +} + +static void legacy_update(uint32_t *h, uint8_t *l) +{ + uint32_t ha = h[0U]; + uint32_t hb = h[1U]; + uint32_t hc = h[2U]; + uint32_t hd = h[3U]; + uint32_t he = h[4U]; + uint32_t _w[80U] = { 0U }; + for (uint32_t i = (uint32_t)0U; i < (uint32_t)80U; i++) + { + uint32_t v; + if (i < (uint32_t)16U) + { + uint8_t *b = l + i * (uint32_t)4U; + uint32_t u = load32_be(b); + v = u; + } + else + { + uint32_t wmit3 = _w[i - (uint32_t)3U]; + uint32_t wmit8 = _w[i - (uint32_t)8U]; + uint32_t wmit14 = _w[i - (uint32_t)14U]; + uint32_t wmit16 = _w[i - (uint32_t)16U]; + v = + (wmit3 ^ (wmit8 ^ (wmit14 ^ wmit16))) + << (uint32_t)1U + | (wmit3 ^ (wmit8 ^ (wmit14 ^ wmit16))) >> (uint32_t)31U; + } + _w[i] = v; + } + for (uint32_t i = (uint32_t)0U; i < (uint32_t)80U; i++) + { + uint32_t _a = h[0U]; + uint32_t _b = h[1U]; + uint32_t _c = h[2U]; + uint32_t _d = h[3U]; + uint32_t _e = h[4U]; + uint32_t wmit = _w[i]; + uint32_t ite0; + if (i < (uint32_t)20U) + { + ite0 = (_b & _c) ^ (~_b & _d); + } + else if ((uint32_t)39U < i && i < (uint32_t)60U) + { + ite0 = (_b & _c) ^ ((_b & _d) ^ (_c & _d)); + } + else + { + ite0 = _b ^ (_c ^ _d); + } + uint32_t ite; + if (i < (uint32_t)20U) + { + ite = (uint32_t)0x5a827999U; + } + else if (i < (uint32_t)40U) + { + ite = (uint32_t)0x6ed9eba1U; + } + else if (i < (uint32_t)60U) + { + ite = (uint32_t)0x8f1bbcdcU; + } + else + { + ite = (uint32_t)0xca62c1d6U; + } + uint32_t _T = (_a << (uint32_t)5U | _a >> (uint32_t)27U) + ite0 + _e + ite + wmit; + h[0U] = _T; + h[1U] = _a; + h[2U] = _b << (uint32_t)30U | _b >> (uint32_t)2U; + h[3U] = _c; + h[4U] = _d; + } + for (uint32_t i = (uint32_t)0U; i < (uint32_t)80U; i++) + { + _w[i] = (uint32_t)0U; + } + uint32_t sta = h[0U]; + uint32_t stb = h[1U]; + uint32_t stc = h[2U]; + uint32_t std = h[3U]; + uint32_t ste = h[4U]; + h[0U] = sta + ha; + h[1U] = stb + hb; + h[2U] = stc + hc; + h[3U] = std + hd; + h[4U] = ste + he; +} + +static void legacy_pad(uint64_t len, uint8_t *dst) +{ + uint8_t *dst1 = dst; + dst1[0U] = (uint8_t)0x80U; + uint8_t *dst2 = dst + (uint32_t)1U; + for + (uint32_t + i = (uint32_t)0U; + i + < ((uint32_t)128U - ((uint32_t)9U + (uint32_t)(len % (uint64_t)(uint32_t)64U))) % (uint32_t)64U; + i++) + { + dst2[i] = (uint8_t)0U; + } + uint8_t + *dst3 = + dst + + + (uint32_t)1U + + + ((uint32_t)128U - ((uint32_t)9U + (uint32_t)(len % (uint64_t)(uint32_t)64U))) + % (uint32_t)64U; + store64_be(dst3, len << (uint32_t)3U); +} + +void Hacl_Hash_Core_SHA1_legacy_finish(uint32_t *s, uint8_t *dst) +{ + KRML_MAYBE_FOR5(i, + (uint32_t)0U, + (uint32_t)5U, + (uint32_t)1U, + store32_be(dst + i * (uint32_t)4U, s[i]);); +} + +void Hacl_Hash_SHA1_legacy_update_multi(uint32_t *s, uint8_t *blocks, uint32_t n_blocks) +{ + for (uint32_t i = (uint32_t)0U; i < n_blocks; i++) + { + uint32_t sz = (uint32_t)64U; + uint8_t *block = blocks + sz * i; + legacy_update(s, block); + } +} + +void +Hacl_Hash_SHA1_legacy_update_last( + uint32_t *s, + uint64_t prev_len, + uint8_t *input, + uint32_t input_len +) +{ + uint32_t blocks_n = input_len / (uint32_t)64U; + uint32_t blocks_len = blocks_n * (uint32_t)64U; + uint8_t *blocks = input; + uint32_t rest_len = input_len - blocks_len; + uint8_t *rest = input + blocks_len; 
+ Hacl_Hash_SHA1_legacy_update_multi(s, blocks, blocks_n); + uint64_t total_input_len = prev_len + (uint64_t)input_len; + uint32_t + pad_len = + (uint32_t)1U + + + ((uint32_t)128U - ((uint32_t)9U + (uint32_t)(total_input_len % (uint64_t)(uint32_t)64U))) + % (uint32_t)64U + + (uint32_t)8U; + uint32_t tmp_len = rest_len + pad_len; + uint8_t tmp_twoblocks[128U] = { 0U }; + uint8_t *tmp = tmp_twoblocks; + uint8_t *tmp_rest = tmp; + uint8_t *tmp_pad = tmp + rest_len; + memcpy(tmp_rest, rest, rest_len * sizeof (uint8_t)); + legacy_pad(total_input_len, tmp_pad); + Hacl_Hash_SHA1_legacy_update_multi(s, tmp, tmp_len / (uint32_t)64U); +} + +void Hacl_Hash_SHA1_legacy_hash(uint8_t *input, uint32_t input_len, uint8_t *dst) +{ + uint32_t + s[5U] = + { + (uint32_t)0x67452301U, (uint32_t)0xefcdab89U, (uint32_t)0x98badcfeU, (uint32_t)0x10325476U, + (uint32_t)0xc3d2e1f0U + }; + uint32_t blocks_n0 = input_len / (uint32_t)64U; + uint32_t blocks_n1; + if (input_len % (uint32_t)64U == (uint32_t)0U && blocks_n0 > (uint32_t)0U) + { + blocks_n1 = blocks_n0 - (uint32_t)1U; + } + else + { + blocks_n1 = blocks_n0; + } + uint32_t blocks_len0 = blocks_n1 * (uint32_t)64U; + uint8_t *blocks0 = input; + uint32_t rest_len0 = input_len - blocks_len0; + uint8_t *rest0 = input + blocks_len0; + uint32_t blocks_n = blocks_n1; + uint32_t blocks_len = blocks_len0; + uint8_t *blocks = blocks0; + uint32_t rest_len = rest_len0; + uint8_t *rest = rest0; + Hacl_Hash_SHA1_legacy_update_multi(s, blocks, blocks_n); + Hacl_Hash_SHA1_legacy_update_last(s, (uint64_t)blocks_len, rest, rest_len); + Hacl_Hash_Core_SHA1_legacy_finish(s, dst); +} + +Hacl_Streaming_MD_state_32 *Hacl_Streaming_SHA1_legacy_create_in(void) +{ + uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)64U, sizeof (uint8_t)); + uint32_t *block_state = (uint32_t *)KRML_HOST_CALLOC((uint32_t)5U, sizeof (uint32_t)); + Hacl_Streaming_MD_state_32 + s = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + Hacl_Streaming_MD_state_32 + *p = (Hacl_Streaming_MD_state_32 *)KRML_HOST_MALLOC(sizeof (Hacl_Streaming_MD_state_32)); + p[0U] = s; + Hacl_Hash_Core_SHA1_legacy_init(block_state); + return p; +} + +void Hacl_Streaming_SHA1_legacy_init(Hacl_Streaming_MD_state_32 *s) +{ + Hacl_Streaming_MD_state_32 scrut = *s; + uint8_t *buf = scrut.buf; + uint32_t *block_state = scrut.block_state; + Hacl_Hash_Core_SHA1_legacy_init(block_state); + Hacl_Streaming_MD_state_32 + tmp = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; + s[0U] = tmp; +} + +/** +0 = success, 1 = max length exceeded +*/ +uint32_t +Hacl_Streaming_SHA1_legacy_update(Hacl_Streaming_MD_state_32 *p, uint8_t *data, uint32_t len) +{ + Hacl_Streaming_MD_state_32 s = *p; + uint64_t total_len = s.total_len; + if ((uint64_t)len > (uint64_t)2305843009213693951U - total_len) + { + return (uint32_t)1U; + } + uint32_t sz; + if (total_len % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len > (uint64_t)0U) + { + sz = (uint32_t)64U; + } + else + { + sz = (uint32_t)(total_len % (uint64_t)(uint32_t)64U); + } + if (len <= (uint32_t)64U - sz) + { + Hacl_Streaming_MD_state_32 s1 = *p; + uint32_t *block_state1 = s1.block_state; + uint8_t *buf = s1.buf; + uint64_t total_len1 = s1.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)64U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)64U); + } + uint8_t *buf2 = buf + sz1; + memcpy(buf2, data, len * sizeof (uint8_t)); + uint64_t 
total_len2 = total_len1 + (uint64_t)len; + *p + = + ( + (Hacl_Streaming_MD_state_32){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len2 + } + ); + } + else if (sz == (uint32_t)0U) + { + Hacl_Streaming_MD_state_32 s1 = *p; + uint32_t *block_state1 = s1.block_state; + uint8_t *buf = s1.buf; + uint64_t total_len1 = s1.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)64U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)64U); + } + if (!(sz1 == (uint32_t)0U)) + { + Hacl_Hash_SHA1_legacy_update_multi(block_state1, buf, (uint32_t)1U); + } + uint32_t ite; + if ((uint64_t)len % (uint64_t)(uint32_t)64U == (uint64_t)0U && (uint64_t)len > (uint64_t)0U) + { + ite = (uint32_t)64U; + } + else + { + ite = (uint32_t)((uint64_t)len % (uint64_t)(uint32_t)64U); + } + uint32_t n_blocks = (len - ite) / (uint32_t)64U; + uint32_t data1_len = n_blocks * (uint32_t)64U; + uint32_t data2_len = len - data1_len; + uint8_t *data1 = data; + uint8_t *data2 = data + data1_len; + Hacl_Hash_SHA1_legacy_update_multi(block_state1, data1, data1_len / (uint32_t)64U); + uint8_t *dst = buf; + memcpy(dst, data2, data2_len * sizeof (uint8_t)); + *p + = + ( + (Hacl_Streaming_MD_state_32){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len1 + (uint64_t)len + } + ); + } + else + { + uint32_t diff = (uint32_t)64U - sz; + uint8_t *data1 = data; + uint8_t *data2 = data + diff; + Hacl_Streaming_MD_state_32 s1 = *p; + uint32_t *block_state10 = s1.block_state; + uint8_t *buf0 = s1.buf; + uint64_t total_len10 = s1.total_len; + uint32_t sz10; + if (total_len10 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len10 > (uint64_t)0U) + { + sz10 = (uint32_t)64U; + } + else + { + sz10 = (uint32_t)(total_len10 % (uint64_t)(uint32_t)64U); + } + uint8_t *buf2 = buf0 + sz10; + memcpy(buf2, data1, diff * sizeof (uint8_t)); + uint64_t total_len2 = total_len10 + (uint64_t)diff; + *p + = + ( + (Hacl_Streaming_MD_state_32){ + .block_state = block_state10, + .buf = buf0, + .total_len = total_len2 + } + ); + Hacl_Streaming_MD_state_32 s10 = *p; + uint32_t *block_state1 = s10.block_state; + uint8_t *buf = s10.buf; + uint64_t total_len1 = s10.total_len; + uint32_t sz1; + if (total_len1 % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len1 > (uint64_t)0U) + { + sz1 = (uint32_t)64U; + } + else + { + sz1 = (uint32_t)(total_len1 % (uint64_t)(uint32_t)64U); + } + if (!(sz1 == (uint32_t)0U)) + { + Hacl_Hash_SHA1_legacy_update_multi(block_state1, buf, (uint32_t)1U); + } + uint32_t ite; + if + ( + (uint64_t)(len - diff) + % (uint64_t)(uint32_t)64U + == (uint64_t)0U + && (uint64_t)(len - diff) > (uint64_t)0U + ) + { + ite = (uint32_t)64U; + } + else + { + ite = (uint32_t)((uint64_t)(len - diff) % (uint64_t)(uint32_t)64U); + } + uint32_t n_blocks = (len - diff - ite) / (uint32_t)64U; + uint32_t data1_len = n_blocks * (uint32_t)64U; + uint32_t data2_len = len - diff - data1_len; + uint8_t *data11 = data2; + uint8_t *data21 = data2 + data1_len; + Hacl_Hash_SHA1_legacy_update_multi(block_state1, data11, data1_len / (uint32_t)64U); + uint8_t *dst = buf; + memcpy(dst, data21, data2_len * sizeof (uint8_t)); + *p + = + ( + (Hacl_Streaming_MD_state_32){ + .block_state = block_state1, + .buf = buf, + .total_len = total_len1 + (uint64_t)(len - diff) + } + ); + } + return (uint32_t)0U; +} + +void Hacl_Streaming_SHA1_legacy_finish(Hacl_Streaming_MD_state_32 *p, uint8_t *dst) +{ + Hacl_Streaming_MD_state_32 scrut = *p; + uint32_t 
*block_state = scrut.block_state; + uint8_t *buf_ = scrut.buf; + uint64_t total_len = scrut.total_len; + uint32_t r; + if (total_len % (uint64_t)(uint32_t)64U == (uint64_t)0U && total_len > (uint64_t)0U) + { + r = (uint32_t)64U; + } + else + { + r = (uint32_t)(total_len % (uint64_t)(uint32_t)64U); + } + uint8_t *buf_1 = buf_; + uint32_t tmp_block_state[5U] = { 0U }; + memcpy(tmp_block_state, block_state, (uint32_t)5U * sizeof (uint32_t)); + uint32_t ite; + if (r % (uint32_t)64U == (uint32_t)0U && r > (uint32_t)0U) + { + ite = (uint32_t)64U; + } + else + { + ite = r % (uint32_t)64U; + } + uint8_t *buf_last = buf_1 + r - ite; + uint8_t *buf_multi = buf_1; + Hacl_Hash_SHA1_legacy_update_multi(tmp_block_state, buf_multi, (uint32_t)0U); + uint64_t prev_len_last = total_len - (uint64_t)r; + Hacl_Hash_SHA1_legacy_update_last(tmp_block_state, prev_len_last, buf_last, r); + Hacl_Hash_Core_SHA1_legacy_finish(tmp_block_state, dst); +} + +void Hacl_Streaming_SHA1_legacy_free(Hacl_Streaming_MD_state_32 *s) +{ + Hacl_Streaming_MD_state_32 scrut = *s; + uint8_t *buf = scrut.buf; + uint32_t *block_state = scrut.block_state; + KRML_HOST_FREE(block_state); + KRML_HOST_FREE(buf); + KRML_HOST_FREE(s); +} + +Hacl_Streaming_MD_state_32 *Hacl_Streaming_SHA1_legacy_copy(Hacl_Streaming_MD_state_32 *s0) +{ + Hacl_Streaming_MD_state_32 scrut = *s0; + uint32_t *block_state0 = scrut.block_state; + uint8_t *buf0 = scrut.buf; + uint64_t total_len0 = scrut.total_len; + uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)64U, sizeof (uint8_t)); + memcpy(buf, buf0, (uint32_t)64U * sizeof (uint8_t)); + uint32_t *block_state = (uint32_t *)KRML_HOST_CALLOC((uint32_t)5U, sizeof (uint32_t)); + memcpy(block_state, block_state0, (uint32_t)5U * sizeof (uint32_t)); + Hacl_Streaming_MD_state_32 + s = { .block_state = block_state, .buf = buf, .total_len = total_len0 }; + Hacl_Streaming_MD_state_32 + *p = (Hacl_Streaming_MD_state_32 *)KRML_HOST_MALLOC(sizeof (Hacl_Streaming_MD_state_32)); + p[0U] = s; + return p; +} + +void Hacl_Streaming_SHA1_legacy_hash(uint8_t *input, uint32_t input_len, uint8_t *dst) +{ + Hacl_Hash_SHA1_legacy_hash(input, input_len, dst); +} + diff --git a/Modules/_hacl/Hacl_Hash_SHA1.h b/Modules/_hacl/Hacl_Hash_SHA1.h new file mode 100644 index 000000000000..5e2ae8e71329 --- /dev/null +++ b/Modules/_hacl/Hacl_Hash_SHA1.h @@ -0,0 +1,65 @@ +/* MIT License + * + * Copyright (c) 2016-2022 INRIA, CMU and Microsoft Corporation + * Copyright (c) 2022-2023 HACL* Contributors + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. + */ + + +#ifndef __Hacl_Hash_SHA1_H +#define __Hacl_Hash_SHA1_H + +#if defined(__cplusplus) +extern "C" { +#endif + +#include +#include "krml/types.h" +#include "krml/lowstar_endianness.h" +#include "krml/internal/target.h" + +#include "Hacl_Streaming_Types.h" + +typedef Hacl_Streaming_MD_state_32 Hacl_Streaming_SHA1_state; + +Hacl_Streaming_MD_state_32 *Hacl_Streaming_SHA1_legacy_create_in(void); + +void Hacl_Streaming_SHA1_legacy_init(Hacl_Streaming_MD_state_32 *s); + +/** +0 = success, 1 = max length exceeded +*/ +uint32_t +Hacl_Streaming_SHA1_legacy_update(Hacl_Streaming_MD_state_32 *p, uint8_t *data, uint32_t len); + +void Hacl_Streaming_SHA1_legacy_finish(Hacl_Streaming_MD_state_32 *p, uint8_t *dst); + +void Hacl_Streaming_SHA1_legacy_free(Hacl_Streaming_MD_state_32 *s); + +Hacl_Streaming_MD_state_32 *Hacl_Streaming_SHA1_legacy_copy(Hacl_Streaming_MD_state_32 *s0); + +void Hacl_Streaming_SHA1_legacy_hash(uint8_t *input, uint32_t input_len, uint8_t *dst); + +#if defined(__cplusplus) +} +#endif + +#define __Hacl_Hash_SHA1_H_DEFINED +#endif diff --git a/Modules/_hacl/Hacl_Streaming_SHA2.c b/Modules/_hacl/Hacl_Streaming_SHA2.c index 8169c7a35673..69c3be8cdf7f 100644 --- a/Modules/_hacl/Hacl_Streaming_SHA2.c +++ b/Modules/_hacl/Hacl_Streaming_SHA2.c @@ -477,17 +477,14 @@ static inline void sha384_finish(uint64_t *st, uint8_t *h) Allocate initial state for the SHA2_256 hash. The state is to be freed by calling `free_256`. */ -Hacl_Streaming_SHA2_state_sha2_224 *Hacl_Streaming_SHA2_create_in_256(void) +Hacl_Streaming_MD_state_32 *Hacl_Streaming_SHA2_create_in_256(void) { uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)64U, sizeof (uint8_t)); uint32_t *block_state = (uint32_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint32_t)); - Hacl_Streaming_SHA2_state_sha2_224 + Hacl_Streaming_MD_state_32 s = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; - Hacl_Streaming_SHA2_state_sha2_224 - *p = - (Hacl_Streaming_SHA2_state_sha2_224 *)KRML_HOST_MALLOC(sizeof ( - Hacl_Streaming_SHA2_state_sha2_224 - )); + Hacl_Streaming_MD_state_32 + *p = (Hacl_Streaming_MD_state_32 *)KRML_HOST_MALLOC(sizeof (Hacl_Streaming_MD_state_32)); p[0U] = s; sha256_init(block_state); return p; @@ -499,10 +496,9 @@ The state is to be freed by calling `free_256`. Cloning the state this way is useful, for instance, if your control-flow diverges and you need to feed more (different) data into the hash in each branch. 
*/ -Hacl_Streaming_SHA2_state_sha2_224 -*Hacl_Streaming_SHA2_copy_256(Hacl_Streaming_SHA2_state_sha2_224 *s0) +Hacl_Streaming_MD_state_32 *Hacl_Streaming_SHA2_copy_256(Hacl_Streaming_MD_state_32 *s0) { - Hacl_Streaming_SHA2_state_sha2_224 scrut = *s0; + Hacl_Streaming_MD_state_32 scrut = *s0; uint32_t *block_state0 = scrut.block_state; uint8_t *buf0 = scrut.buf; uint64_t total_len0 = scrut.total_len; @@ -510,13 +506,10 @@ Hacl_Streaming_SHA2_state_sha2_224 memcpy(buf, buf0, (uint32_t)64U * sizeof (uint8_t)); uint32_t *block_state = (uint32_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint32_t)); memcpy(block_state, block_state0, (uint32_t)8U * sizeof (uint32_t)); - Hacl_Streaming_SHA2_state_sha2_224 + Hacl_Streaming_MD_state_32 s = { .block_state = block_state, .buf = buf, .total_len = total_len0 }; - Hacl_Streaming_SHA2_state_sha2_224 - *p = - (Hacl_Streaming_SHA2_state_sha2_224 *)KRML_HOST_MALLOC(sizeof ( - Hacl_Streaming_SHA2_state_sha2_224 - )); + Hacl_Streaming_MD_state_32 + *p = (Hacl_Streaming_MD_state_32 *)KRML_HOST_MALLOC(sizeof (Hacl_Streaming_MD_state_32)); p[0U] = s; return p; } @@ -524,21 +517,21 @@ Hacl_Streaming_SHA2_state_sha2_224 /** Reset an existing state to the initial hash state with empty data. */ -void Hacl_Streaming_SHA2_init_256(Hacl_Streaming_SHA2_state_sha2_224 *s) +void Hacl_Streaming_SHA2_init_256(Hacl_Streaming_MD_state_32 *s) { - Hacl_Streaming_SHA2_state_sha2_224 scrut = *s; + Hacl_Streaming_MD_state_32 scrut = *s; uint8_t *buf = scrut.buf; uint32_t *block_state = scrut.block_state; sha256_init(block_state); - Hacl_Streaming_SHA2_state_sha2_224 + Hacl_Streaming_MD_state_32 tmp = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; s[0U] = tmp; } static inline uint32_t -update_224_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *data, uint32_t len) +update_224_256(Hacl_Streaming_MD_state_32 *p, uint8_t *data, uint32_t len) { - Hacl_Streaming_SHA2_state_sha2_224 s = *p; + Hacl_Streaming_MD_state_32 s = *p; uint64_t total_len = s.total_len; if ((uint64_t)len > (uint64_t)2305843009213693951U - total_len) { @@ -555,7 +548,7 @@ update_224_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *data, uint32_t le } if (len <= (uint32_t)64U - sz) { - Hacl_Streaming_SHA2_state_sha2_224 s1 = *p; + Hacl_Streaming_MD_state_32 s1 = *p; uint32_t *block_state1 = s1.block_state; uint8_t *buf = s1.buf; uint64_t total_len1 = s1.total_len; @@ -574,7 +567,7 @@ update_224_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *data, uint32_t le *p = ( - (Hacl_Streaming_SHA2_state_sha2_224){ + (Hacl_Streaming_MD_state_32){ .block_state = block_state1, .buf = buf, .total_len = total_len2 @@ -583,7 +576,7 @@ update_224_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *data, uint32_t le } else if (sz == (uint32_t)0U) { - Hacl_Streaming_SHA2_state_sha2_224 s1 = *p; + Hacl_Streaming_MD_state_32 s1 = *p; uint32_t *block_state1 = s1.block_state; uint8_t *buf = s1.buf; uint64_t total_len1 = s1.total_len; @@ -620,7 +613,7 @@ update_224_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *data, uint32_t le *p = ( - (Hacl_Streaming_SHA2_state_sha2_224){ + (Hacl_Streaming_MD_state_32){ .block_state = block_state1, .buf = buf, .total_len = total_len1 + (uint64_t)len @@ -632,7 +625,7 @@ update_224_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *data, uint32_t le uint32_t diff = (uint32_t)64U - sz; uint8_t *data1 = data; uint8_t *data2 = data + diff; - Hacl_Streaming_SHA2_state_sha2_224 s1 = *p; + Hacl_Streaming_MD_state_32 s1 = *p; uint32_t *block_state10 = 
s1.block_state; uint8_t *buf0 = s1.buf; uint64_t total_len10 = s1.total_len; @@ -651,13 +644,13 @@ update_224_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *data, uint32_t le *p = ( - (Hacl_Streaming_SHA2_state_sha2_224){ + (Hacl_Streaming_MD_state_32){ .block_state = block_state10, .buf = buf0, .total_len = total_len2 } ); - Hacl_Streaming_SHA2_state_sha2_224 s10 = *p; + Hacl_Streaming_MD_state_32 s10 = *p; uint32_t *block_state1 = s10.block_state; uint8_t *buf = s10.buf; uint64_t total_len1 = s10.total_len; @@ -700,7 +693,7 @@ update_224_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *data, uint32_t le *p = ( - (Hacl_Streaming_SHA2_state_sha2_224){ + (Hacl_Streaming_MD_state_32){ .block_state = block_state1, .buf = buf, .total_len = total_len1 + (uint64_t)(len - diff) @@ -719,7 +712,7 @@ This function is identical to the update function for SHA2_224. */ uint32_t Hacl_Streaming_SHA2_update_256( - Hacl_Streaming_SHA2_state_sha2_224 *p, + Hacl_Streaming_MD_state_32 *p, uint8_t *input, uint32_t input_len ) @@ -733,9 +726,9 @@ valid after a call to `finish_256`, meaning the user may feed more data into the hash via `update_256`. (The finish_256 function operates on an internal copy of the state and therefore does not invalidate the client-held state `p`.) */ -void Hacl_Streaming_SHA2_finish_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *dst) +void Hacl_Streaming_SHA2_finish_256(Hacl_Streaming_MD_state_32 *p, uint8_t *dst) { - Hacl_Streaming_SHA2_state_sha2_224 scrut = *p; + Hacl_Streaming_MD_state_32 scrut = *p; uint32_t *block_state = scrut.block_state; uint8_t *buf_ = scrut.buf; uint64_t total_len = scrut.total_len; @@ -773,9 +766,9 @@ Free a state allocated with `create_in_256`. This function is identical to the free function for SHA2_224. 
*/ -void Hacl_Streaming_SHA2_free_256(Hacl_Streaming_SHA2_state_sha2_224 *s) +void Hacl_Streaming_SHA2_free_256(Hacl_Streaming_MD_state_32 *s) { - Hacl_Streaming_SHA2_state_sha2_224 scrut = *s; + Hacl_Streaming_MD_state_32 scrut = *s; uint8_t *buf = scrut.buf; uint32_t *block_state = scrut.block_state; KRML_HOST_FREE(block_state); @@ -802,36 +795,33 @@ void Hacl_Streaming_SHA2_sha256(uint8_t *input, uint32_t input_len, uint8_t *dst sha256_finish(st, rb); } -Hacl_Streaming_SHA2_state_sha2_224 *Hacl_Streaming_SHA2_create_in_224(void) +Hacl_Streaming_MD_state_32 *Hacl_Streaming_SHA2_create_in_224(void) { uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)64U, sizeof (uint8_t)); uint32_t *block_state = (uint32_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint32_t)); - Hacl_Streaming_SHA2_state_sha2_224 + Hacl_Streaming_MD_state_32 s = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; - Hacl_Streaming_SHA2_state_sha2_224 - *p = - (Hacl_Streaming_SHA2_state_sha2_224 *)KRML_HOST_MALLOC(sizeof ( - Hacl_Streaming_SHA2_state_sha2_224 - )); + Hacl_Streaming_MD_state_32 + *p = (Hacl_Streaming_MD_state_32 *)KRML_HOST_MALLOC(sizeof (Hacl_Streaming_MD_state_32)); p[0U] = s; sha224_init(block_state); return p; } -void Hacl_Streaming_SHA2_init_224(Hacl_Streaming_SHA2_state_sha2_224 *s) +void Hacl_Streaming_SHA2_init_224(Hacl_Streaming_MD_state_32 *s) { - Hacl_Streaming_SHA2_state_sha2_224 scrut = *s; + Hacl_Streaming_MD_state_32 scrut = *s; uint8_t *buf = scrut.buf; uint32_t *block_state = scrut.block_state; sha224_init(block_state); - Hacl_Streaming_SHA2_state_sha2_224 + Hacl_Streaming_MD_state_32 tmp = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; s[0U] = tmp; } uint32_t Hacl_Streaming_SHA2_update_224( - Hacl_Streaming_SHA2_state_sha2_224 *p, + Hacl_Streaming_MD_state_32 *p, uint8_t *input, uint32_t input_len ) @@ -844,9 +834,9 @@ Write the resulting hash into `dst`, an array of 28 bytes. The state remains valid after a call to `finish_224`, meaning the user may feed more data into the hash via `update_224`. 
*/ -void Hacl_Streaming_SHA2_finish_224(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *dst) +void Hacl_Streaming_SHA2_finish_224(Hacl_Streaming_MD_state_32 *p, uint8_t *dst) { - Hacl_Streaming_SHA2_state_sha2_224 scrut = *p; + Hacl_Streaming_MD_state_32 scrut = *p; uint32_t *block_state = scrut.block_state; uint8_t *buf_ = scrut.buf; uint64_t total_len = scrut.total_len; @@ -879,7 +869,7 @@ void Hacl_Streaming_SHA2_finish_224(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8 sha224_finish(tmp_block_state, dst); } -void Hacl_Streaming_SHA2_free_224(Hacl_Streaming_SHA2_state_sha2_224 *p) +void Hacl_Streaming_SHA2_free_224(Hacl_Streaming_MD_state_32 *p) { Hacl_Streaming_SHA2_free_256(p); } @@ -903,17 +893,14 @@ void Hacl_Streaming_SHA2_sha224(uint8_t *input, uint32_t input_len, uint8_t *dst sha224_finish(st, rb); } -Hacl_Streaming_SHA2_state_sha2_384 *Hacl_Streaming_SHA2_create_in_512(void) +Hacl_Streaming_MD_state_64 *Hacl_Streaming_SHA2_create_in_512(void) { uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)128U, sizeof (uint8_t)); uint64_t *block_state = (uint64_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint64_t)); - Hacl_Streaming_SHA2_state_sha2_384 + Hacl_Streaming_MD_state_64 s = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; - Hacl_Streaming_SHA2_state_sha2_384 - *p = - (Hacl_Streaming_SHA2_state_sha2_384 *)KRML_HOST_MALLOC(sizeof ( - Hacl_Streaming_SHA2_state_sha2_384 - )); + Hacl_Streaming_MD_state_64 + *p = (Hacl_Streaming_MD_state_64 *)KRML_HOST_MALLOC(sizeof (Hacl_Streaming_MD_state_64)); p[0U] = s; Hacl_SHA2_Scalar32_sha512_init(block_state); return p; @@ -925,10 +912,9 @@ The state is to be freed by calling `free_512`. Cloning the state this way is useful, for instance, if your control-flow diverges and you need to feed more (different) data into the hash in each branch. 
*/ -Hacl_Streaming_SHA2_state_sha2_384 -*Hacl_Streaming_SHA2_copy_512(Hacl_Streaming_SHA2_state_sha2_384 *s0) +Hacl_Streaming_MD_state_64 *Hacl_Streaming_SHA2_copy_512(Hacl_Streaming_MD_state_64 *s0) { - Hacl_Streaming_SHA2_state_sha2_384 scrut = *s0; + Hacl_Streaming_MD_state_64 scrut = *s0; uint64_t *block_state0 = scrut.block_state; uint8_t *buf0 = scrut.buf; uint64_t total_len0 = scrut.total_len; @@ -936,32 +922,29 @@ Hacl_Streaming_SHA2_state_sha2_384 memcpy(buf, buf0, (uint32_t)128U * sizeof (uint8_t)); uint64_t *block_state = (uint64_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint64_t)); memcpy(block_state, block_state0, (uint32_t)8U * sizeof (uint64_t)); - Hacl_Streaming_SHA2_state_sha2_384 + Hacl_Streaming_MD_state_64 s = { .block_state = block_state, .buf = buf, .total_len = total_len0 }; - Hacl_Streaming_SHA2_state_sha2_384 - *p = - (Hacl_Streaming_SHA2_state_sha2_384 *)KRML_HOST_MALLOC(sizeof ( - Hacl_Streaming_SHA2_state_sha2_384 - )); + Hacl_Streaming_MD_state_64 + *p = (Hacl_Streaming_MD_state_64 *)KRML_HOST_MALLOC(sizeof (Hacl_Streaming_MD_state_64)); p[0U] = s; return p; } -void Hacl_Streaming_SHA2_init_512(Hacl_Streaming_SHA2_state_sha2_384 *s) +void Hacl_Streaming_SHA2_init_512(Hacl_Streaming_MD_state_64 *s) { - Hacl_Streaming_SHA2_state_sha2_384 scrut = *s; + Hacl_Streaming_MD_state_64 scrut = *s; uint8_t *buf = scrut.buf; uint64_t *block_state = scrut.block_state; Hacl_SHA2_Scalar32_sha512_init(block_state); - Hacl_Streaming_SHA2_state_sha2_384 + Hacl_Streaming_MD_state_64 tmp = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; s[0U] = tmp; } static inline uint32_t -update_384_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *data, uint32_t len) +update_384_512(Hacl_Streaming_MD_state_64 *p, uint8_t *data, uint32_t len) { - Hacl_Streaming_SHA2_state_sha2_384 s = *p; + Hacl_Streaming_MD_state_64 s = *p; uint64_t total_len = s.total_len; if ((uint64_t)len > (uint64_t)18446744073709551615U - total_len) { @@ -978,7 +961,7 @@ update_384_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *data, uint32_t le } if (len <= (uint32_t)128U - sz) { - Hacl_Streaming_SHA2_state_sha2_384 s1 = *p; + Hacl_Streaming_MD_state_64 s1 = *p; uint64_t *block_state1 = s1.block_state; uint8_t *buf = s1.buf; uint64_t total_len1 = s1.total_len; @@ -997,7 +980,7 @@ update_384_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *data, uint32_t le *p = ( - (Hacl_Streaming_SHA2_state_sha2_384){ + (Hacl_Streaming_MD_state_64){ .block_state = block_state1, .buf = buf, .total_len = total_len2 @@ -1006,7 +989,7 @@ update_384_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *data, uint32_t le } else if (sz == (uint32_t)0U) { - Hacl_Streaming_SHA2_state_sha2_384 s1 = *p; + Hacl_Streaming_MD_state_64 s1 = *p; uint64_t *block_state1 = s1.block_state; uint8_t *buf = s1.buf; uint64_t total_len1 = s1.total_len; @@ -1043,7 +1026,7 @@ update_384_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *data, uint32_t le *p = ( - (Hacl_Streaming_SHA2_state_sha2_384){ + (Hacl_Streaming_MD_state_64){ .block_state = block_state1, .buf = buf, .total_len = total_len1 + (uint64_t)len @@ -1055,7 +1038,7 @@ update_384_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *data, uint32_t le uint32_t diff = (uint32_t)128U - sz; uint8_t *data1 = data; uint8_t *data2 = data + diff; - Hacl_Streaming_SHA2_state_sha2_384 s1 = *p; + Hacl_Streaming_MD_state_64 s1 = *p; uint64_t *block_state10 = s1.block_state; uint8_t *buf0 = s1.buf; uint64_t total_len10 = s1.total_len; @@ -1074,13 +1057,13 @@ 
update_384_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *data, uint32_t le *p = ( - (Hacl_Streaming_SHA2_state_sha2_384){ + (Hacl_Streaming_MD_state_64){ .block_state = block_state10, .buf = buf0, .total_len = total_len2 } ); - Hacl_Streaming_SHA2_state_sha2_384 s10 = *p; + Hacl_Streaming_MD_state_64 s10 = *p; uint64_t *block_state1 = s10.block_state; uint8_t *buf = s10.buf; uint64_t total_len1 = s10.total_len; @@ -1123,7 +1106,7 @@ update_384_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *data, uint32_t le *p = ( - (Hacl_Streaming_SHA2_state_sha2_384){ + (Hacl_Streaming_MD_state_64){ .block_state = block_state1, .buf = buf, .total_len = total_len1 + (uint64_t)(len - diff) @@ -1142,7 +1125,7 @@ This function is identical to the update function for SHA2_384. */ uint32_t Hacl_Streaming_SHA2_update_512( - Hacl_Streaming_SHA2_state_sha2_384 *p, + Hacl_Streaming_MD_state_64 *p, uint8_t *input, uint32_t input_len ) @@ -1156,9 +1139,9 @@ valid after a call to `finish_512`, meaning the user may feed more data into the hash via `update_512`. (The finish_512 function operates on an internal copy of the state and therefore does not invalidate the client-held state `p`.) */ -void Hacl_Streaming_SHA2_finish_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *dst) +void Hacl_Streaming_SHA2_finish_512(Hacl_Streaming_MD_state_64 *p, uint8_t *dst) { - Hacl_Streaming_SHA2_state_sha2_384 scrut = *p; + Hacl_Streaming_MD_state_64 scrut = *p; uint64_t *block_state = scrut.block_state; uint8_t *buf_ = scrut.buf; uint64_t total_len = scrut.total_len; @@ -1200,9 +1183,9 @@ Free a state allocated with `create_in_512`. This function is identical to the free function for SHA2_384. */ -void Hacl_Streaming_SHA2_free_512(Hacl_Streaming_SHA2_state_sha2_384 *s) +void Hacl_Streaming_SHA2_free_512(Hacl_Streaming_MD_state_64 *s) { - Hacl_Streaming_SHA2_state_sha2_384 scrut = *s; + Hacl_Streaming_MD_state_64 scrut = *s; uint8_t *buf = scrut.buf; uint64_t *block_state = scrut.block_state; KRML_HOST_FREE(block_state); @@ -1229,36 +1212,33 @@ void Hacl_Streaming_SHA2_sha512(uint8_t *input, uint32_t input_len, uint8_t *dst sha512_finish(st, rb); } -Hacl_Streaming_SHA2_state_sha2_384 *Hacl_Streaming_SHA2_create_in_384(void) +Hacl_Streaming_MD_state_64 *Hacl_Streaming_SHA2_create_in_384(void) { uint8_t *buf = (uint8_t *)KRML_HOST_CALLOC((uint32_t)128U, sizeof (uint8_t)); uint64_t *block_state = (uint64_t *)KRML_HOST_CALLOC((uint32_t)8U, sizeof (uint64_t)); - Hacl_Streaming_SHA2_state_sha2_384 + Hacl_Streaming_MD_state_64 s = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; - Hacl_Streaming_SHA2_state_sha2_384 - *p = - (Hacl_Streaming_SHA2_state_sha2_384 *)KRML_HOST_MALLOC(sizeof ( - Hacl_Streaming_SHA2_state_sha2_384 - )); + Hacl_Streaming_MD_state_64 + *p = (Hacl_Streaming_MD_state_64 *)KRML_HOST_MALLOC(sizeof (Hacl_Streaming_MD_state_64)); p[0U] = s; sha384_init(block_state); return p; } -void Hacl_Streaming_SHA2_init_384(Hacl_Streaming_SHA2_state_sha2_384 *s) +void Hacl_Streaming_SHA2_init_384(Hacl_Streaming_MD_state_64 *s) { - Hacl_Streaming_SHA2_state_sha2_384 scrut = *s; + Hacl_Streaming_MD_state_64 scrut = *s; uint8_t *buf = scrut.buf; uint64_t *block_state = scrut.block_state; sha384_init(block_state); - Hacl_Streaming_SHA2_state_sha2_384 + Hacl_Streaming_MD_state_64 tmp = { .block_state = block_state, .buf = buf, .total_len = (uint64_t)(uint32_t)0U }; s[0U] = tmp; } uint32_t Hacl_Streaming_SHA2_update_384( - Hacl_Streaming_SHA2_state_sha2_384 *p, + Hacl_Streaming_MD_state_64 *p, 
uint8_t *input, uint32_t input_len ) @@ -1271,9 +1251,9 @@ Write the resulting hash into `dst`, an array of 48 bytes. The state remains valid after a call to `finish_384`, meaning the user may feed more data into the hash via `update_384`. */ -void Hacl_Streaming_SHA2_finish_384(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *dst) +void Hacl_Streaming_SHA2_finish_384(Hacl_Streaming_MD_state_64 *p, uint8_t *dst) { - Hacl_Streaming_SHA2_state_sha2_384 scrut = *p; + Hacl_Streaming_MD_state_64 scrut = *p; uint64_t *block_state = scrut.block_state; uint8_t *buf_ = scrut.buf; uint64_t total_len = scrut.total_len; @@ -1310,7 +1290,7 @@ void Hacl_Streaming_SHA2_finish_384(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8 sha384_finish(tmp_block_state, dst); } -void Hacl_Streaming_SHA2_free_384(Hacl_Streaming_SHA2_state_sha2_384 *p) +void Hacl_Streaming_SHA2_free_384(Hacl_Streaming_MD_state_64 *p) { Hacl_Streaming_SHA2_free_512(p); } diff --git a/Modules/_hacl/Hacl_Streaming_SHA2.h b/Modules/_hacl/Hacl_Streaming_SHA2.h index 2c905854f336..b58df4c4d121 100644 --- a/Modules/_hacl/Hacl_Streaming_SHA2.h +++ b/Modules/_hacl/Hacl_Streaming_SHA2.h @@ -36,33 +36,22 @@ extern "C" { #include "krml/lowstar_endianness.h" #include "krml/internal/target.h" +#include "Hacl_Streaming_Types.h" -typedef struct Hacl_Streaming_SHA2_state_sha2_224_s -{ - uint32_t *block_state; - uint8_t *buf; - uint64_t total_len; -} -Hacl_Streaming_SHA2_state_sha2_224; +typedef Hacl_Streaming_MD_state_32 Hacl_Streaming_SHA2_state_sha2_224; -typedef Hacl_Streaming_SHA2_state_sha2_224 Hacl_Streaming_SHA2_state_sha2_256; +typedef Hacl_Streaming_MD_state_32 Hacl_Streaming_SHA2_state_sha2_256; -typedef struct Hacl_Streaming_SHA2_state_sha2_384_s -{ - uint64_t *block_state; - uint8_t *buf; - uint64_t total_len; -} -Hacl_Streaming_SHA2_state_sha2_384; +typedef Hacl_Streaming_MD_state_64 Hacl_Streaming_SHA2_state_sha2_384; -typedef Hacl_Streaming_SHA2_state_sha2_384 Hacl_Streaming_SHA2_state_sha2_512; +typedef Hacl_Streaming_MD_state_64 Hacl_Streaming_SHA2_state_sha2_512; /** Allocate initial state for the SHA2_256 hash. The state is to be freed by calling `free_256`. */ -Hacl_Streaming_SHA2_state_sha2_224 *Hacl_Streaming_SHA2_create_in_256(void); +Hacl_Streaming_MD_state_32 *Hacl_Streaming_SHA2_create_in_256(void); /** Copies the state passed as argument into a newly allocated state (deep copy). @@ -70,13 +59,12 @@ The state is to be freed by calling `free_256`. Cloning the state this way is useful, for instance, if your control-flow diverges and you need to feed more (different) data into the hash in each branch. */ -Hacl_Streaming_SHA2_state_sha2_224 -*Hacl_Streaming_SHA2_copy_256(Hacl_Streaming_SHA2_state_sha2_224 *s0); +Hacl_Streaming_MD_state_32 *Hacl_Streaming_SHA2_copy_256(Hacl_Streaming_MD_state_32 *s0); /** Reset an existing state to the initial hash state with empty data. */ -void Hacl_Streaming_SHA2_init_256(Hacl_Streaming_SHA2_state_sha2_224 *s); +void Hacl_Streaming_SHA2_init_256(Hacl_Streaming_MD_state_32 *s); /** Feed an arbitrary amount of data into the hash. This function returns 0 for @@ -87,7 +75,7 @@ This function is identical to the update function for SHA2_224. */ uint32_t Hacl_Streaming_SHA2_update_256( - Hacl_Streaming_SHA2_state_sha2_224 *p, + Hacl_Streaming_MD_state_32 *p, uint8_t *input, uint32_t input_len ); @@ -98,27 +86,27 @@ valid after a call to `finish_256`, meaning the user may feed more data into the hash via `update_256`. 
(The finish_256 function operates on an internal copy of the state and therefore does not invalidate the client-held state `p`.) */ -void Hacl_Streaming_SHA2_finish_256(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *dst); +void Hacl_Streaming_SHA2_finish_256(Hacl_Streaming_MD_state_32 *p, uint8_t *dst); /** Free a state allocated with `create_in_256`. This function is identical to the free function for SHA2_224. */ -void Hacl_Streaming_SHA2_free_256(Hacl_Streaming_SHA2_state_sha2_224 *s); +void Hacl_Streaming_SHA2_free_256(Hacl_Streaming_MD_state_32 *s); /** Hash `input`, of len `input_len`, into `dst`, an array of 32 bytes. */ void Hacl_Streaming_SHA2_sha256(uint8_t *input, uint32_t input_len, uint8_t *dst); -Hacl_Streaming_SHA2_state_sha2_224 *Hacl_Streaming_SHA2_create_in_224(void); +Hacl_Streaming_MD_state_32 *Hacl_Streaming_SHA2_create_in_224(void); -void Hacl_Streaming_SHA2_init_224(Hacl_Streaming_SHA2_state_sha2_224 *s); +void Hacl_Streaming_SHA2_init_224(Hacl_Streaming_MD_state_32 *s); uint32_t Hacl_Streaming_SHA2_update_224( - Hacl_Streaming_SHA2_state_sha2_224 *p, + Hacl_Streaming_MD_state_32 *p, uint8_t *input, uint32_t input_len ); @@ -128,16 +116,16 @@ Write the resulting hash into `dst`, an array of 28 bytes. The state remains valid after a call to `finish_224`, meaning the user may feed more data into the hash via `update_224`. */ -void Hacl_Streaming_SHA2_finish_224(Hacl_Streaming_SHA2_state_sha2_224 *p, uint8_t *dst); +void Hacl_Streaming_SHA2_finish_224(Hacl_Streaming_MD_state_32 *p, uint8_t *dst); -void Hacl_Streaming_SHA2_free_224(Hacl_Streaming_SHA2_state_sha2_224 *p); +void Hacl_Streaming_SHA2_free_224(Hacl_Streaming_MD_state_32 *p); /** Hash `input`, of len `input_len`, into `dst`, an array of 28 bytes. */ void Hacl_Streaming_SHA2_sha224(uint8_t *input, uint32_t input_len, uint8_t *dst); -Hacl_Streaming_SHA2_state_sha2_384 *Hacl_Streaming_SHA2_create_in_512(void); +Hacl_Streaming_MD_state_64 *Hacl_Streaming_SHA2_create_in_512(void); /** Copies the state passed as argument into a newly allocated state (deep copy). @@ -145,10 +133,9 @@ The state is to be freed by calling `free_512`. Cloning the state this way is useful, for instance, if your control-flow diverges and you need to feed more (different) data into the hash in each branch. */ -Hacl_Streaming_SHA2_state_sha2_384 -*Hacl_Streaming_SHA2_copy_512(Hacl_Streaming_SHA2_state_sha2_384 *s0); +Hacl_Streaming_MD_state_64 *Hacl_Streaming_SHA2_copy_512(Hacl_Streaming_MD_state_64 *s0); -void Hacl_Streaming_SHA2_init_512(Hacl_Streaming_SHA2_state_sha2_384 *s); +void Hacl_Streaming_SHA2_init_512(Hacl_Streaming_MD_state_64 *s); /** Feed an arbitrary amount of data into the hash. This function returns 0 for @@ -159,7 +146,7 @@ This function is identical to the update function for SHA2_384. */ uint32_t Hacl_Streaming_SHA2_update_512( - Hacl_Streaming_SHA2_state_sha2_384 *p, + Hacl_Streaming_MD_state_64 *p, uint8_t *input, uint32_t input_len ); @@ -170,27 +157,27 @@ valid after a call to `finish_512`, meaning the user may feed more data into the hash via `update_512`. (The finish_512 function operates on an internal copy of the state and therefore does not invalidate the client-held state `p`.) */ -void Hacl_Streaming_SHA2_finish_512(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *dst); +void Hacl_Streaming_SHA2_finish_512(Hacl_Streaming_MD_state_64 *p, uint8_t *dst); /** Free a state allocated with `create_in_512`. This function is identical to the free function for SHA2_384. 
*/ -void Hacl_Streaming_SHA2_free_512(Hacl_Streaming_SHA2_state_sha2_384 *s); +void Hacl_Streaming_SHA2_free_512(Hacl_Streaming_MD_state_64 *s); /** Hash `input`, of len `input_len`, into `dst`, an array of 64 bytes. */ void Hacl_Streaming_SHA2_sha512(uint8_t *input, uint32_t input_len, uint8_t *dst); -Hacl_Streaming_SHA2_state_sha2_384 *Hacl_Streaming_SHA2_create_in_384(void); +Hacl_Streaming_MD_state_64 *Hacl_Streaming_SHA2_create_in_384(void); -void Hacl_Streaming_SHA2_init_384(Hacl_Streaming_SHA2_state_sha2_384 *s); +void Hacl_Streaming_SHA2_init_384(Hacl_Streaming_MD_state_64 *s); uint32_t Hacl_Streaming_SHA2_update_384( - Hacl_Streaming_SHA2_state_sha2_384 *p, + Hacl_Streaming_MD_state_64 *p, uint8_t *input, uint32_t input_len ); @@ -200,9 +187,9 @@ Write the resulting hash into `dst`, an array of 48 bytes. The state remains valid after a call to `finish_384`, meaning the user may feed more data into the hash via `update_384`. */ -void Hacl_Streaming_SHA2_finish_384(Hacl_Streaming_SHA2_state_sha2_384 *p, uint8_t *dst); +void Hacl_Streaming_SHA2_finish_384(Hacl_Streaming_MD_state_64 *p, uint8_t *dst); -void Hacl_Streaming_SHA2_free_384(Hacl_Streaming_SHA2_state_sha2_384 *p); +void Hacl_Streaming_SHA2_free_384(Hacl_Streaming_MD_state_64 *p); /** Hash `input`, of len `input_len`, into `dst`, an array of 48 bytes. diff --git a/Modules/_hacl/Hacl_Streaming_Types.h b/Modules/_hacl/Hacl_Streaming_Types.h new file mode 100644 index 000000000000..51057611ca97 --- /dev/null +++ b/Modules/_hacl/Hacl_Streaming_Types.h @@ -0,0 +1,59 @@ +/* MIT License + * + * Copyright (c) 2016-2022 INRIA, CMU and Microsoft Corporation + * Copyright (c) 2022-2023 HACL* Contributors + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. 
+ */ + + +#ifndef __Hacl_Streaming_Types_H +#define __Hacl_Streaming_Types_H + +#if defined(__cplusplus) +extern "C" { +#endif + +#include +#include "krml/types.h" +#include "krml/lowstar_endianness.h" +#include "krml/internal/target.h" + +typedef struct Hacl_Streaming_MD_state_32_s +{ + uint32_t *block_state; + uint8_t *buf; + uint64_t total_len; +} +Hacl_Streaming_MD_state_32; + +typedef struct Hacl_Streaming_MD_state_64_s +{ + uint64_t *block_state; + uint8_t *buf; + uint64_t total_len; +} +Hacl_Streaming_MD_state_64; + +#if defined(__cplusplus) +} +#endif + +#define __Hacl_Streaming_Types_H_DEFINED +#endif diff --git a/Modules/_hacl/include/krml/FStar_UInt128_Verified.h b/Modules/_hacl/include/krml/FStar_UInt128_Verified.h index ee160193539e..3d36d4407355 100644 --- a/Modules/_hacl/include/krml/FStar_UInt128_Verified.h +++ b/Modules/_hacl/include/krml/FStar_UInt128_Verified.h @@ -7,13 +7,12 @@ #ifndef __FStar_UInt128_Verified_H #define __FStar_UInt128_Verified_H - - #include "FStar_UInt_8_16_32_64.h" #include #include #include "krml/types.h" #include "krml/internal/target.h" + static inline uint64_t FStar_UInt128_constant_time_carry(uint64_t a, uint64_t b) { return (a ^ ((a ^ b) | ((a - b) ^ b))) >> (uint32_t)63U; diff --git a/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h b/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h index 965afc836fd1..a56c7d613498 100644 --- a/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h +++ b/Modules/_hacl/include/krml/FStar_UInt_8_16_32_64.h @@ -7,15 +7,13 @@ #ifndef __FStar_UInt_8_16_32_64_H #define __FStar_UInt_8_16_32_64_H - - - #include #include #include "krml/lowstar_endianness.h" #include "krml/types.h" #include "krml/internal/target.h" + static inline uint64_t FStar_UInt64_eq_mask(uint64_t a, uint64_t b) { uint64_t x = a ^ b; diff --git a/Modules/_hacl/include/krml/internal/target.h b/Modules/_hacl/include/krml/internal/target.h index 9ef59859a554..dcbe7007b17b 100644 --- a/Modules/_hacl/include/krml/internal/target.h +++ b/Modules/_hacl/include/krml/internal/target.h @@ -31,6 +31,10 @@ # define KRML_HOST_FREE free #endif +#ifndef KRML_HOST_IGNORE +# define KRML_HOST_IGNORE(x) (void)(x) +#endif + /* Macros for prettier unrolling of loops */ #define KRML_LOOP1(i, n, x) { \ x \ diff --git a/Modules/_hacl/internal/Hacl_Hash_MD5.h b/Modules/_hacl/internal/Hacl_Hash_MD5.h new file mode 100644 index 000000000000..87ad4cf228d9 --- /dev/null +++ b/Modules/_hacl/internal/Hacl_Hash_MD5.h @@ -0,0 +1,61 @@ +/* MIT License + * + * Copyright (c) 2016-2022 INRIA, CMU and Microsoft Corporation + * Copyright (c) 2022-2023 HACL* Contributors + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. + */ + + +#ifndef __internal_Hacl_Hash_MD5_H +#define __internal_Hacl_Hash_MD5_H + +#if defined(__cplusplus) +extern "C" { +#endif + +#include +#include "krml/types.h" +#include "krml/lowstar_endianness.h" +#include "krml/internal/target.h" + +#include "../Hacl_Hash_MD5.h" + +void Hacl_Hash_Core_MD5_legacy_init(uint32_t *s); + +void Hacl_Hash_Core_MD5_legacy_finish(uint32_t *s, uint8_t *dst); + +void Hacl_Hash_MD5_legacy_update_multi(uint32_t *s, uint8_t *blocks, uint32_t n_blocks); + +void +Hacl_Hash_MD5_legacy_update_last( + uint32_t *s, + uint64_t prev_len, + uint8_t *input, + uint32_t input_len +); + +void Hacl_Hash_MD5_legacy_hash(uint8_t *input, uint32_t input_len, uint8_t *dst); + +#if defined(__cplusplus) +} +#endif + +#define __internal_Hacl_Hash_MD5_H_DEFINED +#endif diff --git a/Modules/_hacl/internal/Hacl_Hash_SHA1.h b/Modules/_hacl/internal/Hacl_Hash_SHA1.h new file mode 100644 index 000000000000..d2d9df44c6c1 --- /dev/null +++ b/Modules/_hacl/internal/Hacl_Hash_SHA1.h @@ -0,0 +1,61 @@ +/* MIT License + * + * Copyright (c) 2016-2022 INRIA, CMU and Microsoft Corporation + * Copyright (c) 2022-2023 HACL* Contributors + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in all + * copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. 
+ */ + + +#ifndef __internal_Hacl_Hash_SHA1_H +#define __internal_Hacl_Hash_SHA1_H + +#if defined(__cplusplus) +extern "C" { +#endif + +#include +#include "krml/types.h" +#include "krml/lowstar_endianness.h" +#include "krml/internal/target.h" + +#include "../Hacl_Hash_SHA1.h" + +void Hacl_Hash_Core_SHA1_legacy_init(uint32_t *s); + +void Hacl_Hash_Core_SHA1_legacy_finish(uint32_t *s, uint8_t *dst); + +void Hacl_Hash_SHA1_legacy_update_multi(uint32_t *s, uint8_t *blocks, uint32_t n_blocks); + +void +Hacl_Hash_SHA1_legacy_update_last( + uint32_t *s, + uint64_t prev_len, + uint8_t *input, + uint32_t input_len +); + +void Hacl_Hash_SHA1_legacy_hash(uint8_t *input, uint32_t input_len, uint8_t *dst); + +#if defined(__cplusplus) +} +#endif + +#define __internal_Hacl_Hash_SHA1_H_DEFINED +#endif diff --git a/Modules/_hacl/python_hacl_namespaces.h b/Modules/_hacl/python_hacl_namespaces.h index ac12f386257b..ee28f244266b 100644 --- a/Modules/_hacl/python_hacl_namespaces.h +++ b/Modules/_hacl/python_hacl_namespaces.h @@ -43,4 +43,21 @@ #define Hacl_Streaming_SHA2_sha512 python_hashlib_Hacl_Streaming_SHA2_sha512 #define Hacl_Streaming_SHA2_sha384 python_hashlib_Hacl_Streaming_SHA2_sha384 +#define Hacl_Streaming_MD5_legacy_create_in python_hashlib_Hacl_Streaming_MD5_legacy_create_in +#define Hacl_Streaming_MD5_legacy_init python_hashlib_Hacl_Streaming_MD5_legacy_init +#define Hacl_Streaming_MD5_legacy_update python_hashlib_Hacl_Streaming_MD5_legacy_update +#define Hacl_Streaming_MD5_legacy_finish python_hashlib_Hacl_Streaming_MD5_legacy_finish +#define Hacl_Streaming_MD5_legacy_free python_hashlib_Hacl_Streaming_MD5_legacy_free +#define Hacl_Streaming_MD5_legacy_copy python_hashlib_Hacl_Streaming_MD5_legacy_copy +#define Hacl_Streaming_MD5_legacy_hash python_hashlib_Hacl_Streaming_MD5_legacy_hash + +#define Hacl_Streaming_SHA1_legacy_create_in python_hashlib_Hacl_Streaming_SHA1_legacy_create_in +#define Hacl_Streaming_SHA1_legacy_init python_hashlib_Hacl_Streaming_SHA1_legacy_init +#define Hacl_Streaming_SHA1_legacy_update python_hashlib_Hacl_Streaming_SHA1_legacy_update +#define Hacl_Streaming_SHA1_legacy_finish python_hashlib_Hacl_Streaming_SHA1_legacy_finish +#define Hacl_Streaming_SHA1_legacy_free python_hashlib_Hacl_Streaming_SHA1_legacy_free +#define Hacl_Streaming_SHA1_legacy_copy python_hashlib_Hacl_Streaming_SHA1_legacy_copy +#define Hacl_Streaming_SHA1_legacy_hash python_hashlib_Hacl_Streaming_SHA1_legacy_hash + + #endif // _PYTHON_HACL_NAMESPACES_H diff --git a/Modules/_hacl/refresh.sh b/Modules/_hacl/refresh.sh index dba8cb3972ea..76b92ec45991 100755 --- a/Modules/_hacl/refresh.sh +++ b/Modules/_hacl/refresh.sh @@ -22,7 +22,7 @@ fi # Update this when updating to a new version after verifying that the changes # the update brings in are good. 
-expected_hacl_star_rev=4751fc2b11639f651718abf8522fcc36902ca67c +expected_hacl_star_rev=13e0c6721ac9206c4249ecc1dc04ed617ad1e262 hacl_dir="$(realpath "$1")" cd "$(dirname "$0")" @@ -41,8 +41,15 @@ fi declare -a dist_files dist_files=( Hacl_Streaming_SHA2.h + Hacl_Streaming_Types.h + Hacl_Hash_SHA1.h + internal/Hacl_Hash_SHA1.h + Hacl_Hash_MD5.h + internal/Hacl_Hash_MD5.h internal/Hacl_SHA2_Generic.h Hacl_Streaming_SHA2.c + Hacl_Hash_SHA1.c + Hacl_Hash_MD5.c ) declare -a include_files diff --git a/Modules/md5module.c b/Modules/md5module.c index 48b11e0779f8..df3d6a4a70d7 100644 --- a/Modules/md5module.c +++ b/Modules/md5module.c @@ -43,283 +43,17 @@ typedef long long MD5_INT64; /* 64-bit integer */ #define MD5_BLOCKSIZE 64 #define MD5_DIGESTSIZE 16 -/* The structure for storing MD5 info */ +#include "_hacl/Hacl_Hash_MD5.h" -struct md5_state { - MD5_INT64 length; - MD5_INT32 state[4], curlen; - unsigned char buf[MD5_BLOCKSIZE]; -}; typedef struct { PyObject_HEAD - struct md5_state hash_state; + Hacl_Streaming_MD5_state *hash_state; } MD5object; #include "clinic/md5module.c.h" -/* ------------------------------------------------------------------------ - * - * This code for the MD5 algorithm was noted as public domain. The - * original headers are pasted below. - * - * Several changes have been made to make it more compatible with the - * Python environment and desired interface. - * - */ - -/* LibTomCrypt, modular cryptographic library -- Tom St Denis - * - * LibTomCrypt is a library that provides various cryptographic - * algorithms in a highly modular and flexible manner. - * - * The library is free for all purposes without any express - * guarantee it works. - * - * Tom St Denis, tomstdenis at gmail.com, https://www.libtom.net - */ - -/* rotate the hard way (platform optimizations could be done) */ -#define ROLc(x, y) ( (((unsigned long)(x)<<(unsigned long)((y)&31)) | (((unsigned long)(x)&0xFFFFFFFFUL)>>(unsigned long)(32-((y)&31)))) & 0xFFFFFFFFUL) - -/* Endian Neutral macros that work on all platforms */ - -#define STORE32L(x, y) \ - { (y)[3] = (unsigned char)(((x)>>24)&255); (y)[2] = (unsigned char)(((x)>>16)&255); \ - (y)[1] = (unsigned char)(((x)>>8)&255); (y)[0] = (unsigned char)((x)&255); } - -#define LOAD32L(x, y) \ - { x = ((unsigned long)((y)[3] & 255)<<24) | \ - ((unsigned long)((y)[2] & 255)<<16) | \ - ((unsigned long)((y)[1] & 255)<<8) | \ - ((unsigned long)((y)[0] & 255)); } - -#define STORE64L(x, y) \ - { (y)[7] = (unsigned char)(((x)>>56)&255); (y)[6] = (unsigned char)(((x)>>48)&255); \ - (y)[5] = (unsigned char)(((x)>>40)&255); (y)[4] = (unsigned char)(((x)>>32)&255); \ - (y)[3] = (unsigned char)(((x)>>24)&255); (y)[2] = (unsigned char)(((x)>>16)&255); \ - (y)[1] = (unsigned char)(((x)>>8)&255); (y)[0] = (unsigned char)((x)&255); } - - -/* MD5 macros */ - -#define F(x,y,z) (z ^ (x & (y ^ z))) -#define G(x,y,z) (y ^ (z & (y ^ x))) -#define H(x,y,z) (x^y^z) -#define I(x,y,z) (y^(x|(~z))) - -#define FF(a,b,c,d,M,s,t) \ - a = (a + F(b,c,d) + M + t); a = ROLc(a, s) + b; - -#define GG(a,b,c,d,M,s,t) \ - a = (a + G(b,c,d) + M + t); a = ROLc(a, s) + b; - -#define HH(a,b,c,d,M,s,t) \ - a = (a + H(b,c,d) + M + t); a = ROLc(a, s) + b; - -#define II(a,b,c,d,M,s,t) \ - a = (a + I(b,c,d) + M + t); a = ROLc(a, s) + b; - - -static void md5_compress(struct md5_state *md5, const unsigned char *buf) -{ - MD5_INT32 i, W[16], a, b, c, d; - - assert(md5 != NULL); - assert(buf != NULL); - - /* copy the state into 512-bits into W[0..15] */ - for (i = 0; i < 16; i++) { - LOAD32L(W[i], buf + 
(4*i)); - } - - /* copy state */ - a = md5->state[0]; - b = md5->state[1]; - c = md5->state[2]; - d = md5->state[3]; - - FF(a,b,c,d,W[0],7,0xd76aa478UL) - FF(d,a,b,c,W[1],12,0xe8c7b756UL) - FF(c,d,a,b,W[2],17,0x242070dbUL) - FF(b,c,d,a,W[3],22,0xc1bdceeeUL) - FF(a,b,c,d,W[4],7,0xf57c0fafUL) - FF(d,a,b,c,W[5],12,0x4787c62aUL) - FF(c,d,a,b,W[6],17,0xa8304613UL) - FF(b,c,d,a,W[7],22,0xfd469501UL) - FF(a,b,c,d,W[8],7,0x698098d8UL) - FF(d,a,b,c,W[9],12,0x8b44f7afUL) - FF(c,d,a,b,W[10],17,0xffff5bb1UL) - FF(b,c,d,a,W[11],22,0x895cd7beUL) - FF(a,b,c,d,W[12],7,0x6b901122UL) - FF(d,a,b,c,W[13],12,0xfd987193UL) - FF(c,d,a,b,W[14],17,0xa679438eUL) - FF(b,c,d,a,W[15],22,0x49b40821UL) - GG(a,b,c,d,W[1],5,0xf61e2562UL) - GG(d,a,b,c,W[6],9,0xc040b340UL) - GG(c,d,a,b,W[11],14,0x265e5a51UL) - GG(b,c,d,a,W[0],20,0xe9b6c7aaUL) - GG(a,b,c,d,W[5],5,0xd62f105dUL) - GG(d,a,b,c,W[10],9,0x02441453UL) - GG(c,d,a,b,W[15],14,0xd8a1e681UL) - GG(b,c,d,a,W[4],20,0xe7d3fbc8UL) - GG(a,b,c,d,W[9],5,0x21e1cde6UL) - GG(d,a,b,c,W[14],9,0xc33707d6UL) - GG(c,d,a,b,W[3],14,0xf4d50d87UL) - GG(b,c,d,a,W[8],20,0x455a14edUL) - GG(a,b,c,d,W[13],5,0xa9e3e905UL) - GG(d,a,b,c,W[2],9,0xfcefa3f8UL) - GG(c,d,a,b,W[7],14,0x676f02d9UL) - GG(b,c,d,a,W[12],20,0x8d2a4c8aUL) - HH(a,b,c,d,W[5],4,0xfffa3942UL) - HH(d,a,b,c,W[8],11,0x8771f681UL) - HH(c,d,a,b,W[11],16,0x6d9d6122UL) - HH(b,c,d,a,W[14],23,0xfde5380cUL) - HH(a,b,c,d,W[1],4,0xa4beea44UL) - HH(d,a,b,c,W[4],11,0x4bdecfa9UL) - HH(c,d,a,b,W[7],16,0xf6bb4b60UL) - HH(b,c,d,a,W[10],23,0xbebfbc70UL) - HH(a,b,c,d,W[13],4,0x289b7ec6UL) - HH(d,a,b,c,W[0],11,0xeaa127faUL) - HH(c,d,a,b,W[3],16,0xd4ef3085UL) - HH(b,c,d,a,W[6],23,0x04881d05UL) - HH(a,b,c,d,W[9],4,0xd9d4d039UL) - HH(d,a,b,c,W[12],11,0xe6db99e5UL) - HH(c,d,a,b,W[15],16,0x1fa27cf8UL) - HH(b,c,d,a,W[2],23,0xc4ac5665UL) - II(a,b,c,d,W[0],6,0xf4292244UL) - II(d,a,b,c,W[7],10,0x432aff97UL) - II(c,d,a,b,W[14],15,0xab9423a7UL) - II(b,c,d,a,W[5],21,0xfc93a039UL) - II(a,b,c,d,W[12],6,0x655b59c3UL) - II(d,a,b,c,W[3],10,0x8f0ccc92UL) - II(c,d,a,b,W[10],15,0xffeff47dUL) - II(b,c,d,a,W[1],21,0x85845dd1UL) - II(a,b,c,d,W[8],6,0x6fa87e4fUL) - II(d,a,b,c,W[15],10,0xfe2ce6e0UL) - II(c,d,a,b,W[6],15,0xa3014314UL) - II(b,c,d,a,W[13],21,0x4e0811a1UL) - II(a,b,c,d,W[4],6,0xf7537e82UL) - II(d,a,b,c,W[11],10,0xbd3af235UL) - II(c,d,a,b,W[2],15,0x2ad7d2bbUL) - II(b,c,d,a,W[9],21,0xeb86d391UL) - - md5->state[0] = md5->state[0] + a; - md5->state[1] = md5->state[1] + b; - md5->state[2] = md5->state[2] + c; - md5->state[3] = md5->state[3] + d; -} - - -/** - Initialize the hash state - @param md5 The hash state you wish to initialize -*/ -static void -md5_init(struct md5_state *md5) -{ - assert(md5 != NULL); - md5->state[0] = 0x67452301UL; - md5->state[1] = 0xefcdab89UL; - md5->state[2] = 0x98badcfeUL; - md5->state[3] = 0x10325476UL; - md5->curlen = 0; - md5->length = 0; -} - -/** - Process a block of memory though the hash - @param md5 The hash state - @param in The data to hash - @param inlen The length of the data (octets) -*/ -static void -md5_process(struct md5_state *md5, const unsigned char *in, Py_ssize_t inlen) -{ - Py_ssize_t n; - - assert(md5 != NULL); - assert(in != NULL); - assert(md5->curlen <= sizeof(md5->buf)); - - while (inlen > 0) { - if (md5->curlen == 0 && inlen >= MD5_BLOCKSIZE) { - md5_compress(md5, in); - md5->length += MD5_BLOCKSIZE * 8; - in += MD5_BLOCKSIZE; - inlen -= MD5_BLOCKSIZE; - } else { - n = Py_MIN(inlen, (Py_ssize_t)(MD5_BLOCKSIZE - md5->curlen)); - memcpy(md5->buf + md5->curlen, in, (size_t)n); - md5->curlen += (MD5_INT32)n; - in 
+= n; - inlen -= n; - if (md5->curlen == MD5_BLOCKSIZE) { - md5_compress(md5, md5->buf); - md5->length += 8*MD5_BLOCKSIZE; - md5->curlen = 0; - } - } - } -} - -/** - Terminate the hash to get the digest - @param md5 The hash state - @param out [out] The destination of the hash (16 bytes) -*/ -static void -md5_done(struct md5_state *md5, unsigned char *out) -{ - int i; - - assert(md5 != NULL); - assert(out != NULL); - assert(md5->curlen < sizeof(md5->buf)); - - /* increase the length of the message */ - md5->length += md5->curlen * 8; - - /* append the '1' bit */ - md5->buf[md5->curlen++] = (unsigned char)0x80; - - /* if the length is currently above 56 bytes we append zeros - * then compress. Then we can fall back to padding zeros and length - * encoding like normal. - */ - if (md5->curlen > 56) { - while (md5->curlen < 64) { - md5->buf[md5->curlen++] = (unsigned char)0; - } - md5_compress(md5, md5->buf); - md5->curlen = 0; - } - - /* pad up to 56 bytes of zeroes */ - while (md5->curlen < 56) { - md5->buf[md5->curlen++] = (unsigned char)0; - } - - /* store length */ - STORE64L(md5->length, md5->buf+56); - md5_compress(md5, md5->buf); - - /* copy output */ - for (i = 0; i < 4; i++) { - STORE32L(md5->state[i], out+(4*i)); - } -} - -/* .Source: /cvs/libtom/libtomcrypt/src/hashes/md5.c,v $ */ -/* .Revision: 1.10 $ */ -/* .Date: 2007/05/12 14:25:28 $ */ - -/* - * End of copied MD5 code. - * - * ------------------------------------------------------------------------ - */ typedef struct { PyTypeObject* md5_type; @@ -350,8 +84,9 @@ MD5_traverse(PyObject *ptr, visitproc visit, void *arg) } static void -MD5_dealloc(PyObject *ptr) +MD5_dealloc(MD5object *ptr) { + Hacl_Streaming_MD5_legacy_free(ptr->hash_state); PyTypeObject *tp = Py_TYPE(ptr); PyObject_GC_UnTrack(ptr); PyObject_GC_Del(ptr); @@ -379,7 +114,7 @@ MD5Type_copy_impl(MD5object *self, PyTypeObject *cls) if ((newobj = newMD5object(st))==NULL) return NULL; - newobj->hash_state = self->hash_state; + newobj->hash_state = Hacl_Streaming_MD5_legacy_copy(self->hash_state); return (PyObject *)newobj; } @@ -394,10 +129,7 @@ MD5Type_digest_impl(MD5object *self) /*[clinic end generated code: output=eb691dc4190a07ec input=bc0c4397c2994be6]*/ { unsigned char digest[MD5_DIGESTSIZE]; - struct md5_state temp; - - temp = self->hash_state; - md5_done(&temp, digest); + Hacl_Streaming_MD5_legacy_finish(self->hash_state, digest); return PyBytes_FromStringAndSize((const char *)digest, MD5_DIGESTSIZE); } @@ -412,15 +144,21 @@ MD5Type_hexdigest_impl(MD5object *self) /*[clinic end generated code: output=17badced1f3ac932 input=b60b19de644798dd]*/ { unsigned char digest[MD5_DIGESTSIZE]; - struct md5_state temp; - - /* Get the raw (binary) digest value */ - temp = self->hash_state; - md5_done(&temp, digest); - + Hacl_Streaming_MD5_legacy_finish(self->hash_state, digest); return _Py_strhex((const char*)digest, MD5_DIGESTSIZE); } +static void update(Hacl_Streaming_MD5_state *state, uint8_t *buf, Py_ssize_t len) { +#if PY_SSIZE_T_MAX > UINT32_MAX + while (len > UINT32_MAX) { + Hacl_Streaming_MD5_legacy_update(state, buf, UINT32_MAX); + len -= UINT32_MAX; + buf += UINT32_MAX; + } +#endif + Hacl_Streaming_MD5_legacy_update(state, buf, (uint32_t) len); +} + /*[clinic input] MD5Type.update @@ -438,7 +176,7 @@ MD5Type_update(MD5object *self, PyObject *obj) GET_BUFFER_VIEW_OR_ERROUT(obj, &buf); - md5_process(&self->hash_state, buf.buf, buf.len); + update(self->hash_state, buf.buf, buf.len); PyBuffer_Release(&buf); Py_RETURN_NONE; @@ -531,7 +269,7 @@ _md5_md5_impl(PyObject 
*module, PyObject *string, int usedforsecurity) return NULL; } - md5_init(&new->hash_state); + new->hash_state = Hacl_Streaming_MD5_legacy_create_in(); if (PyErr_Occurred()) { Py_DECREF(new); @@ -540,7 +278,7 @@ _md5_md5_impl(PyObject *module, PyObject *string, int usedforsecurity) return NULL; } if (string) { - md5_process(&new->hash_state, buf.buf, buf.len); + update(new->hash_state, buf.buf, buf.len); PyBuffer_Release(&buf); } diff --git a/Modules/sha1module.c b/Modules/sha1module.c index 9153557fbde7..0f50d532acf9 100644 --- a/Modules/sha1module.c +++ b/Modules/sha1module.c @@ -43,260 +43,16 @@ typedef long long SHA1_INT64; /* 64-bit integer */ #define SHA1_BLOCKSIZE 64 #define SHA1_DIGESTSIZE 20 -/* The structure for storing SHA1 info */ - -struct sha1_state { - SHA1_INT64 length; - SHA1_INT32 state[5], curlen; - unsigned char buf[SHA1_BLOCKSIZE]; -}; +#include "_hacl/Hacl_Hash_SHA1.h" typedef struct { PyObject_HEAD - struct sha1_state hash_state; + Hacl_Streaming_SHA1_state *hash_state; } SHA1object; #include "clinic/sha1module.c.h" -/* ------------------------------------------------------------------------ - * - * This code for the SHA1 algorithm was noted as public domain. The - * original headers are pasted below. - * - * Several changes have been made to make it more compatible with the - * Python environment and desired interface. - * - */ - -/* LibTomCrypt, modular cryptographic library -- Tom St Denis - * - * LibTomCrypt is a library that provides various cryptographic - * algorithms in a highly modular and flexible manner. - * - * The library is free for all purposes without any express - * guarantee it works. - * - * Tom St Denis, tomstdenis at gmail.com, https://www.libtom.net - */ - -/* rotate the hard way (platform optimizations could be done) */ -#define ROL(x, y) ( (((unsigned long)(x)<<(unsigned long)((y)&31)) | (((unsigned long)(x)&0xFFFFFFFFUL)>>(unsigned long)(32-((y)&31)))) & 0xFFFFFFFFUL) -#define ROLc(x, y) ( (((unsigned long)(x)<<(unsigned long)((y)&31)) | (((unsigned long)(x)&0xFFFFFFFFUL)>>(unsigned long)(32-((y)&31)))) & 0xFFFFFFFFUL) - -/* Endian Neutral macros that work on all platforms */ - -#define STORE32H(x, y) \ - { (y)[0] = (unsigned char)(((x)>>24)&255); (y)[1] = (unsigned char)(((x)>>16)&255); \ - (y)[2] = (unsigned char)(((x)>>8)&255); (y)[3] = (unsigned char)((x)&255); } - -#define LOAD32H(x, y) \ - { x = ((unsigned long)((y)[0] & 255)<<24) | \ - ((unsigned long)((y)[1] & 255)<<16) | \ - ((unsigned long)((y)[2] & 255)<<8) | \ - ((unsigned long)((y)[3] & 255)); } - -#define STORE64H(x, y) \ - { (y)[0] = (unsigned char)(((x)>>56)&255); (y)[1] = (unsigned char)(((x)>>48)&255); \ - (y)[2] = (unsigned char)(((x)>>40)&255); (y)[3] = (unsigned char)(((x)>>32)&255); \ - (y)[4] = (unsigned char)(((x)>>24)&255); (y)[5] = (unsigned char)(((x)>>16)&255); \ - (y)[6] = (unsigned char)(((x)>>8)&255); (y)[7] = (unsigned char)((x)&255); } - - -/* SHA1 macros */ - -#define F0(x,y,z) (z ^ (x & (y ^ z))) -#define F1(x,y,z) (x ^ y ^ z) -#define F2(x,y,z) ((x & y) | (z & (x | y))) -#define F3(x,y,z) (x ^ y ^ z) - -static void sha1_compress(struct sha1_state *sha1, unsigned char *buf) -{ - SHA1_INT32 a,b,c,d,e,W[80],i; - - /* copy the state into 512-bits into W[0..15] */ - for (i = 0; i < 16; i++) { - LOAD32H(W[i], buf + (4*i)); - } - - /* copy state */ - a = sha1->state[0]; - b = sha1->state[1]; - c = sha1->state[2]; - d = sha1->state[3]; - e = sha1->state[4]; - - /* expand it */ - for (i = 16; i < 80; i++) { - W[i] = ROL(W[i-3] ^ W[i-8] ^ W[i-14] ^ W[i-16], 1); - } - 
- /* compress */ - /* round one */ - #define FF_0(a,b,c,d,e,i) e = (ROLc(a, 5) + F0(b,c,d) + e + W[i] + 0x5a827999UL); b = ROLc(b, 30); - #define FF_1(a,b,c,d,e,i) e = (ROLc(a, 5) + F1(b,c,d) + e + W[i] + 0x6ed9eba1UL); b = ROLc(b, 30); - #define FF_2(a,b,c,d,e,i) e = (ROLc(a, 5) + F2(b,c,d) + e + W[i] + 0x8f1bbcdcUL); b = ROLc(b, 30); - #define FF_3(a,b,c,d,e,i) e = (ROLc(a, 5) + F3(b,c,d) + e + W[i] + 0xca62c1d6UL); b = ROLc(b, 30); - - for (i = 0; i < 20; ) { - FF_0(a,b,c,d,e,i++); - FF_0(e,a,b,c,d,i++); - FF_0(d,e,a,b,c,i++); - FF_0(c,d,e,a,b,i++); - FF_0(b,c,d,e,a,i++); - } - - /* round two */ - for (; i < 40; ) { - FF_1(a,b,c,d,e,i++); - FF_1(e,a,b,c,d,i++); - FF_1(d,e,a,b,c,i++); - FF_1(c,d,e,a,b,i++); - FF_1(b,c,d,e,a,i++); - } - - /* round three */ - for (; i < 60; ) { - FF_2(a,b,c,d,e,i++); - FF_2(e,a,b,c,d,i++); - FF_2(d,e,a,b,c,i++); - FF_2(c,d,e,a,b,i++); - FF_2(b,c,d,e,a,i++); - } - - /* round four */ - for (; i < 80; ) { - FF_3(a,b,c,d,e,i++); - FF_3(e,a,b,c,d,i++); - FF_3(d,e,a,b,c,i++); - FF_3(c,d,e,a,b,i++); - FF_3(b,c,d,e,a,i++); - } - - #undef FF_0 - #undef FF_1 - #undef FF_2 - #undef FF_3 - - /* store */ - sha1->state[0] = sha1->state[0] + a; - sha1->state[1] = sha1->state[1] + b; - sha1->state[2] = sha1->state[2] + c; - sha1->state[3] = sha1->state[3] + d; - sha1->state[4] = sha1->state[4] + e; -} - -/** - Initialize the hash state - @param sha1 The hash state you wish to initialize -*/ -static void -sha1_init(struct sha1_state *sha1) -{ - assert(sha1 != NULL); - sha1->state[0] = 0x67452301UL; - sha1->state[1] = 0xefcdab89UL; - sha1->state[2] = 0x98badcfeUL; - sha1->state[3] = 0x10325476UL; - sha1->state[4] = 0xc3d2e1f0UL; - sha1->curlen = 0; - sha1->length = 0; -} - -/** - Process a block of memory though the hash - @param sha1 The hash state - @param in The data to hash - @param inlen The length of the data (octets) -*/ -static void -sha1_process(struct sha1_state *sha1, - const unsigned char *in, Py_ssize_t inlen) -{ - Py_ssize_t n; - - assert(sha1 != NULL); - assert(in != NULL); - assert(sha1->curlen <= sizeof(sha1->buf)); - - while (inlen > 0) { - if (sha1->curlen == 0 && inlen >= SHA1_BLOCKSIZE) { - sha1_compress(sha1, (unsigned char *)in); - sha1->length += SHA1_BLOCKSIZE * 8; - in += SHA1_BLOCKSIZE; - inlen -= SHA1_BLOCKSIZE; - } else { - n = Py_MIN(inlen, (Py_ssize_t)(SHA1_BLOCKSIZE - sha1->curlen)); - memcpy(sha1->buf + sha1->curlen, in, (size_t)n); - sha1->curlen += (SHA1_INT32)n; - in += n; - inlen -= n; - if (sha1->curlen == SHA1_BLOCKSIZE) { - sha1_compress(sha1, sha1->buf); - sha1->length += 8*SHA1_BLOCKSIZE; - sha1->curlen = 0; - } - } - } -} - -/** - Terminate the hash to get the digest - @param sha1 The hash state - @param out [out] The destination of the hash (20 bytes) -*/ -static void -sha1_done(struct sha1_state *sha1, unsigned char *out) -{ - int i; - - assert(sha1 != NULL); - assert(out != NULL); - assert(sha1->curlen < sizeof(sha1->buf)); - - /* increase the length of the message */ - sha1->length += sha1->curlen * 8; - - /* append the '1' bit */ - sha1->buf[sha1->curlen++] = (unsigned char)0x80; - - /* if the length is currently above 56 bytes we append zeros - * then compress. Then we can fall back to padding zeros and length - * encoding like normal. 
- */ - if (sha1->curlen > 56) { - while (sha1->curlen < 64) { - sha1->buf[sha1->curlen++] = (unsigned char)0; - } - sha1_compress(sha1, sha1->buf); - sha1->curlen = 0; - } - - /* pad up to 56 bytes of zeroes */ - while (sha1->curlen < 56) { - sha1->buf[sha1->curlen++] = (unsigned char)0; - } - - /* store length */ - STORE64H(sha1->length, sha1->buf+56); - sha1_compress(sha1, sha1->buf); - - /* copy output */ - for (i = 0; i < 5; i++) { - STORE32H(sha1->state[i], out+(4*i)); - } -} - - -/* .Source: /cvs/libtom/libtomcrypt/src/hashes/sha1.c,v $ */ -/* .Revision: 1.10 $ */ -/* .Date: 2007/05/12 14:25:28 $ */ - -/* - * End of copied SHA1 code. - * - * ------------------------------------------------------------------------ - */ typedef struct { PyTypeObject* sha1_type; @@ -328,8 +84,9 @@ SHA1_traverse(PyObject *ptr, visitproc visit, void *arg) } static void -SHA1_dealloc(PyObject *ptr) +SHA1_dealloc(SHA1object *ptr) { + Hacl_Streaming_SHA1_legacy_free(ptr->hash_state); PyTypeObject *tp = Py_TYPE(ptr); PyObject_GC_UnTrack(ptr); PyObject_GC_Del(ptr); @@ -357,7 +114,7 @@ SHA1Type_copy_impl(SHA1object *self, PyTypeObject *cls) if ((newobj = newSHA1object(st)) == NULL) return NULL; - newobj->hash_state = self->hash_state; + newobj->hash_state = Hacl_Streaming_SHA1_legacy_copy(self->hash_state); return (PyObject *)newobj; } @@ -372,10 +129,7 @@ SHA1Type_digest_impl(SHA1object *self) /*[clinic end generated code: output=2f05302a7aa2b5cb input=13824b35407444bd]*/ { unsigned char digest[SHA1_DIGESTSIZE]; - struct sha1_state temp; - - temp = self->hash_state; - sha1_done(&temp, digest); + Hacl_Streaming_SHA1_legacy_finish(self->hash_state, digest); return PyBytes_FromStringAndSize((const char *)digest, SHA1_DIGESTSIZE); } @@ -390,15 +144,21 @@ SHA1Type_hexdigest_impl(SHA1object *self) /*[clinic end generated code: output=4161fd71e68c6659 input=97691055c0c74ab0]*/ { unsigned char digest[SHA1_DIGESTSIZE]; - struct sha1_state temp; - - /* Get the raw (binary) digest value */ - temp = self->hash_state; - sha1_done(&temp, digest); - + Hacl_Streaming_SHA1_legacy_finish(self->hash_state, digest); return _Py_strhex((const char *)digest, SHA1_DIGESTSIZE); } +static void update(Hacl_Streaming_SHA1_state *state, uint8_t *buf, Py_ssize_t len) { +#if PY_SSIZE_T_MAX > UINT32_MAX + while (len > UINT32_MAX) { + Hacl_Streaming_SHA1_legacy_update(state, buf, UINT32_MAX); + len -= UINT32_MAX; + buf += UINT32_MAX; + } +#endif + Hacl_Streaming_SHA1_legacy_update(state, buf, (uint32_t) len); +} + /*[clinic input] SHA1Type.update @@ -416,7 +176,7 @@ SHA1Type_update(SHA1object *self, PyObject *obj) GET_BUFFER_VIEW_OR_ERROUT(obj, &buf); - sha1_process(&self->hash_state, buf.buf, buf.len); + update(self->hash_state, buf.buf, buf.len); PyBuffer_Release(&buf); Py_RETURN_NONE; @@ -509,7 +269,7 @@ _sha1_sha1_impl(PyObject *module, PyObject *string, int usedforsecurity) return NULL; } - sha1_init(&new->hash_state); + new->hash_state = Hacl_Streaming_SHA1_legacy_create_in(); if (PyErr_Occurred()) { Py_DECREF(new); @@ -518,7 +278,7 @@ _sha1_sha1_impl(PyObject *module, PyObject *string, int usedforsecurity) return NULL; } if (string) { - sha1_process(&new->hash_state, buf.buf, buf.len); + update(new->hash_state, buf.buf, buf.len); PyBuffer_Release(&buf); } diff --git a/PCbuild/pythoncore.vcxproj b/PCbuild/pythoncore.vcxproj index 222963bc42d1..85dc8caa458e 100644 --- a/PCbuild/pythoncore.vcxproj +++ b/PCbuild/pythoncore.vcxproj @@ -400,12 +400,14 @@ + + diff --git a/PCbuild/pythoncore.vcxproj.filters b/PCbuild/pythoncore.vcxproj.filters 
index efb96222043a..98e7d59ba102 100644 --- a/PCbuild/pythoncore.vcxproj.filters +++ b/PCbuild/pythoncore.vcxproj.filters @@ -848,6 +848,9 @@ Modules + + Modules + Modules @@ -863,6 +866,9 @@ Modules + + Modules + Modules diff --git a/configure b/configure index 17dc62fb63de..557519ad86e0 100755 --- a/configure +++ b/configure @@ -26844,7 +26844,7 @@ fi as_fn_append MODULE_BLOCK "MODULE__MD5_STATE=$py_cv_module__md5$as_nl" if test "x$py_cv_module__md5" = xyes; then : - + as_fn_append MODULE_BLOCK "MODULE__MD5_CFLAGS=-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE$as_nl" fi @@ -26878,7 +26878,7 @@ fi as_fn_append MODULE_BLOCK "MODULE__SHA1_STATE=$py_cv_module__sha1$as_nl" if test "x$py_cv_module__sha1" = xyes; then : - + as_fn_append MODULE_BLOCK "MODULE__SHA1_CFLAGS=-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE$as_nl" fi diff --git a/configure.ac b/configure.ac index bc288b86cfa5..982f669acbcf 100644 --- a/configure.ac +++ b/configure.ac @@ -7194,8 +7194,12 @@ PY_STDLIB_MOD_SIMPLE([unicodedata]) dnl By default we always compile these even when OpenSSL is available dnl (issue #14693). The modules are small. -PY_STDLIB_MOD([_md5], [test "$with_builtin_md5" = yes]) -PY_STDLIB_MOD([_sha1], [test "$with_builtin_sha1" = yes]) +PY_STDLIB_MOD([_md5], + [test "$with_builtin_md5" = yes], [], + [-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE]) +PY_STDLIB_MOD([_sha1], + [test "$with_builtin_sha1" = yes], [], + [-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE]) PY_STDLIB_MOD([_sha2], [test "$with_builtin_sha2" = yes], [], [-I\$(srcdir)/Modules/_hacl/include -I\$(srcdir)/Modules/_hacl/internal -D_BSD_SOURCE -D_DEFAULT_SOURCE]) From webhook-mailer at python.org Wed Feb 22 16:35:59 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 22 Feb 2023 21:35:59 -0000 Subject: [Python-checkins] GH-101777: `queue.rst`: use 2 spaces after a period to be consistent. (GH-102143) Message-ID: https://github.com/python/cpython/commit/fa592f0e017b251812ab9b459dbe9c417415b458 commit: fa592f0e017b251812ab9b459dbe9c417415b458 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-22T13:35:52-08:00 summary: GH-101777: `queue.rst`: use 2 spaces after a period to be consistent. (GH-102143) (cherry picked from commit 96bf24380e44dfa1516d65480250995e737c0cb9) Co-authored-by: Owain Davies <116417456+OTheDev at users.noreply.github.com> files: M Doc/library/queue.rst diff --git a/Doc/library/queue.rst b/Doc/library/queue.rst index 46b8e9b18a3c..b2b787c5a826 100644 --- a/Doc/library/queue.rst +++ b/Doc/library/queue.rst @@ -15,7 +15,7 @@ module implements all the required locking semantics. The module implements three types of queue, which differ only in the order in which the entries are retrieved. In a :abbr:`FIFO (first-in, first-out)` -queue, the first tasks added are the first retrieved. In a +queue, the first tasks added are the first retrieved. In a :abbr:`LIFO (last-in, first-out)` queue, the most recently added entry is the first retrieved (operating like a stack). 
With a priority queue, the entries are kept sorted (using the :mod:`heapq` module) and the @@ -57,7 +57,7 @@ The :mod:`queue` module defines the following classes and exceptions: *maxsize* is less than or equal to zero, the queue size is infinite. The lowest valued entries are retrieved first (the lowest valued entry is the - one that would be returned by ``min(entries)``). A typical pattern for + one that would be returned by ``min(entries)``). A typical pattern for entries is a tuple in the form: ``(priority_number, data)``. If the *data* elements are not comparable, the data can be wrapped in a class @@ -127,8 +127,8 @@ provide the public methods described below. .. method:: Queue.put(item, block=True, timeout=None) - Put *item* into the queue. If optional args *block* is true and *timeout* is - ``None`` (the default), block if necessary until a free slot is available. If + Put *item* into the queue. If optional args *block* is true and *timeout* is + ``None`` (the default), block if necessary until a free slot is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the :exc:`Full` exception if no free slot was available within that time. Otherwise (*block* is false), put an item on the queue if a free slot is @@ -143,7 +143,7 @@ provide the public methods described below. .. method:: Queue.get(block=True, timeout=None) - Remove and return an item from the queue. If optional args *block* is true and + Remove and return an item from the queue. If optional args *block* is true and *timeout* is ``None`` (the default), block if necessary until an item is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the :exc:`Empty` exception if no item was available within that time. @@ -152,7 +152,7 @@ provide the public methods described below. Prior to 3.0 on POSIX systems, and for all versions on Windows, if *block* is true and *timeout* is ``None``, this operation goes into - an uninterruptible wait on an underlying lock. This means that no exceptions + an uninterruptible wait on an underlying lock. This means that no exceptions can occur, and in particular a SIGINT will not trigger a :exc:`KeyboardInterrupt`. @@ -184,7 +184,7 @@ fully processed by daemon consumer threads. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls :meth:`task_done` to - indicate that the item was retrieved and all work on it is complete. When the + indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, :meth:`join` unblocks. @@ -227,7 +227,7 @@ SimpleQueue Objects .. method:: SimpleQueue.empty() - Return ``True`` if the queue is empty, ``False`` otherwise. If empty() + Return ``True`` if the queue is empty, ``False`` otherwise. If empty() returns ``False`` it doesn't guarantee that a subsequent call to get() will not block. From webhook-mailer at python.org Wed Feb 22 16:39:12 2023 From: webhook-mailer at python.org (miss-islington) Date: Wed, 22 Feb 2023 21:39:12 -0000 Subject: [Python-checkins] GH-101777: `queue.rst`: use 2 spaces after a period to be consistent. 
(GH-102143) Message-ID: https://github.com/python/cpython/commit/8b5498e36795d2d17c0673cfa8831cb052ec7a0a commit: 8b5498e36795d2d17c0673cfa8831cb052ec7a0a branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-22T13:39:05-08:00 summary: GH-101777: `queue.rst`: use 2 spaces after a period to be consistent. (GH-102143) (cherry picked from commit 96bf24380e44dfa1516d65480250995e737c0cb9) Co-authored-by: Owain Davies <116417456+OTheDev at users.noreply.github.com> files: M Doc/library/queue.rst diff --git a/Doc/library/queue.rst b/Doc/library/queue.rst index 46b8e9b18a3c..b2b787c5a826 100644 --- a/Doc/library/queue.rst +++ b/Doc/library/queue.rst @@ -15,7 +15,7 @@ module implements all the required locking semantics. The module implements three types of queue, which differ only in the order in which the entries are retrieved. In a :abbr:`FIFO (first-in, first-out)` -queue, the first tasks added are the first retrieved. In a +queue, the first tasks added are the first retrieved. In a :abbr:`LIFO (last-in, first-out)` queue, the most recently added entry is the first retrieved (operating like a stack). With a priority queue, the entries are kept sorted (using the :mod:`heapq` module) and the @@ -57,7 +57,7 @@ The :mod:`queue` module defines the following classes and exceptions: *maxsize* is less than or equal to zero, the queue size is infinite. The lowest valued entries are retrieved first (the lowest valued entry is the - one that would be returned by ``min(entries)``). A typical pattern for + one that would be returned by ``min(entries)``). A typical pattern for entries is a tuple in the form: ``(priority_number, data)``. If the *data* elements are not comparable, the data can be wrapped in a class @@ -127,8 +127,8 @@ provide the public methods described below. .. method:: Queue.put(item, block=True, timeout=None) - Put *item* into the queue. If optional args *block* is true and *timeout* is - ``None`` (the default), block if necessary until a free slot is available. If + Put *item* into the queue. If optional args *block* is true and *timeout* is + ``None`` (the default), block if necessary until a free slot is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the :exc:`Full` exception if no free slot was available within that time. Otherwise (*block* is false), put an item on the queue if a free slot is @@ -143,7 +143,7 @@ provide the public methods described below. .. method:: Queue.get(block=True, timeout=None) - Remove and return an item from the queue. If optional args *block* is true and + Remove and return an item from the queue. If optional args *block* is true and *timeout* is ``None`` (the default), block if necessary until an item is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the :exc:`Empty` exception if no item was available within that time. @@ -152,7 +152,7 @@ provide the public methods described below. Prior to 3.0 on POSIX systems, and for all versions on Windows, if *block* is true and *timeout* is ``None``, this operation goes into - an uninterruptible wait on an underlying lock. This means that no exceptions + an uninterruptible wait on an underlying lock. This means that no exceptions can occur, and in particular a SIGINT will not trigger a :exc:`KeyboardInterrupt`. @@ -184,7 +184,7 @@ fully processed by daemon consumer threads. 
The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls :meth:`task_done` to - indicate that the item was retrieved and all work on it is complete. When the + indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, :meth:`join` unblocks. @@ -227,7 +227,7 @@ SimpleQueue Objects .. method:: SimpleQueue.empty() - Return ``True`` if the queue is empty, ``False`` otherwise. If empty() + Return ``True`` if the queue is empty, ``False`` otherwise. If empty() returns ``False`` it doesn't guarantee that a subsequent call to get() will not block. From webhook-mailer at python.org Wed Feb 22 18:55:12 2023 From: webhook-mailer at python.org (terryjreedy) Date: Wed, 22 Feb 2023 23:55:12 -0000 Subject: [Python-checkins] Fix syntax error in struct doc example (#102160) Message-ID: https://github.com/python/cpython/commit/8f647477f0ab5362741d261701b5bcd76bd69ec1 commit: 8f647477f0ab5362741d261701b5bcd76bd69ec1 branch: main author: Terry Jan Reedy committer: terryjreedy date: 2023-02-22T18:55:03-05:00 summary: Fix syntax error in struct doc example (#102160) Missing closing ) reported on Discuss by Chukwudi Nwachukwu. files: M Doc/library/struct.rst diff --git a/Doc/library/struct.rst b/Doc/library/struct.rst index 69d95f27cb61..9c0e32ba16bf 100644 --- a/Doc/library/struct.rst +++ b/Doc/library/struct.rst @@ -371,7 +371,7 @@ ordering:: >>> from struct import * >>> pack(">bhl", 1, 2, 3) b'\x01\x00\x02\x00\x00\x00\x03' - >>> unpack('>bhl', b'\x01\x00\x02\x00\x00\x00\x03' + >>> unpack('>bhl', b'\x01\x00\x02\x00\x00\x00\x03') (1, 2, 3) >>> calcsize('>bhl') 7 From webhook-mailer at python.org Wed Feb 22 19:02:41 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 00:02:41 -0000 Subject: [Python-checkins] Fix syntax error in struct doc example (GH-102160) Message-ID: https://github.com/python/cpython/commit/bf0a8362cdc2670373b4462943e6e6bf156d0a5c commit: bf0a8362cdc2670373b4462943e6e6bf156d0a5c branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-22T16:02:31-08:00 summary: Fix syntax error in struct doc example (GH-102160) Missing closing ) reported on Discuss by Chukwudi Nwachukwu. 
(cherry picked from commit 8f647477f0ab5362741d261701b5bcd76bd69ec1) Co-authored-by: Terry Jan Reedy files: M Doc/library/struct.rst diff --git a/Doc/library/struct.rst b/Doc/library/struct.rst index 50d70731f775..12f247462fa6 100644 --- a/Doc/library/struct.rst +++ b/Doc/library/struct.rst @@ -371,7 +371,7 @@ ordering:: >>> from struct import * >>> pack(">bhl", 1, 2, 3) b'\x01\x00\x02\x00\x00\x00\x03' - >>> unpack('>bhl', b'\x01\x00\x02\x00\x00\x00\x03' + >>> unpack('>bhl', b'\x01\x00\x02\x00\x00\x00\x03') (1, 2, 3) >>> calcsize('>bhl') 7 From webhook-mailer at python.org Wed Feb 22 19:02:44 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 00:02:44 -0000 Subject: [Python-checkins] Fix syntax error in struct doc example (GH-102160) Message-ID: https://github.com/python/cpython/commit/780dec8a9468a5f31c9f7ebfc4f673ed4ed9e0e8 commit: 780dec8a9468a5f31c9f7ebfc4f673ed4ed9e0e8 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-22T16:02:38-08:00 summary: Fix syntax error in struct doc example (GH-102160) Missing closing ) reported on Discuss by Chukwudi Nwachukwu. (cherry picked from commit 8f647477f0ab5362741d261701b5bcd76bd69ec1) Co-authored-by: Terry Jan Reedy files: M Doc/library/struct.rst diff --git a/Doc/library/struct.rst b/Doc/library/struct.rst index 50d70731f775..12f247462fa6 100644 --- a/Doc/library/struct.rst +++ b/Doc/library/struct.rst @@ -371,7 +371,7 @@ ordering:: >>> from struct import * >>> pack(">bhl", 1, 2, 3) b'\x01\x00\x02\x00\x00\x00\x03' - >>> unpack('>bhl', b'\x01\x00\x02\x00\x00\x00\x03' + >>> unpack('>bhl', b'\x01\x00\x02\x00\x00\x00\x03') (1, 2, 3) >>> calcsize('>bhl') 7 From webhook-mailer at python.org Wed Feb 22 20:49:37 2023 From: webhook-mailer at python.org (ethanfurman) Date: Thu, 23 Feb 2023 01:49:37 -0000 Subject: [Python-checkins] gh-87634: remove locking from functools.cached_property (GH-101890) Message-ID: https://github.com/python/cpython/commit/056dfc71dce15f81887f0bd6da09d6099d71f979 commit: 056dfc71dce15f81887f0bd6da09d6099d71f979 branch: main author: Carl Meyer committer: ethanfurman date: 2023-02-22T17:49:22-08:00 summary: gh-87634: remove locking from functools.cached_property (GH-101890) Remove the undocumented locking capabilities of functools.cached_property. files: A Misc/NEWS.d/next/Library/2023-02-13-12-55-48.gh-issue-87634.q-SBhJ.rst M Doc/library/functools.rst M Doc/whatsnew/3.12.rst M Lib/functools.py M Lib/test/test_functools.py diff --git a/Doc/library/functools.rst b/Doc/library/functools.rst index 80a405e87d8d..d467e50bc7a4 100644 --- a/Doc/library/functools.rst +++ b/Doc/library/functools.rst @@ -86,6 +86,14 @@ The :mod:`functools` module defines the following functions: The cached value can be cleared by deleting the attribute. This allows the *cached_property* method to run again. + The *cached_property* does not prevent a possible race condition in + multi-threaded usage. The getter function could run more than once on the + same instance, with the latest run setting the cached value. If the cached + property is idempotent or otherwise not harmful to run more than once on an + instance, this is fine. If synchronization is needed, implement the necessary + locking inside the decorated getter function or around the cached property + access. + Note, this decorator interferes with the operation of :pep:`412` key-sharing dictionaries. 
This means that instance dictionaries can take more space than usual. @@ -110,6 +118,14 @@ The :mod:`functools` module defines the following functions: def stdev(self): return statistics.stdev(self._data) + + .. versionchanged:: 3.12 + Prior to Python 3.12, ``cached_property`` included an undocumented lock to + ensure that in multi-threaded usage the getter function was guaranteed to + run only once per instance. However, the lock was per-property, not + per-instance, which could result in unacceptably high lock contention. In + Python 3.12+ this locking is removed. + .. versionadded:: 3.8 diff --git a/Doc/whatsnew/3.12.rst b/Doc/whatsnew/3.12.rst index c62f462a19a2..909c9102a405 100644 --- a/Doc/whatsnew/3.12.rst +++ b/Doc/whatsnew/3.12.rst @@ -761,6 +761,15 @@ Changes in the Python API around process-global resources, which are best managed from the main interpreter. (Contributed by Dong-hee Na in :gh:`99127`.) +* The undocumented locking behavior of :func:`~functools.cached_property` + is removed, because it locked across all instances of the class, leading to high + lock contention. This means that a cached property getter function could now run + more than once for a single instance, if two threads race. For most simple + cached properties (e.g. those that are idempotent and simply calculate a value + based on other attributes of the instance) this will be fine. If + synchronization is needed, implement locking within the cached property getter + function or around multi-threaded access points. + Build Changes ============= diff --git a/Lib/functools.py b/Lib/functools.py index 43ead512e1ea..aaf4291150fb 100644 --- a/Lib/functools.py +++ b/Lib/functools.py @@ -959,15 +959,12 @@ def __isabstractmethod__(self): ### cached_property() - computed once per instance, cached as attribute ################################################################################ -_NOT_FOUND = object() - class cached_property: def __init__(self, func): self.func = func self.attrname = None self.__doc__ = func.__doc__ - self.lock = RLock() def __set_name__(self, owner, name): if self.attrname is None: @@ -992,21 +989,15 @@ def __get__(self, instance, owner=None): f"instance to cache {self.attrname!r} property." ) raise TypeError(msg) from None - val = cache.get(self.attrname, _NOT_FOUND) - if val is _NOT_FOUND: - with self.lock: - # check if another thread filled cache while we awaited lock - val = cache.get(self.attrname, _NOT_FOUND) - if val is _NOT_FOUND: - val = self.func(instance) - try: - cache[self.attrname] = val - except TypeError: - msg = ( - f"The '__dict__' attribute on {type(instance).__name__!r} instance " - f"does not support item assignment for caching {self.attrname!r} property." - ) - raise TypeError(msg) from None + val = self.func(instance) + try: + cache[self.attrname] = val + except TypeError: + msg = ( + f"The '__dict__' attribute on {type(instance).__name__!r} instance " + f"does not support item assignment for caching {self.attrname!r} property." 
+ ) + raise TypeError(msg) from None return val __class_getitem__ = classmethod(GenericAlias) diff --git a/Lib/test/test_functools.py b/Lib/test/test_functools.py index 730ab1f595f2..57db96d37ee3 100644 --- a/Lib/test/test_functools.py +++ b/Lib/test/test_functools.py @@ -2931,21 +2931,6 @@ def get_cost(self): cached_cost = py_functools.cached_property(get_cost) -class CachedCostItemWait: - - def __init__(self, event): - self._cost = 1 - self.lock = py_functools.RLock() - self.event = event - - @py_functools.cached_property - def cost(self): - self.event.wait(1) - with self.lock: - self._cost += 1 - return self._cost - - class CachedCostItemWithSlots: __slots__ = ('_cost') @@ -2970,27 +2955,6 @@ def test_cached_attribute_name_differs_from_func_name(self): self.assertEqual(item.get_cost(), 4) self.assertEqual(item.cached_cost, 3) - @threading_helper.requires_working_threading() - def test_threaded(self): - go = threading.Event() - item = CachedCostItemWait(go) - - num_threads = 3 - - orig_si = sys.getswitchinterval() - sys.setswitchinterval(1e-6) - try: - threads = [ - threading.Thread(target=lambda: item.cost) - for k in range(num_threads) - ] - with threading_helper.start_threads(threads): - go.set() - finally: - sys.setswitchinterval(orig_si) - - self.assertEqual(item.cost, 2) - def test_object_with_slots(self): item = CachedCostItemWithSlots() with self.assertRaisesRegex( diff --git a/Misc/NEWS.d/next/Library/2023-02-13-12-55-48.gh-issue-87634.q-SBhJ.rst b/Misc/NEWS.d/next/Library/2023-02-13-12-55-48.gh-issue-87634.q-SBhJ.rst new file mode 100644 index 000000000000..a17927500bd9 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-13-12-55-48.gh-issue-87634.q-SBhJ.rst @@ -0,0 +1 @@ +Remove locking behavior from :func:`functools.cached_property`. From webhook-mailer at python.org Thu Feb 23 05:18:25 2023 From: webhook-mailer at python.org (iritkatriel) Date: Thu, 23 Feb 2023 10:18:25 -0000 Subject: [Python-checkins] Revert "bpo-46978: Correct docstrings for in-place builtin operators #31802) (#102146) Message-ID: https://github.com/python/cpython/commit/572223f9ce99e8816abdcc1536db6c4ceed2d848 commit: 572223f9ce99e8816abdcc1536db6c4ceed2d848 branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-23T10:17:44Z summary: Revert "bpo-46978: Correct docstrings for in-place builtin operators #31802) (#102146) Revert "bpo-46978: Correct docstrings for in-place builtin operators (#31802)" This reverts commit 128379b8cdb88a6d3d7fed24df082c9a654b3fb8. files: D Misc/NEWS.d/next/Core and Builtins/2022-03-10-21-48-05.bpo-46978.f5QFfw.rst M Objects/typeobject.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2022-03-10-21-48-05.bpo-46978.f5QFfw.rst b/Misc/NEWS.d/next/Core and Builtins/2022-03-10-21-48-05.bpo-46978.f5QFfw.rst deleted file mode 100644 index 72291d042a03..000000000000 --- a/Misc/NEWS.d/next/Core and Builtins/2022-03-10-21-48-05.bpo-46978.f5QFfw.rst +++ /dev/null @@ -1 +0,0 @@ -Fixed docstrings for in-place operators of built-in types. diff --git a/Objects/typeobject.c b/Objects/typeobject.c index 2d1220a06950..b3f1429debc5 100644 --- a/Objects/typeobject.c +++ b/Objects/typeobject.c @@ -8564,7 +8564,7 @@ an all-zero entry. 
#NAME "($self, /)\n--\n\n" DOC) #define IBSLOT(NAME, SLOT, FUNCTION, WRAPPER, DOC) \ ETSLOT(NAME, as_number.SLOT, FUNCTION, WRAPPER, \ - #NAME "($self, value, /)\n--\n\nCompute self " DOC " value.") + #NAME "($self, value, /)\n--\n\nReturn self" DOC "value.") #define BINSLOT(NAME, SLOT, FUNCTION, DOC) \ ETSLOT(NAME, as_number.SLOT, FUNCTION, wrap_binaryfunc_l, \ #NAME "($self, value, /)\n--\n\nReturn self" DOC "value.") From webhook-mailer at python.org Thu Feb 23 05:19:09 2023 From: webhook-mailer at python.org (markshannon) Date: Thu, 23 Feb 2023 10:19:09 -0000 Subject: [Python-checkins] GH-100719: Remove redundant `gi_code` field from generator object. (GH-100749) Message-ID: https://github.com/python/cpython/commit/22b8d77b98a5944e688be0927b8139c49d4a7257 commit: 22b8d77b98a5944e688be0927b8139c49d4a7257 branch: main author: Mark Shannon committer: markshannon date: 2023-02-23T10:19:01Z summary: GH-100719: Remove redundant `gi_code` field from generator object. (GH-100749) files: A Misc/NEWS.d/next/Core and Builtins/2023-01-04-12-49-33.gh-issue-100719.uRPccL.rst M Include/cpython/genobject.h M Include/internal/pycore_frame.h M Lib/test/test_capi/test_misc.py M Lib/test/test_sys.py M Modules/_testcapimodule.c M Objects/genobject.c M Python/ceval.c M Python/frame.c diff --git a/Include/cpython/genobject.h b/Include/cpython/genobject.h index 6127ba7babb8..18b8ce913e6e 100644 --- a/Include/cpython/genobject.h +++ b/Include/cpython/genobject.h @@ -13,8 +13,6 @@ extern "C" { and coroutine objects. */ #define _PyGenObject_HEAD(prefix) \ PyObject_HEAD \ - /* The code object backing the generator */ \ - PyCodeObject *prefix##_code; \ /* List of weak reference. */ \ PyObject *prefix##_weakreflist; \ /* Name of the generator. */ \ @@ -28,7 +26,7 @@ extern "C" { char prefix##_running_async; \ /* The frame */ \ int8_t prefix##_frame_state; \ - PyObject *prefix##_iframe[1]; + PyObject *prefix##_iframe[1]; \ typedef struct { /* The gi_ prefix is intended to remind of generator-iterator. */ @@ -46,6 +44,7 @@ PyAPI_FUNC(PyObject *) PyGen_NewWithQualName(PyFrameObject *, PyAPI_FUNC(int) _PyGen_SetStopIterationValue(PyObject *); PyAPI_FUNC(int) _PyGen_FetchStopIterationValue(PyObject **); PyAPI_FUNC(void) _PyGen_Finalize(PyObject *self); +PyAPI_FUNC(PyCodeObject *) PyGen_GetCode(PyGenObject *gen); /* --- PyCoroObject ------------------------------------------------------- */ diff --git a/Include/internal/pycore_frame.h b/Include/internal/pycore_frame.h index 81d16b219c30..5806cf05f174 100644 --- a/Include/internal/pycore_frame.h +++ b/Include/internal/pycore_frame.h @@ -209,7 +209,7 @@ _PyFrame_GetFrameObject(_PyInterpreterFrame *frame) * frames like the ones in generators and coroutines. 
*/ void -_PyFrame_Clear(_PyInterpreterFrame * frame); +_PyFrame_ClearExceptCode(_PyInterpreterFrame * frame); int _PyFrame_Traverse(_PyInterpreterFrame *frame, visitproc visit, void *arg); diff --git a/Lib/test/test_capi/test_misc.py b/Lib/test/test_capi/test_misc.py index ad099c61463b..f4569fb00546 100644 --- a/Lib/test/test_capi/test_misc.py +++ b/Lib/test/test_capi/test_misc.py @@ -1213,6 +1213,11 @@ def test_pendingcalls_non_threaded(self): self.pendingcalls_submit(l, n) self.pendingcalls_wait(l, n) + def test_gen_get_code(self): + def genf(): yield + gen = genf() + self.assertEqual(_testcapi.gen_get_code(gen), gen.gi_code) + class SubinterpreterTest(unittest.TestCase): diff --git a/Lib/test/test_sys.py b/Lib/test/test_sys.py index ab1a06594718..58aa9d10210e 100644 --- a/Lib/test/test_sys.py +++ b/Lib/test/test_sys.py @@ -1460,7 +1460,7 @@ def bar(cls): check(bar, size('PP')) # generator def get_gen(): yield 1 - check(get_gen(), size('P2P4P4c7P2ic??2P')) + check(get_gen(), size('PP4P4c7P2ic??2P')) # iterator check(iter('abc'), size('lP')) # callable-iterator diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-01-04-12-49-33.gh-issue-100719.uRPccL.rst b/Misc/NEWS.d/next/Core and Builtins/2023-01-04-12-49-33.gh-issue-100719.uRPccL.rst new file mode 100644 index 000000000000..2addef27b8ea --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-01-04-12-49-33.gh-issue-100719.uRPccL.rst @@ -0,0 +1,3 @@ +Remove gi_code field from generator (and coroutine and async generator) +objects as it is redundant. The frame already includes a reference to the +code object. diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index 0d8d1d73fb23..e2237d25a9a9 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -3076,6 +3076,16 @@ eval_get_func_desc(PyObject *self, PyObject *func) return PyUnicode_FromString(PyEval_GetFuncDesc(func)); } +static PyObject * +gen_get_code(PyObject *self, PyObject *gen) +{ + if (!PyGen_Check(gen)) { + PyErr_SetString(PyExc_TypeError, "argument must be a generator object"); + return NULL; + } + return (PyObject *)PyGen_GetCode((PyGenObject *)gen); +} + static PyObject * eval_eval_code_ex(PyObject *mod, PyObject *pos_args) { @@ -3657,6 +3667,7 @@ static PyMethodDef TestMethods[] = { {"frame_getvarstring", test_frame_getvarstring, METH_VARARGS, NULL}, {"eval_get_func_name", eval_get_func_name, METH_O, NULL}, {"eval_get_func_desc", eval_get_func_desc, METH_O, NULL}, + {"gen_get_code", gen_get_code, METH_O, NULL}, {"get_feature_macros", get_feature_macros, METH_NOARGS, NULL}, {"test_code_api", test_code_api, METH_NOARGS, NULL}, {"settrace_to_record", settrace_to_record, METH_O, NULL}, diff --git a/Objects/genobject.c b/Objects/genobject.c index aec64ca7004e..4ab6581e12ab 100644 --- a/Objects/genobject.c +++ b/Objects/genobject.c @@ -24,6 +24,21 @@ static const char *NON_INIT_CORO_MSG = "can't send non-None value to a " static const char *ASYNC_GEN_IGNORED_EXIT_MSG = "async generator ignored GeneratorExit"; +/* Returns a borrowed reference */ +static inline PyCodeObject * +_PyGen_GetCode(PyGenObject *gen) { + _PyInterpreterFrame *frame = (_PyInterpreterFrame *)(gen->gi_iframe); + return frame->f_code; +} + +PyCodeObject * +PyGen_GetCode(PyGenObject *gen) { + assert(PyGen_Check(gen)); + PyCodeObject *res = _PyGen_GetCode(gen); + Py_INCREF(res); + return res; +} + static inline int exc_state_traverse(_PyErr_StackItem *exc_state, visitproc visit, void *arg) { @@ -34,7 +49,6 @@ exc_state_traverse(_PyErr_StackItem *exc_state, visitproc visit, 
void *arg) static int gen_traverse(PyGenObject *gen, visitproc visit, void *arg) { - Py_VISIT(gen->gi_code); Py_VISIT(gen->gi_name); Py_VISIT(gen->gi_qualname); if (gen->gi_frame_state < FRAME_CLEARED) { @@ -88,8 +102,8 @@ _PyGen_Finalize(PyObject *self) /* If `gen` is a coroutine, and if it was never awaited on, issue a RuntimeWarning. */ - if (gen->gi_code != NULL && - ((PyCodeObject *)gen->gi_code)->co_flags & CO_COROUTINE && + assert(_PyGen_GetCode(gen) != NULL); + if (_PyGen_GetCode(gen)->co_flags & CO_COROUTINE && gen->gi_frame_state == FRAME_CREATED) { _PyErr_WarnUnawaitedCoroutine((PyObject *)gen); @@ -137,12 +151,12 @@ gen_dealloc(PyGenObject *gen) _PyInterpreterFrame *frame = (_PyInterpreterFrame *)gen->gi_iframe; gen->gi_frame_state = FRAME_CLEARED; frame->previous = NULL; - _PyFrame_Clear(frame); + _PyFrame_ClearExceptCode(frame); } - if (((PyCodeObject *)gen->gi_code)->co_flags & CO_COROUTINE) { + if (_PyGen_GetCode(gen)->co_flags & CO_COROUTINE) { Py_CLEAR(((PyCoroObject *)gen)->cr_origin_or_finalizer); } - Py_CLEAR(gen->gi_code); + Py_DECREF(_PyGen_GetCode(gen)); Py_CLEAR(gen->gi_name); Py_CLEAR(gen->gi_qualname); _PyErr_ClearExcState(&gen->gi_exc_state); @@ -332,7 +346,7 @@ _PyGen_yf(PyGenObject *gen) /* Return immediately if the frame didn't start yet. SEND always come after LOAD_CONST: a code object should not start with SEND */ - assert(_PyCode_CODE(gen->gi_code)[0].op.code != SEND); + assert(_PyCode_CODE(_PyGen_GetCode(gen))[0].op.code != SEND); return NULL; } _Py_CODEUNIT next = frame->prev_instr[1]; @@ -767,6 +781,21 @@ gen_getframe(PyGenObject *gen, void *Py_UNUSED(ignored)) return _gen_getframe(gen, "gi_frame"); } +static PyObject * +_gen_getcode(PyGenObject *gen, const char *const name) +{ + if (PySys_Audit("object.__getattr__", "Os", gen, name) < 0) { + return NULL; + } + return Py_NewRef(_PyGen_GetCode(gen)); +} + +static PyObject * +gen_getcode(PyGenObject *gen, void *Py_UNUSED(ignored)) +{ + return _gen_getcode(gen, "gi_code"); +} + static PyGetSetDef gen_getsetlist[] = { {"__name__", (getter)gen_get_name, (setter)gen_set_name, PyDoc_STR("name of the generator")}, @@ -777,11 +806,11 @@ static PyGetSetDef gen_getsetlist[] = { {"gi_running", (getter)gen_getrunning, NULL, NULL}, {"gi_frame", (getter)gen_getframe, NULL, NULL}, {"gi_suspended", (getter)gen_getsuspended, NULL, NULL}, + {"gi_code", (getter)gen_getcode, NULL, NULL}, {NULL} /* Sentinel */ }; static PyMemberDef gen_memberlist[] = { - {"gi_code", T_OBJECT, offsetof(PyGenObject, gi_code), READONLY|PY_AUDIT_READ}, {NULL} /* Sentinel */ }; @@ -790,7 +819,7 @@ gen_sizeof(PyGenObject *gen, PyObject *Py_UNUSED(ignored)) { Py_ssize_t res; res = offsetof(PyGenObject, gi_iframe) + offsetof(_PyInterpreterFrame, localsplus); - PyCodeObject *code = gen->gi_code; + PyCodeObject *code = _PyGen_GetCode(gen); res += _PyFrame_NumSlotsForCodeObject(code) * sizeof(PyObject *); return PyLong_FromSsize_t(res); } @@ -878,7 +907,6 @@ make_gen(PyTypeObject *type, PyFunctionObject *func) return NULL; } gen->gi_frame_state = FRAME_CLEARED; - gen->gi_code = (PyCodeObject *)Py_NewRef(func->func_code); gen->gi_weakreflist = NULL; gen->gi_exc_state.exc_value = NULL; gen->gi_exc_state.previous_item = NULL; @@ -960,8 +988,6 @@ gen_new_with_qualname(PyTypeObject *type, PyFrameObject *f, f->f_frame = frame; frame->owner = FRAME_OWNED_BY_GENERATOR; assert(PyObject_GC_IsTracked((PyObject *)f)); - gen->gi_code = PyFrame_GetCode(f); - Py_INCREF(gen->gi_code); Py_DECREF(f); gen->gi_weakreflist = NULL; gen->gi_exc_state.exc_value = NULL; @@ 
-969,11 +995,11 @@ gen_new_with_qualname(PyTypeObject *type, PyFrameObject *f, if (name != NULL) gen->gi_name = Py_NewRef(name); else - gen->gi_name = Py_NewRef(gen->gi_code->co_name); + gen->gi_name = Py_NewRef(_PyGen_GetCode(gen)->co_name); if (qualname != NULL) gen->gi_qualname = Py_NewRef(qualname); else - gen->gi_qualname = Py_NewRef(gen->gi_code->co_qualname); + gen->gi_qualname = Py_NewRef(_PyGen_GetCode(gen)->co_qualname); _PyObject_GC_TRACK(gen); return (PyObject *)gen; } @@ -1001,7 +1027,7 @@ static int gen_is_coroutine(PyObject *o) { if (PyGen_CheckExact(o)) { - PyCodeObject *code = (PyCodeObject *)((PyGenObject*)o)->gi_code; + PyCodeObject *code = _PyGen_GetCode((PyGenObject*)o); if (code->co_flags & CO_ITERABLE_COROUTINE) { return 1; } @@ -1110,6 +1136,12 @@ cr_getframe(PyCoroObject *coro, void *Py_UNUSED(ignored)) return _gen_getframe((PyGenObject *)coro, "cr_frame"); } +static PyObject * +cr_getcode(PyCoroObject *coro, void *Py_UNUSED(ignored)) +{ + return _gen_getcode((PyGenObject *)coro, "cr_code"); +} + static PyGetSetDef coro_getsetlist[] = { {"__name__", (getter)gen_get_name, (setter)gen_set_name, @@ -1120,12 +1152,12 @@ static PyGetSetDef coro_getsetlist[] = { PyDoc_STR("object being awaited on, or None")}, {"cr_running", (getter)cr_getrunning, NULL, NULL}, {"cr_frame", (getter)cr_getframe, NULL, NULL}, + {"cr_code", (getter)cr_getcode, NULL, NULL}, {"cr_suspended", (getter)cr_getsuspended, NULL, NULL}, {NULL} /* Sentinel */ }; static PyMemberDef coro_memberlist[] = { - {"cr_code", T_OBJECT, offsetof(PyCoroObject, cr_code), READONLY|PY_AUDIT_READ}, {"cr_origin", T_OBJECT, offsetof(PyCoroObject, cr_origin_or_finalizer), READONLY}, {NULL} /* Sentinel */ }; @@ -1514,6 +1546,12 @@ ag_getframe(PyAsyncGenObject *ag, void *Py_UNUSED(ignored)) return _gen_getframe((PyGenObject *)ag, "ag_frame"); } +static PyObject * +ag_getcode(PyGenObject *gen, void *Py_UNUSED(ignored)) +{ + return _gen_getcode(gen, "ag__code"); +} + static PyGetSetDef async_gen_getsetlist[] = { {"__name__", (getter)gen_get_name, (setter)gen_set_name, PyDoc_STR("name of the async generator")}, @@ -1522,13 +1560,13 @@ static PyGetSetDef async_gen_getsetlist[] = { {"ag_await", (getter)coro_get_cr_await, NULL, PyDoc_STR("object being awaited on, or None")}, {"ag_frame", (getter)ag_getframe, NULL, NULL}, + {"ag_code", (getter)ag_getcode, NULL, NULL}, {NULL} /* Sentinel */ }; static PyMemberDef async_gen_memberlist[] = { {"ag_running", T_BOOL, offsetof(PyAsyncGenObject, ag_running_async), READONLY}, - {"ag_code", T_OBJECT, offsetof(PyAsyncGenObject, ag_code), READONLY|PY_AUDIT_READ}, {NULL} /* Sentinel */ }; diff --git a/Python/ceval.c b/Python/ceval.c index 001bdb15c0f7..b382d2109b93 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -1604,41 +1604,6 @@ initialize_locals(PyThreadState *tstate, PyFunctionObject *func, return -1; } -/* Consumes references to func, locals and all the args */ -static _PyInterpreterFrame * -_PyEvalFramePushAndInit(PyThreadState *tstate, PyFunctionObject *func, - PyObject *locals, PyObject* const* args, - size_t argcount, PyObject *kwnames) -{ - PyCodeObject * code = (PyCodeObject *)func->func_code; - CALL_STAT_INC(frames_pushed); - _PyInterpreterFrame *frame = _PyThreadState_PushFrame(tstate, code->co_framesize); - if (frame == NULL) { - goto fail; - } - _PyFrame_Initialize(frame, func, locals, code, 0); - PyObject **localsarray = &frame->localsplus[0]; - if (initialize_locals(tstate, func, localsarray, args, argcount, kwnames)) { - assert(frame->owner != FRAME_OWNED_BY_GENERATOR); - 
_PyEvalFrameClearAndPop(tstate, frame); - return NULL; - } - return frame; -fail: - /* Consume the references */ - for (size_t i = 0; i < argcount; i++) { - Py_DECREF(args[i]); - } - if (kwnames) { - Py_ssize_t kwcount = PyTuple_GET_SIZE(kwnames); - for (Py_ssize_t i = 0; i < kwcount; i++) { - Py_DECREF(args[i+argcount]); - } - } - PyErr_NoMemory(); - return NULL; -} - static void clear_thread_frame(PyThreadState *tstate, _PyInterpreterFrame * frame) { @@ -1649,7 +1614,8 @@ clear_thread_frame(PyThreadState *tstate, _PyInterpreterFrame * frame) tstate->datastack_top); tstate->c_recursion_remaining--; assert(frame->frame_obj == NULL || frame->frame_obj->f_frame == frame); - _PyFrame_Clear(frame); + _PyFrame_ClearExceptCode(frame); + Py_DECREF(frame->f_code); tstate->c_recursion_remaining++; _PyThreadState_PopFrame(tstate, frame); } @@ -1665,7 +1631,7 @@ clear_gen_frame(PyThreadState *tstate, _PyInterpreterFrame * frame) gen->gi_exc_state.previous_item = NULL; tstate->c_recursion_remaining--; assert(frame->frame_obj == NULL || frame->frame_obj->f_frame == frame); - _PyFrame_Clear(frame); + _PyFrame_ClearExceptCode(frame); tstate->c_recursion_remaining++; frame->previous = NULL; } @@ -1681,6 +1647,39 @@ _PyEvalFrameClearAndPop(PyThreadState *tstate, _PyInterpreterFrame * frame) } } +/* Consumes references to func, locals and all the args */ +static _PyInterpreterFrame * +_PyEvalFramePushAndInit(PyThreadState *tstate, PyFunctionObject *func, + PyObject *locals, PyObject* const* args, + size_t argcount, PyObject *kwnames) +{ + PyCodeObject * code = (PyCodeObject *)func->func_code; + CALL_STAT_INC(frames_pushed); + _PyInterpreterFrame *frame = _PyThreadState_PushFrame(tstate, code->co_framesize); + if (frame == NULL) { + goto fail; + } + _PyFrame_Initialize(frame, func, locals, code, 0); + if (initialize_locals(tstate, func, frame->localsplus, args, argcount, kwnames)) { + assert(frame->owner == FRAME_OWNED_BY_THREAD); + clear_thread_frame(tstate, frame); + return NULL; + } + return frame; +fail: + /* Consume the references */ + for (size_t i = 0; i < argcount; i++) { + Py_DECREF(args[i]); + } + if (kwnames) { + Py_ssize_t kwcount = PyTuple_GET_SIZE(kwnames); + for (Py_ssize_t i = 0; i < kwcount; i++) { + Py_DECREF(args[i+argcount]); + } + } + PyErr_NoMemory(); + return NULL; +} PyObject * _PyEval_Vector(PyThreadState *tstate, PyFunctionObject *func, diff --git a/Python/frame.c b/Python/frame.c index 6a287d472405..b562709ce10f 100644 --- a/Python/frame.c +++ b/Python/frame.c @@ -84,6 +84,7 @@ take_ownership(PyFrameObject *f, _PyInterpreterFrame *frame) assert(frame->owner != FRAME_OWNED_BY_FRAME_OBJECT); assert(frame->owner != FRAME_CLEARED); Py_ssize_t size = ((char*)&frame->localsplus[frame->stacktop]) - (char *)frame; + Py_INCREF(frame->f_code); memcpy((_PyInterpreterFrame *)f->_f_frame_data, frame, size); frame = (_PyInterpreterFrame *)f->_f_frame_data; f->f_frame = frame; @@ -118,7 +119,7 @@ take_ownership(PyFrameObject *f, _PyInterpreterFrame *frame) } void -_PyFrame_Clear(_PyInterpreterFrame *frame) +_PyFrame_ClearExceptCode(_PyInterpreterFrame *frame) { /* It is the responsibility of the owning generator/coroutine * to have cleared the enclosing generator, if any. 
*/ @@ -144,7 +145,6 @@ _PyFrame_Clear(_PyInterpreterFrame *frame) Py_XDECREF(frame->frame_obj); Py_XDECREF(frame->f_locals); Py_DECREF(frame->f_funcobj); - Py_DECREF(frame->f_code); } int From webhook-mailer at python.org Thu Feb 23 07:20:02 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Thu, 23 Feb 2023 12:20:02 -0000 Subject: [Python-checkins] gh-101578: Fixup NEWS and add What's New entry for new exception APIs (#102157) Message-ID: https://github.com/python/cpython/commit/5b9573eed43c9a43bf0cf54fe012413e08cce34f commit: 5b9573eed43c9a43bf0cf54fe012413e08cce34f branch: main author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-23T13:19:21+01:00 summary: gh-101578: Fixup NEWS and add What's New entry for new exception APIs (#102157) files: M Doc/whatsnew/3.12.rst M Misc/NEWS.d/next/C API/2023-02-06-16-14-30.gh-issue-101578.PW5fA9.rst diff --git a/Doc/whatsnew/3.12.rst b/Doc/whatsnew/3.12.rst index 909c9102a405..e551c5b4fd06 100644 --- a/Doc/whatsnew/3.12.rst +++ b/Doc/whatsnew/3.12.rst @@ -870,6 +870,19 @@ New Features get a frame variable by its name. (Contributed by Victor Stinner in :gh:`91248`.) +* Add :c:func:`PyErr_GetRaisedException` and :c:func:`PyErr_SetRaisedException` + for saving and restoring the current exception. + These functions return and accept a single exception object, + rather than the triple arguments of the now-deprecated + :c:func:`PyErr_Fetch` and :c:func:`PyErr_Restore`. + This is less error prone and a bit more efficient. + (Contributed by Mark Shannon in :gh:`101578`.) + +* Add :c:func:`PyException_GetArgs` and :c:func:`PyException_SetArgs` + as convenience functions for retrieving and modifying + the :attr:`~BaseException.args` passed to the exception's constructor. + (Contributed by Mark Shannon in :gh:`101578`.) + Porting to Python 3.12 ---------------------- @@ -993,6 +1006,11 @@ Deprecated (Contributed in :gh:`47146` by Petr Viktorin, based on earlier work by Alexander Belopolsky and Matthias Braun.) +* :c:func:`PyErr_Fetch` and :c:func:`PyErr_Restore` are deprecated. + Use :c:func:`PyErr_GetRaisedException` and + :c:func:`PyErr_SetRaisedException` instead. + (Contributed by Mark Shannon in :gh:`101578`.) + Removed ------- diff --git a/Misc/NEWS.d/next/C API/2023-02-06-16-14-30.gh-issue-101578.PW5fA9.rst b/Misc/NEWS.d/next/C API/2023-02-06-16-14-30.gh-issue-101578.PW5fA9.rst index fc694f6e051b..27294a9e5179 100644 --- a/Misc/NEWS.d/next/C API/2023-02-06-16-14-30.gh-issue-101578.PW5fA9.rst +++ b/Misc/NEWS.d/next/C API/2023-02-06-16-14-30.gh-issue-101578.PW5fA9.rst @@ -1,13 +1,10 @@ -Add new C-API functions for saving and restoring the current exception: -``PyErr_GetRaisedException`` and ``PyErr_SetRaisedException``. -These functions take and return a single exception rather than -the triple of ``PyErr_Fetch`` and ``PyErr_Restore``. +Add :c:func:`PyErr_GetRaisedException` and :c:func:`PyErr_SetRaisedException` +for saving and restoring the current exception. +These functions return and accept a single exception object, +rather than the triple arguments of the now-deprecated +:c:func:`PyErr_Fetch` and :c:func:`PyErr_Restore`. This is less error prone and a bit more efficient. -The three arguments forms of saving and restoring the -current exception: ``PyErr_Fetch`` and ``PyErr_Restore`` -are deprecated. - -Also add ``PyException_GetArgs`` and ``PyException_SetArgs`` -as convenience functions to help dealing with -exceptions in the C API. 
+Add :c:func:`PyException_GetArgs` and :c:func:`PyException_SetArgs` +as convenience functions for retrieving and modifying +the :attr:`~BaseException.args` passed to the exception's constructor. From webhook-mailer at python.org Thu Feb 23 08:28:41 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Thu, 23 Feb 2023 13:28:41 -0000 Subject: [Python-checkins] Fix typo in `Py_GetPythonHome` signature (#102168) Message-ID: https://github.com/python/cpython/commit/9bba8035bd99813203cb3b0de218f9cc3bcdaf2f commit: 9bba8035bd99813203cb3b0de218f9cc3bcdaf2f branch: main author: Tanner Firl <105078804+TannerFirl at users.noreply.github.com> committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-23T18:58:33+05:30 summary: Fix typo in `Py_GetPythonHome` signature (#102168) files: M Doc/c-api/init.rst diff --git a/Doc/c-api/init.rst b/Doc/c-api/init.rst index ad06616eeb0e..b50ee3b3803e 100644 --- a/Doc/c-api/init.rst +++ b/Doc/c-api/init.rst @@ -818,7 +818,7 @@ Process-wide parameters .. deprecated:: 3.11 -.. c:function:: w_char* Py_GetPythonHome() +.. c:function:: wchar_t* Py_GetPythonHome() Return the default "home", that is, the value set by a previous call to :c:func:`Py_SetPythonHome`, or the value of the :envvar:`PYTHONHOME` From webhook-mailer at python.org Thu Feb 23 08:36:06 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Thu, 23 Feb 2023 13:36:06 -0000 Subject: [Python-checkins] [3.11] gh-100226: Clarify StreamReader.read behavior (GH-101807) (#102001) Message-ID: https://github.com/python/cpython/commit/42d0ca92ed141e7518b5b4d77e6e27e36ac451dc commit: 42d0ca92ed141e7518b5b4d77e6e27e36ac451dc branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-23T19:05:59+05:30 summary: [3.11] gh-100226: Clarify StreamReader.read behavior (GH-101807) (#102001) gh-100226: Clarify StreamReader.read behavior (GH-101807) (cherry picked from commit 77d95c83733722ada35eb1ef89ae5b84a51ddd32) Co-authored-by: Jan Gosmann Co-authored-by: Shantanu <12621235+hauntsaninja at users.noreply.github.com> files: M Doc/library/asyncio-stream.rst M Lib/asyncio/streams.py diff --git a/Doc/library/asyncio-stream.rst b/Doc/library/asyncio-stream.rst index 74891306a637..e9d466d95e54 100644 --- a/Doc/library/asyncio-stream.rst +++ b/Doc/library/asyncio-stream.rst @@ -206,12 +206,20 @@ StreamReader .. coroutinemethod:: read(n=-1) - Read up to *n* bytes. If *n* is not provided, or set to ``-1``, - read until EOF and return all read bytes. + Read up to *n* bytes from the stream. + If *n* is not provided or set to ``-1``, + read until EOF, then return all read :class:`bytes`. If EOF was received and the internal buffer is empty, return an empty ``bytes`` object. + If *n* is ``0``, return an empty ``bytes`` object immediately. + + If *n* is positive, return at most *n* available ``bytes`` + as soon as at least 1 byte is available in the internal buffer. + If EOF is received before any byte is read, return an empty + ``bytes`` object. + .. coroutinemethod:: readline() Read one line, where "line" is a sequence of bytes diff --git a/Lib/asyncio/streams.py b/Lib/asyncio/streams.py index d21a31d1860d..560ad8a8510e 100644 --- a/Lib/asyncio/streams.py +++ b/Lib/asyncio/streams.py @@ -648,16 +648,17 @@ async def readuntil(self, separator=b'\n'): async def read(self, n=-1): """Read up to `n` bytes from the stream. 
- If n is not provided, or set to -1, read until EOF and return all read - bytes. If the EOF was received and the internal buffer is empty, return - an empty bytes object. + If `n` is not provided or set to -1, + read until EOF, then return all read bytes. + If EOF was received and the internal buffer is empty, + return an empty bytes object. - If n is zero, return empty bytes object immediately. + If `n` is 0, return an empty bytes object immediately. - If n is positive, this function try to read `n` bytes, and may return - less or equal bytes than requested, but at least one byte. If EOF was - received before any byte is read, this function returns empty byte - object. + If `n` is positive, return at most `n` available bytes + as soon as at least 1 byte is available in the internal buffer. + If EOF is received before any byte is read, return an empty + bytes object. Returned value is not limited with limit, configured at stream creation. From webhook-mailer at python.org Thu Feb 23 08:37:32 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 13:37:32 -0000 Subject: [Python-checkins] Fix typo in `Py_GetPythonHome` signature (GH-102168) Message-ID: https://github.com/python/cpython/commit/7d2ad478d1f0d26be4a1ab6aa58e88e10c492217 commit: 7d2ad478d1f0d26be4a1ab6aa58e88e10c492217 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T05:37:24-08:00 summary: Fix typo in `Py_GetPythonHome` signature (GH-102168) (cherry picked from commit 9bba8035bd99813203cb3b0de218f9cc3bcdaf2f) Co-authored-by: Tanner Firl <105078804+TannerFirl at users.noreply.github.com> files: M Doc/c-api/init.rst diff --git a/Doc/c-api/init.rst b/Doc/c-api/init.rst index 483bcd990ca7..e1d2328c5f31 100644 --- a/Doc/c-api/init.rst +++ b/Doc/c-api/init.rst @@ -662,7 +662,7 @@ Process-wide parameters :c:expr:`wchar_*` string. -.. c:function:: w_char* Py_GetPythonHome() +.. c:function:: wchar_t* Py_GetPythonHome() Return the default "home", that is, the value set by a previous call to :c:func:`Py_SetPythonHome`, or the value of the :envvar:`PYTHONHOME` From webhook-mailer at python.org Thu Feb 23 08:38:15 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 13:38:15 -0000 Subject: [Python-checkins] Fix typo in `Py_GetPythonHome` signature (GH-102168) Message-ID: https://github.com/python/cpython/commit/61e0bbdb0cd29abe443266a486980cf4985c0cee commit: 61e0bbdb0cd29abe443266a486980cf4985c0cee branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T05:38:08-08:00 summary: Fix typo in `Py_GetPythonHome` signature (GH-102168) (cherry picked from commit 9bba8035bd99813203cb3b0de218f9cc3bcdaf2f) Co-authored-by: Tanner Firl <105078804+TannerFirl at users.noreply.github.com> files: M Doc/c-api/init.rst diff --git a/Doc/c-api/init.rst b/Doc/c-api/init.rst index a59667c4a95b..844115a9c691 100644 --- a/Doc/c-api/init.rst +++ b/Doc/c-api/init.rst @@ -709,7 +709,7 @@ Process-wide parameters .. deprecated:: 3.11 -.. c:function:: w_char* Py_GetPythonHome() +.. 
c:function:: wchar_t* Py_GetPythonHome() Return the default "home", that is, the value set by a previous call to :c:func:`Py_SetPythonHome`, or the value of the :envvar:`PYTHONHOME` From webhook-mailer at python.org Thu Feb 23 09:01:06 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 14:01:06 -0000 Subject: [Python-checkins] bpo-23224: Fix segfaults and multiple leaks in the lzma and bz2 modules (GH-7822) Message-ID: https://github.com/python/cpython/commit/665730d2176aabd05ca5741056aef43189b6f754 commit: 665730d2176aabd05ca5741056aef43189b6f754 branch: main author: Zackery Spytz committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T06:00:58-08:00 summary: bpo-23224: Fix segfaults and multiple leaks in the lzma and bz2 modules (GH-7822) lzma.LZMADecompressor and bz2.BZ2Decompressor objects caused segfaults when their `__init__()` methods were not called. lzma.LZMADecompressor, lzma.LZMACompressor, bz2.BZ2Compressor, and bz2.BZ2Decompressor objects would leak locks and internal buffers when their `__init__()` methods were called multiple times. https://bugs.python.org/issue23224 files: A Misc/NEWS.d/next/Library/2018-06-20-09-12-21.bpo-23224.zxCQ13.rst M Lib/test/test_bz2.py M Lib/test/test_lzma.py M Modules/_bz2module.c M Modules/_lzmamodule.c M Modules/clinic/_bz2module.c.h M Modules/clinic/_lzmamodule.c.h diff --git a/Lib/test/test_bz2.py b/Lib/test/test_bz2.py index c97ed1cea0d1..e4dd7fc2100b 100644 --- a/Lib/test/test_bz2.py +++ b/Lib/test/test_bz2.py @@ -844,6 +844,10 @@ def test_refleaks_in___init__(self): bzd.__init__() self.assertAlmostEqual(gettotalrefcount() - refs_before, 0, delta=10) + def test_uninitialized_BZ2Decompressor_crash(self): + self.assertEqual(BZ2Decompressor.__new__(BZ2Decompressor). + decompress(bytes()), b'') + class CompressDecompressTest(BaseTest): def testCompress(self): diff --git a/Lib/test/test_lzma.py b/Lib/test/test_lzma.py index 18f474ba2a8b..ac53bdda2f17 100644 --- a/Lib/test/test_lzma.py +++ b/Lib/test/test_lzma.py @@ -380,6 +380,10 @@ def test_refleaks_in_decompressor___init__(self): lzd.__init__() self.assertAlmostEqual(gettotalrefcount() - refs_before, 0, delta=10) + def test_uninitialized_LZMADecompressor_crash(self): + self.assertEqual(LZMADecompressor.__new__(LZMADecompressor). + decompress(bytes()), b'') + class CompressDecompressFunctionTestCase(unittest.TestCase): diff --git a/Misc/NEWS.d/next/Library/2018-06-20-09-12-21.bpo-23224.zxCQ13.rst b/Misc/NEWS.d/next/Library/2018-06-20-09-12-21.bpo-23224.zxCQ13.rst new file mode 100644 index 000000000000..8909753c7f9e --- /dev/null +++ b/Misc/NEWS.d/next/Library/2018-06-20-09-12-21.bpo-23224.zxCQ13.rst @@ -0,0 +1,6 @@ +Fix segfaults when creating :class:`lzma.LZMADecompressor` and +:class:`bz2.BZ2Decompressor` objects without calling ``__init__()``, and fix +leakage of locks and internal buffers when calling the ``__init__()`` +methods of :class:`lzma.LZMADecompressor`, :class:`lzma.LZMACompressor`, +:class:`bz2.BZ2Compressor`, and :class:`bz2.BZ2Decompressor` objects +multiple times. diff --git a/Modules/_bz2module.c b/Modules/_bz2module.c index 9304c13fbed5..8e7b8e8078af 100644 --- a/Modules/_bz2module.c +++ b/Modules/_bz2module.c @@ -15,6 +15,29 @@ #error "The maximum block size accepted by libbzip2 is UINT32_MAX." 
#endif +typedef struct { + PyTypeObject *bz2_compressor_type; + PyTypeObject *bz2_decompressor_type; +} _bz2_state; + +static inline _bz2_state * +get_module_state(PyObject *module) +{ + void *state = PyModule_GetState(module); + assert(state != NULL); + return (_bz2_state *)state; +} + +static struct PyModuleDef _bz2module; + +static inline _bz2_state * +find_module_state_by_def(PyTypeObject *type) +{ + PyObject *module = PyType_GetModuleByDef(type, &_bz2module); + assert(module != NULL); + return get_module_state(module); +} + /* On success, return value >= 0 On failure, return -1 */ static inline Py_ssize_t @@ -214,12 +237,14 @@ compress(BZ2Compressor *c, char *data, size_t len, int action) /*[clinic input] module _bz2 -class _bz2.BZ2Compressor "BZ2Compressor *" "&BZ2Compressor_Type" -class _bz2.BZ2Decompressor "BZ2Decompressor *" "&BZ2Decompressor_Type" +class _bz2.BZ2Compressor "BZ2Compressor *" "clinic_state()->bz2_compressor_type" +class _bz2.BZ2Decompressor "BZ2Decompressor *" "clinic_state()->bz2_decompressor_type" [clinic start generated code]*/ -/*[clinic end generated code: output=da39a3ee5e6b4b0d input=dc7d7992a79f9cb7]*/ +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=92348121632b94c4]*/ +#define clinic_state() (find_module_state_by_def(type)) #include "clinic/_bz2module.c.h" +#undef clinic_state /*[clinic input] _bz2.BZ2Compressor.compress @@ -295,24 +320,43 @@ BZ2_Free(void* ctx, void *ptr) PyMem_RawFree(ptr); } +/*[clinic input] + at classmethod +_bz2.BZ2Compressor.__new__ + + compresslevel: int = 9 + Compression level, as a number between 1 and 9. + / -/* Argument Clinic is not used since the Argument Clinic always want to - check the type which would be wrong here */ -static int -_bz2_BZ2Compressor___init___impl(BZ2Compressor *self, int compresslevel) +Create a compressor object for compressing data incrementally. + +For one-shot compression, use the compress() function instead. 
+[clinic start generated code]*/ + +static PyObject * +_bz2_BZ2Compressor_impl(PyTypeObject *type, int compresslevel) +/*[clinic end generated code: output=83346c96beaacad7 input=d4500d2a52c8b263]*/ { int bzerror; + BZ2Compressor *self; if (!(1 <= compresslevel && compresslevel <= 9)) { PyErr_SetString(PyExc_ValueError, "compresslevel must be between 1 and 9"); - return -1; + return NULL; + } + + assert(type != NULL && type->tp_alloc != NULL); + self = (BZ2Compressor *)type->tp_alloc(type, 0); + if (self == NULL) { + return NULL; } self->lock = PyThread_allocate_lock(); if (self->lock == NULL) { + Py_DECREF(self); PyErr_SetString(PyExc_MemoryError, "Unable to allocate lock"); - return -1; + return NULL; } self->bzs.opaque = NULL; @@ -322,49 +366,11 @@ _bz2_BZ2Compressor___init___impl(BZ2Compressor *self, int compresslevel) if (catch_bz2_error(bzerror)) goto error; - return 0; + return (PyObject *)self; error: - PyThread_free_lock(self->lock); - self->lock = NULL; - return -1; -} - -PyDoc_STRVAR(_bz2_BZ2Compressor___init____doc__, -"BZ2Compressor(compresslevel=9, /)\n" -"--\n" -"\n" -"Create a compressor object for compressing data incrementally.\n" -"\n" -" compresslevel\n" -" Compression level, as a number between 1 and 9.\n" -"\n" -"For one-shot compression, use the compress() function instead."); - -static int -_bz2_BZ2Compressor___init__(PyObject *self, PyObject *args, PyObject *kwargs) -{ - int return_value = -1; - int compresslevel = 9; - - if (!_PyArg_NoKeywords("BZ2Compressor", kwargs)) { - goto exit; - } - if (!_PyArg_CheckPositional("BZ2Compressor", PyTuple_GET_SIZE(args), 0, 1)) { - goto exit; - } - if (PyTuple_GET_SIZE(args) < 1) { - goto skip_optional; - } - compresslevel = _PyLong_AsInt(PyTuple_GET_ITEM(args, 0)); - if (compresslevel == -1 && PyErr_Occurred()) { - goto exit; - } -skip_optional: - return_value = _bz2_BZ2Compressor___init___impl((BZ2Compressor *)self, compresslevel); - -exit: - return return_value; + Py_DECREF(self); + return NULL; } static void @@ -395,9 +401,8 @@ static PyMethodDef BZ2Compressor_methods[] = { static PyType_Slot bz2_compressor_type_slots[] = { {Py_tp_dealloc, BZ2Compressor_dealloc}, {Py_tp_methods, BZ2Compressor_methods}, - {Py_tp_init, _bz2_BZ2Compressor___init__}, - {Py_tp_new, PyType_GenericNew}, - {Py_tp_doc, (char *)_bz2_BZ2Compressor___init____doc__}, + {Py_tp_new, _bz2_BZ2Compressor}, + {Py_tp_doc, (char *)_bz2_BZ2Compressor__doc__}, {Py_tp_traverse, BZ2Compressor_traverse}, {0, 0} }; @@ -624,28 +629,40 @@ _bz2_BZ2Decompressor_decompress_impl(BZ2Decompressor *self, Py_buffer *data, return result; } -/* Argument Clinic is not used since the Argument Clinic always want to - check the type which would be wrong here */ -static int -_bz2_BZ2Decompressor___init___impl(BZ2Decompressor *self) +/*[clinic input] + at classmethod +_bz2.BZ2Decompressor.__new__ + +Create a decompressor object for decompressing data incrementally. + +For one-shot decompression, use the decompress() function instead. 
+[clinic start generated code]*/ + +static PyObject * +_bz2_BZ2Decompressor_impl(PyTypeObject *type) +/*[clinic end generated code: output=5150d51ccaab220e input=b87413ce51853528]*/ { + BZ2Decompressor *self; int bzerror; - PyThread_type_lock lock = PyThread_allocate_lock(); - if (lock == NULL) { - PyErr_SetString(PyExc_MemoryError, "Unable to allocate lock"); - return -1; + assert(type != NULL && type->tp_alloc != NULL); + self = (BZ2Decompressor *)type->tp_alloc(type, 0); + if (self == NULL) { + return NULL; } - if (self->lock != NULL) { - PyThread_free_lock(self->lock); + + self->lock = PyThread_allocate_lock(); + if (self->lock == NULL) { + Py_DECREF(self); + PyErr_SetString(PyExc_MemoryError, "Unable to allocate lock"); + return NULL; } - self->lock = lock; self->needs_input = 1; self->bzs_avail_in_real = 0; self->input_buffer = NULL; self->input_buffer_size = 0; - Py_XSETREF(self->unused_data, PyBytes_FromStringAndSize(NULL, 0)); + self->unused_data = PyBytes_FromStringAndSize(NULL, 0); if (self->unused_data == NULL) goto error; @@ -653,40 +670,13 @@ _bz2_BZ2Decompressor___init___impl(BZ2Decompressor *self) if (catch_bz2_error(bzerror)) goto error; - return 0; + return (PyObject *)self; error: - Py_CLEAR(self->unused_data); - PyThread_free_lock(self->lock); - self->lock = NULL; - return -1; -} - -static int -_bz2_BZ2Decompressor___init__(PyObject *self, PyObject *args, PyObject *kwargs) -{ - int return_value = -1; - - if (!_PyArg_NoPositional("BZ2Decompressor", args)) { - goto exit; - } - if (!_PyArg_NoKeywords("BZ2Decompressor", kwargs)) { - goto exit; - } - return_value = _bz2_BZ2Decompressor___init___impl((BZ2Decompressor *)self); - -exit: - return return_value; + Py_DECREF(self); + return NULL; } -PyDoc_STRVAR(_bz2_BZ2Decompressor___init____doc__, -"BZ2Decompressor()\n" -"--\n" -"\n" -"Create a decompressor object for decompressing data incrementally.\n" -"\n" -"For one-shot decompression, use the decompress() function instead."); - static void BZ2Decompressor_dealloc(BZ2Decompressor *self) { @@ -738,10 +728,9 @@ static PyMemberDef BZ2Decompressor_members[] = { static PyType_Slot bz2_decompressor_type_slots[] = { {Py_tp_dealloc, BZ2Decompressor_dealloc}, {Py_tp_methods, BZ2Decompressor_methods}, - {Py_tp_init, _bz2_BZ2Decompressor___init__}, - {Py_tp_doc, (char *)_bz2_BZ2Decompressor___init____doc__}, + {Py_tp_doc, (char *)_bz2_BZ2Decompressor__doc__}, {Py_tp_members, BZ2Decompressor_members}, - {Py_tp_new, PyType_GenericNew}, + {Py_tp_new, _bz2_BZ2Decompressor}, {Py_tp_traverse, BZ2Decompressor_traverse}, {0, 0} }; @@ -762,31 +751,52 @@ static PyType_Spec bz2_decompressor_type_spec = { static int _bz2_exec(PyObject *module) { - PyTypeObject *bz2_compressor_type = (PyTypeObject *)PyType_FromModuleAndSpec(module, + _bz2_state *state = get_module_state(module); + state->bz2_compressor_type = (PyTypeObject *)PyType_FromModuleAndSpec(module, &bz2_compressor_type_spec, NULL); - if (bz2_compressor_type == NULL) { + if (state->bz2_compressor_type == NULL) { return -1; } - int rc = PyModule_AddType(module, bz2_compressor_type); - Py_DECREF(bz2_compressor_type); - if (rc < 0) { + if (PyModule_AddType(module, state->bz2_compressor_type) < 0) { return -1; } - PyTypeObject *bz2_decompressor_type = (PyTypeObject *)PyType_FromModuleAndSpec(module, + state->bz2_decompressor_type = (PyTypeObject *)PyType_FromModuleAndSpec(module, &bz2_decompressor_type_spec, NULL); - if (bz2_decompressor_type == NULL) { + if (state->bz2_decompressor_type == NULL) { return -1; } - rc = PyModule_AddType(module, 
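And the matching sketch for the incremental decompressor, again using only documented ``bz2`` behaviour::

    import bz2

    payload = bz2.compress(b"hello world")
    decomp = bz2.BZ2Decompressor()
    # Feed the stream in two arbitrary pieces; decompress() buffers input
    # internally until it can produce output.
    result = b"".join(decomp.decompress(chunk) for chunk in (payload[:10], payload[10:]))
    assert result == b"hello world" and decomp.eof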
bz2_decompressor_type); - Py_DECREF(bz2_decompressor_type); - if (rc < 0) { + if (PyModule_AddType(module, state->bz2_decompressor_type) < 0) { return -1; } return 0; } +static int +_bz2_traverse(PyObject *module, visitproc visit, void *arg) +{ + _bz2_state *state = get_module_state(module); + Py_VISIT(state->bz2_compressor_type); + Py_VISIT(state->bz2_decompressor_type); + return 0; +} + +static int +_bz2_clear(PyObject *module) +{ + _bz2_state *state = get_module_state(module); + Py_CLEAR(state->bz2_compressor_type); + Py_CLEAR(state->bz2_decompressor_type); + return 0; +} + +static void +_bz2_free(void *module) +{ + (void)_bz2_clear((PyObject *)module); +} + static struct PyModuleDef_Slot _bz2_slots[] = { {Py_mod_exec, _bz2_exec}, {0, NULL} @@ -795,6 +805,10 @@ static struct PyModuleDef_Slot _bz2_slots[] = { static struct PyModuleDef _bz2module = { .m_base = PyModuleDef_HEAD_INIT, .m_name = "_bz2", + .m_size = sizeof(_bz2_state), + .m_traverse = _bz2_traverse, + .m_clear = _bz2_clear, + .m_free = _bz2_free, .m_slots = _bz2_slots, }; diff --git a/Modules/_lzmamodule.c b/Modules/_lzmamodule.c index b572d8cd909f..bccab8639159 100644 --- a/Modules/_lzmamodule.c +++ b/Modules/_lzmamodule.c @@ -734,7 +734,8 @@ Compressor_init_raw(_lzma_state *state, lzma_stream *lzs, PyObject *filterspecs) } /*[-clinic input] -_lzma.LZMACompressor.__init__ + at classmethod +_lzma.LZMACompressor.__new__ format: int(c_default="FORMAT_XZ") = FORMAT_XZ The container format to use for the output. This can @@ -765,8 +766,8 @@ the raw compressor does not support preset compression levels. For one-shot compression, use the compress() function instead. [-clinic start generated code]*/ -static int -Compressor_init(Compressor *self, PyObject *args, PyObject *kwargs) +static PyObject * +Compressor_new(PyTypeObject *type, PyObject *args, PyObject *kwargs) { static char *arg_names[] = {"format", "check", "preset", "filters", NULL}; int format = FORMAT_XZ; @@ -774,31 +775,37 @@ Compressor_init(Compressor *self, PyObject *args, PyObject *kwargs) uint32_t preset = LZMA_PRESET_DEFAULT; PyObject *preset_obj = Py_None; PyObject *filterspecs = Py_None; - _lzma_state *state = PyType_GetModuleState(Py_TYPE(self)); + Compressor *self; + + _lzma_state *state = PyType_GetModuleState(type); assert(state != NULL); if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|iiOO:LZMACompressor", arg_names, &format, &check, &preset_obj, &filterspecs)) { - return -1; + return NULL; } if (format != FORMAT_XZ && check != -1 && check != LZMA_CHECK_NONE) { PyErr_SetString(PyExc_ValueError, "Integrity checks are only supported by FORMAT_XZ"); - return -1; + return NULL; } if (preset_obj != Py_None && filterspecs != Py_None) { PyErr_SetString(PyExc_ValueError, "Cannot specify both preset and filter chain"); - return -1; + return NULL; } - if (preset_obj != Py_None) { - if (!uint32_converter(preset_obj, &preset)) { - return -1; - } + if (preset_obj != Py_None && !uint32_converter(preset_obj, &preset)) { + return NULL; + } + + assert(type != NULL && type->tp_alloc != NULL); + self = (Compressor *)type->tp_alloc(type, 0); + if (self == NULL) { + return NULL; } self->alloc.opaque = NULL; @@ -808,8 +815,9 @@ Compressor_init(Compressor *self, PyObject *args, PyObject *kwargs) self->lock = PyThread_allocate_lock(); if (self->lock == NULL) { + Py_DECREF(self); PyErr_SetString(PyExc_MemoryError, "Unable to allocate lock"); - return -1; + return NULL; } self->flushed = 0; @@ -819,31 +827,33 @@ Compressor_init(Compressor *self, PyObject *args, PyObject *kwargs) check 
= LZMA_CHECK_CRC64; } if (Compressor_init_xz(state, &self->lzs, check, preset, filterspecs) != 0) { - break; + goto error; } - return 0; + break; case FORMAT_ALONE: if (Compressor_init_alone(state, &self->lzs, preset, filterspecs) != 0) { - break; + goto error; } - return 0; + break; case FORMAT_RAW: if (Compressor_init_raw(state, &self->lzs, filterspecs) != 0) { - break; + goto error; } - return 0; + break; default: PyErr_Format(PyExc_ValueError, "Invalid container format: %d", format); - break; + goto error; } - PyThread_free_lock(self->lock); - self->lock = NULL; - return -1; + return (PyObject *)self; + +error: + Py_DECREF(self); + return NULL; } static void @@ -902,8 +912,7 @@ PyDoc_STRVAR(Compressor_doc, static PyType_Slot lzma_compressor_type_slots[] = { {Py_tp_dealloc, Compressor_dealloc}, {Py_tp_methods, Compressor_methods}, - {Py_tp_init, Compressor_init}, - {Py_tp_new, PyType_GenericNew}, + {Py_tp_new, Compressor_new}, {Py_tp_doc, (char *)Compressor_doc}, {Py_tp_traverse, Compressor_traverse}, {0, 0} @@ -1165,7 +1174,8 @@ Decompressor_init_raw(_lzma_state *state, lzma_stream *lzs, PyObject *filterspec } /*[clinic input] -_lzma.LZMADecompressor.__init__ + at classmethod +_lzma.LZMADecompressor.__new__ format: int(c_default="FORMAT_AUTO") = FORMAT_AUTO Specifies the container format of the input stream. If this is @@ -1189,54 +1199,57 @@ Create a decompressor object for decompressing data incrementally. For one-shot decompression, use the decompress() function instead. [clinic start generated code]*/ -static int -_lzma_LZMADecompressor___init___impl(Decompressor *self, int format, - PyObject *memlimit, PyObject *filters) -/*[clinic end generated code: output=3e1821f8aa36564c input=81fe684a6c2f8a27]*/ +static PyObject * +_lzma_LZMADecompressor_impl(PyTypeObject *type, int format, + PyObject *memlimit, PyObject *filters) +/*[clinic end generated code: output=2d46d5e70f10bc7f input=ca40cd1cb1202b0d]*/ { + Decompressor *self; const uint32_t decoder_flags = LZMA_TELL_ANY_CHECK | LZMA_TELL_NO_CHECK; uint64_t memlimit_ = UINT64_MAX; lzma_ret lzret; - _lzma_state *state = PyType_GetModuleState(Py_TYPE(self)); + _lzma_state *state = PyType_GetModuleState(type); assert(state != NULL); if (memlimit != Py_None) { if (format == FORMAT_RAW) { PyErr_SetString(PyExc_ValueError, "Cannot specify memory limit with FORMAT_RAW"); - return -1; + return NULL; } memlimit_ = PyLong_AsUnsignedLongLong(memlimit); if (PyErr_Occurred()) { - return -1; + return NULL; } } if (format == FORMAT_RAW && filters == Py_None) { PyErr_SetString(PyExc_ValueError, "Must specify filters for FORMAT_RAW"); - return -1; + return NULL; } else if (format != FORMAT_RAW && filters != Py_None) { PyErr_SetString(PyExc_ValueError, "Cannot specify filters except with FORMAT_RAW"); - return -1; + return NULL; } + assert(type != NULL && type->tp_alloc != NULL); + self = (Decompressor *)type->tp_alloc(type, 0); + if (self == NULL) { + return NULL; + } self->alloc.opaque = NULL; self->alloc.alloc = PyLzma_Malloc; self->alloc.free = PyLzma_Free; self->lzs.allocator = &self->alloc; self->lzs.next_in = NULL; - PyThread_type_lock lock = PyThread_allocate_lock(); - if (lock == NULL) { + self->lock = PyThread_allocate_lock(); + if (self->lock == NULL) { + Py_DECREF(self); PyErr_SetString(PyExc_MemoryError, "Unable to allocate lock"); - return -1; - } - if (self->lock != NULL) { - PyThread_free_lock(self->lock); + return NULL; } - self->lock = lock; self->check = LZMA_CHECK_UNKNOWN; self->needs_input = 1; @@ -1251,43 +1264,43 @@ 
_lzma_LZMADecompressor___init___impl(Decompressor *self, int format, case FORMAT_AUTO: lzret = lzma_auto_decoder(&self->lzs, memlimit_, decoder_flags); if (catch_lzma_error(state, lzret)) { - break; + goto error; } - return 0; + break; case FORMAT_XZ: lzret = lzma_stream_decoder(&self->lzs, memlimit_, decoder_flags); if (catch_lzma_error(state, lzret)) { - break; + goto error; } - return 0; + break; case FORMAT_ALONE: self->check = LZMA_CHECK_NONE; lzret = lzma_alone_decoder(&self->lzs, memlimit_); if (catch_lzma_error(state, lzret)) { - break; + goto error; } - return 0; + break; case FORMAT_RAW: self->check = LZMA_CHECK_NONE; if (Decompressor_init_raw(state, &self->lzs, filters) == -1) { - break; + goto error; } - return 0; + break; default: PyErr_Format(PyExc_ValueError, "Invalid container format: %d", format); - break; + goto error; } + return (PyObject *)self; + error: - Py_CLEAR(self->unused_data); - PyThread_free_lock(self->lock); - self->lock = NULL; - return -1; + Py_DECREF(self); + return NULL; } static void @@ -1345,9 +1358,8 @@ static PyMemberDef Decompressor_members[] = { static PyType_Slot lzma_decompressor_type_slots[] = { {Py_tp_dealloc, Decompressor_dealloc}, {Py_tp_methods, Decompressor_methods}, - {Py_tp_init, _lzma_LZMADecompressor___init__}, - {Py_tp_new, PyType_GenericNew}, - {Py_tp_doc, (char *)_lzma_LZMADecompressor___init____doc__}, + {Py_tp_new, _lzma_LZMADecompressor}, + {Py_tp_doc, (char *)_lzma_LZMADecompressor__doc__}, {Py_tp_traverse, Decompressor_traverse}, {Py_tp_members, Decompressor_members}, {0, 0} diff --git a/Modules/clinic/_bz2module.c.h b/Modules/clinic/_bz2module.c.h index 50a48b0bf2b8..d7797d639ae3 100644 --- a/Modules/clinic/_bz2module.c.h +++ b/Modules/clinic/_bz2module.c.h @@ -71,6 +71,48 @@ _bz2_BZ2Compressor_flush(BZ2Compressor *self, PyObject *Py_UNUSED(ignored)) return _bz2_BZ2Compressor_flush_impl(self); } +PyDoc_STRVAR(_bz2_BZ2Compressor__doc__, +"BZ2Compressor(compresslevel=9, /)\n" +"--\n" +"\n" +"Create a compressor object for compressing data incrementally.\n" +"\n" +" compresslevel\n" +" Compression level, as a number between 1 and 9.\n" +"\n" +"For one-shot compression, use the compress() function instead."); + +static PyObject * +_bz2_BZ2Compressor_impl(PyTypeObject *type, int compresslevel); + +static PyObject * +_bz2_BZ2Compressor(PyTypeObject *type, PyObject *args, PyObject *kwargs) +{ + PyObject *return_value = NULL; + PyTypeObject *base_tp = clinic_state()->bz2_compressor_type; + int compresslevel = 9; + + if ((type == base_tp || type->tp_init == base_tp->tp_init) && + !_PyArg_NoKeywords("BZ2Compressor", kwargs)) { + goto exit; + } + if (!_PyArg_CheckPositional("BZ2Compressor", PyTuple_GET_SIZE(args), 0, 1)) { + goto exit; + } + if (PyTuple_GET_SIZE(args) < 1) { + goto skip_optional; + } + compresslevel = _PyLong_AsInt(PyTuple_GET_ITEM(args, 0)); + if (compresslevel == -1 && PyErr_Occurred()) { + goto exit; + } +skip_optional: + return_value = _bz2_BZ2Compressor_impl(type, compresslevel); + +exit: + return return_value; +} + PyDoc_STRVAR(_bz2_BZ2Decompressor_decompress__doc__, "decompress($self, /, data, max_length=-1)\n" "--\n" @@ -168,4 +210,35 @@ _bz2_BZ2Decompressor_decompress(BZ2Decompressor *self, PyObject *const *args, Py return return_value; } -/*[clinic end generated code: output=829bed4097cf2e63 input=a9049054013a1b77]*/ + +PyDoc_STRVAR(_bz2_BZ2Decompressor__doc__, +"BZ2Decompressor()\n" +"--\n" +"\n" +"Create a decompressor object for decompressing data incrementally.\n" +"\n" +"For one-shot decompression, use the 
decompress() function instead."); + +static PyObject * +_bz2_BZ2Decompressor_impl(PyTypeObject *type); + +static PyObject * +_bz2_BZ2Decompressor(PyTypeObject *type, PyObject *args, PyObject *kwargs) +{ + PyObject *return_value = NULL; + PyTypeObject *base_tp = clinic_state()->bz2_decompressor_type; + + if ((type == base_tp || type->tp_init == base_tp->tp_init) && + !_PyArg_NoPositional("BZ2Decompressor", args)) { + goto exit; + } + if ((type == base_tp || type->tp_init == base_tp->tp_init) && + !_PyArg_NoKeywords("BZ2Decompressor", kwargs)) { + goto exit; + } + return_value = _bz2_BZ2Decompressor_impl(type); + +exit: + return return_value; +} +/*[clinic end generated code: output=805400e4805098ec input=a9049054013a1b77]*/ diff --git a/Modules/clinic/_lzmamodule.c.h b/Modules/clinic/_lzmamodule.c.h index 286d2b007065..9b396a566839 100644 --- a/Modules/clinic/_lzmamodule.c.h +++ b/Modules/clinic/_lzmamodule.c.h @@ -169,7 +169,7 @@ _lzma_LZMADecompressor_decompress(Decompressor *self, PyObject *const *args, Py_ return return_value; } -PyDoc_STRVAR(_lzma_LZMADecompressor___init____doc__, +PyDoc_STRVAR(_lzma_LZMADecompressor__doc__, "LZMADecompressor(format=FORMAT_AUTO, memlimit=None, filters=None)\n" "--\n" "\n" @@ -192,14 +192,14 @@ PyDoc_STRVAR(_lzma_LZMADecompressor___init____doc__, "\n" "For one-shot decompression, use the decompress() function instead."); -static int -_lzma_LZMADecompressor___init___impl(Decompressor *self, int format, - PyObject *memlimit, PyObject *filters); +static PyObject * +_lzma_LZMADecompressor_impl(PyTypeObject *type, int format, + PyObject *memlimit, PyObject *filters); -static int -_lzma_LZMADecompressor___init__(PyObject *self, PyObject *args, PyObject *kwargs) +static PyObject * +_lzma_LZMADecompressor(PyTypeObject *type, PyObject *args, PyObject *kwargs) { - int return_value = -1; + PyObject *return_value = NULL; #if defined(Py_BUILD_CORE) && !defined(Py_BUILD_CORE_MODULE) #define NUM_KEYWORDS 3 @@ -257,7 +257,7 @@ _lzma_LZMADecompressor___init__(PyObject *self, PyObject *args, PyObject *kwargs } filters = fastargs[2]; skip_optional_pos: - return_value = _lzma_LZMADecompressor___init___impl((Decompressor *)self, format, memlimit, filters); + return_value = _lzma_LZMADecompressor_impl(type, format, memlimit, filters); exit: return return_value; @@ -338,4 +338,4 @@ _lzma__decode_filter_properties(PyObject *module, PyObject *const *args, Py_ssiz return return_value; } -/*[clinic end generated code: output=da3e83ba97244044 input=a9049054013a1b77]*/ +/*[clinic end generated code: output=96c1fbdada1ef232 input=a9049054013a1b77]*/ From webhook-mailer at python.org Thu Feb 23 09:10:07 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Thu, 23 Feb 2023 14:10:07 -0000 Subject: [Python-checkins] gh-102151: Correctly fetch CONFIG_ARGS in Tools/freeze/test/freeze.py (#102152) Message-ID: https://github.com/python/cpython/commit/c3a178398c199038f3a0891d09f0363ec73f3b38 commit: c3a178398c199038f3a0891d09f0363ec73f3b38 branch: main author: Erlend E. 
Aasland committer: erlend-aasland date: 2023-02-23T15:09:51+01:00 summary: gh-102151: Correctly fetch CONFIG_ARGS in Tools/freeze/test/freeze.py (#102152) files: M Tools/freeze/test/freeze.py diff --git a/Tools/freeze/test/freeze.py b/Tools/freeze/test/freeze.py index b4c76ff36a87..f6a5adb4519f 100644 --- a/Tools/freeze/test/freeze.py +++ b/Tools/freeze/test/freeze.py @@ -153,7 +153,7 @@ def prepare(script=None, outdir=None): print(f'configuring python in {builddir}...') cmd = [ os.path.join(srcdir, 'configure'), - *shlex.split(get_config_var(builddir, 'CONFIG_ARGS') or ''), + *shlex.split(get_config_var(srcdir, 'CONFIG_ARGS') or ''), ] ensure_opt(cmd, 'cache-file', os.path.join(outdir, 'python-config.cache')) prefix = os.path.join(outdir, 'python-installation') From webhook-mailer at python.org Thu Feb 23 09:36:41 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 14:36:41 -0000 Subject: [Python-checkins] gh-102151: Correctly fetch CONFIG_ARGS in Tools/freeze/test/freeze.py (GH-102152) Message-ID: https://github.com/python/cpython/commit/3cc00127a2f99915d6fa46c82c2fd46b1d4d603f commit: 3cc00127a2f99915d6fa46c82c2fd46b1d4d603f branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T06:36:28-08:00 summary: gh-102151: Correctly fetch CONFIG_ARGS in Tools/freeze/test/freeze.py (GH-102152) (cherry picked from commit c3a178398c199038f3a0891d09f0363ec73f3b38) Co-authored-by: Erlend E. Aasland files: M Tools/freeze/test/freeze.py diff --git a/Tools/freeze/test/freeze.py b/Tools/freeze/test/freeze.py index b4c76ff36a87..f6a5adb4519f 100644 --- a/Tools/freeze/test/freeze.py +++ b/Tools/freeze/test/freeze.py @@ -153,7 +153,7 @@ def prepare(script=None, outdir=None): print(f'configuring python in {builddir}...') cmd = [ os.path.join(srcdir, 'configure'), - *shlex.split(get_config_var(builddir, 'CONFIG_ARGS') or ''), + *shlex.split(get_config_var(srcdir, 'CONFIG_ARGS') or ''), ] ensure_opt(cmd, 'cache-file', os.path.join(outdir, 'python-config.cache')) prefix = os.path.join(outdir, 'python-installation') From webhook-mailer at python.org Thu Feb 23 10:02:31 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 15:02:31 -0000 Subject: [Python-checkins] gh-101981: Consolidate macOS configure steps in CI (GH-102131) Message-ID: https://github.com/python/cpython/commit/e07b304bb004e1298283c82bd135dd5ef96a90cc commit: e07b304bb004e1298283c82bd135dd5ef96a90cc branch: main author: Erlend E. 
Aasland committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T07:02:23-08:00 summary: gh-101981: Consolidate macOS configure steps in CI (GH-102131) Automerge-Triggered-By: GH:erlend-aasland files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index eec11e25a7c7..2241b0b8aa40 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -162,13 +162,11 @@ jobs: - uses: actions/checkout at v3 - name: Install Homebrew dependencies run: brew install pkg-config openssl at 1.1 xz gdbm tcl-tk - - name: Prepare Homebrew environment variables - run: | - echo "CFLAGS=-I$(brew --prefix gdbm)/include -I$(brew --prefix xz)/include" >> $GITHUB_ENV - echo "LDFLAGS=-L$(brew --prefix gdbm)/lib -I$(brew --prefix xz)/lib" >> $GITHUB_ENV - echo "PKG_CONFIG_PATH=$(brew --prefix openssl at 1.1)/lib/pkgconfig:$(brew --prefix tcl-tk)/lib/pkgconfig" >> $GITHUB_ENV - name: Configure CPython run: | + CFLAGS="-I$(brew --prefix gdbm)/include -I$(brew --prefix xz)/include" \ + LDFLAGS="-L$(brew --prefix gdbm)/lib -I$(brew --prefix xz)/lib" \ + PKG_CONFIG_PATH="$(brew --prefix tcl-tk)/lib/pkgconfig" \ ./configure \ --with-pydebug \ --prefix=/opt/python-dev \ From webhook-mailer at python.org Thu Feb 23 10:03:21 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 15:03:21 -0000 Subject: [Python-checkins] gh-93649: Split exception tests from _testcapimodule.c (GH-102173) Message-ID: https://github.com/python/cpython/commit/efc985a714b6f43c43ae629183f95618054422ae commit: efc985a714b6f43c43ae629183f95618054422ae branch: main author: Erlend E. Aasland committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T07:03:13-08:00 summary: gh-93649: Split exception tests from _testcapimodule.c (GH-102173) Automerge-Triggered-By: GH:erlend-aasland files: A Lib/test/test_capi/test_exceptions.py A Modules/_testcapi/exceptions.c M Lib/test/test_capi/test_misc.py M Modules/Setup.stdlib.in M Modules/_testcapi/parts.h M Modules/_testcapimodule.c M PCbuild/_testcapi.vcxproj M PCbuild/_testcapi.vcxproj.filters diff --git a/Lib/test/test_capi/test_exceptions.py b/Lib/test/test_capi/test_exceptions.py new file mode 100644 index 000000000000..b543a1a565a5 --- /dev/null +++ b/Lib/test/test_capi/test_exceptions.py @@ -0,0 +1,145 @@ +import re +import sys +import unittest + +from test import support +from test.support import import_helper +from test.support.script_helper import assert_python_failure + +from .test_misc import decode_stderr + +# Skip this test if the _testcapi module isn't available. 
+_testcapi = import_helper.import_module('_testcapi') + +class Test_Exceptions(unittest.TestCase): + + def test_exception(self): + raised_exception = ValueError("5") + new_exc = TypeError("TEST") + try: + raise raised_exception + except ValueError as e: + orig_sys_exception = sys.exception() + orig_exception = _testcapi.set_exception(new_exc) + new_sys_exception = sys.exception() + new_exception = _testcapi.set_exception(orig_exception) + reset_sys_exception = sys.exception() + + self.assertEqual(orig_exception, e) + + self.assertEqual(orig_exception, raised_exception) + self.assertEqual(orig_sys_exception, orig_exception) + self.assertEqual(reset_sys_exception, orig_exception) + self.assertEqual(new_exception, new_exc) + self.assertEqual(new_sys_exception, new_exception) + else: + self.fail("Exception not raised") + + def test_exc_info(self): + raised_exception = ValueError("5") + new_exc = TypeError("TEST") + try: + raise raised_exception + except ValueError as e: + tb = e.__traceback__ + orig_sys_exc_info = sys.exc_info() + orig_exc_info = _testcapi.set_exc_info(new_exc.__class__, new_exc, None) + new_sys_exc_info = sys.exc_info() + new_exc_info = _testcapi.set_exc_info(*orig_exc_info) + reset_sys_exc_info = sys.exc_info() + + self.assertEqual(orig_exc_info[1], e) + + self.assertSequenceEqual(orig_exc_info, (raised_exception.__class__, raised_exception, tb)) + self.assertSequenceEqual(orig_sys_exc_info, orig_exc_info) + self.assertSequenceEqual(reset_sys_exc_info, orig_exc_info) + self.assertSequenceEqual(new_exc_info, (new_exc.__class__, new_exc, None)) + self.assertSequenceEqual(new_sys_exc_info, new_exc_info) + else: + self.assertTrue(False) + + +class Test_FatalError(unittest.TestCase): + + def check_fatal_error(self, code, expected, not_expected=()): + with support.SuppressCrashReport(): + rc, out, err = assert_python_failure('-sSI', '-c', code) + + err = decode_stderr(err) + self.assertIn('Fatal Python error: test_fatal_error: MESSAGE\n', + err) + + match = re.search(r'^Extension modules:(.*) \(total: ([0-9]+)\)$', + err, re.MULTILINE) + if not match: + self.fail(f"Cannot find 'Extension modules:' in {err!r}") + modules = set(match.group(1).strip().split(', ')) + total = int(match.group(2)) + + for name in expected: + self.assertIn(name, modules) + for name in not_expected: + self.assertNotIn(name, modules) + self.assertEqual(len(modules), total) + + @support.requires_subprocess() + def test_fatal_error(self): + # By default, stdlib extension modules are ignored, + # but not test modules. 
+ expected = ('_testcapi',) + not_expected = ('sys',) + code = 'import _testcapi, sys; _testcapi.fatal_error(b"MESSAGE")' + self.check_fatal_error(code, expected, not_expected) + + # Mark _testcapi as stdlib module, but not sys + expected = ('sys',) + not_expected = ('_testcapi',) + code = """if True: + import _testcapi, sys + sys.stdlib_module_names = frozenset({"_testcapi"}) + _testcapi.fatal_error(b"MESSAGE") + """ + self.check_fatal_error(code, expected) + + +class Test_ErrSetAndRestore(unittest.TestCase): + + def test_err_set_raised(self): + with self.assertRaises(ValueError): + _testcapi.err_set_raised(ValueError()) + v = ValueError() + try: + _testcapi.err_set_raised(v) + except ValueError as ex: + self.assertIs(v, ex) + + def test_err_restore(self): + with self.assertRaises(ValueError): + _testcapi.err_restore(ValueError) + with self.assertRaises(ValueError): + _testcapi.err_restore(ValueError, 1) + with self.assertRaises(ValueError): + _testcapi.err_restore(ValueError, 1, None) + with self.assertRaises(ValueError): + _testcapi.err_restore(ValueError, ValueError()) + try: + _testcapi.err_restore(KeyError, "hi") + except KeyError as k: + self.assertEqual("hi", k.args[0]) + try: + 1/0 + except Exception as e: + tb = e.__traceback__ + with self.assertRaises(ValueError): + _testcapi.err_restore(ValueError, 1, tb) + with self.assertRaises(TypeError): + _testcapi.err_restore(ValueError, 1, 0) + try: + _testcapi.err_restore(ValueError, 1, tb) + except ValueError as v: + self.assertEqual(1, v.args[0]) + self.assertIs(tb, v.__traceback__.tb_next) + + +if __name__ == "__main__": + unittest.main() diff --git a/Lib/test/test_capi/test_misc.py b/Lib/test/test_capi/test_misc.py index f4569fb00546..c34ee578b5c8 100644 --- a/Lib/test/test_capi/test_misc.py +++ b/Lib/test/test_capi/test_misc.py @@ -8,7 +8,6 @@ import os import pickle import random -import re import subprocess import sys import textwrap @@ -91,51 +90,6 @@ def test_no_FatalError_infinite_loop(self): def test_memoryview_from_NULL_pointer(self): self.assertRaises(ValueError, _testcapi.make_memoryview_from_NULL_pointer) - def test_exception(self): - raised_exception = ValueError("5") - new_exc = TypeError("TEST") - try: - raise raised_exception - except ValueError as e: - orig_sys_exception = sys.exception() - orig_exception = _testcapi.set_exception(new_exc) - new_sys_exception = sys.exception() - new_exception = _testcapi.set_exception(orig_exception) - reset_sys_exception = sys.exception() - - self.assertEqual(orig_exception, e) - - self.assertEqual(orig_exception, raised_exception) - self.assertEqual(orig_sys_exception, orig_exception) - self.assertEqual(reset_sys_exception, orig_exception) - self.assertEqual(new_exception, new_exc) - self.assertEqual(new_sys_exception, new_exception) - else: - self.fail("Exception not raised") - - def test_exc_info(self): - raised_exception = ValueError("5") - new_exc = TypeError("TEST") - try: - raise raised_exception - except ValueError as e: - tb = e.__traceback__ - orig_sys_exc_info = sys.exc_info() - orig_exc_info = _testcapi.set_exc_info(new_exc.__class__, new_exc, None) - new_sys_exc_info = sys.exc_info() - new_exc_info = _testcapi.set_exc_info(*orig_exc_info) - reset_sys_exc_info = sys.exc_info() - - self.assertEqual(orig_exc_info[1], e) - - self.assertSequenceEqual(orig_exc_info, (raised_exception.__class__, raised_exception, tb)) - self.assertSequenceEqual(orig_sys_exc_info, orig_exc_info) - self.assertSequenceEqual(reset_sys_exc_info, orig_exc_info) - 
self.assertSequenceEqual(new_exc_info, (new_exc.__class__, new_exc, None)) - self.assertSequenceEqual(new_sys_exc_info, new_exc_info) - else: - self.assertTrue(False) - @unittest.skipUnless(_posixsubprocess, '_posixsubprocess required for this test.') def test_seq_bytes_to_charp_array(self): # Issue #15732: crash in _PySequence_BytesToCharpArray() @@ -837,46 +791,6 @@ def __index__(self): self.assertRaises(TypeError, pynumber_tobase, '123', 10) self.assertRaises(SystemError, pynumber_tobase, 123, 0) - def check_fatal_error(self, code, expected, not_expected=()): - with support.SuppressCrashReport(): - rc, out, err = assert_python_failure('-sSI', '-c', code) - - err = decode_stderr(err) - self.assertIn('Fatal Python error: test_fatal_error: MESSAGE\n', - err) - - match = re.search(r'^Extension modules:(.*) \(total: ([0-9]+)\)$', - err, re.MULTILINE) - if not match: - self.fail(f"Cannot find 'Extension modules:' in {err!r}") - modules = set(match.group(1).strip().split(', ')) - total = int(match.group(2)) - - for name in expected: - self.assertIn(name, modules) - for name in not_expected: - self.assertNotIn(name, modules) - self.assertEqual(len(modules), total) - - @support.requires_subprocess() - def test_fatal_error(self): - # By default, stdlib extension modules are ignored, - # but not test modules. - expected = ('_testcapi',) - not_expected = ('sys',) - code = 'import _testcapi, sys; _testcapi.fatal_error(b"MESSAGE")' - self.check_fatal_error(code, expected, not_expected) - - # Mark _testcapi as stdlib module, but not sys - expected = ('sys',) - not_expected = ('_testcapi',) - code = textwrap.dedent(''' - import _testcapi, sys - sys.stdlib_module_names = frozenset({"_testcapi"}) - _testcapi.fatal_error(b"MESSAGE") - ''') - self.check_fatal_error(code, expected) - def test_pyobject_repr_from_null(self): s = _testcapi.pyobject_repr_from_null() self.assertEqual(s, '') @@ -1214,9 +1128,9 @@ def test_pendingcalls_non_threaded(self): self.pendingcalls_wait(l, n) def test_gen_get_code(self): - def genf(): yield - gen = genf() - self.assertEqual(_testcapi.gen_get_code(gen), gen.gi_code) + def genf(): yield + gen = genf() + self.assertEqual(_testcapi.gen_get_code(gen), gen.gi_code) class SubinterpreterTest(unittest.TestCase): @@ -1641,44 +1555,5 @@ def func2(x=None): self.do_test(func2) -class Test_ErrSetAndRestore(unittest.TestCase): - - def test_err_set_raised(self): - with self.assertRaises(ValueError): - _testcapi.err_set_raised(ValueError()) - v = ValueError() - try: - _testcapi.err_set_raised(v) - except ValueError as ex: - self.assertIs(v, ex) - - def test_err_restore(self): - with self.assertRaises(ValueError): - _testcapi.err_restore(ValueError) - with self.assertRaises(ValueError): - _testcapi.err_restore(ValueError, 1) - with self.assertRaises(ValueError): - _testcapi.err_restore(ValueError, 1, None) - with self.assertRaises(ValueError): - _testcapi.err_restore(ValueError, ValueError()) - try: - _testcapi.err_restore(KeyError, "hi") - except KeyError as k: - self.assertEqual("hi", k.args[0]) - try: - 1/0 - except Exception as e: - tb = e.__traceback__ - with self.assertRaises(ValueError): - _testcapi.err_restore(ValueError, 1, tb) - with self.assertRaises(TypeError): - _testcapi.err_restore(ValueError, 1, 0) - try: - _testcapi.err_restore(ValueError, 1, tb) - except ValueError as v: - self.assertEqual(1, v.args[0]) - self.assertIs(tb, v.__traceback__.tb_next) - - if __name__ == "__main__": unittest.main() diff --git a/Modules/Setup.stdlib.in b/Modules/Setup.stdlib.in index 
d33cd8223999..7551e5b34943 100644 --- a/Modules/Setup.stdlib.in +++ b/Modules/Setup.stdlib.in @@ -169,7 +169,7 @@ @MODULE__XXTESTFUZZ_TRUE at _xxtestfuzz _xxtestfuzz/_xxtestfuzz.c _xxtestfuzz/fuzzer.c @MODULE__TESTBUFFER_TRUE at _testbuffer _testbuffer.c @MODULE__TESTINTERNALCAPI_TRUE at _testinternalcapi _testinternalcapi.c - at MODULE__TESTCAPI_TRUE@_testcapi _testcapimodule.c _testcapi/vectorcall.c _testcapi/vectorcall_limited.c _testcapi/heaptype.c _testcapi/unicode.c _testcapi/getargs.c _testcapi/pytime.c _testcapi/datetime.c _testcapi/docstring.c _testcapi/mem.c _testcapi/watchers.c _testcapi/long.c _testcapi/float.c _testcapi/structmember.c + at MODULE__TESTCAPI_TRUE@_testcapi _testcapimodule.c _testcapi/vectorcall.c _testcapi/vectorcall_limited.c _testcapi/heaptype.c _testcapi/unicode.c _testcapi/getargs.c _testcapi/pytime.c _testcapi/datetime.c _testcapi/docstring.c _testcapi/mem.c _testcapi/watchers.c _testcapi/long.c _testcapi/float.c _testcapi/structmember.c _testcapi/exceptions.c @MODULE__TESTCLINIC_TRUE at _testclinic _testclinic.c # Some testing modules MUST be built as shared libraries. diff --git a/Modules/_testcapi/exceptions.c b/Modules/_testcapi/exceptions.c new file mode 100644 index 000000000000..43b88ccf261d --- /dev/null +++ b/Modules/_testcapi/exceptions.c @@ -0,0 +1,277 @@ +#include "parts.h" + +static PyObject * +err_set_raised(PyObject *self, PyObject *exc) +{ + Py_INCREF(exc); + PyErr_SetRaisedException(exc); + assert(PyErr_Occurred()); + return NULL; +} + +static PyObject * +err_restore(PyObject *self, PyObject *args) { + PyObject *type = NULL, *value = NULL, *traceback = NULL; + switch(PyTuple_Size(args)) { + case 3: + traceback = PyTuple_GetItem(args, 2); + Py_INCREF(traceback); + /* fall through */ + case 2: + value = PyTuple_GetItem(args, 1); + Py_INCREF(value); + /* fall through */ + case 1: + type = PyTuple_GetItem(args, 0); + Py_INCREF(type); + break; + default: + PyErr_SetString(PyExc_TypeError, + "wrong number of arguments"); + return NULL; + } + PyErr_Restore(type, value, traceback); + assert(PyErr_Occurred()); + return NULL; +} + +/* To test the format of exceptions as printed out. */ +static PyObject * +exception_print(PyObject *self, PyObject *args) +{ + PyObject *value; + PyObject *tb = NULL; + + if (!PyArg_ParseTuple(args, "O:exception_print", &value)) { + return NULL; + } + + if (PyExceptionInstance_Check(value)) { + tb = PyException_GetTraceback(value); + } + + PyErr_Display((PyObject *) Py_TYPE(value), value, tb); + Py_XDECREF(tb); + + Py_RETURN_NONE; +} + +/* Test PyErr_NewExceptionWithDoc (also exercise PyErr_NewException). 
+ Run via Lib/test/test_exceptions.py */ +static PyObject * +make_exception_with_doc(PyObject *self, PyObject *args, PyObject *kwargs) +{ + const char *name; + const char *doc = NULL; + PyObject *base = NULL; + PyObject *dict = NULL; + + static char *kwlist[] = {"name", "doc", "base", "dict", NULL}; + + if (!PyArg_ParseTupleAndKeywords(args, kwargs, + "s|sOO:make_exception_with_doc", kwlist, + &name, &doc, &base, &dict)) + { + return NULL; + } + + return PyErr_NewExceptionWithDoc(name, doc, base, dict); +} + +static PyObject * +raise_exception(PyObject *self, PyObject *args) +{ + PyObject *exc; + int num_args; + + if (!PyArg_ParseTuple(args, "Oi:raise_exception", &exc, &num_args)) { + return NULL; + } + + PyObject *exc_args = PyTuple_New(num_args); + if (exc_args == NULL) { + return NULL; + } + for (int i = 0; i < num_args; ++i) { + PyObject *v = PyLong_FromLong(i); + if (v == NULL) { + Py_DECREF(exc_args); + return NULL; + } + PyTuple_SET_ITEM(exc_args, i, v); + } + PyErr_SetObject(exc, exc_args); + Py_DECREF(exc_args); + return NULL; +} + +/* reliably raise a MemoryError */ +static PyObject * +raise_memoryerror(PyObject *self, PyObject *Py_UNUSED(ignored)) +{ + return PyErr_NoMemory(); +} + +static PyObject * +test_fatal_error(PyObject *self, PyObject *args) +{ + char *message; + int release_gil = 0; + if (!PyArg_ParseTuple(args, "y|i:fatal_error", &message, &release_gil)) { + return NULL; + } + if (release_gil) { + Py_BEGIN_ALLOW_THREADS + Py_FatalError(message); + Py_END_ALLOW_THREADS + } + else { + Py_FatalError(message); + } + // Py_FatalError() does not return, but exits the process. + Py_RETURN_NONE; +} + +static PyObject * +test_set_exc_info(PyObject *self, PyObject *args) +{ + PyObject *new_type, *new_value, *new_tb; + PyObject *type, *value, *tb; + if (!PyArg_ParseTuple(args, "OOO:test_set_exc_info", + &new_type, &new_value, &new_tb)) + { + return NULL; + } + + PyErr_GetExcInfo(&type, &value, &tb); + + Py_INCREF(new_type); + Py_INCREF(new_value); + Py_INCREF(new_tb); + PyErr_SetExcInfo(new_type, new_value, new_tb); + + PyObject *orig_exc = PyTuple_Pack(3, + type ? type : Py_None, + value ? value : Py_None, + tb ? tb : Py_None); + Py_XDECREF(type); + Py_XDECREF(value); + Py_XDECREF(tb); + return orig_exc; +} + +static PyObject * +test_set_exception(PyObject *self, PyObject *new_exc) +{ + PyObject *exc = PyErr_GetHandledException(); + assert(PyExceptionInstance_Check(exc) || exc == NULL); + + PyErr_SetHandledException(new_exc); + return exc; +} + +static PyObject * +test_write_unraisable_exc(PyObject *self, PyObject *args) +{ + PyObject *exc, *err_msg, *obj; + if (!PyArg_ParseTuple(args, "OOO", &exc, &err_msg, &obj)) { + return NULL; + } + + const char *err_msg_utf8; + if (err_msg != Py_None) { + err_msg_utf8 = PyUnicode_AsUTF8(err_msg); + if (err_msg_utf8 == NULL) { + return NULL; + } + } + else { + err_msg_utf8 = NULL; + } + + PyErr_SetObject((PyObject *)Py_TYPE(exc), exc); + _PyErr_WriteUnraisableMsg(err_msg_utf8, obj); + Py_RETURN_NONE; +} + +/* To test the format of tracebacks as printed out. 
*/ +static PyObject * +traceback_print(PyObject *self, PyObject *args) +{ + PyObject *file; + PyObject *traceback; + + if (!PyArg_ParseTuple(args, "OO:traceback_print", + &traceback, &file)) + { + return NULL; + } + + if (PyTraceBack_Print(traceback, file) < 0) { + return NULL; + } + Py_RETURN_NONE; +} + + +/* + * Define the PyRecurdingInfinitelyError_Type + */ + +static PyTypeObject PyRecursingInfinitelyError_Type; + +static int +recurse_infinitely_error_init(PyObject *self, PyObject *args, PyObject *kwds) +{ + PyObject *type = (PyObject *)&PyRecursingInfinitelyError_Type; + + /* Instantiating this exception starts infinite recursion. */ + Py_INCREF(type); + PyErr_SetObject(type, NULL); + return -1; +} + +static PyTypeObject PyRecursingInfinitelyError_Type = { + .tp_name = "RecursingInfinitelyError", + .tp_basicsize = sizeof(PyBaseExceptionObject), + .tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, + .tp_doc = PyDoc_STR("Instantiating this exception starts infinite recursion."), + .tp_init = (initproc)recurse_infinitely_error_init, +}; + +static PyMethodDef test_methods[] = { + {"err_restore", err_restore, METH_VARARGS}, + {"err_set_raised", err_set_raised, METH_O}, + {"exception_print", exception_print, METH_VARARGS}, + {"fatal_error", test_fatal_error, METH_VARARGS, + PyDoc_STR("fatal_error(message, release_gil=False): call Py_FatalError(message)")}, + {"make_exception_with_doc", _PyCFunction_CAST(make_exception_with_doc), + METH_VARARGS | METH_KEYWORDS}, + {"raise_exception", raise_exception, METH_VARARGS}, + {"raise_memoryerror", raise_memoryerror, METH_NOARGS}, + {"set_exc_info", test_set_exc_info, METH_VARARGS}, + {"set_exception", test_set_exception, METH_O}, + {"traceback_print", traceback_print, METH_VARARGS}, + {"write_unraisable_exc", test_write_unraisable_exc, METH_VARARGS}, + {NULL}, +}; + +int +_PyTestCapi_Init_Exceptions(PyObject *mod) +{ + PyRecursingInfinitelyError_Type.tp_base = (PyTypeObject *)PyExc_Exception; + if (PyType_Ready(&PyRecursingInfinitelyError_Type) < 0) { + return -1; + } + if (PyModule_AddObjectRef(mod, "RecursingInfinitelyError", + (PyObject *)&PyRecursingInfinitelyError_Type) < 0) + { + return -1; + } + + if (PyModule_AddFunctions(mod, test_methods) < 0) { + return -1; + } + + return 0; +} diff --git a/Modules/_testcapi/parts.h b/Modules/_testcapi/parts.h index 7ba3c4ebff8c..1689f186b833 100644 --- a/Modules/_testcapi/parts.h +++ b/Modules/_testcapi/parts.h @@ -36,6 +36,7 @@ int _PyTestCapi_Init_Watchers(PyObject *module); int _PyTestCapi_Init_Long(PyObject *module); int _PyTestCapi_Init_Float(PyObject *module); int _PyTestCapi_Init_Structmember(PyObject *module); +int _PyTestCapi_Init_Exceptions(PyObject *module); #ifdef LIMITED_API_AVAILABLE int _PyTestCapi_Init_VectorcallLimited(PyObject *module); diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index e2237d25a9a9..6bb424282a87 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -720,33 +720,6 @@ pyobject_bytes_from_null(PyObject *self, PyObject *Py_UNUSED(ignored)) return PyObject_Bytes(NULL); } -static PyObject * -raise_exception(PyObject *self, PyObject *args) -{ - PyObject *exc; - PyObject *exc_args, *v; - int num_args, i; - - if (!PyArg_ParseTuple(args, "Oi:raise_exception", - &exc, &num_args)) - return NULL; - - exc_args = PyTuple_New(num_args); - if (exc_args == NULL) - return NULL; - for (i = 0; i < num_args; ++i) { - v = PyLong_FromLong(i); - if (v == NULL) { - Py_DECREF(exc_args); - return NULL; - } - PyTuple_SET_ITEM(exc_args, i, v); - } - 
PyErr_SetObject(exc, exc_args); - Py_DECREF(exc_args); - return NULL; -} - static PyObject * set_errno(PyObject *self, PyObject *args) { @@ -759,40 +732,6 @@ set_errno(PyObject *self, PyObject *args) Py_RETURN_NONE; } -static PyObject * -test_set_exception(PyObject *self, PyObject *new_exc) -{ - PyObject *exc = PyErr_GetHandledException(); - assert(PyExceptionInstance_Check(exc) || exc == NULL); - - PyErr_SetHandledException(new_exc); - return exc; -} - -static PyObject * -test_set_exc_info(PyObject *self, PyObject *args) -{ - PyObject *orig_exc; - PyObject *new_type, *new_value, *new_tb; - PyObject *type, *value, *tb; - if (!PyArg_ParseTuple(args, "OOO:test_set_exc_info", - &new_type, &new_value, &new_tb)) - return NULL; - - PyErr_GetExcInfo(&type, &value, &tb); - - Py_INCREF(new_type); - Py_INCREF(new_value); - Py_INCREF(new_tb); - PyErr_SetExcInfo(new_type, new_value, new_tb); - - orig_exc = PyTuple_Pack(3, type ? type : Py_None, value ? value : Py_None, tb ? tb : Py_None); - Py_XDECREF(type); - Py_XDECREF(value); - Py_XDECREF(tb); - return orig_exc; -} - /* test_thread_state spawns a thread of its own, and that thread releases * `thread_done` when it's finished. The driver code has to know when the * thread finishes, because the thread uses a PyObject (the callable) that @@ -1272,57 +1211,6 @@ profile_int(PyObject *self, PyObject* args) } #endif -/* To test the format of tracebacks as printed out. */ -static PyObject * -traceback_print(PyObject *self, PyObject *args) -{ - PyObject *file; - PyObject *traceback; - int result; - - if (!PyArg_ParseTuple(args, "OO:traceback_print", - &traceback, &file)) - return NULL; - - result = PyTraceBack_Print(traceback, file); - if (result < 0) - return NULL; - Py_RETURN_NONE; -} - -/* To test the format of exceptions as printed out. */ -static PyObject * -exception_print(PyObject *self, PyObject *args) -{ - PyObject *value; - PyObject *tb = NULL; - - if (!PyArg_ParseTuple(args, "O:exception_print", - &value)) { - return NULL; - } - - if (PyExceptionInstance_Check(value)) { - tb = PyException_GetTraceback(value); - } - - PyErr_Display((PyObject *) Py_TYPE(value), value, tb); - Py_XDECREF(tb); - - Py_RETURN_NONE; -} - - - - -/* reliably raise a MemoryError */ -static PyObject * -raise_memoryerror(PyObject *self, PyObject *Py_UNUSED(ignored)) -{ - PyErr_NoMemory(); - return NULL; -} - /* Issue 6012 */ static PyObject *str1, *str2; static int @@ -1368,26 +1256,6 @@ code_newempty(PyObject *self, PyObject *args) return (PyObject *)PyCode_NewEmpty(filename, funcname, firstlineno); } -/* Test PyErr_NewExceptionWithDoc (also exercise PyErr_NewException). 
- Run via Lib/test/test_exceptions.py */ -static PyObject * -make_exception_with_doc(PyObject *self, PyObject *args, PyObject *kwargs) -{ - const char *name; - const char *doc = NULL; - PyObject *base = NULL; - PyObject *dict = NULL; - - static char *kwlist[] = {"name", "doc", "base", "dict", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwargs, - "s|sOO:make_exception_with_doc", kwlist, - &name, &doc, &base, &dict)) - return NULL; - - return PyErr_NewExceptionWithDoc(name, doc, base, dict); -} - static PyObject * make_memoryview_from_NULL_pointer(PyObject *self, PyObject *Py_UNUSED(ignored)) { @@ -2491,31 +2359,6 @@ negative_refcount(PyObject *self, PyObject *Py_UNUSED(args)) #endif -static PyObject* -test_write_unraisable_exc(PyObject *self, PyObject *args) -{ - PyObject *exc, *err_msg, *obj; - if (!PyArg_ParseTuple(args, "OOO", &exc, &err_msg, &obj)) { - return NULL; - } - - const char *err_msg_utf8; - if (err_msg != Py_None) { - err_msg_utf8 = PyUnicode_AsUTF8(err_msg); - if (err_msg_utf8 == NULL) { - return NULL; - } - } - else { - err_msg_utf8 = NULL; - } - - PyErr_SetObject((PyObject *)Py_TYPE(exc), exc); - _PyErr_WriteUnraisableMsg(err_msg_utf8, obj); - Py_RETURN_NONE; -} - - static PyObject * sequence_getitem(PyObject *self, PyObject *args) { @@ -2874,25 +2717,6 @@ test_py_is_funcs(PyObject *self, PyObject *Py_UNUSED(ignored)) } -static PyObject * -test_fatal_error(PyObject *self, PyObject *args) -{ - char *message; - int release_gil = 0; - if (!PyArg_ParseTuple(args, "y|i:fatal_error", &message, &release_gil)) - return NULL; - if (release_gil) { - Py_BEGIN_ALLOW_THREADS - Py_FatalError(message); - Py_END_ALLOW_THREADS - } - else { - Py_FatalError(message); - } - // Py_FatalError() does not return, but exits the process. - Py_RETURN_NONE; -} - // type->tp_version_tag static PyObject * type_get_version(PyObject *self, PyObject *type) @@ -3492,46 +3316,9 @@ function_set_kw_defaults(PyObject *self, PyObject *args) Py_RETURN_NONE; } -static PyObject * -err_set_raised(PyObject *self, PyObject *exc) -{ - Py_INCREF(exc); - PyErr_SetRaisedException(exc); - assert(PyErr_Occurred()); - return NULL; -} - -static PyObject * -err_restore(PyObject *self, PyObject *args) { - PyObject *type = NULL, *value = NULL, *traceback = NULL; - switch(PyTuple_Size(args)) { - case 3: - traceback = PyTuple_GetItem(args, 2); - Py_INCREF(traceback); - /* fall through */ - case 2: - value = PyTuple_GetItem(args, 1); - Py_INCREF(value); - /* fall through */ - case 1: - type = PyTuple_GetItem(args, 0); - Py_INCREF(type); - break; - default: - PyErr_SetString(PyExc_TypeError, - "wrong number of arguments"); - return NULL; - } - PyErr_Restore(type, value, traceback); - assert(PyErr_Occurred()); - return NULL; -} - static PyObject *test_buildvalue_issue38913(PyObject *, PyObject *); static PyMethodDef TestMethods[] = { - {"raise_exception", raise_exception, METH_VARARGS}, - {"raise_memoryerror", raise_memoryerror, METH_NOARGS}, {"set_errno", set_errno, METH_VARARGS}, {"test_config", test_config, METH_NOARGS}, {"test_sizeof_c_types", test_sizeof_c_types, METH_NOARGS}, @@ -3574,15 +3361,9 @@ static PyMethodDef TestMethods[] = { #ifdef HAVE_GETTIMEOFDAY {"profile_int", profile_int, METH_NOARGS}, #endif - {"traceback_print", traceback_print, METH_VARARGS}, - {"exception_print", exception_print, METH_VARARGS}, - {"set_exception", test_set_exception, METH_O}, - {"set_exc_info", test_set_exc_info, METH_VARARGS}, {"argparsing", argparsing, METH_VARARGS}, {"code_newempty", code_newempty, METH_VARARGS}, {"eval_code_ex", 
eval_eval_code_ex, METH_VARARGS}, - {"make_exception_with_doc", _PyCFunction_CAST(make_exception_with_doc), - METH_VARARGS | METH_KEYWORDS}, {"make_memoryview_from_NULL_pointer", make_memoryview_from_NULL_pointer, METH_NOARGS}, {"crash_no_current_thread", crash_no_current_thread, METH_NOARGS}, @@ -3633,7 +3414,6 @@ static PyMethodDef TestMethods[] = { #ifdef Py_REF_DEBUG {"negative_refcount", negative_refcount, METH_NOARGS}, #endif - {"write_unraisable_exc", test_write_unraisable_exc, METH_VARARGS}, {"sequence_getitem", sequence_getitem, METH_VARARGS}, {"sequence_setitem", sequence_setitem, METH_VARARGS}, {"sequence_delitem", sequence_delitem, METH_VARARGS}, @@ -3653,8 +3433,6 @@ static PyMethodDef TestMethods[] = { {"test_refcount_funcs", test_refcount_funcs, METH_NOARGS}, {"test_py_is_macros", test_py_is_macros, METH_NOARGS}, {"test_py_is_funcs", test_py_is_funcs, METH_NOARGS}, - {"fatal_error", test_fatal_error, METH_VARARGS, - PyDoc_STR("fatal_error(message, release_gil=False): call Py_FatalError(message)")}, {"type_get_version", type_get_version, METH_O, PyDoc_STR("type->tp_version_tag")}, {"test_tstate_capi", test_tstate_capi, METH_NOARGS, NULL}, {"frame_getlocals", frame_getlocals, METH_O, NULL}, @@ -3680,8 +3458,6 @@ static PyMethodDef TestMethods[] = { {"function_set_defaults", function_set_defaults, METH_VARARGS, NULL}, {"function_get_kw_defaults", function_get_kw_defaults, METH_O, NULL}, {"function_set_kw_defaults", function_set_kw_defaults, METH_VARARGS, NULL}, - {"err_set_raised", err_set_raised, METH_O, NULL}, - {"err_restore", err_restore, METH_VARARGS, NULL}, {NULL, NULL} /* sentinel */ }; @@ -3902,61 +3678,6 @@ static PyTypeObject awaitType = { }; -static int recurse_infinitely_error_init(PyObject *, PyObject *, PyObject *); - -static PyTypeObject PyRecursingInfinitelyError_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "RecursingInfinitelyError", /* tp_name */ - sizeof(PyBaseExceptionObject), /* tp_basicsize */ - 0, /* tp_itemsize */ - 0, /* tp_dealloc */ - 0, /* tp_vectorcall_offset */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_as_async */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */ - PyDoc_STR("Instantiating this exception starts infinite recursion."), /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - (initproc)recurse_infinitely_error_init, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ -}; - -static int -recurse_infinitely_error_init(PyObject *self, PyObject *args, PyObject *kwds) -{ - PyObject *type = (PyObject *)&PyRecursingInfinitelyError_Type; - - /* Instantiating this exception starts infinite recursion. 
*/ - Py_INCREF(type); - PyErr_SetObject(type, NULL); - return -1; -} - - /* Test bpo-35983: create a subclass of "list" which checks that instances * are not deallocated twice */ @@ -4283,14 +4004,6 @@ PyInit__testcapi(void) Py_INCREF(&MethStatic_Type); PyModule_AddObject(m, "MethStatic", (PyObject *)&MethStatic_Type); - PyRecursingInfinitelyError_Type.tp_base = (PyTypeObject *)PyExc_Exception; - if (PyType_Ready(&PyRecursingInfinitelyError_Type) < 0) { - return NULL; - } - Py_INCREF(&PyRecursingInfinitelyError_Type); - PyModule_AddObject(m, "RecursingInfinitelyError", - (PyObject *)&PyRecursingInfinitelyError_Type); - PyModule_AddObject(m, "CHAR_MAX", PyLong_FromLong(CHAR_MAX)); PyModule_AddObject(m, "CHAR_MIN", PyLong_FromLong(CHAR_MIN)); PyModule_AddObject(m, "UCHAR_MAX", PyLong_FromLong(UCHAR_MAX)); @@ -4368,6 +4081,9 @@ PyInit__testcapi(void) if (_PyTestCapi_Init_Structmember(m) < 0) { return NULL; } + if (_PyTestCapi_Init_Exceptions(m) < 0) { + return NULL; + } #ifndef LIMITED_API_AVAILABLE PyModule_AddObjectRef(m, "LIMITED_API_AVAILABLE", Py_False); diff --git a/PCbuild/_testcapi.vcxproj b/PCbuild/_testcapi.vcxproj index 58bf4e1eacbf..742eb3ed2d90 100644 --- a/PCbuild/_testcapi.vcxproj +++ b/PCbuild/_testcapi.vcxproj @@ -107,6 +107,7 @@ + diff --git a/PCbuild/_testcapi.vcxproj.filters b/PCbuild/_testcapi.vcxproj.filters index 101c53227616..ab5afc150c32 100644 --- a/PCbuild/_testcapi.vcxproj.filters +++ b/PCbuild/_testcapi.vcxproj.filters @@ -51,6 +51,9 @@ Source Files + + Source Files + From webhook-mailer at python.org Thu Feb 23 12:11:53 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 17:11:53 -0000 Subject: [Python-checkins] gh-101981: Consolidate macOS configure steps in CI (GH-102131) Message-ID: https://github.com/python/cpython/commit/5e1bbb585be18a160fa3a4099cca8d95efc11552 commit: 5e1bbb585be18a160fa3a4099cca8d95efc11552 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T09:11:45-08:00 summary: gh-101981: Consolidate macOS configure steps in CI (GH-102131) (cherry picked from commit e07b304bb004e1298283c82bd135dd5ef96a90cc) Co-authored-by: Erlend E. 
Aasland Automerge-Triggered-By: GH:erlend-aasland files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 5dfc5e5bae50..2df4cfb9e2ae 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -195,13 +195,11 @@ jobs: - uses: actions/checkout at v3 - name: Install Homebrew dependencies run: brew install pkg-config openssl at 1.1 xz gdbm tcl-tk - - name: Prepare Homebrew environment variables - run: | - echo "CFLAGS=-I$(brew --prefix gdbm)/include -I$(brew --prefix xz)/include" >> $GITHUB_ENV - echo "LDFLAGS=-L$(brew --prefix gdbm)/lib -I$(brew --prefix xz)/lib" >> $GITHUB_ENV - echo "PKG_CONFIG_PATH=$(brew --prefix openssl at 1.1)/lib/pkgconfig:$(brew --prefix tcl-tk)/lib/pkgconfig" >> $GITHUB_ENV - name: Configure CPython run: | + CFLAGS="-I$(brew --prefix gdbm)/include -I$(brew --prefix xz)/include" \ + LDFLAGS="-L$(brew --prefix gdbm)/lib -I$(brew --prefix xz)/lib" \ + PKG_CONFIG_PATH="$(brew --prefix tcl-tk)/lib/pkgconfig" \ ./configure \ --with-pydebug \ --prefix=/opt/python-dev \ From webhook-mailer at python.org Thu Feb 23 12:12:20 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 17:12:20 -0000 Subject: [Python-checkins] gh-101981: Consolidate macOS configure steps in CI (GH-102131) Message-ID: https://github.com/python/cpython/commit/9a0116d35ceb5d4a01546da7fc2ad329282579b3 commit: 9a0116d35ceb5d4a01546da7fc2ad329282579b3 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T09:12:13-08:00 summary: gh-101981: Consolidate macOS configure steps in CI (GH-102131) (cherry picked from commit e07b304bb004e1298283c82bd135dd5ef96a90cc) Co-authored-by: Erlend E. 
Aasland Automerge-Triggered-By: GH:erlend-aasland files: M .github/workflows/build.yml diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 1686c172e2a3..ac0dd2b84bb4 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -182,13 +182,11 @@ jobs: - uses: actions/checkout at v3 - name: Install Homebrew dependencies run: brew install pkg-config openssl at 1.1 xz gdbm tcl-tk - - name: Prepare Homebrew environment variables - run: | - echo "CFLAGS=-I$(brew --prefix gdbm)/include -I$(brew --prefix xz)/include" >> $GITHUB_ENV - echo "LDFLAGS=-L$(brew --prefix gdbm)/lib -I$(brew --prefix xz)/lib" >> $GITHUB_ENV - echo "PKG_CONFIG_PATH=$(brew --prefix openssl at 1.1)/lib/pkgconfig:$(brew --prefix tcl-tk)/lib/pkgconfig" >> $GITHUB_ENV - name: Configure CPython run: | + CFLAGS="-I$(brew --prefix gdbm)/include -I$(brew --prefix xz)/include" \ + LDFLAGS="-L$(brew --prefix gdbm)/lib -I$(brew --prefix xz)/lib" \ + PKG_CONFIG_PATH="$(brew --prefix tcl-tk)/lib/pkgconfig" \ ./configure \ --with-pydebug \ --prefix=/opt/python-dev \ From webhook-mailer at python.org Thu Feb 23 12:24:05 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 17:24:05 -0000 Subject: [Python-checkins] gh-99108: Followup fix for Modules/Setup (GH-102183) Message-ID: https://github.com/python/cpython/commit/d43c2652d4b1ca4d0afa468e58c4f50052f4bfa2 commit: d43c2652d4b1ca4d0afa468e58c4f50052f4bfa2 branch: main author: Jonathan Protzenko committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T09:23:57-08:00 summary: gh-99108: Followup fix for Modules/Setup (GH-102183) Automerge-Triggered-By: GH:erlend-aasland files: M Modules/Setup diff --git a/Modules/Setup b/Modules/Setup index f9fa26cac9e2..ff432e2929ec 100644 --- a/Modules/Setup +++ b/Modules/Setup @@ -163,8 +163,8 @@ PYTHONPATH=$(COREPYTHONPATH) # hashing builtins #_blake2 _blake2/blake2module.c _blake2/blake2b_impl.c _blake2/blake2s_impl.c -#_md5 md5module.c -I$(srcdir)/Modules/_hacl/include _hacl/libHacl_Hash_MD5.c -#_sha1 sha1module.c -I$(srcdir)/Modules/_hacl/include _hacl/libHacl_Hash_SHA1.c +#_md5 md5module.c -I$(srcdir)/Modules/_hacl/include _hacl/Hacl_Hash_MD5.c +#_sha1 sha1module.c -I$(srcdir)/Modules/_hacl/include _hacl/Hacl_Hash_SHA1.c #_sha2 sha2module.c -I$(srcdir)/Modules/_hacl/include Modules/_hacl/libHacl_Streaming_SHA2.a #_sha3 _sha3/sha3module.c From webhook-mailer at python.org Thu Feb 23 16:42:42 2023 From: webhook-mailer at python.org (miss-islington) Date: Thu, 23 Feb 2023 21:42:42 -0000 Subject: [Python-checkins] gh-101476: Add _PyType_GetModuleState (GH-101477) Message-ID: https://github.com/python/cpython/commit/ccd98a3146d66343499d04a44e038223a1a09e80 commit: ccd98a3146d66343499d04a44e038223a1a09e80 branch: main author: Erlend E. Aasland committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T13:42:15-08:00 summary: gh-101476: Add _PyType_GetModuleState (GH-101477) For fast module state access from heap type methods. 
files: M Include/internal/pycore_typeobject.h M Modules/itertoolsmodule.c diff --git a/Include/internal/pycore_typeobject.h b/Include/internal/pycore_typeobject.h index 4d705740a9a6..cc5ce2875101 100644 --- a/Include/internal/pycore_typeobject.h +++ b/Include/internal/pycore_typeobject.h @@ -4,6 +4,8 @@ extern "C" { #endif +#include "pycore_moduleobject.h" + #ifndef Py_BUILD_CORE # error "this header requires Py_BUILD_CORE define" #endif @@ -62,6 +64,20 @@ _PyStaticType_GET_WEAKREFS_LISTPTR(static_builtin_state *state) return &state->tp_weaklist; } +/* Like PyType_GetModuleState, but skips verification + * that type is a heap type with an associated module */ +static inline void * +_PyType_GetModuleState(PyTypeObject *type) +{ + assert(PyType_Check(type)); + assert(type->tp_flags & Py_TPFLAGS_HEAPTYPE); + PyHeapTypeObject *et = (PyHeapTypeObject *)type; + assert(et->ht_module); + PyModuleObject *mod = (PyModuleObject *)(et->ht_module); + assert(mod != NULL); + return mod->md_state; +} + struct types_state { struct type_cache type_cache; size_t num_builtins_initialized; diff --git a/Modules/itertoolsmodule.c b/Modules/itertoolsmodule.c index 6986695e47b1..c986e02867ca 100644 --- a/Modules/itertoolsmodule.c +++ b/Modules/itertoolsmodule.c @@ -3,6 +3,7 @@ #include "pycore_call.h" // _PyObject_CallNoArgs() #include "pycore_long.h" // _PyLong_GetZero() #include "pycore_moduleobject.h" // _PyModule_GetState() +#include "pycore_typeobject.h" // _PyType_GetModuleState() #include "pycore_object.h" // _PyObject_GC_TRACK() #include "pycore_tuple.h" // _PyTuple_ITEMS() #include "structmember.h" // PyMemberDef @@ -48,7 +49,7 @@ get_module_state(PyObject *mod) static inline itertools_state * get_module_state_by_cls(PyTypeObject *cls) { - void *state = PyType_GetModuleState(cls); + void *state = _PyType_GetModuleState(cls); assert(state != NULL); return (itertools_state *)state; } From webhook-mailer at python.org Thu Feb 23 17:57:13 2023 From: webhook-mailer at python.org (DinoV) Date: Thu, 23 Feb 2023 22:57:13 -0000 Subject: [Python-checkins] Fix deadlock on shutdown if test_current_{exception, frames} fails (#102019) Message-ID: https://github.com/python/cpython/commit/0c857865e4f255f99d58678f878e09c11da89892 commit: 0c857865e4f255f99d58678f878e09c11da89892 branch: main author: Jacob Bower <1978924+jbower-fb at users.noreply.github.com> committer: DinoV date: 2023-02-23T14:57:06-08:00 summary: Fix deadlock on shutdown if test_current_{exception,frames} fails (#102019) * Don't deadlock on shutdown if test_current_{exception,frames} fails These tests spawn a thread that waits on a threading.Event. If the test fails any of its assertions, the Event won't be signaled and the thread will wait indefinitely, causing a deadlock when threading._shutdown() tries to join all outstanding threads. Co-authored-by: Brett Simmers * Add a news entry * Fix whitespace --------- Co-authored-by: Brett Simmers Co-authored-by: Oleg Iarygin files: A Misc/NEWS.d/next/Tests/2023-02-18-10-51-02.gh-issue-102019.0797SJ.rst M Lib/test/test_sys.py diff --git a/Lib/test/test_sys.py b/Lib/test/test_sys.py index 58aa9d10210e..b839985def9a 100644 --- a/Lib/test/test_sys.py +++ b/Lib/test/test_sys.py @@ -445,46 +445,47 @@ def g456(): t.start() entered_g.wait() - # At this point, t has finished its entered_g.set(), although it's - # impossible to guess whether it's still on that line or has moved on - # to its leave_g.wait(). 
- self.assertEqual(len(thread_info), 1) - thread_id = thread_info[0] - - d = sys._current_frames() - for tid in d: - self.assertIsInstance(tid, int) - self.assertGreater(tid, 0) - - main_id = threading.get_ident() - self.assertIn(main_id, d) - self.assertIn(thread_id, d) - - # Verify that the captured main-thread frame is _this_ frame. - frame = d.pop(main_id) - self.assertTrue(frame is sys._getframe()) - - # Verify that the captured thread frame is blocked in g456, called - # from f123. This is a little tricky, since various bits of - # threading.py are also in the thread's call stack. - frame = d.pop(thread_id) - stack = traceback.extract_stack(frame) - for i, (filename, lineno, funcname, sourceline) in enumerate(stack): - if funcname == "f123": - break - else: - self.fail("didn't find f123() on thread's call stack") - - self.assertEqual(sourceline, "g456()") + try: + # At this point, t has finished its entered_g.set(), although it's + # impossible to guess whether it's still on that line or has moved on + # to its leave_g.wait(). + self.assertEqual(len(thread_info), 1) + thread_id = thread_info[0] + + d = sys._current_frames() + for tid in d: + self.assertIsInstance(tid, int) + self.assertGreater(tid, 0) + + main_id = threading.get_ident() + self.assertIn(main_id, d) + self.assertIn(thread_id, d) + + # Verify that the captured main-thread frame is _this_ frame. + frame = d.pop(main_id) + self.assertTrue(frame is sys._getframe()) + + # Verify that the captured thread frame is blocked in g456, called + # from f123. This is a little tricky, since various bits of + # threading.py are also in the thread's call stack. + frame = d.pop(thread_id) + stack = traceback.extract_stack(frame) + for i, (filename, lineno, funcname, sourceline) in enumerate(stack): + if funcname == "f123": + break + else: + self.fail("didn't find f123() on thread's call stack") - # And the next record must be for g456(). - filename, lineno, funcname, sourceline = stack[i+1] - self.assertEqual(funcname, "g456") - self.assertIn(sourceline, ["leave_g.wait()", "entered_g.set()"]) + self.assertEqual(sourceline, "g456()") - # Reap the spawned thread. - leave_g.set() - t.join() + # And the next record must be for g456(). + filename, lineno, funcname, sourceline = stack[i+1] + self.assertEqual(funcname, "g456") + self.assertIn(sourceline, ["leave_g.wait()", "entered_g.set()"]) + finally: + # Reap the spawned thread. + leave_g.set() + t.join() @threading_helper.reap_threads @threading_helper.requires_working_threading() @@ -516,43 +517,44 @@ def g456(): t.start() entered_g.wait() - # At this point, t has finished its entered_g.set(), although it's - # impossible to guess whether it's still on that line or has moved on - # to its leave_g.wait(). - self.assertEqual(len(thread_info), 1) - thread_id = thread_info[0] - - d = sys._current_exceptions() - for tid in d: - self.assertIsInstance(tid, int) - self.assertGreater(tid, 0) - - main_id = threading.get_ident() - self.assertIn(main_id, d) - self.assertIn(thread_id, d) - self.assertEqual((None, None, None), d.pop(main_id)) - - # Verify that the captured thread frame is blocked in g456, called - # from f123. This is a little tricky, since various bits of - # threading.py are also in the thread's call stack. 
- exc_type, exc_value, exc_tb = d.pop(thread_id) - stack = traceback.extract_stack(exc_tb.tb_frame) - for i, (filename, lineno, funcname, sourceline) in enumerate(stack): - if funcname == "f123": - break - else: - self.fail("didn't find f123() on thread's call stack") - - self.assertEqual(sourceline, "g456()") + try: + # At this point, t has finished its entered_g.set(), although it's + # impossible to guess whether it's still on that line or has moved on + # to its leave_g.wait(). + self.assertEqual(len(thread_info), 1) + thread_id = thread_info[0] + + d = sys._current_exceptions() + for tid in d: + self.assertIsInstance(tid, int) + self.assertGreater(tid, 0) + + main_id = threading.get_ident() + self.assertIn(main_id, d) + self.assertIn(thread_id, d) + self.assertEqual((None, None, None), d.pop(main_id)) + + # Verify that the captured thread frame is blocked in g456, called + # from f123. This is a little tricky, since various bits of + # threading.py are also in the thread's call stack. + exc_type, exc_value, exc_tb = d.pop(thread_id) + stack = traceback.extract_stack(exc_tb.tb_frame) + for i, (filename, lineno, funcname, sourceline) in enumerate(stack): + if funcname == "f123": + break + else: + self.fail("didn't find f123() on thread's call stack") - # And the next record must be for g456(). - filename, lineno, funcname, sourceline = stack[i+1] - self.assertEqual(funcname, "g456") - self.assertTrue(sourceline.startswith("if leave_g.wait(")) + self.assertEqual(sourceline, "g456()") - # Reap the spawned thread. - leave_g.set() - t.join() + # And the next record must be for g456(). + filename, lineno, funcname, sourceline = stack[i+1] + self.assertEqual(funcname, "g456") + self.assertTrue(sourceline.startswith("if leave_g.wait(")) + finally: + # Reap the spawned thread. + leave_g.set() + t.join() def test_attributes(self): self.assertIsInstance(sys.api_version, int) diff --git a/Misc/NEWS.d/next/Tests/2023-02-18-10-51-02.gh-issue-102019.0797SJ.rst b/Misc/NEWS.d/next/Tests/2023-02-18-10-51-02.gh-issue-102019.0797SJ.rst new file mode 100644 index 000000000000..63e36046d26d --- /dev/null +++ b/Misc/NEWS.d/next/Tests/2023-02-18-10-51-02.gh-issue-102019.0797SJ.rst @@ -0,0 +1,2 @@ +Fix deadlock on shutdown if ``test_current_{exception,frames}`` fails. Patch +by Jacob Bower. 
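
The pattern behind the fix above is general: a non-daemon worker thread blocked on a threading.Event must still be signalled when the intervening assertions raise, otherwise threading._shutdown() hangs while joining the worker at interpreter exit. A minimal sketch of that pattern (illustrative only, not part of the commit; all names are hypothetical):

    import threading

    leave = threading.Event()
    entered = threading.Event()

    def worker():
        entered.set()
        leave.wait()        # blocks until the main thread signals us

    t = threading.Thread(target=worker)
    t.start()
    entered.wait()
    try:
        pass                # assertions on the paused worker would go here
    finally:
        leave.set()         # always runs, even if an assertion raised
        t.join()
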
From webhook-mailer at python.org Thu Feb 23 21:28:37 2023 From: webhook-mailer at python.org (terryjreedy) Date: Fri, 24 Feb 2023 02:28:37 -0000 Subject: [Python-checkins] gh-102158: Add tests for `softkwlist` (#102159) Message-ID: https://github.com/python/cpython/commit/9f3ecd1aa3566947648a053bd9716ed67dd9a718 commit: 9f3ecd1aa3566947648a053bd9716ed67dd9a718 branch: main author: Eclips4 <80244920+Eclips4 at users.noreply.github.com> committer: terryjreedy date: 2023-02-23T21:28:24-05:00 summary: gh-102158: Add tests for `softkwlist` (#102159) --------- Co-authored-by: Terry Jan Reedy files: M Lib/test/test_keyword.py diff --git a/Lib/test/test_keyword.py b/Lib/test/test_keyword.py index 3e2a8b3fb7f4..f329f88fa01d 100644 --- a/Lib/test/test_keyword.py +++ b/Lib/test/test_keyword.py @@ -20,18 +20,36 @@ def test_changing_the_kwlist_does_not_affect_iskeyword(self): keyword.kwlist = ['its', 'all', 'eggs', 'beans', 'and', 'a', 'slice'] self.assertFalse(keyword.iskeyword('eggs')) + def test_changing_the_softkwlist_does_not_affect_issoftkeyword(self): + oldlist = keyword.softkwlist + self.addCleanup(setattr, keyword, "softkwlist", oldlist) + keyword.softkwlist = ["foo", "bar", "spam", "egs", "case"] + self.assertFalse(keyword.issoftkeyword("spam")) + def test_all_keywords_fail_to_be_used_as_names(self): for key in keyword.kwlist: with self.assertRaises(SyntaxError): exec(f"{key} = 42") + def test_all_soft_keywords_can_be_used_as_names(self): + for key in keyword.softkwlist: + exec(f"{key} = 42") + def test_async_and_await_are_keywords(self): self.assertIn("async", keyword.kwlist) self.assertIn("await", keyword.kwlist) + def test_match_and_case_are_soft_keywords(self): + self.assertIn("match", keyword.softkwlist) + self.assertIn("case", keyword.softkwlist) + self.assertIn("_", keyword.softkwlist) + def test_keywords_are_sorted(self): self.assertListEqual(sorted(keyword.kwlist), keyword.kwlist) + def test_softkeywords_are_sorted(self): + self.assertListEqual(sorted(keyword.softkwlist), keyword.softkwlist) + if __name__ == "__main__": unittest.main() From webhook-mailer at python.org Thu Feb 23 21:52:38 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 24 Feb 2023 02:52:38 -0000 Subject: [Python-checkins] gh-102158: Add tests for `softkwlist` (GH-102159) Message-ID: https://github.com/python/cpython/commit/2e2ab6752b9815490df366f50d022cdda1235511 commit: 2e2ab6752b9815490df366f50d022cdda1235511 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T18:52:31-08:00 summary: gh-102158: Add tests for `softkwlist` (GH-102159) --------- (cherry picked from commit 9f3ecd1aa3566947648a053bd9716ed67dd9a718) Co-authored-by: Eclips4 <80244920+Eclips4 at users.noreply.github.com> Co-authored-by: Terry Jan Reedy files: M Lib/test/test_keyword.py diff --git a/Lib/test/test_keyword.py b/Lib/test/test_keyword.py index 3e2a8b3fb7f4..f329f88fa01d 100644 --- a/Lib/test/test_keyword.py +++ b/Lib/test/test_keyword.py @@ -20,18 +20,36 @@ def test_changing_the_kwlist_does_not_affect_iskeyword(self): keyword.kwlist = ['its', 'all', 'eggs', 'beans', 'and', 'a', 'slice'] self.assertFalse(keyword.iskeyword('eggs')) + def test_changing_the_softkwlist_does_not_affect_issoftkeyword(self): + oldlist = keyword.softkwlist + self.addCleanup(setattr, keyword, "softkwlist", oldlist) + keyword.softkwlist = ["foo", "bar", "spam", "egs", "case"] + 
self.assertFalse(keyword.issoftkeyword("spam")) + def test_all_keywords_fail_to_be_used_as_names(self): for key in keyword.kwlist: with self.assertRaises(SyntaxError): exec(f"{key} = 42") + def test_all_soft_keywords_can_be_used_as_names(self): + for key in keyword.softkwlist: + exec(f"{key} = 42") + def test_async_and_await_are_keywords(self): self.assertIn("async", keyword.kwlist) self.assertIn("await", keyword.kwlist) + def test_match_and_case_are_soft_keywords(self): + self.assertIn("match", keyword.softkwlist) + self.assertIn("case", keyword.softkwlist) + self.assertIn("_", keyword.softkwlist) + def test_keywords_are_sorted(self): self.assertListEqual(sorted(keyword.kwlist), keyword.kwlist) + def test_softkeywords_are_sorted(self): + self.assertListEqual(sorted(keyword.softkwlist), keyword.softkwlist) + if __name__ == "__main__": unittest.main() From webhook-mailer at python.org Thu Feb 23 21:54:54 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 24 Feb 2023 02:54:54 -0000 Subject: [Python-checkins] gh-102158: Add tests for `softkwlist` (GH-102159) Message-ID: https://github.com/python/cpython/commit/dd0843ac1d029cb7331a670b37dd55ab5f0b3c7d commit: dd0843ac1d029cb7331a670b37dd55ab5f0b3c7d branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-23T18:54:47-08:00 summary: gh-102158: Add tests for `softkwlist` (GH-102159) --------- (cherry picked from commit 9f3ecd1aa3566947648a053bd9716ed67dd9a718) Co-authored-by: Eclips4 <80244920+Eclips4 at users.noreply.github.com> Co-authored-by: Terry Jan Reedy files: M Lib/test/test_keyword.py diff --git a/Lib/test/test_keyword.py b/Lib/test/test_keyword.py index 3e2a8b3fb7f4..f329f88fa01d 100644 --- a/Lib/test/test_keyword.py +++ b/Lib/test/test_keyword.py @@ -20,18 +20,36 @@ def test_changing_the_kwlist_does_not_affect_iskeyword(self): keyword.kwlist = ['its', 'all', 'eggs', 'beans', 'and', 'a', 'slice'] self.assertFalse(keyword.iskeyword('eggs')) + def test_changing_the_softkwlist_does_not_affect_issoftkeyword(self): + oldlist = keyword.softkwlist + self.addCleanup(setattr, keyword, "softkwlist", oldlist) + keyword.softkwlist = ["foo", "bar", "spam", "egs", "case"] + self.assertFalse(keyword.issoftkeyword("spam")) + def test_all_keywords_fail_to_be_used_as_names(self): for key in keyword.kwlist: with self.assertRaises(SyntaxError): exec(f"{key} = 42") + def test_all_soft_keywords_can_be_used_as_names(self): + for key in keyword.softkwlist: + exec(f"{key} = 42") + def test_async_and_await_are_keywords(self): self.assertIn("async", keyword.kwlist) self.assertIn("await", keyword.kwlist) + def test_match_and_case_are_soft_keywords(self): + self.assertIn("match", keyword.softkwlist) + self.assertIn("case", keyword.softkwlist) + self.assertIn("_", keyword.softkwlist) + def test_keywords_are_sorted(self): self.assertListEqual(sorted(keyword.kwlist), keyword.kwlist) + def test_softkeywords_are_sorted(self): + self.assertListEqual(sorted(keyword.softkwlist), keyword.softkwlist) + if __name__ == "__main__": unittest.main() From webhook-mailer at python.org Fri Feb 24 05:27:04 2023 From: webhook-mailer at python.org (corona10) Date: Fri, 24 Feb 2023 10:27:04 -0000 Subject: [Python-checkins] gh-81652: Add MAP_ALIGNED_SUPER FreeBSD and MAP_CONCEAL OpenBSD constants (gh-102191) Message-ID: https://github.com/python/cpython/commit/347f7406df62b2bbe551685d72a466f27b951f8e commit: 
347f7406df62b2bbe551685d72a466f27b951f8e branch: main author: Yeojin Kim committer: corona10 date: 2023-02-24T19:26:51+09:00 summary: gh-81652: Add MAP_ALIGNED_SUPER FreeBSD and MAP_CONCEAL OpenBSD constants (gh-102191) files: A Misc/NEWS.d/next/Library/2023-02-23-20-39-52.gh-issue-81652.Vxz0Mr.rst M Doc/library/mmap.rst M Misc/ACKS M Modules/mmapmodule.c diff --git a/Doc/library/mmap.rst b/Doc/library/mmap.rst index c4f8781f2ac9..69afadff1f5f 100644 --- a/Doc/library/mmap.rst +++ b/Doc/library/mmap.rst @@ -370,11 +370,19 @@ MAP_* Constants MAP_ANONYMOUS MAP_POPULATE MAP_STACK + MAP_ALIGNED_SUPER + MAP_CONCEAL - These are the various flags that can be passed to :meth:`mmap.mmap`. Note that some options might not be present on some systems. + These are the various flags that can be passed to :meth:`mmap.mmap`. :data:`MAP_ALIGNED_SUPER` + is only available at FreeBSD and :data:`MAP_CONCEAL` is only available at OpenBSD. Note + that some options might not be present on some systems. .. versionchanged:: 3.10 - Added MAP_POPULATE constant. + Added :data:`MAP_POPULATE` constant. .. versionadded:: 3.11 - Added MAP_STACK constant. + Added :data:`MAP_STACK` constant. + + .. versionadded:: 3.12 + Added :data:`MAP_ALIGNED_SUPER` constant. + Added :data:`MAP_CONCEAL` constant. diff --git a/Misc/ACKS b/Misc/ACKS index 33dbf4e989d9..43e420a1373c 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -930,6 +930,7 @@ Derek D. Kim Gihwan Kim Jan Kim Taek Joo Kim +Yeojin Kim Sam Kimbrel Tomohiko Kinebuchi James King diff --git a/Misc/NEWS.d/next/Library/2023-02-23-20-39-52.gh-issue-81652.Vxz0Mr.rst b/Misc/NEWS.d/next/Library/2023-02-23-20-39-52.gh-issue-81652.Vxz0Mr.rst new file mode 100644 index 000000000000..48acce1d863e --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-23-20-39-52.gh-issue-81652.Vxz0Mr.rst @@ -0,0 +1,2 @@ +Add :data:`mmap.MAP_ALIGNED_SUPER` FreeBSD and :data:`mmap.MAP_CONCEAL` +OpenBSD constants to :mod:`mmap`. Patch by Yeojin Kim. 
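
The new constants join the existing MAP_* flags accepted by mmap.mmap(). A small usage sketch (illustrative only, not part of the commit; it assumes a Unix platform and guards the platform-specific flags with hasattr(), since MAP_ALIGNED_SUPER exists only on FreeBSD and MAP_CONCEAL only on OpenBSD):

    import mmap

    flags = mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS
    if hasattr(mmap, "MAP_ALIGNED_SUPER"):   # FreeBSD only
        flags |= mmap.MAP_ALIGNED_SUPER
    if hasattr(mmap, "MAP_CONCEAL"):         # OpenBSD only
        flags |= mmap.MAP_CONCEAL

    # Anonymous 2 MiB mapping created with whichever extra flags exist here.
    with mmap.mmap(-1, 2 * 1024 * 1024, flags=flags) as m:
        m[:5] = b"hello"
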
diff --git a/Modules/mmapmodule.c b/Modules/mmapmodule.c index 8244202376c7..a01e798265c5 100644 --- a/Modules/mmapmodule.c +++ b/Modules/mmapmodule.c @@ -1603,6 +1603,12 @@ mmap_exec(PyObject *module) // Mostly a no-op on Linux and NetBSD, but useful on OpenBSD // for stack usage (even on x86 arch) ADD_INT_MACRO(module, MAP_STACK); +#endif +#ifdef MAP_ALIGNED_SUPER + ADD_INT_MACRO(module, MAP_ALIGNED_SUPER); +#endif +#ifdef MAP_CONCEAL + ADD_INT_MACRO(module, MAP_CONCEAL); #endif if (PyModule_AddIntConstant(module, "PAGESIZE", (long)my_getpagesize()) < 0 ) { return -1; From webhook-mailer at python.org Fri Feb 24 07:38:29 2023 From: webhook-mailer at python.org (zooba) Date: Fri, 24 Feb 2023 12:38:29 -0000 Subject: [Python-checkins] gh-102141: replace use of getpid on Windows with GetCurrentProcessId (GH-102142) Message-ID: https://github.com/python/cpython/commit/1fa38906f0b228e6b0a6baa89ab6316989b0388a commit: 1fa38906f0b228e6b0a6baa89ab6316989b0388a branch: main author: Max Bachmann committer: zooba date: 2023-02-24T12:38:21Z summary: gh-102141: replace use of getpid on Windows with GetCurrentProcessId (GH-102142) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-22-15-15-32.gh-issue-102027.Km4G-d.rst M Modules/_randommodule.c M Modules/posixmodule.c M PC/pyconfig.h diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-22-15-15-32.gh-issue-102027.Km4G-d.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-22-15-15-32.gh-issue-102027.Km4G-d.rst new file mode 100644 index 000000000000..514a8ef26594 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-22-15-15-32.gh-issue-102027.Km4G-d.rst @@ -0,0 +1,2 @@ +Use ``GetCurrentProcessId`` on Windows when ``getpid`` is unavailable. Patch by +Max Bachmann. diff --git a/Modules/_randommodule.c b/Modules/_randommodule.c index 95f1e505dd18..68060c07033d 100644 --- a/Modules/_randommodule.c +++ b/Modules/_randommodule.c @@ -259,7 +259,9 @@ random_seed_time_pid(RandomObject *self) key[0] = (uint32_t)(now & 0xffffffffU); key[1] = (uint32_t)(now >> 32); -#ifdef HAVE_GETPID +#ifdef MS_WINDOWS_NON_DESKTOP + key[2] = (uint32_t)GetCurrentProcessId(); +#elif defined(HAVE_GETPID) key[2] = (uint32_t)getpid(); #else key[2] = 0; diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c index 524dc7eb1ccc..51aa89ef715a 100644 --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -7946,7 +7946,7 @@ os_getgid_impl(PyObject *module) #endif /* HAVE_GETGID */ -#ifdef HAVE_GETPID +#if defined(HAVE_GETPID) /*[clinic input] os.getpid @@ -7957,9 +7957,13 @@ static PyObject * os_getpid_impl(PyObject *module) /*[clinic end generated code: output=9ea6fdac01ed2b3c input=5a9a00f0ab68aa00]*/ { +#ifdef MS_WINDOWS_NON_DESKTOP + return PyLong_FromUnsignedLong(GetCurrentProcessId()); +#else return PyLong_FromPid(getpid()); +#endif } -#endif /* HAVE_GETPID */ +#endif /* defined(HAVE_GETPID) */ #ifdef NGROUPS_MAX #define MAX_GROUPS NGROUPS_MAX @@ -8265,12 +8269,11 @@ static PyObject* win32_getppid() { HANDLE snapshot; - pid_t mypid; PyObject* result = NULL; BOOL have_record; PROCESSENTRY32 pe; - mypid = getpid(); /* This function never fails */ + DWORD mypid = GetCurrentProcessId(); /* This function never fails */ snapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0); if (snapshot == INVALID_HANDLE_VALUE) @@ -8279,9 +8282,9 @@ win32_getppid() pe.dwSize = sizeof(pe); have_record = Process32First(snapshot, &pe); while (have_record) { - if (mypid == (pid_t)pe.th32ProcessID) { + if (mypid == pe.th32ProcessID) { /* We could cache the ulong value in a static variable. 
*/ - result = PyLong_FromPid((pid_t)pe.th32ParentProcessID); + result = PyLong_FromUnsignedLong(pe.th32ParentProcessID); break; } diff --git a/PC/pyconfig.h b/PC/pyconfig.h index f5166a1506c9..a34d420ab7ec 100644 --- a/PC/pyconfig.h +++ b/PC/pyconfig.h @@ -72,6 +72,9 @@ WIN32 is still required for the locale module. #define USE_SOCKET #endif +#if defined(WINAPI_FAMILY) && (WINAPI_FAMILY != WINAPI_FAMILY_DESKTOP_APP) +#define MS_WINDOWS_NON_DESKTOP +#endif /* Compiler specific defines */ From webhook-mailer at python.org Fri Feb 24 09:53:59 2023 From: webhook-mailer at python.org (zooba) Date: Fri, 24 Feb 2023 14:53:59 -0000 Subject: [Python-checkins] Remove references to old Windows source files from internal documentation (GH-102216) Message-ID: https://github.com/python/cpython/commit/e5e1c1fabd8b5626f9193e6c61b9d7ceb7fb2f95 commit: e5e1c1fabd8b5626f9193e6c61b9d7ceb7fb2f95 branch: main author: Max Bachmann committer: zooba date: 2023-02-24T14:53:50Z summary: Remove references to old Windows source files from internal documentation (GH-102216) files: M PC/readme.txt diff --git a/PC/readme.txt b/PC/readme.txt index bef5111c5918..a917a72c1232 100644 --- a/PC/readme.txt +++ b/PC/readme.txt @@ -60,11 +60,6 @@ python_nt.rc Resource compiler input for python15.dll. dl_nt.c Additional sources used for 32-bit Windows features. -getpathp.c Default sys.path calculations (for all PC platforms). - -dllbase_nt.txt A (manually maintained) list of base addresses for - various DLLs, to avoid run-time relocation. - Note for Windows 3.x and DOS users ================================== From webhook-mailer at python.org Fri Feb 24 11:13:13 2023 From: webhook-mailer at python.org (rhettinger) Date: Fri, 24 Feb 2023 16:13:13 -0000 Subject: [Python-checkins] gh-102105 Fix wording in filterfalse/quantify/filter (GH-102189) Message-ID: https://github.com/python/cpython/commit/81bf10e4f20a0f6d36b67085eefafdf7ebb97c33 commit: 81bf10e4f20a0f6d36b67085eefafdf7ebb97c33 branch: main author: Stefan Pochmann <609905+pochmann at users.noreply.github.com> committer: rhettinger date: 2023-02-24T10:13:05-06:00 summary: gh-102105 Fix wording in filterfalse/quantify/filter (GH-102189) files: M Doc/library/functions.rst M Doc/library/itertools.rst diff --git a/Doc/library/functions.rst b/Doc/library/functions.rst index 3ff288490251..f0f374771b0c 100644 --- a/Doc/library/functions.rst +++ b/Doc/library/functions.rst @@ -623,7 +623,7 @@ are always available. They are listed here in alphabetical order. .. function:: filter(function, iterable) Construct an iterator from those elements of *iterable* for which *function* - returns true. *iterable* may be either a sequence, a container which + is true. *iterable* may be either a sequence, a container which supports iteration, or an iterator. If *function* is ``None``, the identity function is assumed, that is, all elements of *iterable* that are false are removed. @@ -634,7 +634,7 @@ are always available. They are listed here in alphabetical order. ``None``. See :func:`itertools.filterfalse` for the complementary function that returns - elements of *iterable* for which *function* returns false. + elements of *iterable* for which *function* is false. .. class:: float(x=0.0) diff --git a/Doc/library/itertools.rst b/Doc/library/itertools.rst index 8d83d92660d6..95da7166842e 100644 --- a/Doc/library/itertools.rst +++ b/Doc/library/itertools.rst @@ -398,7 +398,7 @@ loops that truncate the stream. .. 
function:: filterfalse(predicate, iterable) Make an iterator that filters elements from iterable returning only those for - which the predicate is ``False``. If *predicate* is ``None``, return the items + which the predicate is false. If *predicate* is ``None``, return the items that are false. Roughly equivalent to:: def filterfalse(predicate, iterable): @@ -831,7 +831,7 @@ which incur interpreter overhead. return next(g, True) and not next(g, False) def quantify(iterable, pred=bool): - "Count how many times the predicate is true" + "Count how many times the predicate is True" return sum(map(pred, iterable)) def ncycles(iterable, n): From webhook-mailer at python.org Fri Feb 24 11:21:48 2023 From: webhook-mailer at python.org (miss-islington) Date: Fri, 24 Feb 2023 16:21:48 -0000 Subject: [Python-checkins] gh-102105 Fix wording in filterfalse/quantify/filter (GH-102189) Message-ID: https://github.com/python/cpython/commit/3b4f8fc83dcea1a9d0bc5bd33592e5a3da41fa71 commit: 3b4f8fc83dcea1a9d0bc5bd33592e5a3da41fa71 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-24T08:21:40-08:00 summary: gh-102105 Fix wording in filterfalse/quantify/filter (GH-102189) (cherry picked from commit 81bf10e4f20a0f6d36b67085eefafdf7ebb97c33) Co-authored-by: Stefan Pochmann <609905+pochmann at users.noreply.github.com> files: M Doc/library/functions.rst M Doc/library/itertools.rst diff --git a/Doc/library/functions.rst b/Doc/library/functions.rst index d0da05f78416..22972c038750 100644 --- a/Doc/library/functions.rst +++ b/Doc/library/functions.rst @@ -622,7 +622,7 @@ are always available. They are listed here in alphabetical order. .. function:: filter(function, iterable) Construct an iterator from those elements of *iterable* for which *function* - returns true. *iterable* may be either a sequence, a container which + is true. *iterable* may be either a sequence, a container which supports iteration, or an iterator. If *function* is ``None``, the identity function is assumed, that is, all elements of *iterable* that are false are removed. @@ -633,7 +633,7 @@ are always available. They are listed here in alphabetical order. ``None``. See :func:`itertools.filterfalse` for the complementary function that returns - elements of *iterable* for which *function* returns false. + elements of *iterable* for which *function* is false. .. class:: float(x=0.0) diff --git a/Doc/library/itertools.rst b/Doc/library/itertools.rst index 06c73fc23490..2fd30db0587f 100644 --- a/Doc/library/itertools.rst +++ b/Doc/library/itertools.rst @@ -359,7 +359,7 @@ loops that truncate the stream. .. function:: filterfalse(predicate, iterable) Make an iterator that filters elements from iterable returning only those for - which the predicate is ``False``. If *predicate* is ``None``, return the items + which the predicate is false. If *predicate* is ``None``, return the items that are false. Roughly equivalent to:: def filterfalse(predicate, iterable): @@ -792,7 +792,7 @@ which incur interpreter overhead. 
return next(g, True) and not next(g, False) def quantify(iterable, pred=bool): - "Count how many times the predicate is true" + "Count how many times the predicate is True" return sum(map(pred, iterable)) def ncycles(iterable, n): From webhook-mailer at python.org Fri Feb 24 15:16:36 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Fri, 24 Feb 2023 20:16:36 -0000 Subject: [Python-checkins] gh-101476: Use _PyType_GetModuleState where applicable (#102188) Message-ID: https://github.com/python/cpython/commit/568fc0dee42a353f327b059a48f97c911de904b3 commit: 568fc0dee42a353f327b059a48f97c911de904b3 branch: main author: Erlend E. Aasland committer: erlend-aasland date: 2023-02-24T21:16:29+01:00 summary: gh-101476: Use _PyType_GetModuleState where applicable (#102188) files: M Modules/_abc.c M Modules/_asynciomodule.c M Modules/_lsprof.c M Modules/_operator.c M Modules/_sha3/sha3module.c M Modules/_zoneinfo.c M Modules/md5module.c M Modules/sha1module.c M Modules/sha2module.c diff --git a/Modules/_abc.c b/Modules/_abc.c index e146d4fd0cac..9d6654b4e58a 100644 --- a/Modules/_abc.c +++ b/Modules/_abc.c @@ -79,7 +79,7 @@ abc_data_new(PyTypeObject *type, PyObject *args, PyObject *kwds) return NULL; } - state = PyType_GetModuleState(type); + state = _PyType_GetModuleState(type); if (state == NULL) { Py_DECREF(self); return NULL; diff --git a/Modules/_asynciomodule.c b/Modules/_asynciomodule.c index 055dded05431..21b2ca1971f9 100644 --- a/Modules/_asynciomodule.c +++ b/Modules/_asynciomodule.c @@ -76,7 +76,7 @@ get_asyncio_state(PyObject *mod) static inline asyncio_state * get_asyncio_state_by_cls(PyTypeObject *cls) { - asyncio_state *state = (asyncio_state *)PyType_GetModuleState(cls); + asyncio_state *state = (asyncio_state *)_PyType_GetModuleState(cls); assert(state != NULL); return state; } diff --git a/Modules/_lsprof.c b/Modules/_lsprof.c index 37170bbea56a..3237d796dc29 100644 --- a/Modules/_lsprof.c +++ b/Modules/_lsprof.c @@ -607,7 +607,7 @@ _lsprof_Profiler_getstats_impl(ProfilerObject *self, PyTypeObject *cls) /*[clinic end generated code: output=1806ef720019ee03 input=445e193ef4522902]*/ { statscollector_t collect; - collect.state = PyType_GetModuleState(cls); + collect.state = _PyType_GetModuleState(cls); if (pending_exception(self)) { return NULL; } diff --git a/Modules/_operator.c b/Modules/_operator.c index 4f2367150eef..38335b699501 100644 --- a/Modules/_operator.c +++ b/Modules/_operator.c @@ -1002,7 +1002,7 @@ itemgetter_new(PyTypeObject *type, PyObject *args, PyObject *kwds) } else { item = args; } - _operator_state *state = PyType_GetModuleState(type); + _operator_state *state = _PyType_GetModuleState(type); /* create itemgetterobject structure */ ig = PyObject_GC_New(itemgetterobject, (PyTypeObject *) state->itemgetter_type); if (ig == NULL) { @@ -1298,7 +1298,7 @@ attrgetter_new(PyTypeObject *type, PyObject *args, PyObject *kwds) } } - _operator_state *state = PyType_GetModuleState(type); + _operator_state *state = _PyType_GetModuleState(type); /* create attrgetterobject structure */ ag = PyObject_GC_New(attrgetterobject, (PyTypeObject *)state->attrgetter_type); if (ag == NULL) { @@ -1578,7 +1578,7 @@ methodcaller_new(PyTypeObject *type, PyObject *args, PyObject *kwds) return NULL; } - _operator_state *state = PyType_GetModuleState(type); + _operator_state *state = _PyType_GetModuleState(type); /* create methodcallerobject structure */ mc = PyObject_GC_New(methodcallerobject, (PyTypeObject *)state->methodcaller_type); if (mc == NULL) { diff --git 
a/Modules/_sha3/sha3module.c b/Modules/_sha3/sha3module.c index bd1dd596bdda..633a0c0ea08d 100644 --- a/Modules/_sha3/sha3module.c +++ b/Modules/_sha3/sha3module.c @@ -21,6 +21,7 @@ #include "Python.h" #include "pycore_strhex.h" // _Py_strhex() +#include "pycore_typeobject.h" // _PyType_GetModuleState() #include "../hashlib.h" #include "sha3.c" @@ -106,7 +107,7 @@ py_sha3_new_impl(PyTypeObject *type, PyObject *data, int usedforsecurity) { HashReturn res; Py_buffer buf = {NULL, NULL}; - SHA3State *state = PyType_GetModuleState(type); + SHA3State *state = _PyType_GetModuleState(type); SHA3object *self = newSHA3object(type); if (self == NULL) { goto error; @@ -337,7 +338,7 @@ SHA3_get_name(SHA3object *self, void *closure) { PyTypeObject *type = Py_TYPE(self); - SHA3State *state = PyType_GetModuleState(type); + SHA3State *state = _PyType_GetModuleState(type); assert(state != NULL); if (type == state->sha3_224_type) { @@ -408,7 +409,7 @@ static PyGetSetDef SHA3_getseters[] = { {0,0} \ } -// Using PyType_GetModuleState() on these types is safe since they +// Using _PyType_GetModuleState() on these types is safe since they // cannot be subclassed: it does not have the Py_TPFLAGS_BASETYPE flag. #define SHA3_TYPE_SPEC(type_spec_obj, type_name, type_slots) \ static PyType_Spec type_spec_obj = { \ diff --git a/Modules/_zoneinfo.c b/Modules/_zoneinfo.c index 9f423559f51a..6e1a37611b61 100644 --- a/Modules/_zoneinfo.c +++ b/Modules/_zoneinfo.c @@ -204,7 +204,7 @@ zoneinfo_get_state(PyObject *mod) static inline zoneinfo_state * zoneinfo_get_state_by_cls(PyTypeObject *cls) { - zoneinfo_state *state = (zoneinfo_state *)PyType_GetModuleState(cls); + zoneinfo_state *state = (zoneinfo_state *)_PyType_GetModuleState(cls); assert(state != NULL); return state; } diff --git a/Modules/md5module.c b/Modules/md5module.c index df3d6a4a70d7..4f7bc77a8836 100644 --- a/Modules/md5module.c +++ b/Modules/md5module.c @@ -22,6 +22,7 @@ #include "Python.h" #include "hashlib.h" #include "pycore_strhex.h" // _Py_strhex() +#include "pycore_typeobject.h" // _PyType_GetModuleState() /*[clinic input] module _md5 @@ -108,7 +109,7 @@ static PyObject * MD5Type_copy_impl(MD5object *self, PyTypeObject *cls) /*[clinic end generated code: output=bf055e08244bf5ee input=d89087dcfb2a8620]*/ { - MD5State *st = PyType_GetModuleState(cls); + MD5State *st = _PyType_GetModuleState(cls); MD5object *newobj; if ((newobj = newMD5object(st))==NULL) diff --git a/Modules/sha1module.c b/Modules/sha1module.c index 0f50d532acf9..f8d4056fd34b 100644 --- a/Modules/sha1module.c +++ b/Modules/sha1module.c @@ -22,6 +22,7 @@ #include "Python.h" #include "hashlib.h" #include "pycore_strhex.h" // _Py_strhex() +#include "pycore_typeobject.h" // _PyType_GetModuleState() /*[clinic input] module _sha1 @@ -108,7 +109,7 @@ static PyObject * SHA1Type_copy_impl(SHA1object *self, PyTypeObject *cls) /*[clinic end generated code: output=b32d4461ce8bc7a7 input=6c22e66fcc34c58e]*/ { - SHA1State *st = PyType_GetModuleState(cls); + SHA1State *st = _PyType_GetModuleState(cls); SHA1object *newobj; if ((newobj = newSHA1object(st)) == NULL) diff --git a/Modules/sha2module.c b/Modules/sha2module.c index 9999f255cd57..72de20b44762 100644 --- a/Modules/sha2module.c +++ b/Modules/sha2module.c @@ -23,6 +23,7 @@ #include "Python.h" #include "pycore_bitutils.h" // _Py_bswap32() #include "pycore_moduleobject.h" // _PyModule_GetState() +#include "pycore_typeobject.h" // _PyType_GetModuleState() #include "pycore_strhex.h" // _Py_strhex() #include "structmember.h" // PyMemberDef #include 
"hashlib.h" @@ -217,7 +218,7 @@ SHA256Type_copy_impl(SHA256object *self, PyTypeObject *cls) /*[clinic end generated code: output=fabd515577805cd3 input=3137146fcb88e212]*/ { SHA256object *newobj; - sha2_state *state = PyType_GetModuleState(cls); + sha2_state *state = _PyType_GetModuleState(cls); if (Py_IS_TYPE(self, state->sha256_type)) { if ((newobj = newSHA256object(state)) == NULL) { return NULL; @@ -245,7 +246,7 @@ SHA512Type_copy_impl(SHA512object *self, PyTypeObject *cls) /*[clinic end generated code: output=66d2a8ef20de8302 input=f673a18f66527c90]*/ { SHA512object *newobj; - sha2_state *state = PyType_GetModuleState(cls); + sha2_state *state = _PyType_GetModuleState(cls); if (Py_IS_TYPE((PyObject*)self, state->sha512_type)) { if ((newobj = newSHA512object(state)) == NULL) { @@ -482,7 +483,7 @@ static PyType_Slot sha512_type_slots[] = { {0,0} }; -// Using PyType_GetModuleState() on these types is safe since they +// Using _PyType_GetModuleState() on these types is safe since they // cannot be subclassed: they don't have the Py_TPFLAGS_BASETYPE flag. static PyType_Spec sha224_type_spec = { .name = "_sha2.SHA224Type", From webhook-mailer at python.org Fri Feb 24 16:43:10 2023 From: webhook-mailer at python.org (iritkatriel) Date: Fri, 24 Feb 2023 21:43:10 -0000 Subject: [Python-checkins] gh-102192: Replace PyErr_Fetch/Restore etc by more efficient alternatives (in Modules/) (#102196) Message-ID: https://github.com/python/cpython/commit/2db23d10bf64bf7c061fd95c6a8079ddc5c9aa4b commit: 2db23d10bf64bf7c061fd95c6a8079ddc5c9aa4b branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-24T21:43:03Z summary: gh-102192: Replace PyErr_Fetch/Restore etc by more efficient alternatives (in Modules/) (#102196) files: M Modules/_asynciomodule.c M Modules/_io/_iomodule.c M Modules/_io/bufferedio.c M Modules/_io/fileio.c M Modules/_io/iobase.c M Modules/_io/textio.c M Modules/_io/winconsoleio.c M Modules/_lsprof.c M Modules/_sqlite/connection.c M Modules/_sqlite/cursor.c M Modules/_testcapi/heaptype.c M Modules/_testcapi/watchers.c M Modules/_testcapimodule.c M Modules/_xxinterpchannelsmodule.c M Modules/_zoneinfo.c M Modules/posixmodule.c M Modules/signalmodule.c M Modules/socketmodule.c diff --git a/Modules/_asynciomodule.c b/Modules/_asynciomodule.c index 21b2ca1971f9..13d98eedf32f 100644 --- a/Modules/_asynciomodule.c +++ b/Modules/_asynciomodule.c @@ -1422,7 +1422,6 @@ _asyncio_Future__make_cancelled_error_impl(FutureObj *self) static void FutureObj_finalize(FutureObj *fut) { - PyObject *error_type, *error_value, *error_traceback; PyObject *context; PyObject *message = NULL; PyObject *func; @@ -1434,7 +1433,7 @@ FutureObj_finalize(FutureObj *fut) fut->fut_log_tb = 0; /* Save the current exception, if any. */ - PyErr_Fetch(&error_type, &error_value, &error_traceback); + PyObject *exc = PyErr_GetRaisedException(); context = PyDict_New(); if (context == NULL) { @@ -1476,7 +1475,7 @@ FutureObj_finalize(FutureObj *fut) Py_XDECREF(message); /* Restore the saved exception. 
*/ - PyErr_Restore(error_type, error_value, error_traceback); + PyErr_SetRaisedException(exc); } static PyMethodDef FutureType_methods[] = { @@ -2491,14 +2490,13 @@ TaskObj_finalize(TaskObj *task) PyObject *context; PyObject *message = NULL; PyObject *func; - PyObject *error_type, *error_value, *error_traceback; if (task->task_state != STATE_PENDING || !task->task_log_destroy_pending) { goto done; } /* Save the current exception, if any. */ - PyErr_Fetch(&error_type, &error_value, &error_traceback); + PyObject *exc = PyErr_GetRaisedException(); context = PyDict_New(); if (context == NULL) { @@ -2541,7 +2539,7 @@ TaskObj_finalize(TaskObj *task) Py_XDECREF(message); /* Restore the saved exception. */ - PyErr_Restore(error_type, error_value, error_traceback); + PyErr_SetRaisedException(exc); done: FutureObj_finalize((FutureObj*)task); @@ -2766,8 +2764,6 @@ task_step_impl(asyncio_state *state, TaskObj *task, PyObject *exc) } if (gen_status == PYGEN_RETURN || gen_status == PYGEN_ERROR) { - PyObject *et, *ev, *tb; - if (result != NULL) { /* The error is StopIteration and that means that the underlying coroutine has resolved */ @@ -2794,52 +2790,39 @@ task_step_impl(asyncio_state *state, TaskObj *task, PyObject *exc) if (PyErr_ExceptionMatches(state->asyncio_CancelledError)) { /* CancelledError */ - PyErr_Fetch(&et, &ev, &tb); - assert(et); - PyErr_NormalizeException(&et, &ev, &tb); - if (tb != NULL) { - PyException_SetTraceback(ev, tb); - Py_DECREF(tb); - } - Py_XDECREF(et); + + PyObject *exc = PyErr_GetRaisedException(); + assert(exc); FutureObj *fut = (FutureObj*)task; /* transfer ownership */ - fut->fut_cancelled_exc = ev; + fut->fut_cancelled_exc = exc; return future_cancel(state, fut, NULL); } /* Some other exception; pop it and call Task.set_exception() */ - PyErr_Fetch(&et, &ev, &tb); - assert(et); - PyErr_NormalizeException(&et, &ev, &tb); - if (tb != NULL) { - PyException_SetTraceback(ev, tb); - } + PyObject *exc = PyErr_GetRaisedException(); + assert(exc); - o = future_set_exception(state, (FutureObj*)task, ev); + o = future_set_exception(state, (FutureObj*)task, exc); if (!o) { /* An exception in Task.set_exception() */ - Py_DECREF(et); - Py_XDECREF(tb); - Py_XDECREF(ev); + Py_DECREF(exc); goto fail; } assert(o == Py_None); Py_DECREF(o); - if (PyErr_GivenExceptionMatches(et, PyExc_KeyboardInterrupt) || - PyErr_GivenExceptionMatches(et, PyExc_SystemExit)) + if (PyErr_GivenExceptionMatches(exc, PyExc_KeyboardInterrupt) || + PyErr_GivenExceptionMatches(exc, PyExc_SystemExit)) { /* We've got a KeyboardInterrupt or a SystemError; re-raise it */ - PyErr_Restore(et, ev, tb); + PyErr_SetRaisedException(exc); goto fail; } - Py_DECREF(et); - Py_XDECREF(tb); - Py_XDECREF(ev); + Py_DECREF(exc); Py_RETURN_NONE; } @@ -3059,10 +3042,9 @@ task_step(asyncio_state *state, TaskObj *task, PyObject *exc) res = task_step_impl(state, task, exc); if (res == NULL) { - PyObject *et, *ev, *tb; - PyErr_Fetch(&et, &ev, &tb); + PyObject *exc = PyErr_GetRaisedException(); leave_task(state, task->task_loop, (PyObject*)task); - _PyErr_ChainExceptions(et, ev, tb); /* Normalizes (et, ev, tb) */ + _PyErr_ChainExceptions1(exc); return NULL; } else { @@ -3079,7 +3061,6 @@ task_step(asyncio_state *state, TaskObj *task, PyObject *exc) static PyObject * task_wakeup(TaskObj *task, PyObject *o) { - PyObject *et, *ev, *tb; PyObject *result; assert(o); @@ -3111,18 +3092,12 @@ task_wakeup(TaskObj *task, PyObject *o) /* exception raised */ } - PyErr_Fetch(&et, &ev, &tb); - assert(et); - PyErr_NormalizeException(&et, &ev, &tb); - 
if (tb != NULL) { - PyException_SetTraceback(ev, tb); - } + PyObject *exc = PyErr_GetRaisedException(); + assert(exc); - result = task_step(state, task, ev); + result = task_step(state, task, exc); - Py_DECREF(et); - Py_XDECREF(tb); - Py_XDECREF(ev); + Py_DECREF(exc); return result; } diff --git a/Modules/_io/_iomodule.c b/Modules/_io/_iomodule.c index 55b6535eb34b..d8d836b8382e 100644 --- a/Modules/_io/_iomodule.c +++ b/Modules/_io/_iomodule.c @@ -437,10 +437,9 @@ _io_open_impl(PyObject *module, PyObject *file, const char *mode, error: if (result != NULL) { - PyObject *exc, *val, *tb, *close_result; - PyErr_Fetch(&exc, &val, &tb); - close_result = PyObject_CallMethodNoArgs(result, &_Py_ID(close)); - _PyErr_ChainExceptions(exc, val, tb); + PyObject *exc = PyErr_GetRaisedException(); + PyObject *close_result = PyObject_CallMethodNoArgs(result, &_Py_ID(close)); + _PyErr_ChainExceptions1(exc); Py_XDECREF(close_result); Py_DECREF(result); } diff --git a/Modules/_io/bufferedio.c b/Modules/_io/bufferedio.c index 56491f097100..960026707fc5 100644 --- a/Modules/_io/bufferedio.c +++ b/Modules/_io/bufferedio.c @@ -472,12 +472,13 @@ buffered_closed_get(buffered *self, void *context) static PyObject * buffered_close(buffered *self, PyObject *args) { - PyObject *res = NULL, *exc = NULL, *val, *tb; + PyObject *res = NULL; int r; CHECK_INITIALIZED(self) - if (!ENTER_BUFFERED(self)) + if (!ENTER_BUFFERED(self)) { return NULL; + } r = buffered_closed(self); if (r < 0) @@ -497,12 +498,16 @@ buffered_close(buffered *self, PyObject *args) /* flush() will most probably re-take the lock, so drop it first */ LEAVE_BUFFERED(self) res = PyObject_CallMethodNoArgs((PyObject *)self, &_Py_ID(flush)); - if (!ENTER_BUFFERED(self)) + if (!ENTER_BUFFERED(self)) { return NULL; - if (res == NULL) - PyErr_Fetch(&exc, &val, &tb); - else + } + PyObject *exc = NULL; + if (res == NULL) { + exc = PyErr_GetRaisedException(); + } + else { Py_DECREF(res); + } res = PyObject_CallMethodNoArgs(self->raw, &_Py_ID(close)); @@ -512,7 +517,7 @@ buffered_close(buffered *self, PyObject *args) } if (exc != NULL) { - _PyErr_ChainExceptions(exc, val, tb); + _PyErr_ChainExceptions1(exc); Py_CLEAR(res); } @@ -637,17 +642,14 @@ _set_BlockingIOError(const char *msg, Py_ssize_t written) static Py_ssize_t * _buffered_check_blocking_error(void) { - PyObject *t, *v, *tb; - PyOSErrorObject *err; - - PyErr_Fetch(&t, &v, &tb); - if (v == NULL || !PyErr_GivenExceptionMatches(v, PyExc_BlockingIOError)) { - PyErr_Restore(t, v, tb); + PyObject *exc = PyErr_GetRaisedException(); + if (exc == NULL || !PyErr_GivenExceptionMatches(exc, PyExc_BlockingIOError)) { + PyErr_SetRaisedException(exc); return NULL; } - err = (PyOSErrorObject *) v; + PyOSErrorObject *err = (PyOSErrorObject *)exc; /* TODO: sanity check (err->written >= 0) */ - PyErr_Restore(t, v, tb); + PyErr_SetRaisedException(exc); return &err->written; } @@ -749,13 +751,11 @@ _buffered_init(buffered *self) int _PyIO_trap_eintr(void) { - PyObject *typ, *val, *tb; - PyOSErrorObject *env_err; - if (!PyErr_ExceptionMatches(PyExc_OSError)) + if (!PyErr_ExceptionMatches(PyExc_OSError)) { return 0; - PyErr_Fetch(&typ, &val, &tb); - PyErr_NormalizeException(&typ, &val, &tb); - env_err = (PyOSErrorObject *) val; + } + PyObject *exc = PyErr_GetRaisedException(); + PyOSErrorObject *env_err = (PyOSErrorObject *)exc; assert(env_err != NULL); if (env_err->myerrno != NULL) { assert(EINTR > 0 && EINTR < INT_MAX); @@ -764,14 +764,12 @@ _PyIO_trap_eintr(void) int myerrno = PyLong_AsLongAndOverflow(env_err->myerrno, 
&overflow); PyErr_Clear(); if (myerrno == EINTR) { - Py_DECREF(typ); - Py_DECREF(val); - Py_XDECREF(tb); + Py_DECREF(exc); return 1; } } /* This silences any error set by PyObject_RichCompareBool() */ - PyErr_Restore(typ, val, tb); + PyErr_SetRaisedException(exc); return 0; } @@ -2228,15 +2226,17 @@ bufferedrwpair_writable(rwpair *self, PyObject *Py_UNUSED(ignored)) static PyObject * bufferedrwpair_close(rwpair *self, PyObject *Py_UNUSED(ignored)) { - PyObject *exc = NULL, *val, *tb; + PyObject *exc = NULL; PyObject *ret = _forward_call(self->writer, &_Py_ID(close), NULL); - if (ret == NULL) - PyErr_Fetch(&exc, &val, &tb); - else + if (ret == NULL) { + exc = PyErr_GetRaisedException(); + } + else { Py_DECREF(ret); + } ret = _forward_call(self->reader, &_Py_ID(close), NULL); if (exc != NULL) { - _PyErr_ChainExceptions(exc, val, tb); + _PyErr_ChainExceptions1(exc); Py_CLEAR(ret); } return ret; diff --git a/Modules/_io/fileio.c b/Modules/_io/fileio.c index f424fb8439d7..35a498ce5a83 100644 --- a/Modules/_io/fileio.c +++ b/Modules/_io/fileio.c @@ -88,14 +88,13 @@ static PyObject * fileio_dealloc_warn(fileio *self, PyObject *source) { if (self->fd >= 0 && self->closefd) { - PyObject *exc, *val, *tb; - PyErr_Fetch(&exc, &val, &tb); + PyObject *exc = PyErr_GetRaisedException(); if (PyErr_ResourceWarning(source, 1, "unclosed file %R", source)) { /* Spurious errors can appear at shutdown */ if (PyErr_ExceptionMatches(PyExc_Warning)) PyErr_WriteUnraisable((PyObject *) self); } - PyErr_Restore(exc, val, tb); + PyErr_SetRaisedException(exc); } Py_RETURN_NONE; } @@ -140,7 +139,7 @@ _io_FileIO_close_impl(fileio *self) /*[clinic end generated code: output=7737a319ef3bad0b input=f35231760d54a522]*/ { PyObject *res; - PyObject *exc, *val, *tb; + PyObject *exc; int rc; res = PyObject_CallMethodOneArg((PyObject*)&PyRawIOBase_Type, &_Py_ID(close), (PyObject *)self); @@ -148,20 +147,25 @@ _io_FileIO_close_impl(fileio *self) self->fd = -1; return res; } - if (res == NULL) - PyErr_Fetch(&exc, &val, &tb); + if (res == NULL) { + exc = PyErr_GetRaisedException(); + } if (self->finalizing) { PyObject *r = fileio_dealloc_warn(self, (PyObject *) self); - if (r) + if (r) { Py_DECREF(r); - else + } + else { PyErr_Clear(); + } } rc = internal_close(self); - if (res == NULL) - _PyErr_ChainExceptions(exc, val, tb); - if (rc < 0) + if (res == NULL) { + _PyErr_ChainExceptions1(exc); + } + if (rc < 0) { Py_CLEAR(res); + } return res; } @@ -487,10 +491,9 @@ _io_FileIO___init___impl(fileio *self, PyObject *nameobj, const char *mode, if (!fd_is_own) self->fd = -1; if (self->fd >= 0) { - PyObject *exc, *val, *tb; - PyErr_Fetch(&exc, &val, &tb); + PyObject *exc = PyErr_GetRaisedException(); internal_close(self); - _PyErr_ChainExceptions(exc, val, tb); + _PyErr_ChainExceptions1(exc); } done: diff --git a/Modules/_io/iobase.c b/Modules/_io/iobase.c index 7b9391ec54d7..682ed000eb1f 100644 --- a/Modules/_io/iobase.c +++ b/Modules/_io/iobase.c @@ -220,7 +220,6 @@ static PyObject * _io__IOBase_close_impl(PyObject *self) /*[clinic end generated code: output=63c6a6f57d783d6d input=f4494d5c31dbc6b7]*/ { - PyObject *res, *exc, *val, *tb; int rc, closed = iobase_is_closed(self); if (closed < 0) { @@ -230,11 +229,11 @@ _io__IOBase_close_impl(PyObject *self) Py_RETURN_NONE; } - res = PyObject_CallMethodNoArgs(self, &_Py_ID(flush)); + PyObject *res = PyObject_CallMethodNoArgs(self, &_Py_ID(flush)); - PyErr_Fetch(&exc, &val, &tb); + PyObject *exc = PyErr_GetRaisedException(); rc = PyObject_SetAttr(self, &_Py_ID(__IOBase_closed), Py_True); - 
_PyErr_ChainExceptions(exc, val, tb); + _PyErr_ChainExceptions1(exc); if (rc < 0) { Py_CLEAR(res); } @@ -252,11 +251,10 @@ static void iobase_finalize(PyObject *self) { PyObject *res; - PyObject *error_type, *error_value, *error_traceback; int closed; /* Save the current exception, if any. */ - PyErr_Fetch(&error_type, &error_value, &error_traceback); + PyObject *exc = PyErr_GetRaisedException(); /* If `closed` doesn't exist or can't be evaluated as bool, then the object is probably in an unusable state, so ignore. */ @@ -297,7 +295,7 @@ iobase_finalize(PyObject *self) } /* Restore the saved exception. */ - PyErr_Restore(error_type, error_value, error_traceback); + PyErr_SetRaisedException(exc); } int diff --git a/Modules/_io/textio.c b/Modules/_io/textio.c index fbf0bf468403..3ff84cb623af 100644 --- a/Modules/_io/textio.c +++ b/Modules/_io/textio.c @@ -2827,11 +2827,10 @@ _io_TextIOWrapper_tell_impl(textio *self) fail: if (saved_state) { - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); + PyObject *exc = PyErr_GetRaisedException(); res = PyObject_CallMethodOneArg( self->decoder, &_Py_ID(setstate), saved_state); - _PyErr_ChainExceptions(type, value, traceback); + _PyErr_ChainExceptions1(exc); Py_DECREF(saved_state); Py_XDECREF(res); } @@ -3028,24 +3027,28 @@ _io_TextIOWrapper_close_impl(textio *self) Py_RETURN_NONE; /* stream already closed */ } else { - PyObject *exc = NULL, *val, *tb; + PyObject *exc = NULL; if (self->finalizing) { res = PyObject_CallMethodOneArg(self->buffer, &_Py_ID(_dealloc_warn), (PyObject *)self); - if (res) + if (res) { Py_DECREF(res); - else + } + else { PyErr_Clear(); + } } res = PyObject_CallMethodNoArgs((PyObject *)self, &_Py_ID(flush)); - if (res == NULL) - PyErr_Fetch(&exc, &val, &tb); - else + if (res == NULL) { + exc = PyErr_GetRaisedException(); + } + else { Py_DECREF(res); + } res = PyObject_CallMethodNoArgs(self->buffer, &_Py_ID(close)); if (exc != NULL) { - _PyErr_ChainExceptions(exc, val, tb); + _PyErr_ChainExceptions1(exc); Py_CLEAR(res); } return res; diff --git a/Modules/_io/winconsoleio.c b/Modules/_io/winconsoleio.c index e913d8318746..de07b50f5ce4 100644 --- a/Modules/_io/winconsoleio.c +++ b/Modules/_io/winconsoleio.c @@ -192,7 +192,7 @@ _io__WindowsConsoleIO_close_impl(winconsoleio *self) /*[clinic end generated code: output=27ef95b66c29057b input=68c4e5754f8136c2]*/ { PyObject *res; - PyObject *exc, *val, *tb; + PyObject *exc; int rc; res = PyObject_CallMethodOneArg((PyObject*)&PyRawIOBase_Type, &_Py_ID(close), (PyObject*)self); @@ -200,13 +200,16 @@ _io__WindowsConsoleIO_close_impl(winconsoleio *self) self->fd = -1; return res; } - if (res == NULL) - PyErr_Fetch(&exc, &val, &tb); + if (res == NULL) { + exc = PyErr_GetRaisedException(); + } rc = internal_close(self); - if (res == NULL) - _PyErr_ChainExceptions(exc, val, tb); - if (rc < 0) + if (res == NULL) { + _PyErr_ChainExceptions1(exc); + } + if (rc < 0) { Py_CLEAR(res); + } return res; } diff --git a/Modules/_lsprof.c b/Modules/_lsprof.c index 3237d796dc29..83d034ae7eed 100644 --- a/Modules/_lsprof.c +++ b/Modules/_lsprof.c @@ -348,8 +348,7 @@ ptrace_enter_call(PyObject *self, void *key, PyObject *userObj) * exception, and some of the code under here assumes that * PyErr_* is its own to mess around with, so we have to * save and restore any current exception. 
*/ - PyObject *last_type, *last_value, *last_tb; - PyErr_Fetch(&last_type, &last_value, &last_tb); + PyObject *exc = PyErr_GetRaisedException(); profEntry = getEntry(pObj, key); if (profEntry == NULL) { @@ -374,7 +373,7 @@ ptrace_enter_call(PyObject *self, void *key, PyObject *userObj) initContext(pObj, pContext, profEntry); restorePyerr: - PyErr_Restore(last_type, last_value, last_tb); + PyErr_SetRaisedException(exc); } static void diff --git a/Modules/_sqlite/connection.c b/Modules/_sqlite/connection.c index 4c07d5e0b61f..fb61ef82ef86 100644 --- a/Modules/_sqlite/connection.c +++ b/Modules/_sqlite/connection.c @@ -908,7 +908,6 @@ final_callback(sqlite3_context *context) PyObject* function_result; PyObject** aggregate_instance; int ok; - PyObject *exception, *value, *tb; aggregate_instance = (PyObject**)sqlite3_aggregate_context(context, 0); if (aggregate_instance == NULL) { @@ -923,7 +922,7 @@ final_callback(sqlite3_context *context) } // Keep the exception (if any) of the last call to step, value, or inverse - PyErr_Fetch(&exception, &value, &tb); + PyObject *exc = PyErr_GetRaisedException(); callback_context *ctx = (callback_context *)sqlite3_user_data(context); assert(ctx != NULL); @@ -938,7 +937,7 @@ final_callback(sqlite3_context *context) } if (!ok) { int attr_err = PyErr_ExceptionMatches(PyExc_AttributeError); - _PyErr_ChainExceptions(exception, value, tb); + _PyErr_ChainExceptions1(exc); /* Note: contrary to the step, value, and inverse callbacks, SQLite * does _not_, as of SQLite 3.38.0, propagate errors to sqlite3_step() @@ -949,7 +948,7 @@ final_callback(sqlite3_context *context) : "user-defined aggregate's 'finalize' method raised error"); } else { - PyErr_Restore(exception, value, tb); + PyErr_SetRaisedException(exc); } error: @@ -2274,15 +2273,14 @@ pysqlite_connection_exit_impl(pysqlite_Connection *self, PyObject *exc_type, if (commit) { /* Commit failed; try to rollback in order to unlock the database. * If rollback also fails, chain the exceptions. 
*/ - PyObject *exc, *val, *tb; - PyErr_Fetch(&exc, &val, &tb); + PyObject *exc = PyErr_GetRaisedException(); result = pysqlite_connection_rollback_impl(self); if (result == NULL) { - _PyErr_ChainExceptions(exc, val, tb); + _PyErr_ChainExceptions1(exc); } else { Py_DECREF(result); - PyErr_Restore(exc, val, tb); + PyErr_SetRaisedException(exc); } } return NULL; diff --git a/Modules/_sqlite/cursor.c b/Modules/_sqlite/cursor.c index 6f7970cf8197..caeedbddb8d8 100644 --- a/Modules/_sqlite/cursor.c +++ b/Modules/_sqlite/cursor.c @@ -705,11 +705,10 @@ bind_parameters(pysqlite_state *state, pysqlite_Statement *self, Py_DECREF(adapted); if (rc != SQLITE_OK) { - PyObject *exc, *val, *tb; - PyErr_Fetch(&exc, &val, &tb); + PyObject *exc = PyErr_GetRaisedException(); sqlite3 *db = sqlite3_db_handle(self->st); _pysqlite_seterror(state, db); - _PyErr_ChainExceptions(exc, val, tb); + _PyErr_ChainExceptions1(exc); return; } } @@ -765,11 +764,10 @@ bind_parameters(pysqlite_state *state, pysqlite_Statement *self, Py_DECREF(adapted); if (rc != SQLITE_OK) { - PyObject *exc, *val, *tb; - PyErr_Fetch(&exc, &val, &tb); + PyObject *exc = PyErr_GetRaisedException(); sqlite3 *db = sqlite3_db_handle(self->st); _pysqlite_seterror(state, db); - _PyErr_ChainExceptions(exc, val, tb); + _PyErr_ChainExceptions1(exc); return; } } diff --git a/Modules/_testcapi/heaptype.c b/Modules/_testcapi/heaptype.c index 39639f7ed048..df2a061ed82b 100644 --- a/Modules/_testcapi/heaptype.c +++ b/Modules/_testcapi/heaptype.c @@ -623,16 +623,15 @@ heapctypesubclasswithfinalizer_init(PyObject *self, PyObject *args, PyObject *kw static void heapctypesubclasswithfinalizer_finalize(PyObject *self) { - PyObject *error_type, *error_value, *error_traceback, *m; PyObject *oldtype = NULL, *newtype = NULL, *refcnt = NULL; /* Save the current exception, if any. */ - PyErr_Fetch(&error_type, &error_value, &error_traceback); + PyObject *exc = PyErr_GetRaisedException(); if (_testcapimodule == NULL) { goto cleanup_finalize; } - m = PyState_FindModule(_testcapimodule); + PyObject *m = PyState_FindModule(_testcapimodule); if (m == NULL) { goto cleanup_finalize; } @@ -667,7 +666,7 @@ heapctypesubclasswithfinalizer_finalize(PyObject *self) Py_XDECREF(refcnt); /* Restore the saved exception. 
*/ - PyErr_Restore(error_type, error_value, error_traceback); + PyErr_SetRaisedException(exc); } static PyType_Slot HeapCTypeSubclassWithFinalizer_slots[] = { diff --git a/Modules/_testcapi/watchers.c b/Modules/_testcapi/watchers.c index 2e8fe1dbf786..d9ace632768a 100644 --- a/Modules/_testcapi/watchers.c +++ b/Modules/_testcapi/watchers.c @@ -389,16 +389,15 @@ allocate_too_many_code_watchers(PyObject *self, PyObject *args) watcher_ids[i] = watcher_id; num_watchers++; } - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); + PyObject *exc = PyErr_GetRaisedException(); for (int i = 0; i < num_watchers; i++) { if (PyCode_ClearWatcher(watcher_ids[i]) < 0) { PyErr_WriteUnraisable(Py_None); break; } } - if (type) { - PyErr_Restore(type, value, traceback); + if (exc) { + PyErr_SetRaisedException(exc); return NULL; } else if (PyErr_Occurred()) { @@ -578,16 +577,15 @@ allocate_too_many_func_watchers(PyObject *self, PyObject *args) watcher_ids[i] = watcher_id; num_watchers++; } - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); + PyObject *exc = PyErr_GetRaisedException(); for (int i = 0; i < num_watchers; i++) { if (PyFunction_ClearWatcher(watcher_ids[i]) < 0) { PyErr_WriteUnraisable(Py_None); break; } } - if (type) { - PyErr_Restore(type, value, traceback); + if (exc) { + PyErr_SetRaisedException(exc); return NULL; } else if (PyErr_Occurred()) { diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index 6bb424282a87..fc716a3564d3 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -1611,19 +1611,18 @@ static void slot_tp_del(PyObject *self) { PyObject *del, *res; - PyObject *error_type, *error_value, *error_traceback; /* Temporarily resurrect the object. */ assert(Py_REFCNT(self) == 0); Py_SET_REFCNT(self, 1); /* Save the current exception, if any. */ - PyErr_Fetch(&error_type, &error_value, &error_traceback); + PyObject *exc = PyErr_GetRaisedException(); PyObject *tp_del = PyUnicode_InternFromString("__tp_del__"); if (tp_del == NULL) { PyErr_WriteUnraisable(NULL); - PyErr_Restore(error_type, error_value, error_traceback); + PyErr_SetRaisedException(exc); return; } /* Execute __del__ method, if any. */ @@ -1638,7 +1637,7 @@ slot_tp_del(PyObject *self) } /* Restore the saved exception. */ - PyErr_Restore(error_type, error_value, error_traceback); + PyErr_SetRaisedException(exc); /* Undo the temporary resurrection; can't use DECREF here, it would * cause a recursive call. 
diff --git a/Modules/_xxinterpchannelsmodule.c b/Modules/_xxinterpchannelsmodule.c index 60538c318748..a0cd4a2363fb 100644 --- a/Modules/_xxinterpchannelsmodule.c +++ b/Modules/_xxinterpchannelsmodule.c @@ -100,9 +100,9 @@ add_new_type(PyObject *mod, PyType_Spec *spec, crossinterpdatafunc shared) static int _release_xid_data(_PyCrossInterpreterData *data, int ignoreexc) { - PyObject *exctype, *excval, *exctb; + PyObject *exc; if (ignoreexc) { - PyErr_Fetch(&exctype, &excval, &exctb); + exc = PyErr_GetRaisedException(); } int res = _PyCrossInterpreterData_Release(data); if (res < 0) { @@ -125,7 +125,7 @@ _release_xid_data(_PyCrossInterpreterData *data, int ignoreexc) } } if (ignoreexc) { - PyErr_Restore(exctype, excval, exctb); + PyErr_SetRaisedException(exc); } return res; } diff --git a/Modules/_zoneinfo.c b/Modules/_zoneinfo.c index 6e1a37611b61..c215a75b804f 100644 --- a/Modules/_zoneinfo.c +++ b/Modules/_zoneinfo.c @@ -270,10 +270,9 @@ zoneinfo_new_instance(zoneinfo_state *state, PyTypeObject *type, PyObject *key) Py_CLEAR(self); cleanup: if (file_obj != NULL) { - PyObject *exc, *val, *tb; - PyErr_Fetch(&exc, &val, &tb); + PyObject *exc = PyErr_GetRaisedException(); PyObject *tmp = PyObject_CallMethod(file_obj, "close", NULL); - _PyErr_ChainExceptions(exc, val, tb); + _PyErr_ChainExceptions1(exc); if (tmp == NULL) { Py_CLEAR(self); } diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c index 51aa89ef715a..6ea216b3bf5e 100644 --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -14759,10 +14759,9 @@ ScandirIterator_exit(ScandirIterator *self, PyObject *args) static void ScandirIterator_finalize(ScandirIterator *iterator) { - PyObject *error_type, *error_value, *error_traceback; /* Save the current exception, if any. */ - PyErr_Fetch(&error_type, &error_value, &error_traceback); + PyObject *exc = PyErr_GetRaisedException(); if (!ScandirIterator_is_closed(iterator)) { ScandirIterator_closedir(iterator); @@ -14779,7 +14778,7 @@ ScandirIterator_finalize(ScandirIterator *iterator) path_cleanup(&iterator->path); /* Restore the saved exception. 
*/ - PyErr_Restore(error_type, error_value, error_traceback); + PyErr_SetRaisedException(exc); } static void diff --git a/Modules/signalmodule.c b/Modules/signalmodule.c index 44a5ecf63e9d..cd26eca351c0 100644 --- a/Modules/signalmodule.c +++ b/Modules/signalmodule.c @@ -234,14 +234,13 @@ signal_default_int_handler_impl(PyObject *module, int signalnum, static int report_wakeup_write_error(void *data) { - PyObject *exc, *val, *tb; int save_errno = errno; errno = (int) (intptr_t) data; - PyErr_Fetch(&exc, &val, &tb); + PyObject *exc = PyErr_GetRaisedException(); PyErr_SetFromErrno(PyExc_OSError); _PyErr_WriteUnraisableMsg("when trying to write to the signal wakeup fd", NULL); - PyErr_Restore(exc, val, tb); + PyErr_SetRaisedException(exc); errno = save_errno; return 0; } @@ -252,14 +251,13 @@ report_wakeup_send_error(void* data) { int send_errno = (int) (intptr_t) data; - PyObject *exc, *val, *tb; - PyErr_Fetch(&exc, &val, &tb); + PyObject *exc = PyErr_GetRaisedException(); /* PyErr_SetExcFromWindowsErr() invokes FormatMessage() which recognizes the error codes used by both GetLastError() and WSAGetLastError */ PyErr_SetExcFromWindowsErr(PyExc_OSError, send_errno); _PyErr_WriteUnraisableMsg("when trying to send to the signal wakeup fd", NULL); - PyErr_Restore(exc, val, tb); + PyErr_SetRaisedException(exc); return 0; } #endif /* MS_WINDOWS */ diff --git a/Modules/socketmodule.c b/Modules/socketmodule.c index 2d300f19436b..00608be38f61 100644 --- a/Modules/socketmodule.c +++ b/Modules/socketmodule.c @@ -5175,10 +5175,9 @@ static void sock_finalize(PySocketSockObject *s) { SOCKET_T fd; - PyObject *error_type, *error_value, *error_traceback; /* Save the current exception, if any. */ - PyErr_Fetch(&error_type, &error_value, &error_traceback); + PyObject *exc = PyErr_GetRaisedException(); if (s->sock_fd != INVALID_SOCKET) { if (PyErr_ResourceWarning((PyObject *)s, 1, "unclosed %R", s)) { @@ -5202,7 +5201,7 @@ sock_finalize(PySocketSockObject *s) } /* Restore the saved exception. */ - PyErr_Restore(error_type, error_value, error_traceback); + PyErr_SetRaisedException(exc); } static void From webhook-mailer at python.org Fri Feb 24 17:58:16 2023 From: webhook-mailer at python.org (jaraco) Date: Fri, 24 Feb 2023 22:58:16 -0000 Subject: [Python-checkins] gh-102209: Disable the timeout in test_implied_dirs_performance. (#102225) Message-ID: https://github.com/python/cpython/commit/89b4c1205327cc8032f4a39e3dfbdb59009a0704 commit: 89b4c1205327cc8032f4a39e3dfbdb59009a0704 branch: main author: Jason R. Coombs committer: jaraco date: 2023-02-24T17:58:10-05:00 summary: gh-102209: Disable the timeout in test_implied_dirs_performance. (#102225) Disable the timeout in test_implied_dirs_performance. Workaround for #102209 until I can work out a more robust test for linearity. 
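A sketch of one possible linearity check, offered purely as an illustration and not as part of this commit (the helper name measure_implied_dirs, the input sizes, and the 40x slack factor are assumptions), would time zipfile.CompleteDirs._implied_dirs at two input sizes, mirroring the data built in test_implied_dirs_performance below, and require the larger run to stay well under quadratic growth:

    import string
    import time
    import zipfile

    def measure_implied_dirs(n):
        # Same shape of synthetic entry names as test_implied_dirs_performance.
        data = ['/'.join(string.ascii_lowercase + str(i)) for i in range(n)]
        start = time.perf_counter()
        zipfile.CompleteDirs._implied_dirs(data)
        return time.perf_counter() - start

    # Take the best of a few runs to dampen scheduling noise.
    small = min(measure_implied_dirs(1_000) for _ in range(3))
    large = min(measure_implied_dirs(10_000) for _ in range(3))

    # Linear behaviour keeps the 10x input near 10x the cost; quadratic
    # behaviour pushes it toward 100x.  The generous 40x slack is meant to
    # reduce flaky failures on slow or heavily loaded CI machines.
    assert large < small * 40

Wall-clock checks of this kind remain inherently noisy, which is the trade-off the commit message alludes to; counting operations instead of elapsed time (as test_joinpath_constant_time does below with its counting iterator) avoids that noise at the cost of a less direct measurement.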
files: M Lib/test/test_zipfile/test_path.py diff --git a/Lib/test/test_zipfile/test_path.py b/Lib/test/test_zipfile/test_path.py index 92fda690b2fc..53cbef15a17d 100644 --- a/Lib/test/test_zipfile/test_path.py +++ b/Lib/test/test_zipfile/test_path.py @@ -330,7 +330,8 @@ def test_joinpath_constant_time(self): # Check the file iterated all items assert entries.count == self.HUGE_ZIPFILE_NUM_ENTRIES - @set_timeout(3) + # timeout disabled due to #102209 + # @set_timeout(3) def test_implied_dirs_performance(self): data = ['/'.join(string.ascii_lowercase + str(n)) for n in range(10000)] zipfile.CompleteDirs._implied_dirs(data) From webhook-mailer at python.org Fri Feb 24 18:02:11 2023 From: webhook-mailer at python.org (JelleZijlstra) Date: Fri, 24 Feb 2023 23:02:11 -0000 Subject: [Python-checkins] gh-101765: Fix SystemError / segmentation fault in iter `__reduce__` when internal access of `builtins.__dict__` exhausts the iterator (#101769) Message-ID: https://github.com/python/cpython/commit/54dfa14c5a94b893b67a4d9e9e403ff538ce9023 commit: 54dfa14c5a94b893b67a4d9e9e403ff538ce9023 branch: main author: Ionite committer: JelleZijlstra date: 2023-02-24T15:02:04-08:00 summary: gh-101765: Fix SystemError / segmentation fault in iter `__reduce__` when internal access of `builtins.__dict__` exhausts the iterator (#101769) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst M Lib/test/test_iter.py M Objects/bytearrayobject.c M Objects/bytesobject.c M Objects/genericaliasobject.c M Objects/iterobject.c M Objects/listobject.c M Objects/tupleobject.c M Objects/unicodeobject.c diff --git a/Lib/test/test_iter.py b/Lib/test/test_iter.py index acbdcb5f3020..6ab9b7a72303 100644 --- a/Lib/test/test_iter.py +++ b/Lib/test/test_iter.py @@ -7,6 +7,9 @@ from test.support import check_free_after_iterating, ALWAYS_EQ, NEVER_EQ import pickle import collections.abc +import functools +import contextlib +import builtins # Test result of triple loop (too big to inline) TRIPLETS = [(0, 0, 0), (0, 0, 1), (0, 0, 2), @@ -91,6 +94,12 @@ def __call__(self): raise IndexError # Emergency stop return i +class EmptyIterClass: + def __len__(self): + return 0 + def __getitem__(self, i): + raise StopIteration + # Main test suite class TestCase(unittest.TestCase): @@ -238,6 +247,78 @@ def test_mutating_seq_class_exhausted_iter(self): self.assertEqual(list(empit), [5, 6]) self.assertEqual(list(a), [0, 1, 2, 3, 4, 5, 6]) + def test_reduce_mutating_builtins_iter(self): + # This is a reproducer of issue #101765 + # where iter `__reduce__` calls could lead to a segfault or SystemError + # depending on the order of C argument evaluation, which is undefined + + # Backup builtins + builtins_dict = builtins.__dict__ + orig = {"iter": iter, "reversed": reversed} + + def run(builtin_name, item, sentinel=None): + it = iter(item) if sentinel is None else iter(item, sentinel) + + class CustomStr: + def __init__(self, name, iterator): + self.name = name + self.iterator = iterator + def __hash__(self): + return hash(self.name) + def __eq__(self, other): + # Here we exhaust our iterator, possibly changing + # its `it_seq` pointer to NULL + # The `__reduce__` call should correctly get + # the pointers after this call + list(self.iterator) + return other == self.name + + # del is required here + # to not prematurely call __eq__ from + # the hash collision with the old key + del builtins_dict[builtin_name] + builtins_dict[CustomStr(builtin_name, it)] = orig[builtin_name] + + return it.__reduce__() + + types = [ + 
(EmptyIterClass(),), + (bytes(8),), + (bytearray(8),), + ((1, 2, 3),), + (lambda: 0, 0), + (tuple[int],) # GenericAlias + ] + + try: + run_iter = functools.partial(run, "iter") + # The returned value of `__reduce__` should not only be valid + # but also *empty*, as `it` was exhausted during `__eq__` + # i.e "xyz" returns (iter, ("",)) + self.assertEqual(run_iter("xyz"), (orig["iter"], ("",))) + self.assertEqual(run_iter([1, 2, 3]), (orig["iter"], ([],))) + + # _PyEval_GetBuiltin is also called for `reversed` in a branch of + # listiter_reduce_general + self.assertEqual( + run("reversed", orig["reversed"](list(range(8)))), + (iter, ([],)) + ) + + for case in types: + self.assertEqual(run_iter(*case), (orig["iter"], ((),))) + finally: + # Restore original builtins + for key, func in orig.items(): + # need to suppress KeyErrors in case + # a failed test deletes the key without setting anything + with contextlib.suppress(KeyError): + # del is required here + # to not invoke our custom __eq__ from + # the hash collision with the old key + del builtins_dict[key] + builtins_dict[key] = func + # Test a new_style class with __iter__ but no next() method def test_new_style_iter_class(self): class IterClass(object): diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst new file mode 100644 index 000000000000..cc99779a944e --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst @@ -0,0 +1 @@ +Fix SystemError / segmentation fault in iter ``__reduce__`` when internal access of ``builtins.__dict__`` keys mutates the iter object. diff --git a/Objects/bytearrayobject.c b/Objects/bytearrayobject.c index 072089e39aa2..49d4dd524696 100644 --- a/Objects/bytearrayobject.c +++ b/Objects/bytearrayobject.c @@ -2391,11 +2391,16 @@ PyDoc_STRVAR(length_hint_doc, static PyObject * bytearrayiter_reduce(bytesiterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_seq != NULL) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { - return Py_BuildValue("N(())", _PyEval_GetBuiltin(&_Py_ID(iter))); + return Py_BuildValue("N(())", iter); } } diff --git a/Objects/bytesobject.c b/Objects/bytesobject.c index ba2c2e978c6e..657443f31fa7 100644 --- a/Objects/bytesobject.c +++ b/Objects/bytesobject.c @@ -3169,11 +3169,16 @@ PyDoc_STRVAR(length_hint_doc, static PyObject * striter_reduce(striterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. 
+ * see issue #101765 */ + if (it->it_seq != NULL) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { - return Py_BuildValue("N(())", _PyEval_GetBuiltin(&_Py_ID(iter))); + return Py_BuildValue("N(())", iter); } } diff --git a/Objects/genericaliasobject.c b/Objects/genericaliasobject.c index 675fd496eefd..888cb16edd1b 100644 --- a/Objects/genericaliasobject.c +++ b/Objects/genericaliasobject.c @@ -877,8 +877,17 @@ ga_iter_clear(PyObject *self) { static PyObject * ga_iter_reduce(PyObject *self, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); gaiterobject *gi = (gaiterobject *)self; - return Py_BuildValue("N(O)", _PyEval_GetBuiltin(&_Py_ID(iter)), gi->obj); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + + if (gi->obj) + return Py_BuildValue("N(O)", iter, gi->obj); + else + return Py_BuildValue("N(())", iter); } static PyMethodDef ga_iter_methods[] = { diff --git a/Objects/iterobject.c b/Objects/iterobject.c index cfd6d0a7c959..149b701b68e9 100644 --- a/Objects/iterobject.c +++ b/Objects/iterobject.c @@ -102,11 +102,16 @@ PyDoc_STRVAR(length_hint_doc, "Private method returning an estimate of len(list( static PyObject * iter_reduce(seqiterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_seq != NULL) - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); else - return Py_BuildValue("N(())", _PyEval_GetBuiltin(&_Py_ID(iter))); + return Py_BuildValue("N(())", iter); } PyDoc_STRVAR(reduce_doc, "Return state information for pickling."); @@ -239,11 +244,16 @@ calliter_iternext(calliterobject *it) static PyObject * calliter_reduce(calliterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_callable != NULL && it->it_sentinel != NULL) - return Py_BuildValue("N(OO)", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_callable, it->it_sentinel); + return Py_BuildValue("N(OO)", iter, it->it_callable, it->it_sentinel); else - return Py_BuildValue("N(())", _PyEval_GetBuiltin(&_Py_ID(iter))); + return Py_BuildValue("N(())", iter); } static PyMethodDef calliter_methods[] = { diff --git a/Objects/listobject.c b/Objects/listobject.c index ca6b71231134..494b40178f9f 100644 --- a/Objects/listobject.c +++ b/Objects/listobject.c @@ -3444,18 +3444,22 @@ listiter_reduce_general(void *_it, int forward) { PyObject *list; + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + /* the objects are not the same, index is of different types! 
*/ if (forward) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); _PyListIterObject *it = (_PyListIterObject *)_it; if (it->it_seq) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } } else { + PyObject *reversed = _PyEval_GetBuiltin(&_Py_ID(reversed)); listreviterobject *it = (listreviterobject *)_it; if (it->it_seq) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(reversed)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", reversed, it->it_seq, it->it_index); } } /* empty iterator, create an empty list */ diff --git a/Objects/tupleobject.c b/Objects/tupleobject.c index 7d6d0e1bea24..6ee93ab5adc2 100644 --- a/Objects/tupleobject.c +++ b/Objects/tupleobject.c @@ -1048,11 +1048,16 @@ PyDoc_STRVAR(length_hint_doc, "Private method returning an estimate of len(list( static PyObject * tupleiter_reduce(_PyTupleIterObject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_seq) - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); else - return Py_BuildValue("N(())", _PyEval_GetBuiltin(&_Py_ID(iter))); + return Py_BuildValue("N(())", iter); } static PyObject * diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c index f0c7aa7707fd..6403e359c707 100644 --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -14784,14 +14784,19 @@ PyDoc_STRVAR(length_hint_doc, "Private method returning an estimate of len(list( static PyObject * unicodeiter_reduce(unicodeiterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_seq != NULL) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { PyObject *u = unicode_new_empty(); if (u == NULL) return NULL; - return Py_BuildValue("N(N)", _PyEval_GetBuiltin(&_Py_ID(iter)), u); + return Py_BuildValue("N(N)", iter, u); } } From webhook-mailer at python.org Fri Feb 24 21:26:46 2023 From: webhook-mailer at python.org (corona10) Date: Sat, 25 Feb 2023 02:26:46 -0000 Subject: [Python-checkins] gh-95675: fix uid and gid at test_add_dir_getmember (gh-102207) Message-ID: https://github.com/python/cpython/commit/56e93c8020e89e1712aa238574bca2076a225028 commit: 56e93c8020e89e1712aa238574bca2076a225028 branch: main author: SKO <41810398+uyw4687 at users.noreply.github.com> committer: corona10 date: 2023-02-25T11:26:40+09:00 summary: gh-95675: fix uid and gid at test_add_dir_getmember (gh-102207) Co-authored-by: Seonkyo Ok files: M Lib/test/test_tarfile.py M Misc/ACKS diff --git a/Lib/test/test_tarfile.py b/Lib/test/test_tarfile.py index f15a80097668..75b60e9a50e7 100644 --- a/Lib/test/test_tarfile.py +++ b/Lib/test/test_tarfile.py @@ -225,18 +225,19 @@ def test_add_dir_getmember(self): self.add_dir_and_getmember('bar') self.add_dir_and_getmember('a'*101) - @unittest.skipIf( - (hasattr(os, 'getuid') and os.getuid() > 0o777_7777) or - (hasattr(os, 'getgid') and os.getgid() > 0o777_7777), - "uid or gid too high for USTAR format." 
- ) + @unittest.skipUnless(hasattr(os, "getuid") and hasattr(os, "getgid"), + "Missing getuid or getgid implementation") def add_dir_and_getmember(self, name): + def filter(tarinfo): + tarinfo.uid = tarinfo.gid = 100 + return tarinfo + with os_helper.temp_cwd(): with tarfile.open(tmpname, 'w') as tar: tar.format = tarfile.USTAR_FORMAT try: os.mkdir(name) - tar.add(name) + tar.add(name, filter=filter) finally: os.rmdir(name) with tarfile.open(tmpname) as tar: diff --git a/Misc/ACKS b/Misc/ACKS index 43e420a1373c..3403aee4cc78 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1308,6 +1308,7 @@ Jon Oberheide Milan Oberkirch Pascal Oberndoerfer G?ry Ogam +Seonkyo Ok Jeffrey Ollie Adam Olsen Bryan Olson From webhook-mailer at python.org Fri Feb 24 21:55:55 2023 From: webhook-mailer at python.org (corona10) Date: Sat, 25 Feb 2023 02:55:55 -0000 Subject: [Python-checkins] [3.11] gh-95675: fix uid and gid at test_add_dir_getmember (gh-102207) (gh-102231) Message-ID: https://github.com/python/cpython/commit/9da3e7f3894aceefed5543cdd4e236cc83bfa003 commit: 9da3e7f3894aceefed5543cdd4e236cc83bfa003 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: corona10 date: 2023-02-25T11:55:48+09:00 summary: [3.11] gh-95675: fix uid and gid at test_add_dir_getmember (gh-102207) (gh-102231) gh-95675: fix uid and gid at test_add_dir_getmember (gh-102207) (cherry picked from commit 56e93c8020e89e1712aa238574bca2076a225028) Co-authored-by: Seonkyo Ok files: M Lib/test/test_tarfile.py M Misc/ACKS diff --git a/Lib/test/test_tarfile.py b/Lib/test/test_tarfile.py index 7a0830f68602..bde105f78a82 100644 --- a/Lib/test/test_tarfile.py +++ b/Lib/test/test_tarfile.py @@ -225,18 +225,19 @@ def test_add_dir_getmember(self): self.add_dir_and_getmember('bar') self.add_dir_and_getmember('a'*101) - @unittest.skipIf( - (hasattr(os, 'getuid') and os.getuid() > 0o777_7777) or - (hasattr(os, 'getgid') and os.getgid() > 0o777_7777), - "uid or gid too high for USTAR format." 
- ) + @unittest.skipUnless(hasattr(os, "getuid") and hasattr(os, "getgid"), + "Missing getuid or getgid implementation") def add_dir_and_getmember(self, name): + def filter(tarinfo): + tarinfo.uid = tarinfo.gid = 100 + return tarinfo + with os_helper.temp_cwd(): with tarfile.open(tmpname, 'w') as tar: tar.format = tarfile.USTAR_FORMAT try: os.mkdir(name) - tar.add(name) + tar.add(name, filter=filter) finally: os.rmdir(name) with tarfile.open(tmpname) as tar: diff --git a/Misc/ACKS b/Misc/ACKS index 95e0589076c6..22df1b998fb1 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1297,6 +1297,7 @@ Jon Oberheide Milan Oberkirch Pascal Oberndoerfer G?ry Ogam +Seonkyo Ok Jeffrey Ollie Adam Olsen Bryan Olson From webhook-mailer at python.org Fri Feb 24 21:56:14 2023 From: webhook-mailer at python.org (corona10) Date: Sat, 25 Feb 2023 02:56:14 -0000 Subject: [Python-checkins] [3.10] gh-95675: fix uid and gid at test_add_dir_getmember (gh-102207) (gh-102230) Message-ID: https://github.com/python/cpython/commit/3e80d21b7673edf70753e14d88907c60bc6970c3 commit: 3e80d21b7673edf70753e14d88907c60bc6970c3 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: corona10 date: 2023-02-25T11:56:08+09:00 summary: [3.10] gh-95675: fix uid and gid at test_add_dir_getmember (gh-102207) (gh-102230) gh-95675: fix uid and gid at test_add_dir_getmember (gh-102207) (cherry picked from commit 56e93c8020e89e1712aa238574bca2076a225028) Co-authored-by: Seonkyo Ok files: M Lib/test/test_tarfile.py M Misc/ACKS diff --git a/Lib/test/test_tarfile.py b/Lib/test/test_tarfile.py index cba95e9320ec..89f5a561b4aa 100644 --- a/Lib/test/test_tarfile.py +++ b/Lib/test/test_tarfile.py @@ -225,18 +225,19 @@ def test_add_dir_getmember(self): self.add_dir_and_getmember('bar') self.add_dir_and_getmember('a'*101) - @unittest.skipIf( - (hasattr(os, 'getuid') and os.getuid() > 0o777_7777) or - (hasattr(os, 'getgid') and os.getgid() > 0o777_7777), - "uid or gid too high for USTAR format." 
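The fix shown in these three commits replaces the uid/gid-based skip with a filter callable passed to TarFile.add(), so archived members always carry USTAR-safe ownership values regardless of who runs the tests. A minimal standalone sketch of that pattern follows; the path names and the uid/gid value of 100 are illustrative assumptions, not part of the original change:

    import os
    import tarfile

    def normalize_owner(tarinfo):
        # Force deterministic, USTAR-safe ownership so the archive does not
        # depend on the uid/gid of whoever happens to run the code.
        tarinfo.uid = tarinfo.gid = 100
        tarinfo.uname = tarinfo.gname = ""
        return tarinfo

    os.makedirs("payload", exist_ok=True)          # assumed scratch directory
    with tarfile.open("example.tar", "w", format=tarfile.USTAR_FORMAT) as tar:
        tar.add("payload", filter=normalize_owner)

    with tarfile.open("example.tar") as tar:
        member = tar.getmember("payload")
        print(member.uid, member.gid)              # 100 100

Returning the (possibly modified) TarInfo object keeps the member; returning None from the filter would omit it from the archive entirely.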
- ) + @unittest.skipUnless(hasattr(os, "getuid") and hasattr(os, "getgid"), + "Missing getuid or getgid implementation") def add_dir_and_getmember(self, name): + def filter(tarinfo): + tarinfo.uid = tarinfo.gid = 100 + return tarinfo + with os_helper.temp_cwd(): with tarfile.open(tmpname, 'w') as tar: tar.format = tarfile.USTAR_FORMAT try: os.mkdir(name) - tar.add(name) + tar.add(name, filter=filter) finally: os.rmdir(name) with tarfile.open(tmpname) as tar: diff --git a/Misc/ACKS b/Misc/ACKS index 10a35fd64abf..b4be8af3d99f 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1286,6 +1286,7 @@ Jon Oberheide Milan Oberkirch Pascal Oberndoerfer G?ry Ogam +Seonkyo Ok Jeffrey Ollie Adam Olsen Bryan Olson From webhook-mailer at python.org Fri Feb 24 22:50:29 2023 From: webhook-mailer at python.org (JelleZijlstra) Date: Sat, 25 Feb 2023 03:50:29 -0000 Subject: [Python-checkins] [3.11] gh-101765: Fix SystemError / segmentation fault in iter `__reduce__` when internal access of `builtins.__dict__` exhausts the iterator (GH-101769) (#102228) Message-ID: https://github.com/python/cpython/commit/5d461225a5595bc682b5e9fd4cab41199304ff94 commit: 5d461225a5595bc682b5e9fd4cab41199304ff94 branch: 3.11 author: Ionite committer: JelleZijlstra date: 2023-02-24T19:49:59-08:00 summary: [3.11] gh-101765: Fix SystemError / segmentation fault in iter `__reduce__` when internal access of `builtins.__dict__` exhausts the iterator (GH-101769) (#102228) (cherry picked from commit 54dfa14c5a94b893b67a4d9e9e403ff538ce9023) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst M Lib/test/test_iter.py M Objects/bytearrayobject.c M Objects/bytesobject.c M Objects/genericaliasobject.c M Objects/iterobject.c M Objects/listobject.c M Objects/tupleobject.c M Objects/unicodeobject.c diff --git a/Lib/test/test_iter.py b/Lib/test/test_iter.py index acbdcb5f3020..6ab9b7a72303 100644 --- a/Lib/test/test_iter.py +++ b/Lib/test/test_iter.py @@ -7,6 +7,9 @@ from test.support import check_free_after_iterating, ALWAYS_EQ, NEVER_EQ import pickle import collections.abc +import functools +import contextlib +import builtins # Test result of triple loop (too big to inline) TRIPLETS = [(0, 0, 0), (0, 0, 1), (0, 0, 2), @@ -91,6 +94,12 @@ def __call__(self): raise IndexError # Emergency stop return i +class EmptyIterClass: + def __len__(self): + return 0 + def __getitem__(self, i): + raise StopIteration + # Main test suite class TestCase(unittest.TestCase): @@ -238,6 +247,78 @@ def test_mutating_seq_class_exhausted_iter(self): self.assertEqual(list(empit), [5, 6]) self.assertEqual(list(a), [0, 1, 2, 3, 4, 5, 6]) + def test_reduce_mutating_builtins_iter(self): + # This is a reproducer of issue #101765 + # where iter `__reduce__` calls could lead to a segfault or SystemError + # depending on the order of C argument evaluation, which is undefined + + # Backup builtins + builtins_dict = builtins.__dict__ + orig = {"iter": iter, "reversed": reversed} + + def run(builtin_name, item, sentinel=None): + it = iter(item) if sentinel is None else iter(item, sentinel) + + class CustomStr: + def __init__(self, name, iterator): + self.name = name + self.iterator = iterator + def __hash__(self): + return hash(self.name) + def __eq__(self, other): + # Here we exhaust our iterator, possibly changing + # its `it_seq` pointer to NULL + # The `__reduce__` call should correctly get + # the pointers after this call + list(self.iterator) + return other == self.name + + # del is required here + # to not prematurely call __eq__ from + # the 
hash collision with the old key + del builtins_dict[builtin_name] + builtins_dict[CustomStr(builtin_name, it)] = orig[builtin_name] + + return it.__reduce__() + + types = [ + (EmptyIterClass(),), + (bytes(8),), + (bytearray(8),), + ((1, 2, 3),), + (lambda: 0, 0), + (tuple[int],) # GenericAlias + ] + + try: + run_iter = functools.partial(run, "iter") + # The returned value of `__reduce__` should not only be valid + # but also *empty*, as `it` was exhausted during `__eq__` + # i.e "xyz" returns (iter, ("",)) + self.assertEqual(run_iter("xyz"), (orig["iter"], ("",))) + self.assertEqual(run_iter([1, 2, 3]), (orig["iter"], ([],))) + + # _PyEval_GetBuiltin is also called for `reversed` in a branch of + # listiter_reduce_general + self.assertEqual( + run("reversed", orig["reversed"](list(range(8)))), + (iter, ([],)) + ) + + for case in types: + self.assertEqual(run_iter(*case), (orig["iter"], ((),))) + finally: + # Restore original builtins + for key, func in orig.items(): + # need to suppress KeyErrors in case + # a failed test deletes the key without setting anything + with contextlib.suppress(KeyError): + # del is required here + # to not invoke our custom __eq__ from + # the hash collision with the old key + del builtins_dict[key] + builtins_dict[key] = func + # Test a new_style class with __iter__ but no next() method def test_new_style_iter_class(self): class IterClass(object): diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst new file mode 100644 index 000000000000..cc99779a944e --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst @@ -0,0 +1 @@ +Fix SystemError / segmentation fault in iter ``__reduce__`` when internal access of ``builtins.__dict__`` keys mutates the iter object. diff --git a/Objects/bytearrayobject.c b/Objects/bytearrayobject.c index dfeed68416cf..6d46ebe2a448 100644 --- a/Objects/bytearrayobject.c +++ b/Objects/bytearrayobject.c @@ -2394,11 +2394,16 @@ PyDoc_STRVAR(length_hint_doc, static PyObject * bytearrayiter_reduce(bytesiterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_seq != NULL) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { - return Py_BuildValue("N(())", _PyEval_GetBuiltin(&_Py_ID(iter))); + return Py_BuildValue("N(())", iter); } } diff --git a/Objects/bytesobject.c b/Objects/bytesobject.c index 17a2661ef3be..8dd1a2d52467 100644 --- a/Objects/bytesobject.c +++ b/Objects/bytesobject.c @@ -3191,11 +3191,16 @@ PyDoc_STRVAR(length_hint_doc, static PyObject * striter_reduce(striterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. 
+ * see issue #101765 */ + if (it->it_seq != NULL) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { - return Py_BuildValue("N(())", _PyEval_GetBuiltin(&_Py_ID(iter))); + return Py_BuildValue("N(())", iter); } } diff --git a/Objects/genericaliasobject.c b/Objects/genericaliasobject.c index 77acd1bc57a5..139bab7e6c03 100644 --- a/Objects/genericaliasobject.c +++ b/Objects/genericaliasobject.c @@ -885,8 +885,17 @@ ga_iter_clear(PyObject *self) { static PyObject * ga_iter_reduce(PyObject *self, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); gaiterobject *gi = (gaiterobject *)self; - return Py_BuildValue("N(O)", _PyEval_GetBuiltin(&_Py_ID(iter)), gi->obj); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + + if (gi->obj) + return Py_BuildValue("N(O)", iter, gi->obj); + else + return Py_BuildValue("N(())", iter); } static PyMethodDef ga_iter_methods[] = { diff --git a/Objects/iterobject.c b/Objects/iterobject.c index 1732a037600c..ae09d2c0570e 100644 --- a/Objects/iterobject.c +++ b/Objects/iterobject.c @@ -103,11 +103,16 @@ PyDoc_STRVAR(length_hint_doc, "Private method returning an estimate of len(list( static PyObject * iter_reduce(seqiterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_seq != NULL) - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); else - return Py_BuildValue("N(())", _PyEval_GetBuiltin(&_Py_ID(iter))); + return Py_BuildValue("N(())", iter); } PyDoc_STRVAR(reduce_doc, "Return state information for pickling."); @@ -242,11 +247,16 @@ calliter_iternext(calliterobject *it) static PyObject * calliter_reduce(calliterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_callable != NULL && it->it_sentinel != NULL) - return Py_BuildValue("N(OO)", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_callable, it->it_sentinel); + return Py_BuildValue("N(OO)", iter, it->it_callable, it->it_sentinel); else - return Py_BuildValue("N(())", _PyEval_GetBuiltin(&_Py_ID(iter))); + return Py_BuildValue("N(())", iter); } static PyMethodDef calliter_methods[] = { diff --git a/Objects/listobject.c b/Objects/listobject.c index 19009133c827..0b20829a280a 100644 --- a/Objects/listobject.c +++ b/Objects/listobject.c @@ -3452,18 +3452,22 @@ listiter_reduce_general(void *_it, int forward) { PyObject *list; + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + /* the objects are not the same, index is of different types! 
*/ if (forward) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); listiterobject *it = (listiterobject *)_it; if (it->it_seq) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } } else { + PyObject *reversed = _PyEval_GetBuiltin(&_Py_ID(reversed)); listreviterobject *it = (listreviterobject *)_it; if (it->it_seq) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(reversed)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", reversed, it->it_seq, it->it_index); } } /* empty iterator, create an empty list */ diff --git a/Objects/tupleobject.c b/Objects/tupleobject.c index dfb8597b876e..1e82a5b674e0 100644 --- a/Objects/tupleobject.c +++ b/Objects/tupleobject.c @@ -1072,11 +1072,16 @@ PyDoc_STRVAR(length_hint_doc, "Private method returning an estimate of len(list( static PyObject * tupleiter_reduce(tupleiterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_seq) - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); else - return Py_BuildValue("N(())", _PyEval_GetBuiltin(&_Py_ID(iter))); + return Py_BuildValue("N(())", iter); } static PyObject * diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c index a19eaadcfc2e..02343d7c9326 100644 --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -15757,14 +15757,19 @@ PyDoc_STRVAR(length_hint_doc, "Private method returning an estimate of len(list( static PyObject * unicodeiter_reduce(unicodeiterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + + /* _PyEval_GetBuiltin can invoke arbitrary code, + * call must be before access of iterator pointers. 
+ * see issue #101765 */ + if (it->it_seq != NULL) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltin(&_Py_ID(iter)), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { PyObject *u = (PyObject *)_PyUnicode_New(0); if (u == NULL) return NULL; - return Py_BuildValue("N(N)", _PyEval_GetBuiltin(&_Py_ID(iter)), u); + return Py_BuildValue("N(N)", iter, u); } } From webhook-mailer at python.org Fri Feb 24 22:51:00 2023 From: webhook-mailer at python.org (JelleZijlstra) Date: Sat, 25 Feb 2023 03:51:00 -0000 Subject: [Python-checkins] [3.10] gh-101765: Fix SystemError / segmentation fault in iter `__reduce__` when internal access of `builtins.__dict__` exhausts the iterator (GH-101769) (#102229) Message-ID: https://github.com/python/cpython/commit/9f472f81bc2636eda212c12796b8d6581783bcef commit: 9f472f81bc2636eda212c12796b8d6581783bcef branch: 3.10 author: Ionite committer: JelleZijlstra date: 2023-02-24T19:50:53-08:00 summary: [3.10] gh-101765: Fix SystemError / segmentation fault in iter `__reduce__` when internal access of `builtins.__dict__` exhausts the iterator (GH-101769) (#102229) (cherry picked from commit 54dfa14c5a94b893b67a4d9e9e403ff538ce9023) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst M Lib/test/test_iter.py M Objects/bytearrayobject.c M Objects/bytesobject.c M Objects/iterobject.c M Objects/listobject.c M Objects/tupleobject.c M Objects/unicodeobject.c diff --git a/Lib/test/test_iter.py b/Lib/test/test_iter.py index 554f602f6252..3130ec5410c9 100644 --- a/Lib/test/test_iter.py +++ b/Lib/test/test_iter.py @@ -7,6 +7,9 @@ from test.support import check_free_after_iterating, ALWAYS_EQ, NEVER_EQ import pickle import collections.abc +import functools +import contextlib +import builtins # Test result of triple loop (too big to inline) TRIPLETS = [(0, 0, 0), (0, 0, 1), (0, 0, 2), @@ -81,6 +84,12 @@ class BadIterableClass: def __iter__(self): raise ZeroDivisionError +class EmptyIterClass: + def __len__(self): + return 0 + def __getitem__(self, i): + raise StopIteration + # Main test suite class TestCase(unittest.TestCase): @@ -228,6 +237,77 @@ def test_mutating_seq_class_exhausted_iter(self): self.assertEqual(list(empit), [5, 6]) self.assertEqual(list(a), [0, 1, 2, 3, 4, 5, 6]) + def test_reduce_mutating_builtins_iter(self): + # This is a reproducer of issue #101765 + # where iter `__reduce__` calls could lead to a segfault or SystemError + # depending on the order of C argument evaluation, which is undefined + + # Backup builtins + builtins_dict = builtins.__dict__ + orig = {"iter": iter, "reversed": reversed} + + def run(builtin_name, item, sentinel=None): + it = iter(item) if sentinel is None else iter(item, sentinel) + + class CustomStr: + def __init__(self, name, iterator): + self.name = name + self.iterator = iterator + def __hash__(self): + return hash(self.name) + def __eq__(self, other): + # Here we exhaust our iterator, possibly changing + # its `it_seq` pointer to NULL + # The `__reduce__` call should correctly get + # the pointers after this call + list(self.iterator) + return other == self.name + + # del is required here + # to not prematurely call __eq__ from + # the hash collision with the old key + del builtins_dict[builtin_name] + builtins_dict[CustomStr(builtin_name, it)] = orig[builtin_name] + + return it.__reduce__() + + types = [ + (EmptyIterClass(),), + (bytes(8),), + (bytearray(8),), + ((1, 2, 3),), + (lambda: 0, 0), + ] + + try: + run_iter = functools.partial(run, 
"iter") + # The returned value of `__reduce__` should not only be valid + # but also *empty*, as `it` was exhausted during `__eq__` + # i.e "xyz" returns (iter, ("",)) + self.assertEqual(run_iter("xyz"), (orig["iter"], ("",))) + self.assertEqual(run_iter([1, 2, 3]), (orig["iter"], ([],))) + + # _PyEval_GetBuiltin is also called for `reversed` in a branch of + # listiter_reduce_general + self.assertEqual( + run("reversed", orig["reversed"](list(range(8)))), + (iter, ([],)) + ) + + for case in types: + self.assertEqual(run_iter(*case), (orig["iter"], ((),))) + finally: + # Restore original builtins + for key, func in orig.items(): + # need to suppress KeyErrors in case + # a failed test deletes the key without setting anything + with contextlib.suppress(KeyError): + # del is required here + # to not invoke our custom __eq__ from + # the hash collision with the old key + del builtins_dict[key] + builtins_dict[key] = func + # Test a new_style class with __iter__ but no next() method def test_new_style_iter_class(self): class IterClass(object): diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst new file mode 100644 index 000000000000..cc99779a944e --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-10-07-21-47.gh-issue-101765.MO5LlC.rst @@ -0,0 +1 @@ +Fix SystemError / segmentation fault in iter ``__reduce__`` when internal access of ``builtins.__dict__`` keys mutates the iter object. diff --git a/Objects/bytearrayobject.c b/Objects/bytearrayobject.c index 2a675d5d460d..c7fe8b22086c 100644 --- a/Objects/bytearrayobject.c +++ b/Objects/bytearrayobject.c @@ -2442,11 +2442,16 @@ static PyObject * bytearrayiter_reduce(bytesiterobject *it, PyObject *Py_UNUSED(ignored)) { _Py_IDENTIFIER(iter); + PyObject *iter = _PyEval_GetBuiltinId(&PyId_iter); + + /* _PyEval_GetBuiltinId can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_seq != NULL) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltinId(&PyId_iter), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { - return Py_BuildValue("N(())", _PyEval_GetBuiltinId(&PyId_iter)); + return Py_BuildValue("N(())", iter); } } diff --git a/Objects/bytesobject.c b/Objects/bytesobject.c index 4bcb2eb49298..acbc26ada89b 100644 --- a/Objects/bytesobject.c +++ b/Objects/bytesobject.c @@ -3146,11 +3146,16 @@ static PyObject * striter_reduce(striterobject *it, PyObject *Py_UNUSED(ignored)) { _Py_IDENTIFIER(iter); + PyObject *iter = _PyEval_GetBuiltinId(&PyId_iter); + + /* _PyEval_GetBuiltinId can invoke arbitrary code, + * call must be before access of iterator pointers. 
+ * see issue #101765 */ + if (it->it_seq != NULL) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltinId(&PyId_iter), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { - return Py_BuildValue("N(())", _PyEval_GetBuiltinId(&PyId_iter)); + return Py_BuildValue("N(())", iter); } } diff --git a/Objects/iterobject.c b/Objects/iterobject.c index e493e41131b7..980c04bd1154 100644 --- a/Objects/iterobject.c +++ b/Objects/iterobject.c @@ -104,11 +104,16 @@ PyDoc_STRVAR(length_hint_doc, "Private method returning an estimate of len(list( static PyObject * iter_reduce(seqiterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltinId(&PyId_iter); + + /* _PyEval_GetBuiltinId can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_seq != NULL) - return Py_BuildValue("N(O)n", _PyEval_GetBuiltinId(&PyId_iter), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); else - return Py_BuildValue("N(())", _PyEval_GetBuiltinId(&PyId_iter)); + return Py_BuildValue("N(())", iter); } PyDoc_STRVAR(reduce_doc, "Return state information for pickling."); @@ -243,11 +248,16 @@ calliter_iternext(calliterobject *it) static PyObject * calliter_reduce(calliterobject *it, PyObject *Py_UNUSED(ignored)) { + PyObject *iter = _PyEval_GetBuiltinId(&PyId_iter); + + /* _PyEval_GetBuiltinId can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_callable != NULL && it->it_sentinel != NULL) - return Py_BuildValue("N(OO)", _PyEval_GetBuiltinId(&PyId_iter), - it->it_callable, it->it_sentinel); + return Py_BuildValue("N(OO)", iter, it->it_callable, it->it_sentinel); else - return Py_BuildValue("N(())", _PyEval_GetBuiltinId(&PyId_iter)); + return Py_BuildValue("N(())", iter); } static PyMethodDef calliter_methods[] = { diff --git a/Objects/listobject.c b/Objects/listobject.c index 484d3741ec08..f46d5e45cc5e 100644 --- a/Objects/listobject.c +++ b/Objects/listobject.c @@ -3405,17 +3405,23 @@ listiter_reduce_general(void *_it, int forward) _Py_IDENTIFIER(reversed); PyObject *list; + /* _PyEval_GetBuiltinId can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + /* the objects are not the same, index is of different types! 
*/ if (forward) { + PyObject *iter = _PyEval_GetBuiltinId(&PyId_iter); listiterobject *it = (listiterobject *)_it; - if (it->it_seq) - return Py_BuildValue("N(O)n", _PyEval_GetBuiltinId(&PyId_iter), - it->it_seq, it->it_index); + if (it->it_seq) { + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); + } } else { + PyObject *reversed = _PyEval_GetBuiltinId(&PyId_reversed); listreviterobject *it = (listreviterobject *)_it; - if (it->it_seq) - return Py_BuildValue("N(O)n", _PyEval_GetBuiltinId(&PyId_reversed), - it->it_seq, it->it_index); + if (it->it_seq) { + return Py_BuildValue("N(O)n", reversed, it->it_seq, it->it_index); + } } /* empty iterator, create an empty list */ list = PyList_New(0); diff --git a/Objects/tupleobject.c b/Objects/tupleobject.c index 6b1ab740121e..a401bf9d088b 100644 --- a/Objects/tupleobject.c +++ b/Objects/tupleobject.c @@ -1115,11 +1115,16 @@ static PyObject * tupleiter_reduce(tupleiterobject *it, PyObject *Py_UNUSED(ignored)) { _Py_IDENTIFIER(iter); + PyObject *iter = _PyEval_GetBuiltinId(&PyId_iter); + + /* _PyEval_GetBuiltinId can invoke arbitrary code, + * call must be before access of iterator pointers. + * see issue #101765 */ + if (it->it_seq) - return Py_BuildValue("N(O)n", _PyEval_GetBuiltinId(&PyId_iter), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); else - return Py_BuildValue("N(())", _PyEval_GetBuiltinId(&PyId_iter)); + return Py_BuildValue("N(())", iter); } static PyObject * diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c index b7ec1f28d7e8..5a1865fb3493 100644 --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -16070,14 +16070,19 @@ static PyObject * unicodeiter_reduce(unicodeiterobject *it, PyObject *Py_UNUSED(ignored)) { _Py_IDENTIFIER(iter); + PyObject *iter = _PyEval_GetBuiltinId(&PyId_iter); + + /* _PyEval_GetBuiltinId can invoke arbitrary code, + * call must be before access of iterator pointers. 
+ * see issue #101765 */ + if (it->it_seq != NULL) { - return Py_BuildValue("N(O)n", _PyEval_GetBuiltinId(&PyId_iter), - it->it_seq, it->it_index); + return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { PyObject *u = (PyObject *)_PyUnicode_New(0); if (u == NULL) return NULL; - return Py_BuildValue("N(N)", _PyEval_GetBuiltinId(&PyId_iter), u); + return Py_BuildValue("N(N)", iter, u); } } From webhook-mailer at python.org Sat Feb 25 01:52:13 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Sat, 25 Feb 2023 06:52:13 -0000 Subject: [Python-checkins] GH-102126: fix deadlock at shutdown when clearing thread states (#102222) Message-ID: https://github.com/python/cpython/commit/5f11478ce7fda826d399530af4c5ca96c592f144 commit: 5f11478ce7fda826d399530af4c5ca96c592f144 branch: main author: Kumar Aditya <59607654+kumaraditya303 at users.noreply.github.com> committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-25T12:21:36+05:30 summary: GH-102126: fix deadlock at shutdown when clearing thread states (#102222) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-24-17-59-39.gh-issue-102126.HTT8Vc.rst M Python/pystate.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-24-17-59-39.gh-issue-102126.HTT8Vc.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-24-17-59-39.gh-issue-102126.HTT8Vc.rst new file mode 100644 index 000000000000..68c43688c3df --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-24-17-59-39.gh-issue-102126.HTT8Vc.rst @@ -0,0 +1 @@ +Fix deadlock at shutdown when clearing thread states if any finalizer tries to acquire the runtime head lock. Patch by Kumar Aditya. diff --git a/Python/pystate.c b/Python/pystate.c index 32b17fd19e34..3c655bf38958 100644 --- a/Python/pystate.c +++ b/Python/pystate.c @@ -754,12 +754,19 @@ interpreter_clear(PyInterpreterState *interp, PyThreadState *tstate) _PyErr_Clear(tstate); } + // Clear the current/main thread state last. HEAD_LOCK(runtime); - // XXX Clear the current/main thread state last. - for (PyThreadState *p = interp->threads.head; p != NULL; p = p->next) { + PyThreadState *p = interp->threads.head; + HEAD_UNLOCK(runtime); + while (p != NULL) { + // See https://github.com/python/cpython/issues/102126 + // Must be called without HEAD_LOCK held as it can deadlock + // if any finalizer tries to acquire that lock. PyThreadState_Clear(p); + HEAD_LOCK(runtime); + p = p->next; + HEAD_UNLOCK(runtime); } - HEAD_UNLOCK(runtime); /* It is possible that any of the objects below have a finalizer that runs Python code or otherwise relies on a thread state From webhook-mailer at python.org Sat Feb 25 04:42:52 2023 From: webhook-mailer at python.org (hugovk) Date: Sat, 25 Feb 2023 09:42:52 -0000 Subject: [Python-checkins] gh-101100: Fix Sphinx warnings in `decimal` module (#102125) Message-ID: https://github.com/python/cpython/commit/b7c11264476ccc11e4bdf4bd3c3664ccd1b7c5f9 commit: b7c11264476ccc11e4bdf4bd3c3664ccd1b7c5f9 branch: main author: Hugo van Kemenade committer: hugovk date: 2023-02-25T11:42:45+02:00 summary: gh-101100: Fix Sphinx warnings in `decimal` module (#102125) Co-authored-by: C.A.M. Gerlach files: M Doc/library/decimal.rst diff --git a/Doc/library/decimal.rst b/Doc/library/decimal.rst index fec9b86864c5..6187098a7520 100644 --- a/Doc/library/decimal.rst +++ b/Doc/library/decimal.rst @@ -40,23 +40,23 @@ decimal floating point arithmetic. It offers several advantages over the people learn at school." -- excerpt from the decimal arithmetic specification. 
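A quick interactive check of the exactness and significance claims made in the decimal introduction quoted here, using the same values the text itself cites:

    from decimal import Decimal

    # Binary floating point: representation error shows up immediately.
    print(0.1 + 0.1 + 0.1 - 0.3)        # 5.551115123125783e-17
    print(1.1 + 2.2)                    # 3.3000000000000003

    # Decimal, constructed from strings, is exact and preserves significance.
    print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") - Decimal("0.3"))   # 0.0
    print(Decimal("1.1") + Decimal("2.2"))      # 3.3
    print(Decimal("1.30") + Decimal("1.20"))    # 2.50
    print(Decimal("1.30") * Decimal("1.20"))    # 1.5600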
* Decimal numbers can be represented exactly. In contrast, numbers like - :const:`1.1` and :const:`2.2` do not have exact representations in binary + ``1.1`` and ``2.2`` do not have exact representations in binary floating point. End users typically would not expect ``1.1 + 2.2`` to display - as :const:`3.3000000000000003` as it does with binary floating point. + as ``3.3000000000000003`` as it does with binary floating point. * The exactness carries over into arithmetic. In decimal floating point, ``0.1 + 0.1 + 0.1 - 0.3`` is exactly equal to zero. In binary floating point, the result - is :const:`5.5511151231257827e-017`. While near to zero, the differences + is ``5.5511151231257827e-017``. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants. * The decimal module incorporates a notion of significant places so that ``1.30 - + 1.20`` is :const:`2.50`. The trailing zero is kept to indicate significance. + + 1.20`` is ``2.50``. The trailing zero is kept to indicate significance. This is the customary presentation for monetary applications. For multiplication, the "schoolbook" approach uses all the figures in the - multiplicands. For instance, ``1.3 * 1.2`` gives :const:`1.56` while ``1.30 * - 1.20`` gives :const:`1.5600`. + multiplicands. For instance, ``1.3 * 1.2`` gives ``1.56`` while ``1.30 * + 1.20`` gives ``1.5600``. * Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for @@ -88,8 +88,8 @@ context for arithmetic, and signals. A decimal number is immutable. It has a sign, coefficient digits, and an exponent. To preserve significance, the coefficient digits do not truncate trailing zeros. Decimals also include special values such as -:const:`Infinity`, :const:`-Infinity`, and :const:`NaN`. The standard also -differentiates :const:`-0` from :const:`+0`. +``Infinity``, ``-Infinity``, and ``NaN``. The standard also +differentiates ``-0`` from ``+0``. The context for arithmetic is an environment specifying precision, rounding rules, limits on exponents, flags indicating the results of operations, and trap @@ -139,8 +139,8 @@ precision, rounding, or enabled traps:: Decimal instances can be constructed from integers, strings, floats, or tuples. Construction from an integer or a float performs an exact conversion of the value of that integer or float. Decimal numbers include special values such as -:const:`NaN` which stands for "Not a number", positive and negative -:const:`Infinity`, and :const:`-0`:: +``NaN`` which stands for "Not a number", positive and negative +``Infinity``, and ``-0``:: >>> getcontext().prec = 28 >>> Decimal(10) @@ -250,7 +250,7 @@ And some mathematical functions are also available to Decimal: >>> Decimal('10').log10() Decimal('1') -The :meth:`quantize` method rounds a number to a fixed exponent. This method is +The :meth:`~Decimal.quantize` method rounds a number to a fixed exponent. This method is useful for monetary applications that often round results to a fixed number of places: @@ -299,7 +299,7 @@ enabled: Contexts also have signal flags for monitoring exceptional conditions encountered during computations. The flags remain set until explicitly cleared, so it is best to clear the flags before each set of monitored computations by -using the :meth:`clear_flags` method. 
:: +using the :meth:`~Context.clear_flags` method. :: >>> setcontext(ExtendedContext) >>> getcontext().clear_flags() @@ -309,12 +309,12 @@ using the :meth:`clear_flags` method. :: Context(prec=9, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[Inexact, Rounded], traps=[]) -The *flags* entry shows that the rational approximation to :const:`Pi` was +The *flags* entry shows that the rational approximation to pi was rounded (digits beyond the context precision were thrown away) and that the result is inexact (some of the discarded digits were non-zero). -Individual traps are set using the dictionary in the :attr:`traps` field of a -context: +Individual traps are set using the dictionary in the :attr:`~Context.traps` +attribute of a context: .. doctest:: newcontext @@ -369,7 +369,7 @@ Decimal objects with the fullwidth digits ``'\uff10'`` through ``'\uff19'``. If *value* is a :class:`tuple`, it should have three components, a sign - (:const:`0` for positive or :const:`1` for negative), a :class:`tuple` of + (``0`` for positive or ``1`` for negative), a :class:`tuple` of digits, and an integer exponent. For example, ``Decimal((0, (1, 4, 1, 4), -3))`` returns ``Decimal('1.414')``. @@ -387,7 +387,7 @@ Decimal objects The purpose of the *context* argument is determining what to do if *value* is a malformed string. If the context traps :const:`InvalidOperation`, an exception is raised; otherwise, the constructor returns a new Decimal with the value of - :const:`NaN`. + ``NaN``. Once constructed, :class:`Decimal` objects are immutable. @@ -701,7 +701,7 @@ Decimal objects .. method:: max(other, context=None) Like ``max(self, other)`` except that the context rounding rule is applied - before returning and that :const:`NaN` values are either signaled or + before returning and that ``NaN`` values are either signaled or ignored (depending on the context and whether they are signaling or quiet). @@ -713,7 +713,7 @@ Decimal objects .. method:: min(other, context=None) Like ``min(self, other)`` except that the context rounding rule is applied - before returning and that :const:`NaN` values are either signaled or + before returning and that ``NaN`` values are either signaled or ignored (depending on the context and whether they are signaling or quiet). @@ -744,8 +744,8 @@ Decimal objects .. method:: normalize(context=None) Normalize the number by stripping the rightmost trailing zeros and - converting any result equal to :const:`Decimal('0')` to - :const:`Decimal('0e0')`. Used for producing canonical values for attributes + converting any result equal to ``Decimal('0')`` to + ``Decimal('0e0')``. Used for producing canonical values for attributes of an equivalence class. For example, ``Decimal('32.100')`` and ``Decimal('0.321000e+2')`` both normalize to the equivalent value ``Decimal('32.1')``. @@ -790,7 +790,7 @@ Decimal objects the current thread's context is used. An error is returned whenever the resulting exponent is greater than - :attr:`Emax` or less than :attr:`Etiny`. + :attr:`~Context.Emax` or less than :meth:`~Context.Etiny`. .. method:: radix() @@ -830,7 +830,7 @@ Decimal objects .. method:: same_quantum(other, context=None) Test whether self and other have the same exponent or whether both are - :const:`NaN`. + ``NaN``. This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. 
As an exception, the C version may raise @@ -892,11 +892,11 @@ Decimal objects Logical operands ^^^^^^^^^^^^^^^^ -The :meth:`logical_and`, :meth:`logical_invert`, :meth:`logical_or`, -and :meth:`logical_xor` methods expect their arguments to be *logical +The :meth:`~Decimal.logical_and`, :meth:`~Decimal.logical_invert`, :meth:`~Decimal.logical_or`, +and :meth:`~Decimal.logical_xor` methods expect their arguments to be *logical operands*. A *logical operand* is a :class:`Decimal` instance whose exponent and sign are both zero, and whose digits are all either -:const:`0` or :const:`1`. +``0`` or ``1``. .. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% @@ -982,7 +982,7 @@ described below. In addition, the module provides three pre-made contexts: exceptions are not raised during computations). Because the traps are disabled, this context is useful for applications that - prefer to have result value of :const:`NaN` or :const:`Infinity` instead of + prefer to have result value of ``NaN`` or ``Infinity`` instead of raising exceptions. This allows an application to complete a run in the presence of conditions that would otherwise halt the program. @@ -1001,8 +1001,8 @@ described below. In addition, the module provides three pre-made contexts: In single threaded environments, it is preferable to not use this context at all. Instead, simply create contexts explicitly as described below. - The default values are :attr:`prec`\ =\ :const:`28`, - :attr:`rounding`\ =\ :const:`ROUND_HALF_EVEN`, + The default values are :attr:`Context.prec`\ =\ ``28``, + :attr:`Context.rounding`\ =\ :const:`ROUND_HALF_EVEN`, and enabled traps for :class:`Overflow`, :class:`InvalidOperation`, and :class:`DivisionByZero`. @@ -1016,7 +1016,7 @@ In addition to the three supplied contexts, new contexts can be created with the default values are copied from the :const:`DefaultContext`. If the *flags* field is not specified or is :const:`None`, all flags are cleared. - *prec* is an integer in the range [:const:`1`, :const:`MAX_PREC`] that sets + *prec* is an integer in the range [``1``, :const:`MAX_PREC`] that sets the precision for arithmetic operations in the context. The *rounding* option is one of the constants listed in the section @@ -1026,20 +1026,20 @@ In addition to the three supplied contexts, new contexts can be created with the contexts should only set traps and leave the flags clear. The *Emin* and *Emax* fields are integers specifying the outer limits allowable - for exponents. *Emin* must be in the range [:const:`MIN_EMIN`, :const:`0`], - *Emax* in the range [:const:`0`, :const:`MAX_EMAX`]. + for exponents. *Emin* must be in the range [:const:`MIN_EMIN`, ``0``], + *Emax* in the range [``0``, :const:`MAX_EMAX`]. - The *capitals* field is either :const:`0` or :const:`1` (the default). If set to - :const:`1`, exponents are printed with a capital :const:`E`; otherwise, a - lowercase :const:`e` is used: :const:`Decimal('6.02e+23')`. + The *capitals* field is either ``0`` or ``1`` (the default). If set to + ``1``, exponents are printed with a capital ``E``; otherwise, a + lowercase ``e`` is used: ``Decimal('6.02e+23')``. - The *clamp* field is either :const:`0` (the default) or :const:`1`. - If set to :const:`1`, the exponent ``e`` of a :class:`Decimal` + The *clamp* field is either ``0`` (the default) or ``1``. + If set to ``1``, the exponent ``e`` of a :class:`Decimal` instance representable in this context is strictly limited to the range ``Emin - prec + 1 <= e <= Emax - prec + 1``. 
If *clamp* is - :const:`0` then a weaker condition holds: the adjusted exponent of - the :class:`Decimal` instance is at most ``Emax``. When *clamp* is - :const:`1`, a large normal number will, where possible, have its + ``0`` then a weaker condition holds: the adjusted exponent of + the :class:`Decimal` instance is at most :attr:`~Context.Emax`. When *clamp* is + ``1``, a large normal number will, where possible, have its exponent reduced and a corresponding number of zeros added to its coefficient, in order to fit the exponent constraints; this preserves the value of the number but loses information about @@ -1048,13 +1048,13 @@ In addition to the three supplied contexts, new contexts can be created with the >>> Context(prec=6, Emax=999, clamp=1).create_decimal('1.23e999') Decimal('1.23000E+999') - A *clamp* value of :const:`1` allows compatibility with the + A *clamp* value of ``1`` allows compatibility with the fixed-width decimal interchange formats specified in IEEE 754. The :class:`Context` class defines several general purpose methods as well as a large number of methods for doing arithmetic directly in a given context. In addition, for each of the :class:`Decimal` methods described above (with - the exception of the :meth:`adjusted` and :meth:`as_tuple` methods) there is + the exception of the :meth:`~Decimal.adjusted` and :meth:`~Decimal.as_tuple` methods) there is a corresponding :class:`Context` method. For example, for a :class:`Context` instance ``C`` and :class:`Decimal` instance ``x``, ``C.exp(x)`` is equivalent to ``x.exp(context=C)``. Each :class:`Context` method accepts a @@ -1064,11 +1064,11 @@ In addition to the three supplied contexts, new contexts can be created with the .. method:: clear_flags() - Resets all of the flags to :const:`0`. + Resets all of the flags to ``0``. .. method:: clear_traps() - Resets all of the traps to :const:`0`. + Resets all of the traps to ``0``. .. versionadded:: 3.3 @@ -1483,13 +1483,13 @@ are also included in the pure Python version for compatibility. +---------------------+---------------------+-------------------------------+ | | 32-bit | 64-bit | +=====================+=====================+===============================+ -| .. data:: MAX_PREC | :const:`425000000` | :const:`999999999999999999` | +| .. data:: MAX_PREC | ``425000000`` | ``999999999999999999`` | +---------------------+---------------------+-------------------------------+ -| .. data:: MAX_EMAX | :const:`425000000` | :const:`999999999999999999` | +| .. data:: MAX_EMAX | ``425000000`` | ``999999999999999999`` | +---------------------+---------------------+-------------------------------+ -| .. data:: MIN_EMIN | :const:`-425000000` | :const:`-999999999999999999` | +| .. data:: MIN_EMIN | ``-425000000`` | ``-999999999999999999`` | +---------------------+---------------------+-------------------------------+ -| .. data:: MIN_ETINY | :const:`-849999999` | :const:`-1999999999999999997` | +| .. data:: MIN_ETINY | ``-849999999`` | ``-1999999999999999997`` | +---------------------+---------------------+-------------------------------+ @@ -1514,7 +1514,7 @@ Rounding modes .. data:: ROUND_CEILING - Round towards :const:`Infinity`. + Round towards ``Infinity``. .. data:: ROUND_DOWN @@ -1522,7 +1522,7 @@ Rounding modes .. data:: ROUND_FLOOR - Round towards :const:`-Infinity`. + Round towards ``-Infinity``. .. data:: ROUND_HALF_DOWN @@ -1570,7 +1570,7 @@ condition. Altered an exponent to fit representation constraints. 
Typically, clamping occurs when an exponent falls outside the context's - :attr:`Emin` and :attr:`Emax` limits. If possible, the exponent is reduced to + :attr:`~Context.Emin` and :attr:`~Context.Emax` limits. If possible, the exponent is reduced to fit by adding zeros to the coefficient. @@ -1584,8 +1584,8 @@ condition. Signals the division of a non-infinite number by zero. Can occur with division, modulo division, or when raising a number to a negative - power. If this signal is not trapped, returns :const:`Infinity` or - :const:`-Infinity` with the sign determined by the inputs to the calculation. + power. If this signal is not trapped, returns ``Infinity`` or + ``-Infinity`` with the sign determined by the inputs to the calculation. .. class:: Inexact @@ -1602,7 +1602,7 @@ condition. An invalid operation was performed. Indicates that an operation was requested that does not make sense. If not - trapped, returns :const:`NaN`. Possible causes include:: + trapped, returns ``NaN``. Possible causes include:: Infinity - Infinity 0 * Infinity @@ -1619,10 +1619,10 @@ condition. Numerical overflow. - Indicates the exponent is larger than :attr:`Emax` after rounding has + Indicates the exponent is larger than :attr:`Context.Emax` after rounding has occurred. If not trapped, the result depends on the rounding mode, either pulling inward to the largest representable finite number or rounding outward - to :const:`Infinity`. In either case, :class:`Inexact` and :class:`Rounded` + to ``Infinity``. In either case, :class:`Inexact` and :class:`Rounded` are also signaled. @@ -1631,14 +1631,14 @@ condition. Rounding occurred though possibly no information was lost. Signaled whenever rounding discards digits; even if those digits are zero - (such as rounding :const:`5.00` to :const:`5.0`). If not trapped, returns + (such as rounding ``5.00`` to ``5.0``). If not trapped, returns the result unchanged. This signal is used to detect loss of significant digits. .. class:: Subnormal - Exponent was lower than :attr:`Emin` prior to rounding. + Exponent was lower than :attr:`~Context.Emin` prior to rounding. Occurs when an operation result is subnormal (the exponent is too small). If not trapped, returns the result unchanged. @@ -1696,7 +1696,7 @@ Mitigating round-off error with increased precision ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The use of decimal floating point eliminates decimal representation error -(making it possible to represent :const:`0.1` exactly); however, some operations +(making it possible to represent ``0.1`` exactly); however, some operations can still incur round-off error when non-zero digits exceed the fixed precision. The effects of round-off error can be amplified by the addition or subtraction @@ -1746,8 +1746,8 @@ Special values ^^^^^^^^^^^^^^ The number system for the :mod:`decimal` module provides special values -including :const:`NaN`, :const:`sNaN`, :const:`-Infinity`, :const:`Infinity`, -and two zeros, :const:`+0` and :const:`-0`. +including ``NaN``, ``sNaN``, ``-Infinity``, ``Infinity``, +and two zeros, ``+0`` and ``-0``. Infinities can be constructed directly with: ``Decimal('Infinity')``. Also, they can arise from dividing by zero when the :exc:`DivisionByZero` signal is @@ -1758,30 +1758,30 @@ The infinities are signed (affine) and can be used in arithmetic operations where they get treated as very large, indeterminate numbers. For instance, adding a constant to infinity gives another infinite result. 
-Some operations are indeterminate and return :const:`NaN`, or if the +Some operations are indeterminate and return ``NaN``, or if the :exc:`InvalidOperation` signal is trapped, raise an exception. For example, -``0/0`` returns :const:`NaN` which means "not a number". This variety of -:const:`NaN` is quiet and, once created, will flow through other computations -always resulting in another :const:`NaN`. This behavior can be useful for a +``0/0`` returns ``NaN`` which means "not a number". This variety of +``NaN`` is quiet and, once created, will flow through other computations +always resulting in another ``NaN``. This behavior can be useful for a series of computations that occasionally have missing inputs --- it allows the calculation to proceed while flagging specific results as invalid. -A variant is :const:`sNaN` which signals rather than remaining quiet after every +A variant is ``sNaN`` which signals rather than remaining quiet after every operation. This is a useful return value when an invalid result needs to interrupt a calculation for special handling. The behavior of Python's comparison operators can be a little surprising where a -:const:`NaN` is involved. A test for equality where one of the operands is a -quiet or signaling :const:`NaN` always returns :const:`False` (even when doing +``NaN`` is involved. A test for equality where one of the operands is a +quiet or signaling ``NaN`` always returns :const:`False` (even when doing ``Decimal('NaN')==Decimal('NaN')``), while a test for inequality always returns :const:`True`. An attempt to compare two Decimals using any of the ``<``, ``<=``, ``>`` or ``>=`` operators will raise the :exc:`InvalidOperation` signal -if either operand is a :const:`NaN`, and return :const:`False` if this signal is +if either operand is a ``NaN``, and return :const:`False` if this signal is not trapped. Note that the General Decimal Arithmetic specification does not specify the behavior of direct comparisons; these rules for comparisons -involving a :const:`NaN` were taken from the IEEE 854 standard (see Table 3 in -section 5.7). To ensure strict standards-compliance, use the :meth:`compare` -and :meth:`compare-signal` methods instead. +involving a ``NaN`` were taken from the IEEE 854 standard (see Table 3 in +section 5.7). To ensure strict standards-compliance, use the :meth:`~Decimal.compare` +and :meth:`~Decimal.compare_signal` methods instead. The signed zeros can result from calculations that underflow. They keep the sign that would have resulted if the calculation had been carried out to greater @@ -2013,7 +2013,7 @@ Q. In a fixed-point application with two decimal places, some inputs have many places and need to be rounded. Others are not supposed to have excess digits and need to be validated. What methods should be used? -A. The :meth:`quantize` method rounds to a fixed number of decimal places. If +A. The :meth:`~Decimal.quantize` method rounds to a fixed number of decimal places. If the :const:`Inexact` trap is set, it is also useful for validation: >>> TWOPLACES = Decimal(10) ** -2 # same as Decimal('0.01') @@ -2037,7 +2037,7 @@ throughout an application? A. Some operations like addition, subtraction, and multiplication by an integer will automatically preserve fixed point. 
Others operations, like division and non-integer multiplication, will change the number of decimal places and need to -be followed-up with a :meth:`quantize` step: +be followed-up with a :meth:`~Decimal.quantize` step: >>> a = Decimal('102.72') # Initial fixed-point values >>> b = Decimal('3.17') @@ -2053,7 +2053,7 @@ be followed-up with a :meth:`quantize` step: Decimal('0.03') In developing fixed-point applications, it is convenient to define functions -to handle the :meth:`quantize` step: +to handle the :meth:`~Decimal.quantize` step: >>> def mul(x, y, fp=TWOPLACES): ... return (x * y).quantize(fp) @@ -2066,12 +2066,12 @@ to handle the :meth:`quantize` step: >>> div(b, a) Decimal('0.03') -Q. There are many ways to express the same value. The numbers :const:`200`, -:const:`200.000`, :const:`2E2`, and :const:`.02E+4` all have the same value at +Q. There are many ways to express the same value. The numbers ``200``, +``200.000``, ``2E2``, and ``.02E+4`` all have the same value at various precisions. Is there a way to transform them to a single recognizable canonical value? -A. The :meth:`normalize` method maps all equivalent values to a single +A. The :meth:`~Decimal.normalize` method maps all equivalent values to a single representative: >>> values = map(Decimal, '200 200.000 2E2 .02E+4'.split()) @@ -2083,7 +2083,7 @@ to get a non-exponential representation? A. For some values, exponential notation is the only way to express the number of significant places in the coefficient. For example, expressing -:const:`5.0E+3` as :const:`5000` keeps the value constant but cannot show the +``5.0E+3`` as ``5000`` keeps the value constant but cannot show the original's two-place significance. If an application does not care about tracking significance, it is easy to @@ -2159,12 +2159,12 @@ for medium-sized numbers and the `Number Theoretic Transform `_ for very large numbers. -The context must be adapted for exact arbitrary precision arithmetic. :attr:`Emin` -and :attr:`Emax` should always be set to the maximum values, :attr:`clamp` -should always be 0 (the default). Setting :attr:`prec` requires some care. +The context must be adapted for exact arbitrary precision arithmetic. :attr:`~Context.Emin` +and :attr:`~Context.Emax` should always be set to the maximum values, :attr:`~Context.clamp` +should always be 0 (the default). Setting :attr:`~Context.prec` requires some care. The easiest approach for trying out bignum arithmetic is to use the maximum -value for :attr:`prec` as well [#]_:: +value for :attr:`~Context.prec` as well [#]_:: >>> setcontext(Context(prec=MAX_PREC, Emax=MAX_EMAX, Emin=MIN_EMIN)) >>> x = Decimal(2) ** 256 @@ -2181,7 +2181,7 @@ the available memory will be insufficient:: MemoryError On systems with overallocation (e.g. Linux), a more sophisticated approach is to -adjust :attr:`prec` to the amount of available RAM. Suppose that you have 8GB of +adjust :attr:`~Context.prec` to the amount of available RAM. 
Suppose that you have 8GB of RAM and expect 10 simultaneous operands using a maximum of 500MB each:: >>> import sys From webhook-mailer at python.org Sat Feb 25 04:51:42 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 25 Feb 2023 09:51:42 -0000 Subject: [Python-checkins] gh-101100: Fix Sphinx warnings in `decimal` module (GH-102125) Message-ID: https://github.com/python/cpython/commit/9563cf47bc2944d1f0fb0a016b6880c273ecd248 commit: 9563cf47bc2944d1f0fb0a016b6880c273ecd248 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-25T01:51:36-08:00 summary: gh-101100: Fix Sphinx warnings in `decimal` module (GH-102125) (cherry picked from commit b7c11264476ccc11e4bdf4bd3c3664ccd1b7c5f9) Co-authored-by: Hugo van Kemenade Co-authored-by: C.A.M. Gerlach files: M Doc/library/decimal.rst diff --git a/Doc/library/decimal.rst b/Doc/library/decimal.rst index 38ebb44cc661..1ce5f4e0e873 100644 --- a/Doc/library/decimal.rst +++ b/Doc/library/decimal.rst @@ -40,23 +40,23 @@ decimal floating point arithmetic. It offers several advantages over the people learn at school." -- excerpt from the decimal arithmetic specification. * Decimal numbers can be represented exactly. In contrast, numbers like - :const:`1.1` and :const:`2.2` do not have exact representations in binary + ``1.1`` and ``2.2`` do not have exact representations in binary floating point. End users typically would not expect ``1.1 + 2.2`` to display - as :const:`3.3000000000000003` as it does with binary floating point. + as ``3.3000000000000003`` as it does with binary floating point. * The exactness carries over into arithmetic. In decimal floating point, ``0.1 + 0.1 + 0.1 - 0.3`` is exactly equal to zero. In binary floating point, the result - is :const:`5.5511151231257827e-017`. While near to zero, the differences + is ``5.5511151231257827e-017``. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants. * The decimal module incorporates a notion of significant places so that ``1.30 - + 1.20`` is :const:`2.50`. The trailing zero is kept to indicate significance. + + 1.20`` is ``2.50``. The trailing zero is kept to indicate significance. This is the customary presentation for monetary applications. For multiplication, the "schoolbook" approach uses all the figures in the - multiplicands. For instance, ``1.3 * 1.2`` gives :const:`1.56` while ``1.30 * - 1.20`` gives :const:`1.5600`. + multiplicands. For instance, ``1.3 * 1.2`` gives ``1.56`` while ``1.30 * + 1.20`` gives ``1.5600``. * Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for @@ -88,8 +88,8 @@ context for arithmetic, and signals. A decimal number is immutable. It has a sign, coefficient digits, and an exponent. To preserve significance, the coefficient digits do not truncate trailing zeros. Decimals also include special values such as -:const:`Infinity`, :const:`-Infinity`, and :const:`NaN`. The standard also -differentiates :const:`-0` from :const:`+0`. +``Infinity``, ``-Infinity``, and ``NaN``. The standard also +differentiates ``-0`` from ``+0``. 
The context for arithmetic is an environment specifying precision, rounding rules, limits on exponents, flags indicating the results of operations, and trap @@ -139,8 +139,8 @@ precision, rounding, or enabled traps:: Decimal instances can be constructed from integers, strings, floats, or tuples. Construction from an integer or a float performs an exact conversion of the value of that integer or float. Decimal numbers include special values such as -:const:`NaN` which stands for "Not a number", positive and negative -:const:`Infinity`, and :const:`-0`:: +``NaN`` which stands for "Not a number", positive and negative +``Infinity``, and ``-0``:: >>> getcontext().prec = 28 >>> Decimal(10) @@ -250,7 +250,7 @@ And some mathematical functions are also available to Decimal: >>> Decimal('10').log10() Decimal('1') -The :meth:`quantize` method rounds a number to a fixed exponent. This method is +The :meth:`~Decimal.quantize` method rounds a number to a fixed exponent. This method is useful for monetary applications that often round results to a fixed number of places: @@ -299,7 +299,7 @@ enabled: Contexts also have signal flags for monitoring exceptional conditions encountered during computations. The flags remain set until explicitly cleared, so it is best to clear the flags before each set of monitored computations by -using the :meth:`clear_flags` method. :: +using the :meth:`~Context.clear_flags` method. :: >>> setcontext(ExtendedContext) >>> getcontext().clear_flags() @@ -309,12 +309,12 @@ using the :meth:`clear_flags` method. :: Context(prec=9, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[Inexact, Rounded], traps=[]) -The *flags* entry shows that the rational approximation to :const:`Pi` was +The *flags* entry shows that the rational approximation to pi was rounded (digits beyond the context precision were thrown away) and that the result is inexact (some of the discarded digits were non-zero). -Individual traps are set using the dictionary in the :attr:`traps` field of a -context: +Individual traps are set using the dictionary in the :attr:`~Context.traps` +attribute of a context: .. doctest:: newcontext @@ -369,7 +369,7 @@ Decimal objects with the fullwidth digits ``'\uff10'`` through ``'\uff19'``. If *value* is a :class:`tuple`, it should have three components, a sign - (:const:`0` for positive or :const:`1` for negative), a :class:`tuple` of + (``0`` for positive or ``1`` for negative), a :class:`tuple` of digits, and an integer exponent. For example, ``Decimal((0, (1, 4, 1, 4), -3))`` returns ``Decimal('1.414')``. @@ -387,7 +387,7 @@ Decimal objects The purpose of the *context* argument is determining what to do if *value* is a malformed string. If the context traps :const:`InvalidOperation`, an exception is raised; otherwise, the constructor returns a new Decimal with the value of - :const:`NaN`. + ``NaN``. Once constructed, :class:`Decimal` objects are immutable. @@ -701,7 +701,7 @@ Decimal objects .. method:: max(other, context=None) Like ``max(self, other)`` except that the context rounding rule is applied - before returning and that :const:`NaN` values are either signaled or + before returning and that ``NaN`` values are either signaled or ignored (depending on the context and whether they are signaling or quiet). @@ -713,7 +713,7 @@ Decimal objects .. 
method:: min(other, context=None) Like ``min(self, other)`` except that the context rounding rule is applied - before returning and that :const:`NaN` values are either signaled or + before returning and that ``NaN`` values are either signaled or ignored (depending on the context and whether they are signaling or quiet). @@ -744,8 +744,8 @@ Decimal objects .. method:: normalize(context=None) Normalize the number by stripping the rightmost trailing zeros and - converting any result equal to :const:`Decimal('0')` to - :const:`Decimal('0e0')`. Used for producing canonical values for attributes + converting any result equal to ``Decimal('0')`` to + ``Decimal('0e0')``. Used for producing canonical values for attributes of an equivalence class. For example, ``Decimal('32.100')`` and ``Decimal('0.321000e+2')`` both normalize to the equivalent value ``Decimal('32.1')``. @@ -790,7 +790,7 @@ Decimal objects the current thread's context is used. An error is returned whenever the resulting exponent is greater than - :attr:`Emax` or less than :attr:`Etiny`. + :attr:`~Context.Emax` or less than :meth:`~Context.Etiny`. .. method:: radix() @@ -830,7 +830,7 @@ Decimal objects .. method:: same_quantum(other, context=None) Test whether self and other have the same exponent or whether both are - :const:`NaN`. + ``NaN``. This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise @@ -892,11 +892,11 @@ Decimal objects Logical operands ^^^^^^^^^^^^^^^^ -The :meth:`logical_and`, :meth:`logical_invert`, :meth:`logical_or`, -and :meth:`logical_xor` methods expect their arguments to be *logical +The :meth:`~Decimal.logical_and`, :meth:`~Decimal.logical_invert`, :meth:`~Decimal.logical_or`, +and :meth:`~Decimal.logical_xor` methods expect their arguments to be *logical operands*. A *logical operand* is a :class:`Decimal` instance whose exponent and sign are both zero, and whose digits are all either -:const:`0` or :const:`1`. +``0`` or ``1``. .. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% @@ -966,7 +966,7 @@ described below. In addition, the module provides three pre-made contexts: exceptions are not raised during computations). Because the traps are disabled, this context is useful for applications that - prefer to have result value of :const:`NaN` or :const:`Infinity` instead of + prefer to have result value of ``NaN`` or ``Infinity`` instead of raising exceptions. This allows an application to complete a run in the presence of conditions that would otherwise halt the program. @@ -985,8 +985,8 @@ described below. In addition, the module provides three pre-made contexts: In single threaded environments, it is preferable to not use this context at all. Instead, simply create contexts explicitly as described below. - The default values are :attr:`prec`\ =\ :const:`28`, - :attr:`rounding`\ =\ :const:`ROUND_HALF_EVEN`, + The default values are :attr:`Context.prec`\ =\ ``28``, + :attr:`Context.rounding`\ =\ :const:`ROUND_HALF_EVEN`, and enabled traps for :class:`Overflow`, :class:`InvalidOperation`, and :class:`DivisionByZero`. @@ -1000,7 +1000,7 @@ In addition to the three supplied contexts, new contexts can be created with the default values are copied from the :const:`DefaultContext`. If the *flags* field is not specified or is :const:`None`, all flags are cleared. 
- *prec* is an integer in the range [:const:`1`, :const:`MAX_PREC`] that sets + *prec* is an integer in the range [``1``, :const:`MAX_PREC`] that sets the precision for arithmetic operations in the context. The *rounding* option is one of the constants listed in the section @@ -1010,20 +1010,20 @@ In addition to the three supplied contexts, new contexts can be created with the contexts should only set traps and leave the flags clear. The *Emin* and *Emax* fields are integers specifying the outer limits allowable - for exponents. *Emin* must be in the range [:const:`MIN_EMIN`, :const:`0`], - *Emax* in the range [:const:`0`, :const:`MAX_EMAX`]. + for exponents. *Emin* must be in the range [:const:`MIN_EMIN`, ``0``], + *Emax* in the range [``0``, :const:`MAX_EMAX`]. - The *capitals* field is either :const:`0` or :const:`1` (the default). If set to - :const:`1`, exponents are printed with a capital :const:`E`; otherwise, a - lowercase :const:`e` is used: :const:`Decimal('6.02e+23')`. + The *capitals* field is either ``0`` or ``1`` (the default). If set to + ``1``, exponents are printed with a capital ``E``; otherwise, a + lowercase ``e`` is used: ``Decimal('6.02e+23')``. - The *clamp* field is either :const:`0` (the default) or :const:`1`. - If set to :const:`1`, the exponent ``e`` of a :class:`Decimal` + The *clamp* field is either ``0`` (the default) or ``1``. + If set to ``1``, the exponent ``e`` of a :class:`Decimal` instance representable in this context is strictly limited to the range ``Emin - prec + 1 <= e <= Emax - prec + 1``. If *clamp* is - :const:`0` then a weaker condition holds: the adjusted exponent of - the :class:`Decimal` instance is at most ``Emax``. When *clamp* is - :const:`1`, a large normal number will, where possible, have its + ``0`` then a weaker condition holds: the adjusted exponent of + the :class:`Decimal` instance is at most :attr:`~Context.Emax`. When *clamp* is + ``1``, a large normal number will, where possible, have its exponent reduced and a corresponding number of zeros added to its coefficient, in order to fit the exponent constraints; this preserves the value of the number but loses information about @@ -1032,13 +1032,13 @@ In addition to the three supplied contexts, new contexts can be created with the >>> Context(prec=6, Emax=999, clamp=1).create_decimal('1.23e999') Decimal('1.23000E+999') - A *clamp* value of :const:`1` allows compatibility with the + A *clamp* value of ``1`` allows compatibility with the fixed-width decimal interchange formats specified in IEEE 754. The :class:`Context` class defines several general purpose methods as well as a large number of methods for doing arithmetic directly in a given context. In addition, for each of the :class:`Decimal` methods described above (with - the exception of the :meth:`adjusted` and :meth:`as_tuple` methods) there is + the exception of the :meth:`~Decimal.adjusted` and :meth:`~Decimal.as_tuple` methods) there is a corresponding :class:`Context` method. For example, for a :class:`Context` instance ``C`` and :class:`Decimal` instance ``x``, ``C.exp(x)`` is equivalent to ``x.exp(context=C)``. Each :class:`Context` method accepts a @@ -1048,11 +1048,11 @@ In addition to the three supplied contexts, new contexts can be created with the .. method:: clear_flags() - Resets all of the flags to :const:`0`. + Resets all of the flags to ``0``. .. method:: clear_traps() - Resets all of the traps to :const:`0`. + Resets all of the traps to ``0``. .. 
versionadded:: 3.3 @@ -1467,13 +1467,13 @@ are also included in the pure Python version for compatibility. +---------------------+---------------------+-------------------------------+ | | 32-bit | 64-bit | +=====================+=====================+===============================+ -| .. data:: MAX_PREC | :const:`425000000` | :const:`999999999999999999` | +| .. data:: MAX_PREC | ``425000000`` | ``999999999999999999`` | +---------------------+---------------------+-------------------------------+ -| .. data:: MAX_EMAX | :const:`425000000` | :const:`999999999999999999` | +| .. data:: MAX_EMAX | ``425000000`` | ``999999999999999999`` | +---------------------+---------------------+-------------------------------+ -| .. data:: MIN_EMIN | :const:`-425000000` | :const:`-999999999999999999` | +| .. data:: MIN_EMIN | ``-425000000`` | ``-999999999999999999`` | +---------------------+---------------------+-------------------------------+ -| .. data:: MIN_ETINY | :const:`-849999999` | :const:`-1999999999999999997` | +| .. data:: MIN_ETINY | ``-849999999`` | ``-1999999999999999997`` | +---------------------+---------------------+-------------------------------+ @@ -1498,7 +1498,7 @@ Rounding modes .. data:: ROUND_CEILING - Round towards :const:`Infinity`. + Round towards ``Infinity``. .. data:: ROUND_DOWN @@ -1506,7 +1506,7 @@ Rounding modes .. data:: ROUND_FLOOR - Round towards :const:`-Infinity`. + Round towards ``-Infinity``. .. data:: ROUND_HALF_DOWN @@ -1554,7 +1554,7 @@ condition. Altered an exponent to fit representation constraints. Typically, clamping occurs when an exponent falls outside the context's - :attr:`Emin` and :attr:`Emax` limits. If possible, the exponent is reduced to + :attr:`~Context.Emin` and :attr:`~Context.Emax` limits. If possible, the exponent is reduced to fit by adding zeros to the coefficient. @@ -1568,8 +1568,8 @@ condition. Signals the division of a non-infinite number by zero. Can occur with division, modulo division, or when raising a number to a negative - power. If this signal is not trapped, returns :const:`Infinity` or - :const:`-Infinity` with the sign determined by the inputs to the calculation. + power. If this signal is not trapped, returns ``Infinity`` or + ``-Infinity`` with the sign determined by the inputs to the calculation. .. class:: Inexact @@ -1586,7 +1586,7 @@ condition. An invalid operation was performed. Indicates that an operation was requested that does not make sense. If not - trapped, returns :const:`NaN`. Possible causes include:: + trapped, returns ``NaN``. Possible causes include:: Infinity - Infinity 0 * Infinity @@ -1603,10 +1603,10 @@ condition. Numerical overflow. - Indicates the exponent is larger than :attr:`Emax` after rounding has + Indicates the exponent is larger than :attr:`Context.Emax` after rounding has occurred. If not trapped, the result depends on the rounding mode, either pulling inward to the largest representable finite number or rounding outward - to :const:`Infinity`. In either case, :class:`Inexact` and :class:`Rounded` + to ``Infinity``. In either case, :class:`Inexact` and :class:`Rounded` are also signaled. @@ -1615,14 +1615,14 @@ condition. Rounding occurred though possibly no information was lost. Signaled whenever rounding discards digits; even if those digits are zero - (such as rounding :const:`5.00` to :const:`5.0`). If not trapped, returns + (such as rounding ``5.00`` to ``5.0``). If not trapped, returns the result unchanged. This signal is used to detect loss of significant digits. .. 
class:: Subnormal - Exponent was lower than :attr:`Emin` prior to rounding. + Exponent was lower than :attr:`~Context.Emin` prior to rounding. Occurs when an operation result is subnormal (the exponent is too small). If not trapped, returns the result unchanged. @@ -1680,7 +1680,7 @@ Mitigating round-off error with increased precision ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The use of decimal floating point eliminates decimal representation error -(making it possible to represent :const:`0.1` exactly); however, some operations +(making it possible to represent ``0.1`` exactly); however, some operations can still incur round-off error when non-zero digits exceed the fixed precision. The effects of round-off error can be amplified by the addition or subtraction @@ -1730,8 +1730,8 @@ Special values ^^^^^^^^^^^^^^ The number system for the :mod:`decimal` module provides special values -including :const:`NaN`, :const:`sNaN`, :const:`-Infinity`, :const:`Infinity`, -and two zeros, :const:`+0` and :const:`-0`. +including ``NaN``, ``sNaN``, ``-Infinity``, ``Infinity``, +and two zeros, ``+0`` and ``-0``. Infinities can be constructed directly with: ``Decimal('Infinity')``. Also, they can arise from dividing by zero when the :exc:`DivisionByZero` signal is @@ -1742,30 +1742,30 @@ The infinities are signed (affine) and can be used in arithmetic operations where they get treated as very large, indeterminate numbers. For instance, adding a constant to infinity gives another infinite result. -Some operations are indeterminate and return :const:`NaN`, or if the +Some operations are indeterminate and return ``NaN``, or if the :exc:`InvalidOperation` signal is trapped, raise an exception. For example, -``0/0`` returns :const:`NaN` which means "not a number". This variety of -:const:`NaN` is quiet and, once created, will flow through other computations -always resulting in another :const:`NaN`. This behavior can be useful for a +``0/0`` returns ``NaN`` which means "not a number". This variety of +``NaN`` is quiet and, once created, will flow through other computations +always resulting in another ``NaN``. This behavior can be useful for a series of computations that occasionally have missing inputs --- it allows the calculation to proceed while flagging specific results as invalid. -A variant is :const:`sNaN` which signals rather than remaining quiet after every +A variant is ``sNaN`` which signals rather than remaining quiet after every operation. This is a useful return value when an invalid result needs to interrupt a calculation for special handling. The behavior of Python's comparison operators can be a little surprising where a -:const:`NaN` is involved. A test for equality where one of the operands is a -quiet or signaling :const:`NaN` always returns :const:`False` (even when doing +``NaN`` is involved. A test for equality where one of the operands is a +quiet or signaling ``NaN`` always returns :const:`False` (even when doing ``Decimal('NaN')==Decimal('NaN')``), while a test for inequality always returns :const:`True`. An attempt to compare two Decimals using any of the ``<``, ``<=``, ``>`` or ``>=`` operators will raise the :exc:`InvalidOperation` signal -if either operand is a :const:`NaN`, and return :const:`False` if this signal is +if either operand is a ``NaN``, and return :const:`False` if this signal is not trapped. 
Note that the General Decimal Arithmetic specification does not specify the behavior of direct comparisons; these rules for comparisons -involving a :const:`NaN` were taken from the IEEE 854 standard (see Table 3 in -section 5.7). To ensure strict standards-compliance, use the :meth:`compare` -and :meth:`compare-signal` methods instead. +involving a ``NaN`` were taken from the IEEE 854 standard (see Table 3 in +section 5.7). To ensure strict standards-compliance, use the :meth:`~Decimal.compare` +and :meth:`~Decimal.compare_signal` methods instead. The signed zeros can result from calculations that underflow. They keep the sign that would have resulted if the calculation had been carried out to greater @@ -1997,7 +1997,7 @@ Q. In a fixed-point application with two decimal places, some inputs have many places and need to be rounded. Others are not supposed to have excess digits and need to be validated. What methods should be used? -A. The :meth:`quantize` method rounds to a fixed number of decimal places. If +A. The :meth:`~Decimal.quantize` method rounds to a fixed number of decimal places. If the :const:`Inexact` trap is set, it is also useful for validation: >>> TWOPLACES = Decimal(10) ** -2 # same as Decimal('0.01') @@ -2021,7 +2021,7 @@ throughout an application? A. Some operations like addition, subtraction, and multiplication by an integer will automatically preserve fixed point. Others operations, like division and non-integer multiplication, will change the number of decimal places and need to -be followed-up with a :meth:`quantize` step: +be followed-up with a :meth:`~Decimal.quantize` step: >>> a = Decimal('102.72') # Initial fixed-point values >>> b = Decimal('3.17') @@ -2037,7 +2037,7 @@ be followed-up with a :meth:`quantize` step: Decimal('0.03') In developing fixed-point applications, it is convenient to define functions -to handle the :meth:`quantize` step: +to handle the :meth:`~Decimal.quantize` step: >>> def mul(x, y, fp=TWOPLACES): ... return (x * y).quantize(fp) @@ -2049,12 +2049,12 @@ to handle the :meth:`quantize` step: >>> div(b, a) Decimal('0.03') -Q. There are many ways to express the same value. The numbers :const:`200`, -:const:`200.000`, :const:`2E2`, and :const:`.02E+4` all have the same value at +Q. There are many ways to express the same value. The numbers ``200``, +``200.000``, ``2E2``, and ``.02E+4`` all have the same value at various precisions. Is there a way to transform them to a single recognizable canonical value? -A. The :meth:`normalize` method maps all equivalent values to a single +A. The :meth:`~Decimal.normalize` method maps all equivalent values to a single representative: >>> values = map(Decimal, '200 200.000 2E2 .02E+4'.split()) @@ -2066,7 +2066,7 @@ to get a non-exponential representation? A. For some values, exponential notation is the only way to express the number of significant places in the coefficient. For example, expressing -:const:`5.0E+3` as :const:`5000` keeps the value constant but cannot show the +``5.0E+3`` as ``5000`` keeps the value constant but cannot show the original's two-place significance. If an application does not care about tracking significance, it is easy to @@ -2142,12 +2142,12 @@ for medium-sized numbers and the `Number Theoretic Transform `_ for very large numbers. -The context must be adapted for exact arbitrary precision arithmetic. :attr:`Emin` -and :attr:`Emax` should always be set to the maximum values, :attr:`clamp` -should always be 0 (the default). Setting :attr:`prec` requires some care. 
+The context must be adapted for exact arbitrary precision arithmetic. :attr:`~Context.Emin` +and :attr:`~Context.Emax` should always be set to the maximum values, :attr:`~Context.clamp` +should always be 0 (the default). Setting :attr:`~Context.prec` requires some care. The easiest approach for trying out bignum arithmetic is to use the maximum -value for :attr:`prec` as well [#]_:: +value for :attr:`~Context.prec` as well [#]_:: >>> setcontext(Context(prec=MAX_PREC, Emax=MAX_EMAX, Emin=MIN_EMIN)) >>> x = Decimal(2) ** 256 @@ -2164,7 +2164,7 @@ the available memory will be insufficient:: MemoryError On systems with overallocation (e.g. Linux), a more sophisticated approach is to -adjust :attr:`prec` to the amount of available RAM. Suppose that you have 8GB of +adjust :attr:`~Context.prec` to the amount of available RAM. Suppose that you have 8GB of RAM and expect 10 simultaneous operands using a maximum of 500MB each:: >>> import sys From webhook-mailer at python.org Sat Feb 25 04:58:58 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 25 Feb 2023 09:58:58 -0000 Subject: [Python-checkins] gh-101100: Fix Sphinx warnings in `decimal` module (GH-102125) Message-ID: https://github.com/python/cpython/commit/a109ce00de22599232b83fa417247d641dec691e commit: a109ce00de22599232b83fa417247d641dec691e branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-25T01:58:52-08:00 summary: gh-101100: Fix Sphinx warnings in `decimal` module (GH-102125) (cherry picked from commit b7c11264476ccc11e4bdf4bd3c3664ccd1b7c5f9) Co-authored-by: Hugo van Kemenade Co-authored-by: C.A.M. Gerlach files: M Doc/library/decimal.rst diff --git a/Doc/library/decimal.rst b/Doc/library/decimal.rst index 260108136df7..a4c6a6479fd1 100644 --- a/Doc/library/decimal.rst +++ b/Doc/library/decimal.rst @@ -40,23 +40,23 @@ decimal floating point arithmetic. It offers several advantages over the people learn at school." -- excerpt from the decimal arithmetic specification. * Decimal numbers can be represented exactly. In contrast, numbers like - :const:`1.1` and :const:`2.2` do not have exact representations in binary + ``1.1`` and ``2.2`` do not have exact representations in binary floating point. End users typically would not expect ``1.1 + 2.2`` to display - as :const:`3.3000000000000003` as it does with binary floating point. + as ``3.3000000000000003`` as it does with binary floating point. * The exactness carries over into arithmetic. In decimal floating point, ``0.1 + 0.1 + 0.1 - 0.3`` is exactly equal to zero. In binary floating point, the result - is :const:`5.5511151231257827e-017`. While near to zero, the differences + is ``5.5511151231257827e-017``. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants. * The decimal module incorporates a notion of significant places so that ``1.30 - + 1.20`` is :const:`2.50`. The trailing zero is kept to indicate significance. + + 1.20`` is ``2.50``. The trailing zero is kept to indicate significance. This is the customary presentation for monetary applications. For multiplication, the "schoolbook" approach uses all the figures in the - multiplicands. For instance, ``1.3 * 1.2`` gives :const:`1.56` while ``1.30 * - 1.20`` gives :const:`1.5600`. + multiplicands. 
For instance, ``1.3 * 1.2`` gives ``1.56`` while ``1.30 * + 1.20`` gives ``1.5600``. * Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for @@ -88,8 +88,8 @@ context for arithmetic, and signals. A decimal number is immutable. It has a sign, coefficient digits, and an exponent. To preserve significance, the coefficient digits do not truncate trailing zeros. Decimals also include special values such as -:const:`Infinity`, :const:`-Infinity`, and :const:`NaN`. The standard also -differentiates :const:`-0` from :const:`+0`. +``Infinity``, ``-Infinity``, and ``NaN``. The standard also +differentiates ``-0`` from ``+0``. The context for arithmetic is an environment specifying precision, rounding rules, limits on exponents, flags indicating the results of operations, and trap @@ -139,8 +139,8 @@ precision, rounding, or enabled traps:: Decimal instances can be constructed from integers, strings, floats, or tuples. Construction from an integer or a float performs an exact conversion of the value of that integer or float. Decimal numbers include special values such as -:const:`NaN` which stands for "Not a number", positive and negative -:const:`Infinity`, and :const:`-0`:: +``NaN`` which stands for "Not a number", positive and negative +``Infinity``, and ``-0``:: >>> getcontext().prec = 28 >>> Decimal(10) @@ -250,7 +250,7 @@ And some mathematical functions are also available to Decimal: >>> Decimal('10').log10() Decimal('1') -The :meth:`quantize` method rounds a number to a fixed exponent. This method is +The :meth:`~Decimal.quantize` method rounds a number to a fixed exponent. This method is useful for monetary applications that often round results to a fixed number of places: @@ -299,7 +299,7 @@ enabled: Contexts also have signal flags for monitoring exceptional conditions encountered during computations. The flags remain set until explicitly cleared, so it is best to clear the flags before each set of monitored computations by -using the :meth:`clear_flags` method. :: +using the :meth:`~Context.clear_flags` method. :: >>> setcontext(ExtendedContext) >>> getcontext().clear_flags() @@ -309,12 +309,12 @@ using the :meth:`clear_flags` method. :: Context(prec=9, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[Inexact, Rounded], traps=[]) -The *flags* entry shows that the rational approximation to :const:`Pi` was +The *flags* entry shows that the rational approximation to pi was rounded (digits beyond the context precision were thrown away) and that the result is inexact (some of the discarded digits were non-zero). -Individual traps are set using the dictionary in the :attr:`traps` field of a -context: +Individual traps are set using the dictionary in the :attr:`~Context.traps` +attribute of a context: .. doctest:: newcontext @@ -369,7 +369,7 @@ Decimal objects with the fullwidth digits ``'\uff10'`` through ``'\uff19'``. If *value* is a :class:`tuple`, it should have three components, a sign - (:const:`0` for positive or :const:`1` for negative), a :class:`tuple` of + (``0`` for positive or ``1`` for negative), a :class:`tuple` of digits, and an integer exponent. For example, ``Decimal((0, (1, 4, 1, 4), -3))`` returns ``Decimal('1.414')``. @@ -387,7 +387,7 @@ Decimal objects The purpose of the *context* argument is determining what to do if *value* is a malformed string. 
If the context traps :const:`InvalidOperation`, an exception is raised; otherwise, the constructor returns a new Decimal with the value of - :const:`NaN`. + ``NaN``. Once constructed, :class:`Decimal` objects are immutable. @@ -701,7 +701,7 @@ Decimal objects .. method:: max(other, context=None) Like ``max(self, other)`` except that the context rounding rule is applied - before returning and that :const:`NaN` values are either signaled or + before returning and that ``NaN`` values are either signaled or ignored (depending on the context and whether they are signaling or quiet). @@ -713,7 +713,7 @@ Decimal objects .. method:: min(other, context=None) Like ``min(self, other)`` except that the context rounding rule is applied - before returning and that :const:`NaN` values are either signaled or + before returning and that ``NaN`` values are either signaled or ignored (depending on the context and whether they are signaling or quiet). @@ -744,8 +744,8 @@ Decimal objects .. method:: normalize(context=None) Normalize the number by stripping the rightmost trailing zeros and - converting any result equal to :const:`Decimal('0')` to - :const:`Decimal('0e0')`. Used for producing canonical values for attributes + converting any result equal to ``Decimal('0')`` to + ``Decimal('0e0')``. Used for producing canonical values for attributes of an equivalence class. For example, ``Decimal('32.100')`` and ``Decimal('0.321000e+2')`` both normalize to the equivalent value ``Decimal('32.1')``. @@ -790,7 +790,7 @@ Decimal objects the current thread's context is used. An error is returned whenever the resulting exponent is greater than - :attr:`Emax` or less than :attr:`Etiny`. + :attr:`~Context.Emax` or less than :meth:`~Context.Etiny`. .. method:: radix() @@ -830,7 +830,7 @@ Decimal objects .. method:: same_quantum(other, context=None) Test whether self and other have the same exponent or whether both are - :const:`NaN`. + ``NaN``. This operation is unaffected by context and is quiet: no flags are changed and no rounding is performed. As an exception, the C version may raise @@ -892,11 +892,11 @@ Decimal objects Logical operands ^^^^^^^^^^^^^^^^ -The :meth:`logical_and`, :meth:`logical_invert`, :meth:`logical_or`, -and :meth:`logical_xor` methods expect their arguments to be *logical +The :meth:`~Decimal.logical_and`, :meth:`~Decimal.logical_invert`, :meth:`~Decimal.logical_or`, +and :meth:`~Decimal.logical_xor` methods expect their arguments to be *logical operands*. A *logical operand* is a :class:`Decimal` instance whose exponent and sign are both zero, and whose digits are all either -:const:`0` or :const:`1`. +``0`` or ``1``. .. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% @@ -982,7 +982,7 @@ described below. In addition, the module provides three pre-made contexts: exceptions are not raised during computations). Because the traps are disabled, this context is useful for applications that - prefer to have result value of :const:`NaN` or :const:`Infinity` instead of + prefer to have result value of ``NaN`` or ``Infinity`` instead of raising exceptions. This allows an application to complete a run in the presence of conditions that would otherwise halt the program. @@ -1001,8 +1001,8 @@ described below. In addition, the module provides three pre-made contexts: In single threaded environments, it is preferable to not use this context at all. Instead, simply create contexts explicitly as described below. 
- The default values are :attr:`prec`\ =\ :const:`28`, - :attr:`rounding`\ =\ :const:`ROUND_HALF_EVEN`, + The default values are :attr:`Context.prec`\ =\ ``28``, + :attr:`Context.rounding`\ =\ :const:`ROUND_HALF_EVEN`, and enabled traps for :class:`Overflow`, :class:`InvalidOperation`, and :class:`DivisionByZero`. @@ -1016,7 +1016,7 @@ In addition to the three supplied contexts, new contexts can be created with the default values are copied from the :const:`DefaultContext`. If the *flags* field is not specified or is :const:`None`, all flags are cleared. - *prec* is an integer in the range [:const:`1`, :const:`MAX_PREC`] that sets + *prec* is an integer in the range [``1``, :const:`MAX_PREC`] that sets the precision for arithmetic operations in the context. The *rounding* option is one of the constants listed in the section @@ -1026,20 +1026,20 @@ In addition to the three supplied contexts, new contexts can be created with the contexts should only set traps and leave the flags clear. The *Emin* and *Emax* fields are integers specifying the outer limits allowable - for exponents. *Emin* must be in the range [:const:`MIN_EMIN`, :const:`0`], - *Emax* in the range [:const:`0`, :const:`MAX_EMAX`]. + for exponents. *Emin* must be in the range [:const:`MIN_EMIN`, ``0``], + *Emax* in the range [``0``, :const:`MAX_EMAX`]. - The *capitals* field is either :const:`0` or :const:`1` (the default). If set to - :const:`1`, exponents are printed with a capital :const:`E`; otherwise, a - lowercase :const:`e` is used: :const:`Decimal('6.02e+23')`. + The *capitals* field is either ``0`` or ``1`` (the default). If set to + ``1``, exponents are printed with a capital ``E``; otherwise, a + lowercase ``e`` is used: ``Decimal('6.02e+23')``. - The *clamp* field is either :const:`0` (the default) or :const:`1`. - If set to :const:`1`, the exponent ``e`` of a :class:`Decimal` + The *clamp* field is either ``0`` (the default) or ``1``. + If set to ``1``, the exponent ``e`` of a :class:`Decimal` instance representable in this context is strictly limited to the range ``Emin - prec + 1 <= e <= Emax - prec + 1``. If *clamp* is - :const:`0` then a weaker condition holds: the adjusted exponent of - the :class:`Decimal` instance is at most ``Emax``. When *clamp* is - :const:`1`, a large normal number will, where possible, have its + ``0`` then a weaker condition holds: the adjusted exponent of + the :class:`Decimal` instance is at most :attr:`~Context.Emax`. When *clamp* is + ``1``, a large normal number will, where possible, have its exponent reduced and a corresponding number of zeros added to its coefficient, in order to fit the exponent constraints; this preserves the value of the number but loses information about @@ -1048,13 +1048,13 @@ In addition to the three supplied contexts, new contexts can be created with the >>> Context(prec=6, Emax=999, clamp=1).create_decimal('1.23e999') Decimal('1.23000E+999') - A *clamp* value of :const:`1` allows compatibility with the + A *clamp* value of ``1`` allows compatibility with the fixed-width decimal interchange formats specified in IEEE 754. The :class:`Context` class defines several general purpose methods as well as a large number of methods for doing arithmetic directly in a given context. 
In addition, for each of the :class:`Decimal` methods described above (with - the exception of the :meth:`adjusted` and :meth:`as_tuple` methods) there is + the exception of the :meth:`~Decimal.adjusted` and :meth:`~Decimal.as_tuple` methods) there is a corresponding :class:`Context` method. For example, for a :class:`Context` instance ``C`` and :class:`Decimal` instance ``x``, ``C.exp(x)`` is equivalent to ``x.exp(context=C)``. Each :class:`Context` method accepts a @@ -1064,11 +1064,11 @@ In addition to the three supplied contexts, new contexts can be created with the .. method:: clear_flags() - Resets all of the flags to :const:`0`. + Resets all of the flags to ``0``. .. method:: clear_traps() - Resets all of the traps to :const:`0`. + Resets all of the traps to ``0``. .. versionadded:: 3.3 @@ -1483,13 +1483,13 @@ are also included in the pure Python version for compatibility. +---------------------+---------------------+-------------------------------+ | | 32-bit | 64-bit | +=====================+=====================+===============================+ -| .. data:: MAX_PREC | :const:`425000000` | :const:`999999999999999999` | +| .. data:: MAX_PREC | ``425000000`` | ``999999999999999999`` | +---------------------+---------------------+-------------------------------+ -| .. data:: MAX_EMAX | :const:`425000000` | :const:`999999999999999999` | +| .. data:: MAX_EMAX | ``425000000`` | ``999999999999999999`` | +---------------------+---------------------+-------------------------------+ -| .. data:: MIN_EMIN | :const:`-425000000` | :const:`-999999999999999999` | +| .. data:: MIN_EMIN | ``-425000000`` | ``-999999999999999999`` | +---------------------+---------------------+-------------------------------+ -| .. data:: MIN_ETINY | :const:`-849999999` | :const:`-1999999999999999997` | +| .. data:: MIN_ETINY | ``-849999999`` | ``-1999999999999999997`` | +---------------------+---------------------+-------------------------------+ @@ -1514,7 +1514,7 @@ Rounding modes .. data:: ROUND_CEILING - Round towards :const:`Infinity`. + Round towards ``Infinity``. .. data:: ROUND_DOWN @@ -1522,7 +1522,7 @@ Rounding modes .. data:: ROUND_FLOOR - Round towards :const:`-Infinity`. + Round towards ``-Infinity``. .. data:: ROUND_HALF_DOWN @@ -1570,7 +1570,7 @@ condition. Altered an exponent to fit representation constraints. Typically, clamping occurs when an exponent falls outside the context's - :attr:`Emin` and :attr:`Emax` limits. If possible, the exponent is reduced to + :attr:`~Context.Emin` and :attr:`~Context.Emax` limits. If possible, the exponent is reduced to fit by adding zeros to the coefficient. @@ -1584,8 +1584,8 @@ condition. Signals the division of a non-infinite number by zero. Can occur with division, modulo division, or when raising a number to a negative - power. If this signal is not trapped, returns :const:`Infinity` or - :const:`-Infinity` with the sign determined by the inputs to the calculation. + power. If this signal is not trapped, returns ``Infinity`` or + ``-Infinity`` with the sign determined by the inputs to the calculation. .. class:: Inexact @@ -1602,7 +1602,7 @@ condition. An invalid operation was performed. Indicates that an operation was requested that does not make sense. If not - trapped, returns :const:`NaN`. Possible causes include:: + trapped, returns ``NaN``. Possible causes include:: Infinity - Infinity 0 * Infinity @@ -1619,10 +1619,10 @@ condition. Numerical overflow. 
- Indicates the exponent is larger than :attr:`Emax` after rounding has + Indicates the exponent is larger than :attr:`Context.Emax` after rounding has occurred. If not trapped, the result depends on the rounding mode, either pulling inward to the largest representable finite number or rounding outward - to :const:`Infinity`. In either case, :class:`Inexact` and :class:`Rounded` + to ``Infinity``. In either case, :class:`Inexact` and :class:`Rounded` are also signaled. @@ -1631,14 +1631,14 @@ condition. Rounding occurred though possibly no information was lost. Signaled whenever rounding discards digits; even if those digits are zero - (such as rounding :const:`5.00` to :const:`5.0`). If not trapped, returns + (such as rounding ``5.00`` to ``5.0``). If not trapped, returns the result unchanged. This signal is used to detect loss of significant digits. .. class:: Subnormal - Exponent was lower than :attr:`Emin` prior to rounding. + Exponent was lower than :attr:`~Context.Emin` prior to rounding. Occurs when an operation result is subnormal (the exponent is too small). If not trapped, returns the result unchanged. @@ -1696,7 +1696,7 @@ Mitigating round-off error with increased precision ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The use of decimal floating point eliminates decimal representation error -(making it possible to represent :const:`0.1` exactly); however, some operations +(making it possible to represent ``0.1`` exactly); however, some operations can still incur round-off error when non-zero digits exceed the fixed precision. The effects of round-off error can be amplified by the addition or subtraction @@ -1746,8 +1746,8 @@ Special values ^^^^^^^^^^^^^^ The number system for the :mod:`decimal` module provides special values -including :const:`NaN`, :const:`sNaN`, :const:`-Infinity`, :const:`Infinity`, -and two zeros, :const:`+0` and :const:`-0`. +including ``NaN``, ``sNaN``, ``-Infinity``, ``Infinity``, +and two zeros, ``+0`` and ``-0``. Infinities can be constructed directly with: ``Decimal('Infinity')``. Also, they can arise from dividing by zero when the :exc:`DivisionByZero` signal is @@ -1758,30 +1758,30 @@ The infinities are signed (affine) and can be used in arithmetic operations where they get treated as very large, indeterminate numbers. For instance, adding a constant to infinity gives another infinite result. -Some operations are indeterminate and return :const:`NaN`, or if the +Some operations are indeterminate and return ``NaN``, or if the :exc:`InvalidOperation` signal is trapped, raise an exception. For example, -``0/0`` returns :const:`NaN` which means "not a number". This variety of -:const:`NaN` is quiet and, once created, will flow through other computations -always resulting in another :const:`NaN`. This behavior can be useful for a +``0/0`` returns ``NaN`` which means "not a number". This variety of +``NaN`` is quiet and, once created, will flow through other computations +always resulting in another ``NaN``. This behavior can be useful for a series of computations that occasionally have missing inputs --- it allows the calculation to proceed while flagging specific results as invalid. -A variant is :const:`sNaN` which signals rather than remaining quiet after every +A variant is ``sNaN`` which signals rather than remaining quiet after every operation. This is a useful return value when an invalid result needs to interrupt a calculation for special handling. 
The behavior of Python's comparison operators can be a little surprising where a -:const:`NaN` is involved. A test for equality where one of the operands is a -quiet or signaling :const:`NaN` always returns :const:`False` (even when doing +``NaN`` is involved. A test for equality where one of the operands is a +quiet or signaling ``NaN`` always returns :const:`False` (even when doing ``Decimal('NaN')==Decimal('NaN')``), while a test for inequality always returns :const:`True`. An attempt to compare two Decimals using any of the ``<``, ``<=``, ``>`` or ``>=`` operators will raise the :exc:`InvalidOperation` signal -if either operand is a :const:`NaN`, and return :const:`False` if this signal is +if either operand is a ``NaN``, and return :const:`False` if this signal is not trapped. Note that the General Decimal Arithmetic specification does not specify the behavior of direct comparisons; these rules for comparisons -involving a :const:`NaN` were taken from the IEEE 854 standard (see Table 3 in -section 5.7). To ensure strict standards-compliance, use the :meth:`compare` -and :meth:`compare-signal` methods instead. +involving a ``NaN`` were taken from the IEEE 854 standard (see Table 3 in +section 5.7). To ensure strict standards-compliance, use the :meth:`~Decimal.compare` +and :meth:`~Decimal.compare_signal` methods instead. The signed zeros can result from calculations that underflow. They keep the sign that would have resulted if the calculation had been carried out to greater @@ -2013,7 +2013,7 @@ Q. In a fixed-point application with two decimal places, some inputs have many places and need to be rounded. Others are not supposed to have excess digits and need to be validated. What methods should be used? -A. The :meth:`quantize` method rounds to a fixed number of decimal places. If +A. The :meth:`~Decimal.quantize` method rounds to a fixed number of decimal places. If the :const:`Inexact` trap is set, it is also useful for validation: >>> TWOPLACES = Decimal(10) ** -2 # same as Decimal('0.01') @@ -2037,7 +2037,7 @@ throughout an application? A. Some operations like addition, subtraction, and multiplication by an integer will automatically preserve fixed point. Others operations, like division and non-integer multiplication, will change the number of decimal places and need to -be followed-up with a :meth:`quantize` step: +be followed-up with a :meth:`~Decimal.quantize` step: >>> a = Decimal('102.72') # Initial fixed-point values >>> b = Decimal('3.17') @@ -2053,7 +2053,7 @@ be followed-up with a :meth:`quantize` step: Decimal('0.03') In developing fixed-point applications, it is convenient to define functions -to handle the :meth:`quantize` step: +to handle the :meth:`~Decimal.quantize` step: >>> def mul(x, y, fp=TWOPLACES): ... return (x * y).quantize(fp) @@ -2065,12 +2065,12 @@ to handle the :meth:`quantize` step: >>> div(b, a) Decimal('0.03') -Q. There are many ways to express the same value. The numbers :const:`200`, -:const:`200.000`, :const:`2E2`, and :const:`.02E+4` all have the same value at +Q. There are many ways to express the same value. The numbers ``200``, +``200.000``, ``2E2``, and ``.02E+4`` all have the same value at various precisions. Is there a way to transform them to a single recognizable canonical value? -A. The :meth:`normalize` method maps all equivalent values to a single +A. 
The :meth:`~Decimal.normalize` method maps all equivalent values to a single representative: >>> values = map(Decimal, '200 200.000 2E2 .02E+4'.split()) @@ -2082,7 +2082,7 @@ to get a non-exponential representation? A. For some values, exponential notation is the only way to express the number of significant places in the coefficient. For example, expressing -:const:`5.0E+3` as :const:`5000` keeps the value constant but cannot show the +``5.0E+3`` as ``5000`` keeps the value constant but cannot show the original's two-place significance. If an application does not care about tracking significance, it is easy to @@ -2158,12 +2158,12 @@ for medium-sized numbers and the `Number Theoretic Transform `_ for very large numbers. -The context must be adapted for exact arbitrary precision arithmetic. :attr:`Emin` -and :attr:`Emax` should always be set to the maximum values, :attr:`clamp` -should always be 0 (the default). Setting :attr:`prec` requires some care. +The context must be adapted for exact arbitrary precision arithmetic. :attr:`~Context.Emin` +and :attr:`~Context.Emax` should always be set to the maximum values, :attr:`~Context.clamp` +should always be 0 (the default). Setting :attr:`~Context.prec` requires some care. The easiest approach for trying out bignum arithmetic is to use the maximum -value for :attr:`prec` as well [#]_:: +value for :attr:`~Context.prec` as well [#]_:: >>> setcontext(Context(prec=MAX_PREC, Emax=MAX_EMAX, Emin=MIN_EMIN)) >>> x = Decimal(2) ** 256 @@ -2180,7 +2180,7 @@ the available memory will be insufficient:: MemoryError On systems with overallocation (e.g. Linux), a more sophisticated approach is to -adjust :attr:`prec` to the amount of available RAM. Suppose that you have 8GB of +adjust :attr:`~Context.prec` to the amount of available RAM. 
Suppose that you have 8GB of RAM and expect 10 simultaneous operands using a maximum of 500MB each:: >>> import sys From webhook-mailer at python.org Sat Feb 25 07:00:19 2023 From: webhook-mailer at python.org (pradyunsg) Date: Sat, 25 Feb 2023 12:00:19 -0000 Subject: [Python-checkins] gh-101997: Update bundled pip version to 23.0.1 (#101998) Message-ID: https://github.com/python/cpython/commit/89d9ff0f48c51a85920c7372a7df4a2204e32ea5 commit: 89d9ff0f48c51a85920c7372a7df4a2204e32ea5 branch: main author: Pradyun Gedam committer: pradyunsg date: 2023-02-25T12:00:12Z summary: gh-101997: Update bundled pip version to 23.0.1 (#101998) files: A Lib/ensurepip/_bundled/pip-23.0.1-py3-none-any.whl A Misc/NEWS.d/next/Library/2023-02-17-18-44-27.gh-issue-101997.A6_blD.rst D Lib/ensurepip/_bundled/pip-23.0-py3-none-any.whl M Lib/ensurepip/__init__.py diff --git a/Lib/ensurepip/__init__.py b/Lib/ensurepip/__init__.py index 052b7bca28dd..00e77749e25e 100644 --- a/Lib/ensurepip/__init__.py +++ b/Lib/ensurepip/__init__.py @@ -11,7 +11,7 @@ __all__ = ["version", "bootstrap"] _PACKAGE_NAMES = ('setuptools', 'pip') _SETUPTOOLS_VERSION = "65.5.0" -_PIP_VERSION = "23.0" +_PIP_VERSION = "23.0.1" _PROJECTS = [ ("setuptools", _SETUPTOOLS_VERSION, "py3"), ("pip", _PIP_VERSION, "py3"), diff --git a/Lib/ensurepip/_bundled/pip-23.0-py3-none-any.whl b/Lib/ensurepip/_bundled/pip-23.0.1-py3-none-any.whl similarity index 93% rename from Lib/ensurepip/_bundled/pip-23.0-py3-none-any.whl rename to Lib/ensurepip/_bundled/pip-23.0.1-py3-none-any.whl index bb9aebf474cf..a855dc40e863 100644 Binary files a/Lib/ensurepip/_bundled/pip-23.0-py3-none-any.whl and b/Lib/ensurepip/_bundled/pip-23.0.1-py3-none-any.whl differ diff --git a/Misc/NEWS.d/next/Library/2023-02-17-18-44-27.gh-issue-101997.A6_blD.rst b/Misc/NEWS.d/next/Library/2023-02-17-18-44-27.gh-issue-101997.A6_blD.rst new file mode 100644 index 000000000000..f9dfd46d1ed4 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-17-18-44-27.gh-issue-101997.A6_blD.rst @@ -0,0 +1 @@ +Upgrade pip wheel bundled with ensurepip (pip 23.0.1) From webhook-mailer at python.org Sat Feb 25 08:21:38 2023 From: webhook-mailer at python.org (JelleZijlstra) Date: Sat, 25 Feb 2023 13:21:38 -0000 Subject: [Python-checkins] asyncio docs: Fix dangling hyphen (#102227) Message-ID: https://github.com/python/cpython/commit/207e1c5cae11108213dff5ff07443ee4cfa0d2ea commit: 207e1c5cae11108213dff5ff07443ee4cfa0d2ea branch: main author: Jelle Zijlstra committer: JelleZijlstra date: 2023-02-25T05:21:32-08:00 summary: asyncio docs: Fix dangling hyphen (#102227) Currently this gets rendered with a dangling hyphen. files: M Doc/library/asyncio-eventloop.rst diff --git a/Doc/library/asyncio-eventloop.rst b/Doc/library/asyncio-eventloop.rst index f86e78428802..5138afc2bbe4 100644 --- a/Doc/library/asyncio-eventloop.rst +++ b/Doc/library/asyncio-eventloop.rst @@ -524,8 +524,8 @@ Opening network connections When a server's IPv4 path and protocol are working, but the server's IPv6 path and protocol are not working, a dual-stack client application experiences significant connection delay compared to an - IPv4-only client. This is undesirable because it causes the dual- - stack client to have a worse user experience. This document + IPv4-only client. This is undesirable because it causes the + dual-stack client to have a worse user experience. This document specifies requirements for algorithms that reduce this user-visible delay and provides an algorithm. 
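As an aside on the decimal diffs earlier in this digest (not part of any commit above): their context stops at ``>>> import sys``, right where the "8GB of RAM, 10 simultaneous operands of 500MB each" sizing example begins. A rough sketch of how such a precision budget might be computed follows; the figure of 19 decimal digits per 8-byte word (9 digits per 4-byte word on 32-bit builds) is an assumption about the C implementation, not something stated in the diffs:

    import sys
    from decimal import MAX_EMAX, MAX_PREC, MIN_EMIN, Context, setcontext

    # Assumption: the C implementation stores roughly 19 decimal digits per
    # 8-byte word on 64-bit builds (9 digits per 4-byte word on 32-bit builds).
    digits_per_word, word_size = (19, 8) if sys.maxsize > 2**32 else (9, 4)

    # A budget of 500MB per operand gives the maximum digits for one operand;
    # clamp to MAX_PREC so the Context constructor always accepts the value.
    maxdigits = min(digits_per_word * ((500 * 1024**2) // word_size), MAX_PREC)

    setcontext(Context(prec=maxdigits, Emax=MAX_EMAX, Emin=MIN_EMIN))

Ten such operands stay within the 8GB figure used in the docs, while Emax and Emin remain at their extremes as the surrounding text recommends.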
From webhook-mailer at python.org Sat Feb 25 08:28:22 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 25 Feb 2023 13:28:22 -0000 Subject: [Python-checkins] asyncio docs: Fix dangling hyphen (GH-102227) Message-ID: https://github.com/python/cpython/commit/ac631697928505708e41b181c5692fc8c3db7e90 commit: ac631697928505708e41b181c5692fc8c3db7e90 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-25T05:28:16-08:00 summary: asyncio docs: Fix dangling hyphen (GH-102227) Currently this gets rendered with a dangling hyphen. (cherry picked from commit 207e1c5cae11108213dff5ff07443ee4cfa0d2ea) Co-authored-by: Jelle Zijlstra files: M Doc/library/asyncio-eventloop.rst diff --git a/Doc/library/asyncio-eventloop.rst b/Doc/library/asyncio-eventloop.rst index 759e30a73f27..f68fa013d1b8 100644 --- a/Doc/library/asyncio-eventloop.rst +++ b/Doc/library/asyncio-eventloop.rst @@ -498,8 +498,8 @@ Opening network connections When a server's IPv4 path and protocol are working, but the server's IPv6 path and protocol are not working, a dual-stack client application experiences significant connection delay compared to an - IPv4-only client. This is undesirable because it causes the dual- - stack client to have a worse user experience. This document + IPv4-only client. This is undesirable because it causes the + dual-stack client to have a worse user experience. This document specifies requirements for algorithms that reduce this user-visible delay and provides an algorithm. From webhook-mailer at python.org Sat Feb 25 08:29:42 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 25 Feb 2023 13:29:42 -0000 Subject: [Python-checkins] asyncio docs: Fix dangling hyphen (GH-102227) Message-ID: https://github.com/python/cpython/commit/5775863e9d5995edba34ccfae6d1a5bed3ab1eac commit: 5775863e9d5995edba34ccfae6d1a5bed3ab1eac branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-25T05:29:35-08:00 summary: asyncio docs: Fix dangling hyphen (GH-102227) Currently this gets rendered with a dangling hyphen. (cherry picked from commit 207e1c5cae11108213dff5ff07443ee4cfa0d2ea) Co-authored-by: Jelle Zijlstra files: M Doc/library/asyncio-eventloop.rst diff --git a/Doc/library/asyncio-eventloop.rst b/Doc/library/asyncio-eventloop.rst index 020d8a0ca8a8..fa1b2b536888 100644 --- a/Doc/library/asyncio-eventloop.rst +++ b/Doc/library/asyncio-eventloop.rst @@ -510,8 +510,8 @@ Opening network connections When a server's IPv4 path and protocol are working, but the server's IPv6 path and protocol are not working, a dual-stack client application experiences significant connection delay compared to an - IPv4-only client. This is undesirable because it causes the dual- - stack client to have a worse user experience. This document + IPv4-only client. This is undesirable because it causes the + dual-stack client to have a worse user experience. This document specifies requirements for algorithms that reduce this user-visible delay and provides an algorithm. 
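The passage quoted in the three asyncio diffs above explains why a dual-stack client can sit through a long delay when the IPv6 path is broken. As an aside (not part of the commits), asyncio's Happy Eyeballs support addresses exactly this situation; a minimal sketch, assuming a host that resolves to both address families, with 0.25 seconds as an illustrative delay rather than a value taken from the commits:

    import asyncio

    async def main() -> None:
        # happy_eyeballs_delay turns on RFC 8305-style staggered attempts: if
        # the first resolved address has not connected within 0.25 seconds,
        # asyncio starts a parallel attempt on the next address instead of
        # waiting for the first one to time out.
        reader, writer = await asyncio.open_connection(
            "example.org", 80, happy_eyeballs_delay=0.25
        )
        print("connected via", writer.get_extra_info("peername"))
        writer.close()
        await writer.wait_closed()

    asyncio.run(main())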
From webhook-mailer at python.org Sat Feb 25 10:30:11 2023 From: webhook-mailer at python.org (kumaraditya303) Date: Sat, 25 Feb 2023 15:30:11 -0000 Subject: [Python-checkins] [3.11] GH-102126: fix deadlock at shutdown when clearing thread state… (#102234) Message-ID: https://github.com/python/cpython/commit/026faf20cc9d1d5913ff7c01a93d8934594d7fec commit: 026faf20cc9d1d5913ff7c01a93d8934594d7fec branch: 3.11 author: Kumar Aditya <59607654+kumaraditya303 at users.noreply.github.com> committer: kumaraditya303 <59607654+kumaraditya303 at users.noreply.github.com> date: 2023-02-25T21:00:05+05:30 summary: [3.11] GH-102126: fix deadlock at shutdown when clearing thread state… (#102234) [3.11] GH-102126: fix deadlock at shutdown when clearing thread states (GH-102222) (cherry picked from commit 5f11478ce7fda826d399530af4c5ca96c592f144) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-24-17-59-39.gh-issue-102126.HTT8Vc.rst M Python/pystate.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-24-17-59-39.gh-issue-102126.HTT8Vc.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-24-17-59-39.gh-issue-102126.HTT8Vc.rst new file mode 100644 index 000000000000..68c43688c3df --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-24-17-59-39.gh-issue-102126.HTT8Vc.rst @@ -0,0 +1 @@ +Fix deadlock at shutdown when clearing thread states if any finalizer tries to acquire the runtime head lock. Patch by Kumar Aditya. diff --git a/Python/pystate.c b/Python/pystate.c index c0d161f894c9..dfca3f5fd713 100644 --- a/Python/pystate.c +++ b/Python/pystate.c @@ -396,11 +396,19 @@ interpreter_clear(PyInterpreterState *interp, PyThreadState *tstate) _PyErr_Clear(tstate); } + // Clear the current/main thread state last. HEAD_LOCK(runtime); - for (PyThreadState *p = interp->threads.head; p != NULL; p = p->next) { + PyThreadState *p = interp->threads.head; + HEAD_UNLOCK(runtime); + while (p != NULL) { + // See https://github.com/python/cpython/issues/102126 + // Must be called without HEAD_LOCK held as it can deadlock + // if any finalizer tries to acquire that lock. PyThreadState_Clear(p); + HEAD_LOCK(runtime); + p = p->next; + HEAD_UNLOCK(runtime); } - HEAD_UNLOCK(runtime); Py_CLEAR(interp->audit_hooks); From webhook-mailer at python.org Sat Feb 25 11:15:55 2023 From: webhook-mailer at python.org (jaraco) Date: Sat, 25 Feb 2023 16:15:55 -0000 Subject: [Python-checkins] gh-102209: Sync with zipp 3.15 moving complexity tests into dedicated module (#102232) Message-ID: https://github.com/python/cpython/commit/a35fd38b57d3eb05074ca36f3d57e1993c44ddc9 commit: a35fd38b57d3eb05074ca36f3d57e1993c44ddc9 branch: main author: Jason R. Coombs committer: jaraco date: 2023-02-25T11:15:48-05:00 summary: gh-102209: Sync with zipp 3.15 moving complexity tests into dedicated module (#102232) Sync with jaraco/zipp at 757a4e1a.
files: A Lib/test/test_zipfile/_support.py A Lib/test/test_zipfile/test_complexity.py D Lib/test/test_zipfile/_context.py D Lib/test/test_zipfile/_func_timeout_compat.py M Lib/test/test_zipfile/_itertools.py M Lib/test/test_zipfile/test_path.py M Lib/zipfile/_path.py diff --git a/Lib/test/test_zipfile/_context.py b/Lib/test/test_zipfile/_context.py deleted file mode 100644 index 348798aa08d4..000000000000 --- a/Lib/test/test_zipfile/_context.py +++ /dev/null @@ -1,30 +0,0 @@ -import contextlib -import time - - -class DeadlineExceeded(Exception): - pass - - -class TimedContext(contextlib.ContextDecorator): - """ - A context that will raise DeadlineExceeded if the - max duration is reached during the execution. - - >>> TimedContext(1)(time.sleep)(.1) - >>> TimedContext(0)(time.sleep)(.1) - Traceback (most recent call last): - ... - tests._context.DeadlineExceeded: (..., 0) - """ - - def __init__(self, max_duration: int): - self.max_duration = max_duration - - def __enter__(self): - self.start = time.monotonic() - - def __exit__(self, *err): - duration = time.monotonic() - self.start - if duration > self.max_duration: - raise DeadlineExceeded(duration, self.max_duration) diff --git a/Lib/test/test_zipfile/_func_timeout_compat.py b/Lib/test/test_zipfile/_func_timeout_compat.py deleted file mode 100644 index b1f2b2698a85..000000000000 --- a/Lib/test/test_zipfile/_func_timeout_compat.py +++ /dev/null @@ -1,8 +0,0 @@ -try: - from func_timeout import func_set_timeout as set_timeout -except ImportError: # pragma: no cover - # provide a fallback that doesn't actually time out - from ._context import TimedContext as set_timeout - - -__all__ = ['set_timeout'] diff --git a/Lib/test/test_zipfile/_itertools.py b/Lib/test/test_zipfile/_itertools.py index 74f01fe5ba3d..f735dd217330 100644 --- a/Lib/test/test_zipfile/_itertools.py +++ b/Lib/test/test_zipfile/_itertools.py @@ -1,4 +1,6 @@ import itertools +from collections import deque +from itertools import islice # from jaraco.itertools 6.3.0 @@ -39,3 +41,39 @@ def always_iterable(obj, base_type=(str, bytes)): return iter(obj) except TypeError: return iter((obj,)) + + +# from more_itertools v9.0.0 +def consume(iterator, n=None): + """Advance *iterable* by *n* steps. If *n* is ``None``, consume it + entirely. + Efficiently exhausts an iterator without returning values. Defaults to + consuming the whole iterator, but an optional second argument may be + provided to limit consumption. + >>> i = (x for x in range(10)) + >>> next(i) + 0 + >>> consume(i, 3) + >>> next(i) + 4 + >>> consume(i) + >>> next(i) + Traceback (most recent call last): + File "", line 1, in + StopIteration + If the iterator has fewer items remaining than the provided limit, the + whole iterator will be consumed. + >>> i = (x for x in range(3)) + >>> consume(i, 5) + >>> next(i) + Traceback (most recent call last): + File "", line 1, in + StopIteration + """ + # Use functions that consume iterators at C speed. 
+ if n is None: + # feed the entire iterator into a zero-length deque + deque(iterator, maxlen=0) + else: + # advance to the empty slice starting at position n + next(islice(iterator, n, n), None) diff --git a/Lib/test/test_zipfile/_support.py b/Lib/test/test_zipfile/_support.py new file mode 100644 index 000000000000..1afdf3b3a773 --- /dev/null +++ b/Lib/test/test_zipfile/_support.py @@ -0,0 +1,9 @@ +import importlib +import unittest + + +def import_or_skip(name): + try: + return importlib.import_module(name) + except ImportError: # pragma: no cover + raise unittest.SkipTest(f'Unable to import {name}') diff --git a/Lib/test/test_zipfile/test_complexity.py b/Lib/test/test_zipfile/test_complexity.py new file mode 100644 index 000000000000..3432dc39e56c --- /dev/null +++ b/Lib/test/test_zipfile/test_complexity.py @@ -0,0 +1,24 @@ +import unittest +import string +import zipfile + +from ._functools import compose +from ._itertools import consume + +from ._support import import_or_skip + + +big_o = import_or_skip('big_o') + + +class TestComplexity(unittest.TestCase): + def test_implied_dirs_performance(self): + best, others = big_o.big_o( + compose(consume, zipfile.CompleteDirs._implied_dirs), + lambda size: [ + '/'.join(string.ascii_lowercase + str(n)) for n in range(size) + ], + max_n=1000, + min_n=1, + ) + assert best <= big_o.complexities.Linear diff --git a/Lib/test/test_zipfile/test_path.py b/Lib/test/test_zipfile/test_path.py index 53cbef15a17d..aff91e539958 100644 --- a/Lib/test/test_zipfile/test_path.py +++ b/Lib/test/test_zipfile/test_path.py @@ -3,7 +3,6 @@ import contextlib import pathlib import pickle -import string import sys import unittest import zipfile @@ -12,7 +11,6 @@ from ._itertools import Counter from ._test_params import parameterize, Invoked -from ._func_timeout_compat import set_timeout from test.support.os_helper import temp_dir @@ -22,9 +20,6 @@ class itertools: Counter = Counter -consume = tuple - - def add_dirs(zf): """ Given a writable zip file zf, inject directory entries for @@ -330,12 +325,6 @@ def test_joinpath_constant_time(self): # Check the file iterated all items assert entries.count == self.HUGE_ZIPFILE_NUM_ENTRIES - # timeout disabled due to #102209 - # @set_timeout(3) - def test_implied_dirs_performance(self): - data = ['/'.join(string.ascii_lowercase + str(n)) for n in range(10000)] - zipfile.CompleteDirs._implied_dirs(data) - @pass_alpharep def test_read_does_not_close(self, alpharep): alpharep = self.zipfile_ondisk(alpharep) @@ -513,7 +502,7 @@ def test_pickle(self, alpharep, path_type, subpath): saved_1 = pickle.dumps(zipfile.Path(zipfile_ondisk, at=subpath)) restored_1 = pickle.loads(saved_1) first, *rest = restored_1.iterdir() - assert first.read_text().startswith('content of ') + assert first.read_text(encoding='utf-8').startswith('content of ') @pass_alpharep def test_extract_orig_with_implied_dirs(self, alpharep): @@ -525,3 +514,12 @@ def test_extract_orig_with_implied_dirs(self, alpharep): # wrap the zipfile for its side effect zipfile.Path(zf) zf.extractall(source_path.parent) + + @pass_alpharep + def test_getinfo_missing(self, alpharep): + """ + Validate behavior of getinfo on original zipfile after wrapping. 
+ """ + zipfile.Path(alpharep) + with self.assertRaises(KeyError): + alpharep.getinfo('does-not-exist') diff --git a/Lib/zipfile/_path.py b/Lib/zipfile/_path.py index c2c804e96d6b..fd49a3ea91db 100644 --- a/Lib/zipfile/_path.py +++ b/Lib/zipfile/_path.py @@ -86,6 +86,11 @@ class CompleteDirs(InitializedState, zipfile.ZipFile): """ A ZipFile subclass that ensures that implied directories are always included in the namelist. + + >>> list(CompleteDirs._implied_dirs(['foo/bar.txt', 'foo/bar/baz.txt'])) + ['foo/', 'foo/bar/'] + >>> list(CompleteDirs._implied_dirs(['foo/bar.txt', 'foo/bar/baz.txt', 'foo/bar/'])) + ['foo/'] """ @staticmethod @@ -215,7 +220,7 @@ class Path: Read text: - >>> c.read_text() + >>> c.read_text(encoding='utf-8') 'content of c' existence: From webhook-mailer at python.org Sat Feb 25 15:50:35 2023 From: webhook-mailer at python.org (terryjreedy) Date: Sat, 25 Feb 2023 20:50:35 -0000 Subject: [Python-checkins] gh-102252: Improve coverage of test_bool.py (#102253) Message-ID: https://github.com/python/cpython/commit/41970436373f4be813fe8f5a07b6da04d5f4c80e commit: 41970436373f4be813fe8f5a07b6da04d5f4c80e branch: main author: Eclips4 <80244920+Eclips4 at users.noreply.github.com> committer: terryjreedy date: 2023-02-25T15:50:24-05:00 summary: gh-102252: Improve coverage of test_bool.py (#102253) Add tests for conversion from bool to complex. files: M Lib/test/test_bool.py diff --git a/Lib/test/test_bool.py b/Lib/test/test_bool.py index f46f21da8da3..b711ffb9a3ec 100644 --- a/Lib/test/test_bool.py +++ b/Lib/test/test_bool.py @@ -40,6 +40,12 @@ def test_float(self): self.assertEqual(float(True), 1.0) self.assertIsNot(float(True), True) + def test_complex(self): + self.assertEqual(complex(False), 0j) + self.assertEqual(complex(False), False) + self.assertEqual(complex(True), 1+0j) + self.assertEqual(complex(True), True) + def test_math(self): self.assertEqual(+False, 0) self.assertIsNot(+False, False) From webhook-mailer at python.org Sat Feb 25 16:09:46 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 25 Feb 2023 21:09:46 -0000 Subject: [Python-checkins] gh-102252: Improve coverage of test_bool.py (GH-102253) Message-ID: https://github.com/python/cpython/commit/f894995eb606a2cf831be56383720adc23ebcb80 commit: f894995eb606a2cf831be56383720adc23ebcb80 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-25T13:09:39-08:00 summary: gh-102252: Improve coverage of test_bool.py (GH-102253) Add tests for conversion from bool to complex. 
(cherry picked from commit 41970436373f4be813fe8f5a07b6da04d5f4c80e) Co-authored-by: Eclips4 <80244920+Eclips4 at users.noreply.github.com> files: M Lib/test/test_bool.py diff --git a/Lib/test/test_bool.py b/Lib/test/test_bool.py index 4b32aad2419f..5215f775c615 100644 --- a/Lib/test/test_bool.py +++ b/Lib/test/test_bool.py @@ -40,6 +40,12 @@ def test_float(self): self.assertEqual(float(True), 1.0) self.assertIsNot(float(True), True) + def test_complex(self): + self.assertEqual(complex(False), 0j) + self.assertEqual(complex(False), False) + self.assertEqual(complex(True), 1+0j) + self.assertEqual(complex(True), True) + def test_math(self): self.assertEqual(+False, 0) self.assertIsNot(+False, False) From webhook-mailer at python.org Sat Feb 25 16:15:51 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 25 Feb 2023 21:15:51 -0000 Subject: [Python-checkins] gh-102252: Improve coverage of test_bool.py (GH-102253) Message-ID: https://github.com/python/cpython/commit/3fe74199fda1139db6f32f03b3653d0eba805f9b commit: 3fe74199fda1139db6f32f03b3653d0eba805f9b branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-25T13:15:45-08:00 summary: gh-102252: Improve coverage of test_bool.py (GH-102253) Add tests for conversion from bool to complex. (cherry picked from commit 41970436373f4be813fe8f5a07b6da04d5f4c80e) Co-authored-by: Eclips4 <80244920+Eclips4 at users.noreply.github.com> files: M Lib/test/test_bool.py diff --git a/Lib/test/test_bool.py b/Lib/test/test_bool.py index f46f21da8da3..b711ffb9a3ec 100644 --- a/Lib/test/test_bool.py +++ b/Lib/test/test_bool.py @@ -40,6 +40,12 @@ def test_float(self): self.assertEqual(float(True), 1.0) self.assertIsNot(float(True), True) + def test_complex(self): + self.assertEqual(complex(False), 0j) + self.assertEqual(complex(False), False) + self.assertEqual(complex(True), 1+0j) + self.assertEqual(complex(True), True) + def test_math(self): self.assertEqual(+False, 0) self.assertIsNot(+False, False) From webhook-mailer at python.org Sat Feb 25 16:48:07 2023 From: webhook-mailer at python.org (AlexWaygood) Date: Sat, 25 Feb 2023 21:48:07 -0000 Subject: [Python-checkins] gh-101100: Fix sphinx warnings in `typing` module docs (#102260) Message-ID: https://github.com/python/cpython/commit/a498de4c0ef9e264cab3320afbc4d38df6394800 commit: a498de4c0ef9e264cab3320afbc4d38df6394800 branch: main author: Nikita Sobolev committer: AlexWaygood date: 2023-02-25T21:48:00Z summary: gh-101100: Fix sphinx warnings in `typing` module docs (#102260) files: M Doc/library/typing.rst diff --git a/Doc/library/typing.rst b/Doc/library/typing.rst index 169f7196a74e..bbbf6920ddec 100644 --- a/Doc/library/typing.rst +++ b/Doc/library/typing.rst @@ -1588,7 +1588,7 @@ These are not used in annotations. They are building blocks for creating generic methods, not their type signatures. For example, :class:`ssl.SSLObject` is a class, therefore it passes an :func:`issubclass` check against :data:`Callable`. However, the - :meth:`ssl.SSLObject.__init__` method exists only to raise a + ``ssl.SSLObject.__init__`` method exists only to raise a :exc:`TypeError` with a more informative message, therefore making it impossible to call (instantiate) :class:`ssl.SSLObject`. 
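The typing.rst passage reworked in the commit above describes real runtime behaviour of ssl.SSLObject: the class exists, but its ``__init__`` only raises TypeError, so instances must come from ``SSLContext.wrap_bio()``. A minimal editorial sketch of that point (not part of the patch; the exact error message may vary between Python versions):

    import ssl

    # SSLObject deliberately has no public constructor; calling the class
    # directly raises TypeError with a pointer to SSLContext.wrap_bio().
    try:
        ssl.SSLObject()
    except TypeError as exc:
        print(f"TypeError: {exc}")

The markup change itself swaps the :meth: cross-reference for an inline literal, presumably because ``ssl.SSLObject.__init__`` has no documented target for Sphinx to resolve, which is what produced the warning the commit title refers to.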
From webhook-mailer at python.org Sat Feb 25 16:54:37 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 25 Feb 2023 21:54:37 -0000 Subject: [Python-checkins] gh-101100: Fix sphinx warnings in `typing` module docs (GH-102260) Message-ID: https://github.com/python/cpython/commit/d22a2cd422bde897531a50ce5bb9cfaf7a3bfc4e commit: d22a2cd422bde897531a50ce5bb9cfaf7a3bfc4e branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-25T13:54:31-08:00 summary: gh-101100: Fix sphinx warnings in `typing` module docs (GH-102260) (cherry picked from commit a498de4c0ef9e264cab3320afbc4d38df6394800) Co-authored-by: Nikita Sobolev files: M Doc/library/typing.rst diff --git a/Doc/library/typing.rst b/Doc/library/typing.rst index 276133653376..5deaa7adbe1e 100644 --- a/Doc/library/typing.rst +++ b/Doc/library/typing.rst @@ -1356,7 +1356,7 @@ These are not used in annotations. They are building blocks for creating generic methods, not their type signatures. For example, :class:`ssl.SSLObject` is a class, therefore it passes an :func:`issubclass` check against :data:`Callable`. However, the - :meth:`ssl.SSLObject.__init__` method exists only to raise a + ``ssl.SSLObject.__init__`` method exists only to raise a :exc:`TypeError` with a more informative message, therefore making it impossible to call (instantiate) :class:`ssl.SSLObject`. From webhook-mailer at python.org Sat Feb 25 16:55:55 2023 From: webhook-mailer at python.org (miss-islington) Date: Sat, 25 Feb 2023 21:55:55 -0000 Subject: [Python-checkins] gh-101100: Fix sphinx warnings in `typing` module docs (GH-102260) Message-ID: https://github.com/python/cpython/commit/c2f42f177816ebfe6e351a76cb275bcfd4ad3329 commit: c2f42f177816ebfe6e351a76cb275bcfd4ad3329 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-25T13:55:49-08:00 summary: gh-101100: Fix sphinx warnings in `typing` module docs (GH-102260) (cherry picked from commit a498de4c0ef9e264cab3320afbc4d38df6394800) Co-authored-by: Nikita Sobolev files: M Doc/library/typing.rst diff --git a/Doc/library/typing.rst b/Doc/library/typing.rst index b08977ef56be..02c3e0336d34 100644 --- a/Doc/library/typing.rst +++ b/Doc/library/typing.rst @@ -1588,7 +1588,7 @@ These are not used in annotations. They are building blocks for creating generic methods, not their type signatures. For example, :class:`ssl.SSLObject` is a class, therefore it passes an :func:`issubclass` check against :data:`Callable`. However, the - :meth:`ssl.SSLObject.__init__` method exists only to raise a + ``ssl.SSLObject.__init__`` method exists only to raise a :exc:`TypeError` with a more informative message, therefore making it impossible to call (instantiate) :class:`ssl.SSLObject`. 
From webhook-mailer at python.org Sat Feb 25 18:41:02 2023 From: webhook-mailer at python.org (FFY00) Date: Sat, 25 Feb 2023 23:41:02 -0000 Subject: [Python-checkins] [3.10] GH-99818: improve the documentation for zipfile.Path and Traversable (GH-101589) (#102267) Message-ID: https://github.com/python/cpython/commit/4667c4dcdefb810bcde045db651c585c3bbc36bb commit: 4667c4dcdefb810bcde045db651c585c3bbc36bb branch: 3.10 author: Shantanu <12621235+hauntsaninja at users.noreply.github.com> committer: FFY00 date: 2023-02-25T23:40:55Z summary: [3.10] GH-99818: improve the documentation for zipfile.Path and Traversable (GH-101589) (#102267) files: M Doc/library/importlib.rst M Doc/library/zipfile.rst diff --git a/Doc/library/importlib.rst b/Doc/library/importlib.rst index 20305150d648..21c199034649 100644 --- a/Doc/library/importlib.rst +++ b/Doc/library/importlib.rst @@ -809,9 +809,12 @@ ABC hierarchy:: .. class:: Traversable - An object with a subset of pathlib.Path methods suitable for + An object with a subset of :class:`pathlib.Path` methods suitable for traversing directories and opening files. + For a representation of the object on the file-system, use + :meth:`importlib.resources.as_file`. + .. versionadded:: 3.9 .. abstractmethod:: name() diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst index eab1eaf4312d..5072daa63354 100644 --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -55,8 +55,9 @@ The module defines the following items: .. class:: Path :noindex: - A pathlib-compatible wrapper for zip files. See section - :ref:`path-objects` for details. + Class that implements a subset of the interface provided by + :class:`pathlib.Path`, including the full + :class:`importlib.resources.abc.Traversable` interface. .. versionadded:: 3.8 From webhook-mailer at python.org Sat Feb 25 18:42:37 2023 From: webhook-mailer at python.org (FFY00) Date: Sat, 25 Feb 2023 23:42:37 -0000 Subject: [Python-checkins] [3.11] GH-99818: improve the documentation for zipfile.Path and Traversable (GH-101589) (#102266) Message-ID: https://github.com/python/cpython/commit/735ff5ae27fd417ef4dfdcbbbecb739ee4b73013 commit: 735ff5ae27fd417ef4dfdcbbbecb739ee4b73013 branch: 3.11 author: Shantanu <12621235+hauntsaninja at users.noreply.github.com> committer: FFY00 date: 2023-02-25T23:42:30Z summary: [3.11] GH-99818: improve the documentation for zipfile.Path and Traversable (GH-101589) (#102266) Automerge-Triggered-By: GH:FFY00 (cherry picked from commit 84181c14040ed2befe7f1a55b4f560c80fa61154) Co-authored-by: Filipe La?ns files: M Doc/library/importlib.resources.abc.rst M Doc/library/zipfile.rst diff --git a/Doc/library/importlib.resources.abc.rst b/Doc/library/importlib.resources.abc.rst index a028537f3cb2..4a563c8a5390 100644 --- a/Doc/library/importlib.resources.abc.rst +++ b/Doc/library/importlib.resources.abc.rst @@ -86,9 +86,12 @@ .. class:: Traversable - An object with a subset of pathlib.Path methods suitable for + An object with a subset of :class:`pathlib.Path` methods suitable for traversing directories and opening files. + For a representation of the object on the file-system, use + :meth:`importlib.resources.as_file`. + .. versionadded:: 3.9 .. attribute:: name diff --git a/Doc/library/zipfile.rst b/Doc/library/zipfile.rst index af1f26385553..9ad0a979e498 100644 --- a/Doc/library/zipfile.rst +++ b/Doc/library/zipfile.rst @@ -55,8 +55,9 @@ The module defines the following items: .. class:: Path :noindex: - A pathlib-compatible wrapper for zip files. 
See section - :ref:`path-objects` for details. + Class that implements a subset of the interface provided by + :class:`pathlib.Path`, including the full + :class:`importlib.resources.abc.Traversable` interface. .. versionadded:: 3.8 From webhook-mailer at python.org Sat Feb 25 19:02:05 2023 From: webhook-mailer at python.org (JelleZijlstra) Date: Sun, 26 Feb 2023 00:02:05 -0000 Subject: [Python-checkins] gh-101765: Fix refcount issues in list and unicode pickling (#102265) Message-ID: https://github.com/python/cpython/commit/d71edbd1b7437706519a9786211597d95934331a commit: d71edbd1b7437706519a9786211597d95934331a branch: main author: Jelle Zijlstra committer: JelleZijlstra date: 2023-02-25T16:01:58-08:00 summary: gh-101765: Fix refcount issues in list and unicode pickling (#102265) Followup from #101769. files: M Objects/listobject.c M Objects/unicodeobject.c diff --git a/Objects/listobject.c b/Objects/listobject.c index 494b40178f9f..1a210e77d55c 100644 --- a/Objects/listobject.c +++ b/Objects/listobject.c @@ -3451,16 +3451,24 @@ listiter_reduce_general(void *_it, int forward) /* the objects are not the same, index is of different types! */ if (forward) { PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + if (!iter) { + return NULL; + } _PyListIterObject *it = (_PyListIterObject *)_it; if (it->it_seq) { return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } + Py_DECREF(iter); } else { PyObject *reversed = _PyEval_GetBuiltin(&_Py_ID(reversed)); + if (!reversed) { + return NULL; + } listreviterobject *it = (listreviterobject *)_it; if (it->it_seq) { return Py_BuildValue("N(O)n", reversed, it->it_seq, it->it_index); } + Py_DECREF(reversed); } /* empty iterator, create an empty list */ list = PyList_New(0); diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c index 6403e359c707..2f4c3d3793ef 100644 --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -14794,8 +14794,10 @@ unicodeiter_reduce(unicodeiterobject *it, PyObject *Py_UNUSED(ignored)) return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { PyObject *u = unicode_new_empty(); - if (u == NULL) + if (u == NULL) { + Py_DECREF(iter); return NULL; + } return Py_BuildValue("N(N)", iter, u); } } From webhook-mailer at python.org Sat Feb 25 19:38:07 2023 From: webhook-mailer at python.org (JelleZijlstra) Date: Sun, 26 Feb 2023 00:38:07 -0000 Subject: [Python-checkins] [3.10] gh-101765: Fix refcount issues in list and unicode pickling (GH-102265) (#102269) Message-ID: https://github.com/python/cpython/commit/6fa6c2a470e86057a253f9a159c2cbcdcc184112 commit: 6fa6c2a470e86057a253f9a159c2cbcdcc184112 branch: 3.10 author: Jelle Zijlstra committer: JelleZijlstra date: 2023-02-25T16:38:00-08:00 summary: [3.10] gh-101765: Fix refcount issues in list and unicode pickling (GH-102265) (#102269) (cherry picked from commit d71edbd1b7437706519a9786211597d95934331a) files: M Objects/listobject.c M Objects/unicodeobject.c diff --git a/Objects/listobject.c b/Objects/listobject.c index f46d5e45cc5e..2143475d9351 100644 --- a/Objects/listobject.c +++ b/Objects/listobject.c @@ -3412,16 +3412,24 @@ listiter_reduce_general(void *_it, int forward) /* the objects are not the same, index is of different types! 
*/ if (forward) { PyObject *iter = _PyEval_GetBuiltinId(&PyId_iter); + if (!iter) { + return NULL; + } listiterobject *it = (listiterobject *)_it; if (it->it_seq) { return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } + Py_DECREF(iter); } else { PyObject *reversed = _PyEval_GetBuiltinId(&PyId_reversed); + if (!reversed) { + return NULL; + } listreviterobject *it = (listreviterobject *)_it; if (it->it_seq) { return Py_BuildValue("N(O)n", reversed, it->it_seq, it->it_index); } + Py_DECREF(reversed); } /* empty iterator, create an empty list */ list = PyList_New(0); diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c index 5a1865fb3493..16225ed8ac34 100644 --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -16080,8 +16080,10 @@ unicodeiter_reduce(unicodeiterobject *it, PyObject *Py_UNUSED(ignored)) return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { PyObject *u = (PyObject *)_PyUnicode_New(0); - if (u == NULL) + if (u == NULL) { + Py_DECREF(iter); return NULL; + } return Py_BuildValue("N(N)", iter, u); } } From webhook-mailer at python.org Sat Feb 25 19:38:25 2023 From: webhook-mailer at python.org (JelleZijlstra) Date: Sun, 26 Feb 2023 00:38:25 -0000 Subject: [Python-checkins] [3.11] gh-101765: Fix refcount issues in list and unicode pickling (GH-102265) (#102268) Message-ID: https://github.com/python/cpython/commit/b36c49899b04fff0768364c17e14facf27975336 commit: b36c49899b04fff0768364c17e14facf27975336 branch: 3.11 author: Jelle Zijlstra committer: JelleZijlstra date: 2023-02-25T16:38:19-08:00 summary: [3.11] gh-101765: Fix refcount issues in list and unicode pickling (GH-102265) (#102268) (cherry picked from commit d71edbd1b7437706519a9786211597d95934331a) files: M Objects/listobject.c M Objects/unicodeobject.c diff --git a/Objects/listobject.c b/Objects/listobject.c index 0b20829a280a..0d4e4741fdf3 100644 --- a/Objects/listobject.c +++ b/Objects/listobject.c @@ -3459,16 +3459,24 @@ listiter_reduce_general(void *_it, int forward) /* the objects are not the same, index is of different types! 
*/ if (forward) { PyObject *iter = _PyEval_GetBuiltin(&_Py_ID(iter)); + if (!iter) { + return NULL; + } listiterobject *it = (listiterobject *)_it; if (it->it_seq) { return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } + Py_DECREF(iter); } else { PyObject *reversed = _PyEval_GetBuiltin(&_Py_ID(reversed)); + if (!reversed) { + return NULL; + } listreviterobject *it = (listreviterobject *)_it; if (it->it_seq) { return Py_BuildValue("N(O)n", reversed, it->it_seq, it->it_index); } + Py_DECREF(reversed); } /* empty iterator, create an empty list */ list = PyList_New(0); diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c index 02343d7c9326..9b8296ca6bb0 100644 --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -15767,8 +15767,10 @@ unicodeiter_reduce(unicodeiterobject *it, PyObject *Py_UNUSED(ignored)) return Py_BuildValue("N(O)n", iter, it->it_seq, it->it_index); } else { PyObject *u = (PyObject *)_PyUnicode_New(0); - if (u == NULL) + if (u == NULL) { + Py_DECREF(iter); return NULL; + } return Py_BuildValue("N(N)", iter, u); } } From webhook-mailer at python.org Sat Feb 25 21:22:24 2023 From: webhook-mailer at python.org (terryjreedy) Date: Sun, 26 Feb 2023 02:22:24 -0000 Subject: [Python-checkins] gh-102259: Fix re doc issue regarding right square brackets (#102264) Message-ID: https://github.com/python/cpython/commit/bcadcde7122f6d3d08b35671d67e105149371a2f commit: bcadcde7122f6d3d08b35671d67e105149371a2f branch: main author: Skip Montanaro committer: terryjreedy date: 2023-02-25T21:22:16-05:00 summary: gh-102259: Fix re doc issue regarding right square brackets (#102264) Co-authored-by: Terry Jan Reedy files: M Doc/library/re.rst diff --git a/Doc/library/re.rst b/Doc/library/re.rst index d0a16b951844..b7510b93d754 100644 --- a/Doc/library/re.rst +++ b/Doc/library/re.rst @@ -271,7 +271,8 @@ The special characters are: * To match a literal ``']'`` inside a set, precede it with a backslash, or place it at the beginning of the set. For example, both ``[()[\]{}]`` and - ``[]()[{}]`` will both match a parenthesis. + ``[]()[{}]`` will match a right bracket, as well as left bracket, braces, + and parentheses. .. .. index:: single: --; in regular expressions .. .. index:: single: &&; in regular expressions From webhook-mailer at python.org Sat Feb 25 21:32:49 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 26 Feb 2023 02:32:49 -0000 Subject: [Python-checkins] gh-102259: Fix re doc issue regarding right square brackets (GH-102264) Message-ID: https://github.com/python/cpython/commit/7a0dc8a802814503eb93662a3486b890de070734 commit: 7a0dc8a802814503eb93662a3486b890de070734 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-25T18:32:42-08:00 summary: gh-102259: Fix re doc issue regarding right square brackets (GH-102264) (cherry picked from commit bcadcde7122f6d3d08b35671d67e105149371a2f) Co-authored-by: Skip Montanaro Co-authored-by: Terry Jan Reedy files: M Doc/library/re.rst diff --git a/Doc/library/re.rst b/Doc/library/re.rst index 670569956522..7d8920169207 100644 --- a/Doc/library/re.rst +++ b/Doc/library/re.rst @@ -271,7 +271,8 @@ The special characters are: * To match a literal ``']'`` inside a set, precede it with a backslash, or place it at the beginning of the set. For example, both ``[()[\]{}]`` and - ``[]()[{}]`` will both match a parenthesis. 
+ ``[]()[{}]`` will match a right bracket, as well as left bracket, braces, + and parentheses. .. .. index:: single: --; in regular expressions .. .. index:: single: &&; in regular expressions From webhook-mailer at python.org Sat Feb 25 21:33:03 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 26 Feb 2023 02:33:03 -0000 Subject: [Python-checkins] gh-102259: Fix re doc issue regarding right square brackets (GH-102264) Message-ID: https://github.com/python/cpython/commit/972396143f9cb2478ab933a5ede39fa840d514bf commit: 972396143f9cb2478ab933a5ede39fa840d514bf branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-25T18:32:57-08:00 summary: gh-102259: Fix re doc issue regarding right square brackets (GH-102264) (cherry picked from commit bcadcde7122f6d3d08b35671d67e105149371a2f) Co-authored-by: Skip Montanaro Co-authored-by: Terry Jan Reedy files: M Doc/library/re.rst diff --git a/Doc/library/re.rst b/Doc/library/re.rst index 716d3401a2bb..1df9cbe5ea1c 100644 --- a/Doc/library/re.rst +++ b/Doc/library/re.rst @@ -232,7 +232,8 @@ The special characters are: * To match a literal ``']'`` inside a set, precede it with a backslash, or place it at the beginning of the set. For example, both ``[()[\]{}]`` and - ``[]()[{}]`` will both match a parenthesis. + ``[]()[{}]`` will match a right bracket, as well as left bracket, braces, + and parentheses. .. .. index:: single: --; in regular expressions .. .. index:: single: &&; in regular expressions From webhook-mailer at python.org Sun Feb 26 06:55:20 2023 From: webhook-mailer at python.org (mdickinson) Date: Sun, 26 Feb 2023 11:55:20 -0000 Subject: [Python-checkins] [3.10] gh-85417: Clarify behaviour on branch cuts in cmath module (GH-102046) (#102275) Message-ID: https://github.com/python/cpython/commit/601c9db4550791125d7c3101c7028c49cc704e14 commit: 601c9db4550791125d7c3101c7028c49cc704e14 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: mdickinson date: 2023-02-26T11:55:13Z summary: [3.10] gh-85417: Clarify behaviour on branch cuts in cmath module (GH-102046) (#102275) gh-85417: Clarify behaviour on branch cuts in cmath module (GH-102046) This PR updates the cmath module documentation to reflect the reality that Python is almost always (and as far as I can tell, that "almost" can be omitted) running on a machine whose C double supports signed zeros. * Removes misleading references to functions being continuous from above / below / the left / the right at branch cuts * Expands the note on branch cuts at the top of the module documentation to explain the double-sided sign-of-zero-based behaviour (cherry picked from commit b513c46d998344dc07eb6d510782c2e23d2b859e) Co-authored-by: Mark Dickinson files: A Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst M Doc/library/cmath.rst diff --git a/Doc/library/cmath.rst b/Doc/library/cmath.rst index 28cd96b0e12d..5ed7a09b3e9d 100644 --- a/Doc/library/cmath.rst +++ b/Doc/library/cmath.rst @@ -15,11 +15,27 @@ the function is then applied to the result of the conversion. .. note:: - On platforms with hardware and system-level support for signed - zeros, functions involving branch cuts are continuous on *both* - sides of the branch cut: the sign of the zero distinguishes one - side of the branch cut from the other. 
On platforms that do not - support signed zeros the continuity is as specified below. + For functions involving branch cuts, we have the problem of deciding how to + define those functions on the cut itself. Following Kahan's "Branch cuts for + complex elementary functions" paper, as well as Annex G of C99 and later C + standards, we use the sign of zero to distinguish one side of the branch cut + from the other: for a branch cut along (a portion of) the real axis we look + at the sign of the imaginary part, while for a branch cut along the + imaginary axis we look at the sign of the real part. + + For example, the :func:`cmath.sqrt` function has a branch cut along the + negative real axis. An argument of ``complex(-2.0, -0.0)`` is treated as + though it lies *below* the branch cut, and so gives a result on the negative + imaginary axis:: + + >>> cmath.sqrt(complex(-2.0, -0.0)) + -1.4142135623730951j + + But an argument of ``complex(-2.0, 0.0)`` is treated as though it lies above + the branch cut:: + + >>> cmath.sqrt(complex(-2.0, 0.0)) + 1.4142135623730951j Conversions to and from polar coordinates @@ -44,14 +60,11 @@ rectangular coordinates to polar coordinates and back. .. function:: phase(x) - Return the phase of *x* (also known as the *argument* of *x*), as a - float. ``phase(x)`` is equivalent to ``math.atan2(x.imag, - x.real)``. The result lies in the range [-\ *?*, *?*], and the branch - cut for this operation lies along the negative real axis, - continuous from above. On systems with support for signed zeros - (which includes most systems in current use), this means that the - sign of the result is the same as the sign of ``x.imag``, even when - ``x.imag`` is zero:: + Return the phase of *x* (also known as the *argument* of *x*), as a float. + ``phase(x)`` is equivalent to ``math.atan2(x.imag, x.real)``. The result + lies in the range [-\ *?*, *?*], and the branch cut for this operation lies + along the negative real axis. The sign of the result is the same as the + sign of ``x.imag``, even when ``x.imag`` is zero:: >>> phase(complex(-1.0, 0.0)) 3.141592653589793 @@ -92,8 +105,8 @@ Power and logarithmic functions .. function:: log(x[, base]) Returns the logarithm of *x* to the given *base*. If the *base* is not - specified, returns the natural logarithm of *x*. There is one branch cut, from 0 - along the negative real axis to -?, continuous from above. + specified, returns the natural logarithm of *x*. There is one branch cut, + from 0 along the negative real axis to -?. .. function:: log10(x) @@ -112,9 +125,9 @@ Trigonometric functions .. function:: acos(x) - Return the arc cosine of *x*. There are two branch cuts: One extends right from - 1 along the real axis to ?, continuous from below. The other extends left from - -1 along the real axis to -?, continuous from above. + Return the arc cosine of *x*. There are two branch cuts: One extends right + from 1 along the real axis to ?. The other extends left from -1 along the + real axis to -?. .. function:: asin(x) @@ -125,9 +138,8 @@ Trigonometric functions .. function:: atan(x) Return the arc tangent of *x*. There are two branch cuts: One extends from - ``1j`` along the imaginary axis to ``?j``, continuous from the right. The - other extends from ``-1j`` along the imaginary axis to ``-?j``, continuous - from the left. + ``1j`` along the imaginary axis to ``?j``. The other extends from ``-1j`` + along the imaginary axis to ``-?j``. .. function:: cos(x) @@ -151,23 +163,21 @@ Hyperbolic functions .. 
function:: acosh(x) Return the inverse hyperbolic cosine of *x*. There is one branch cut, - extending left from 1 along the real axis to -?, continuous from above. + extending left from 1 along the real axis to -?. .. function:: asinh(x) Return the inverse hyperbolic sine of *x*. There are two branch cuts: - One extends from ``1j`` along the imaginary axis to ``?j``, - continuous from the right. The other extends from ``-1j`` along - the imaginary axis to ``-?j``, continuous from the left. + One extends from ``1j`` along the imaginary axis to ``?j``. The other + extends from ``-1j`` along the imaginary axis to ``-?j``. .. function:: atanh(x) Return the inverse hyperbolic tangent of *x*. There are two branch cuts: One - extends from ``1`` along the real axis to ``?``, continuous from below. The - other extends from ``-1`` along the real axis to ``-?``, continuous from - above. + extends from ``1`` along the real axis to ``?``. The other extends from + ``-1`` along the real axis to ``-?``. .. function:: cosh(x) diff --git a/Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst b/Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst new file mode 100644 index 000000000000..a5532df14795 --- /dev/null +++ b/Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst @@ -0,0 +1 @@ +Update :mod:`cmath` documentation to clarify behaviour on branch cuts. From webhook-mailer at python.org Sun Feb 26 06:55:32 2023 From: webhook-mailer at python.org (mdickinson) Date: Sun, 26 Feb 2023 11:55:32 -0000 Subject: [Python-checkins] [3.11] gh-85417: Clarify behaviour on branch cuts in cmath module (GH-102046) (#102276) Message-ID: https://github.com/python/cpython/commit/626f471d88bb611a0bbb4966e09798b2cf81818d commit: 626f471d88bb611a0bbb4966e09798b2cf81818d branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: mdickinson date: 2023-02-26T11:55:25Z summary: [3.11] gh-85417: Clarify behaviour on branch cuts in cmath module (GH-102046) (#102276) gh-85417: Clarify behaviour on branch cuts in cmath module (GH-102046) This PR updates the cmath module documentation to reflect the reality that Python is almost always (and as far as I can tell, that "almost" can be omitted) running on a machine whose C double supports signed zeros. * Removes misleading references to functions being continuous from above / below / the left / the right at branch cuts * Expands the note on branch cuts at the top of the module documentation to explain the double-sided sign-of-zero-based behaviour (cherry picked from commit b513c46d998344dc07eb6d510782c2e23d2b859e) Co-authored-by: Mark Dickinson files: A Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst M Doc/library/cmath.rst diff --git a/Doc/library/cmath.rst b/Doc/library/cmath.rst index 28cd96b0e12d..5ed7a09b3e9d 100644 --- a/Doc/library/cmath.rst +++ b/Doc/library/cmath.rst @@ -15,11 +15,27 @@ the function is then applied to the result of the conversion. .. note:: - On platforms with hardware and system-level support for signed - zeros, functions involving branch cuts are continuous on *both* - sides of the branch cut: the sign of the zero distinguishes one - side of the branch cut from the other. On platforms that do not - support signed zeros the continuity is as specified below. + For functions involving branch cuts, we have the problem of deciding how to + define those functions on the cut itself. 
Following Kahan's "Branch cuts for + complex elementary functions" paper, as well as Annex G of C99 and later C + standards, we use the sign of zero to distinguish one side of the branch cut + from the other: for a branch cut along (a portion of) the real axis we look + at the sign of the imaginary part, while for a branch cut along the + imaginary axis we look at the sign of the real part. + + For example, the :func:`cmath.sqrt` function has a branch cut along the + negative real axis. An argument of ``complex(-2.0, -0.0)`` is treated as + though it lies *below* the branch cut, and so gives a result on the negative + imaginary axis:: + + >>> cmath.sqrt(complex(-2.0, -0.0)) + -1.4142135623730951j + + But an argument of ``complex(-2.0, 0.0)`` is treated as though it lies above + the branch cut:: + + >>> cmath.sqrt(complex(-2.0, 0.0)) + 1.4142135623730951j Conversions to and from polar coordinates @@ -44,14 +60,11 @@ rectangular coordinates to polar coordinates and back. .. function:: phase(x) - Return the phase of *x* (also known as the *argument* of *x*), as a - float. ``phase(x)`` is equivalent to ``math.atan2(x.imag, - x.real)``. The result lies in the range [-\ *?*, *?*], and the branch - cut for this operation lies along the negative real axis, - continuous from above. On systems with support for signed zeros - (which includes most systems in current use), this means that the - sign of the result is the same as the sign of ``x.imag``, even when - ``x.imag`` is zero:: + Return the phase of *x* (also known as the *argument* of *x*), as a float. + ``phase(x)`` is equivalent to ``math.atan2(x.imag, x.real)``. The result + lies in the range [-\ *?*, *?*], and the branch cut for this operation lies + along the negative real axis. The sign of the result is the same as the + sign of ``x.imag``, even when ``x.imag`` is zero:: >>> phase(complex(-1.0, 0.0)) 3.141592653589793 @@ -92,8 +105,8 @@ Power and logarithmic functions .. function:: log(x[, base]) Returns the logarithm of *x* to the given *base*. If the *base* is not - specified, returns the natural logarithm of *x*. There is one branch cut, from 0 - along the negative real axis to -?, continuous from above. + specified, returns the natural logarithm of *x*. There is one branch cut, + from 0 along the negative real axis to -?. .. function:: log10(x) @@ -112,9 +125,9 @@ Trigonometric functions .. function:: acos(x) - Return the arc cosine of *x*. There are two branch cuts: One extends right from - 1 along the real axis to ?, continuous from below. The other extends left from - -1 along the real axis to -?, continuous from above. + Return the arc cosine of *x*. There are two branch cuts: One extends right + from 1 along the real axis to ?. The other extends left from -1 along the + real axis to -?. .. function:: asin(x) @@ -125,9 +138,8 @@ Trigonometric functions .. function:: atan(x) Return the arc tangent of *x*. There are two branch cuts: One extends from - ``1j`` along the imaginary axis to ``?j``, continuous from the right. The - other extends from ``-1j`` along the imaginary axis to ``-?j``, continuous - from the left. + ``1j`` along the imaginary axis to ``?j``. The other extends from ``-1j`` + along the imaginary axis to ``-?j``. .. function:: cos(x) @@ -151,23 +163,21 @@ Hyperbolic functions .. function:: acosh(x) Return the inverse hyperbolic cosine of *x*. There is one branch cut, - extending left from 1 along the real axis to -?, continuous from above. + extending left from 1 along the real axis to -?. .. 
function:: asinh(x) Return the inverse hyperbolic sine of *x*. There are two branch cuts: - One extends from ``1j`` along the imaginary axis to ``?j``, - continuous from the right. The other extends from ``-1j`` along - the imaginary axis to ``-?j``, continuous from the left. + One extends from ``1j`` along the imaginary axis to ``?j``. The other + extends from ``-1j`` along the imaginary axis to ``-?j``. .. function:: atanh(x) Return the inverse hyperbolic tangent of *x*. There are two branch cuts: One - extends from ``1`` along the real axis to ``?``, continuous from below. The - other extends from ``-1`` along the real axis to ``-?``, continuous from - above. + extends from ``1`` along the real axis to ``?``. The other extends from + ``-1`` along the real axis to ``-?``. .. function:: cosh(x) diff --git a/Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst b/Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst new file mode 100644 index 000000000000..a5532df14795 --- /dev/null +++ b/Misc/NEWS.d/next/Documentation/2023-02-19-10-33-01.gh-issue-85417.kYO8u3.rst @@ -0,0 +1 @@ +Update :mod:`cmath` documentation to clarify behaviour on branch cuts. From webhook-mailer at python.org Sun Feb 26 07:34:29 2023 From: webhook-mailer at python.org (mdickinson) Date: Sun, 26 Feb 2023 12:34:29 -0000 Subject: [Python-checkins] [3.10] gh-97786: Fix compiler warnings in pytime.c (GH-101826) (#102150) Message-ID: https://github.com/python/cpython/commit/5b610b59c75953aef4b523b833c7d6f2df963d90 commit: 5b610b59c75953aef4b523b833c7d6f2df963d90 branch: 3.10 author: Mark Dickinson committer: mdickinson date: 2023-02-26T12:34:21Z summary: [3.10] gh-97786: Fix compiler warnings in pytime.c (GH-101826) (#102150) * [3.10] gh-97786: Fix compiler warnings in pytime.c (GH-101826) Fixes compiler warnings in pytime.c.. (cherry picked from commit b1b375e2670a58fc37cb4c2629ed73b045159918) Co-authored-by: Mark Dickinson * Add comment about the casts --------- Co-authored-by: Gregory P. Smith files: A Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst M Python/pytime.c diff --git a/Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst b/Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst new file mode 100644 index 000000000000..df194b67590d --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-11-13-23-29.gh-issue-97786.QjvQ1B.rst @@ -0,0 +1,2 @@ +Fix potential undefined behaviour in corner cases of floating-point-to-time +conversions. 
diff --git a/Python/pytime.c b/Python/pytime.c index 1ef99aee7484..ee490b8ed915 100644 --- a/Python/pytime.c +++ b/Python/pytime.c @@ -34,6 +34,25 @@ #define NS_TO_MS (1000 * 1000) #define NS_TO_US (1000) +#if SIZEOF_TIME_T == SIZEOF_LONG_LONG +# define PY_TIME_T_MAX LLONG_MAX +# define PY_TIME_T_MIN LLONG_MIN +#elif SIZEOF_TIME_T == SIZEOF_LONG +# define PY_TIME_T_MAX LONG_MAX +# define PY_TIME_T_MIN LONG_MIN +#else +# error "unsupported time_t size" +#endif + +#if PY_TIME_T_MAX + PY_TIME_T_MIN != -1 +# error "time_t is not a two's complement integer type" +#endif + +#if _PyTime_MIN + _PyTime_MAX != -1 +# error "_PyTime_t is not a two's complement integer type" +#endif + + static void error_time_t_overflow(void) { @@ -157,7 +176,21 @@ _PyTime_DoubleToDenominator(double d, time_t *sec, long *numerator, } assert(0.0 <= floatpart && floatpart < denominator); - if (!_Py_InIntegralTypeRange(time_t, intpart)) { + /* + Conversion of an out-of-range value to time_t gives undefined behaviour + (C99 ?6.3.1.4p1), so we must guard against it. However, checking that + `intpart` is in range is delicate: the obvious expression `intpart <= + PY_TIME_T_MAX` will first convert the value `PY_TIME_T_MAX` to a double, + potentially changing its value and leading to us failing to catch some + UB-inducing values. The code below works correctly under the mild + assumption that time_t is a two's complement integer type with no trap + representation, and that `PY_TIME_T_MIN` is within the representable + range of a C double. + + Note: we want the `if` condition below to be true for NaNs; therefore, + resist any temptation to simplify by applying De Morgan's laws. + */ + if (!((double)PY_TIME_T_MIN <= intpart && intpart < -(double)PY_TIME_T_MIN)) { error_time_t_overflow(); return -1; } @@ -210,7 +243,8 @@ _PyTime_ObjectToTime_t(PyObject *obj, time_t *sec, _PyTime_round_t round) d = _PyTime_Round(d, round); (void)modf(d, &intpart); - if (!_Py_InIntegralTypeRange(time_t, intpart)) { + /* See comments in _PyTime_DoubleToDenominator */ + if (!((double)PY_TIME_T_MIN <= intpart && intpart < -(double)PY_TIME_T_MIN)) { error_time_t_overflow(); return -1; } @@ -395,7 +429,8 @@ _PyTime_FromDouble(_PyTime_t *t, double value, _PyTime_round_t round, d *= (double)unit_to_ns; d = _PyTime_Round(d, round); - if (!_Py_InIntegralTypeRange(_PyTime_t, d)) { + /* See comments in _PyTime_DoubleToDenominator */ + if (!((double)_PyTime_MIN <= d && d < -(double)_PyTime_MIN)) { _PyTime_overflow(); return -1; } @@ -722,7 +757,9 @@ py_get_system_clock(_PyTime_t *tp, _Py_clock_info_t *info, int raise) info->monotonic = 0; info->adjustable = 1; if (clock_getres(CLOCK_REALTIME, &res) == 0) { - info->resolution = res.tv_sec + res.tv_nsec * 1e-9; + /* the explicit (double) casts silence loss-of-precision warnings + on some platforms */ + info->resolution = (double)res.tv_sec + (double)res.tv_nsec * 1e-9; } else { info->resolution = 1e-9; From webhook-mailer at python.org Sun Feb 26 08:15:34 2023 From: webhook-mailer at python.org (vsajip) Date: Sun, 26 Feb 2023 13:15:34 -0000 Subject: [Python-checkins] [doc] Improve grammar/fix missing word (GH-102060) Message-ID: https://github.com/python/cpython/commit/6daf42b28e1c6d5f0c1a6350cfcc382789e11293 commit: 6daf42b28e1c6d5f0c1a6350cfcc382789e11293 branch: main author: VMan committer: vsajip date: 2023-02-26T18:45:27+05:30 summary: [doc] Improve grammar/fix missing word (GH-102060) files: M Doc/bugs.rst M Doc/howto/logging-cookbook.rst diff --git a/Doc/bugs.rst b/Doc/bugs.rst index 
4f30ef19ee4d..d98192b36960 100644 --- a/Doc/bugs.rst +++ b/Doc/bugs.rst @@ -70,7 +70,7 @@ Click on the "New issue" button in the top bar to report a new issue. The submission form has two fields, "Title" and "Comment". For the "Title" field, enter a *very* short description of the problem; -less than ten words is good. +fewer than ten words is good. In the "Comment" field, describe the problem in detail, including what you expected to happen and what did happen. Be sure to include whether any diff --git a/Doc/howto/logging-cookbook.rst b/Doc/howto/logging-cookbook.rst index 7661249ad522..1a0afb6940da 100644 --- a/Doc/howto/logging-cookbook.rst +++ b/Doc/howto/logging-cookbook.rst @@ -2538,7 +2538,7 @@ should be logged, or the ``extra`` keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using :meth:`str.format` or :class:`string.Template` syntax, because internally the logging package uses %-formatting to merge the format -string and the variable arguments. There would no changing this while preserving +string and the variable arguments. There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings. From webhook-mailer at python.org Sun Feb 26 08:28:27 2023 From: webhook-mailer at python.org (vsajip) Date: Sun, 26 Feb 2023 13:28:27 -0000 Subject: [Python-checkins] [3.10] [doc] Improve grammar/fix missing word (GH-102060) (GH-102278) Message-ID: https://github.com/python/cpython/commit/4732f551e2b3b0c229269be48bfba7274e122f86 commit: 4732f551e2b3b0c229269be48bfba7274e122f86 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: vsajip date: 2023-02-26T18:58:20+05:30 summary: [3.10] [doc] Improve grammar/fix missing word (GH-102060) (GH-102278) [doc] Improve grammar/fix missing word (GH-102060) (cherry picked from commit 6daf42b28e1c6d5f0c1a6350cfcc382789e11293) Co-authored-by: VMan files: M Doc/bugs.rst M Doc/howto/logging-cookbook.rst diff --git a/Doc/bugs.rst b/Doc/bugs.rst index 4f30ef19ee4d..d98192b36960 100644 --- a/Doc/bugs.rst +++ b/Doc/bugs.rst @@ -70,7 +70,7 @@ Click on the "New issue" button in the top bar to report a new issue. The submission form has two fields, "Title" and "Comment". For the "Title" field, enter a *very* short description of the problem; -less than ten words is good. +fewer than ten words is good. In the "Comment" field, describe the problem in detail, including what you expected to happen and what did happen. Be sure to include whether any diff --git a/Doc/howto/logging-cookbook.rst b/Doc/howto/logging-cookbook.rst index 8b1e7e48bff7..1272ea1e3998 100644 --- a/Doc/howto/logging-cookbook.rst +++ b/Doc/howto/logging-cookbook.rst @@ -2538,7 +2538,7 @@ should be logged, or the ``extra`` keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using :meth:`str.format` or :class:`string.Template` syntax, because internally the logging package uses %-formatting to merge the format -string and the variable arguments. There would no changing this while preserving +string and the variable arguments. There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings. 
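The logging-cookbook paragraph touched just above is the key constraint: the logging methods (debug(), info(), and so on) always merge the message and its arguments with %-formatting, so {}-style templates have to be rendered before the call. A short editorial sketch of the two idioms, assuming nothing beyond the standard library (not part of the patch):

    import logging

    logging.basicConfig(format="%(message)s", level=logging.INFO)
    log = logging.getLogger("demo")

    # logging itself performs the %-merge, so substitution can be deferred:
    log.info("processed %d records for %s", 42, "alice")

    # {}-style strings must be formatted up front; passing a {}-template
    # together with separate arguments is an error, because logging would
    # still try to %-merge them.
    log.info("processed {} records for {}".format(42, "alice"))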
From webhook-mailer at python.org Sun Feb 26 08:29:01 2023 From: webhook-mailer at python.org (vsajip) Date: Sun, 26 Feb 2023 13:29:01 -0000 Subject: [Python-checkins] [3.11] [doc] Improve grammar/fix missing word (GH-102060) (GH-102277) Message-ID: https://github.com/python/cpython/commit/2c4fc87ac18414266ab62105a3cabb94a82a8640 commit: 2c4fc87ac18414266ab62105a3cabb94a82a8640 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: vsajip date: 2023-02-26T18:58:55+05:30 summary: [3.11] [doc] Improve grammar/fix missing word (GH-102060) (GH-102277) [doc] Improve grammar/fix missing word (GH-102060) (cherry picked from commit 6daf42b28e1c6d5f0c1a6350cfcc382789e11293) Co-authored-by: VMan files: M Doc/bugs.rst M Doc/howto/logging-cookbook.rst diff --git a/Doc/bugs.rst b/Doc/bugs.rst index 4f30ef19ee4d..d98192b36960 100644 --- a/Doc/bugs.rst +++ b/Doc/bugs.rst @@ -70,7 +70,7 @@ Click on the "New issue" button in the top bar to report a new issue. The submission form has two fields, "Title" and "Comment". For the "Title" field, enter a *very* short description of the problem; -less than ten words is good. +fewer than ten words is good. In the "Comment" field, describe the problem in detail, including what you expected to happen and what did happen. Be sure to include whether any diff --git a/Doc/howto/logging-cookbook.rst b/Doc/howto/logging-cookbook.rst index 97e990d0dd07..e4338ab912ae 100644 --- a/Doc/howto/logging-cookbook.rst +++ b/Doc/howto/logging-cookbook.rst @@ -2538,7 +2538,7 @@ should be logged, or the ``extra`` keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using :meth:`str.format` or :class:`string.Template` syntax, because internally the logging package uses %-formatting to merge the format -string and the variable arguments. There would no changing this while preserving +string and the variable arguments. There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings. From webhook-mailer at python.org Sun Feb 26 09:55:51 2023 From: webhook-mailer at python.org (mdickinson) Date: Sun, 26 Feb 2023 14:55:51 -0000 Subject: [Python-checkins] [3.11] Add missing 'is' to `cmath.log()` docstring (GH-102049) (#102279) Message-ID: https://github.com/python/cpython/commit/eb5565918ab9f814fc3490f0604ee0c96cd981b4 commit: eb5565918ab9f814fc3490f0604ee0c96cd981b4 branch: 3.11 author: Mark Dickinson committer: mdickinson date: 2023-02-26T14:55:44Z summary: [3.11] Add missing 'is' to `cmath.log()` docstring (GH-102049) (#102279) Fix missing 'is' in cmath.log() docstring. 
(cherry picked from commit 71f614ef2a3d66213b9cae807cbbc1ed03741221) Co-authored-by: Owain Davies <116417456+OTheDev at users.noreply.github.com> files: M Modules/clinic/cmathmodule.c.h M Modules/cmathmodule.c diff --git a/Modules/clinic/cmathmodule.c.h b/Modules/clinic/cmathmodule.c.h index ab556922c029..1f2a9a1cff58 100644 --- a/Modules/clinic/cmathmodule.c.h +++ b/Modules/clinic/cmathmodule.c.h @@ -638,7 +638,7 @@ PyDoc_STRVAR(cmath_log__doc__, "\n" "log(z[, base]) -> the logarithm of z to the given base.\n" "\n" -"If the base not specified, returns the natural logarithm (base e) of z."); +"If the base is not specified, returns the natural logarithm (base e) of z."); #define CMATH_LOG_METHODDEF \ {"log", _PyCFunction_CAST(cmath_log), METH_FASTCALL, cmath_log__doc__}, @@ -953,4 +953,4 @@ cmath_isclose(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObjec exit: return return_value; } -/*[clinic end generated code: output=b8e445fcd2a3da65 input=a9049054013a1b77]*/ +/*[clinic end generated code: output=c6e3f37cd562c942 input=a9049054013a1b77]*/ diff --git a/Modules/cmathmodule.c b/Modules/cmathmodule.c index 2038ac26e658..53e34061d537 100644 --- a/Modules/cmathmodule.c +++ b/Modules/cmathmodule.c @@ -957,12 +957,12 @@ cmath.log log(z[, base]) -> the logarithm of z to the given base. -If the base not specified, returns the natural logarithm (base e) of z. +If the base is not specified, returns the natural logarithm (base e) of z. [clinic start generated code]*/ static PyObject * cmath_log_impl(PyObject *module, Py_complex x, PyObject *y_obj) -/*[clinic end generated code: output=4effdb7d258e0d94 input=230ed3a71ecd000a]*/ +/*[clinic end generated code: output=4effdb7d258e0d94 input=e1f81d4fcfd26497]*/ { Py_complex y; From webhook-mailer at python.org Sun Feb 26 09:56:16 2023 From: webhook-mailer at python.org (mdickinson) Date: Sun, 26 Feb 2023 14:56:16 -0000 Subject: [Python-checkins] [3.10] Add missing 'is' to `cmath.log()` docstring (GH-102049) (#102280) Message-ID: https://github.com/python/cpython/commit/8e9ffd956aaec3aa82bb91a4fa0dd390209013f2 commit: 8e9ffd956aaec3aa82bb91a4fa0dd390209013f2 branch: 3.10 author: Mark Dickinson committer: mdickinson date: 2023-02-26T14:56:10Z summary: [3.10] Add missing 'is' to `cmath.log()` docstring (GH-102049) (#102280) Fix missing 'is' in cmath.log() docstring. 
(cherry picked from commit 71f614ef2a3d66213b9cae807cbbc1ed03741221) Co-authored-by: Owain Davies <116417456+OTheDev at users.noreply.github.com> files: M Modules/clinic/cmathmodule.c.h M Modules/cmathmodule.c diff --git a/Modules/clinic/cmathmodule.c.h b/Modules/clinic/cmathmodule.c.h index 4b6653aa2194..d4779a98d803 100644 --- a/Modules/clinic/cmathmodule.c.h +++ b/Modules/clinic/cmathmodule.c.h @@ -638,7 +638,7 @@ PyDoc_STRVAR(cmath_log__doc__, "\n" "log(z[, base]) -> the logarithm of z to the given base.\n" "\n" -"If the base not specified, returns the natural logarithm (base e) of z."); +"If the base is not specified, returns the natural logarithm (base e) of z."); #define CMATH_LOG_METHODDEF \ {"log", (PyCFunction)(void(*)(void))cmath_log, METH_FASTCALL, cmath_log__doc__}, @@ -953,4 +953,4 @@ cmath_isclose(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObjec exit: return return_value; } -/*[clinic end generated code: output=353347db2e808e0d input=a9049054013a1b77]*/ +/*[clinic end generated code: output=717d3d9f0640e893 input=a9049054013a1b77]*/ diff --git a/Modules/cmathmodule.c b/Modules/cmathmodule.c index 0f22049a1708..0c3241fd4b4f 100644 --- a/Modules/cmathmodule.c +++ b/Modules/cmathmodule.c @@ -950,12 +950,12 @@ cmath.log log(z[, base]) -> the logarithm of z to the given base. -If the base not specified, returns the natural logarithm (base e) of z. +If the base is not specified, returns the natural logarithm (base e) of z. [clinic start generated code]*/ static PyObject * cmath_log_impl(PyObject *module, Py_complex x, PyObject *y_obj) -/*[clinic end generated code: output=4effdb7d258e0d94 input=230ed3a71ecd000a]*/ +/*[clinic end generated code: output=4effdb7d258e0d94 input=e1f81d4fcfd26497]*/ { Py_complex y; From webhook-mailer at python.org Sun Feb 26 17:45:49 2023 From: webhook-mailer at python.org (JelleZijlstra) Date: Sun, 26 Feb 2023 22:45:49 -0000 Subject: [Python-checkins] gh-101765: unicodeobject: use Py_XDECREF correctly (#102283) Message-ID: https://github.com/python/cpython/commit/8d0f09b1beafd95763a5da53acc58dac0bd63a53 commit: 8d0f09b1beafd95763a5da53acc58dac0bd63a53 branch: main author: Jelle Zijlstra committer: JelleZijlstra date: 2023-02-26T14:45:37-08:00 summary: gh-101765: unicodeobject: use Py_XDECREF correctly (#102283) files: M Objects/unicodeobject.c diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c index 2f4c3d3793ef..1ba30421c66d 100644 --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -14795,7 +14795,7 @@ unicodeiter_reduce(unicodeiterobject *it, PyObject *Py_UNUSED(ignored)) } else { PyObject *u = unicode_new_empty(); if (u == NULL) { - Py_DECREF(iter); + Py_XDECREF(iter); return NULL; } return Py_BuildValue("N(N)", iter, u); From webhook-mailer at python.org Sun Feb 26 18:09:25 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 26 Feb 2023 23:09:25 -0000 Subject: [Python-checkins] gh-101765: unicodeobject: use Py_XDECREF correctly (GH-102283) Message-ID: https://github.com/python/cpython/commit/64d3715de89467971f40f24cce8c435a24f6b3c4 commit: 64d3715de89467971f40f24cce8c435a24f6b3c4 branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-26T15:09:18-08:00 summary: gh-101765: unicodeobject: use Py_XDECREF correctly (GH-102283) (cherry picked from commit 8d0f09b1beafd95763a5da53acc58dac0bd63a53) Co-authored-by: Jelle Zijlstra files: M Objects/unicodeobject.c diff 
--git a/Objects/unicodeobject.c b/Objects/unicodeobject.c index 9b8296ca6bb0..84d17f000b41 100644 --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -15768,7 +15768,7 @@ unicodeiter_reduce(unicodeiterobject *it, PyObject *Py_UNUSED(ignored)) } else { PyObject *u = (PyObject *)_PyUnicode_New(0); if (u == NULL) { - Py_DECREF(iter); + Py_XDECREF(iter); return NULL; } return Py_BuildValue("N(N)", iter, u); From webhook-mailer at python.org Sun Feb 26 18:09:50 2023 From: webhook-mailer at python.org (miss-islington) Date: Sun, 26 Feb 2023 23:09:50 -0000 Subject: [Python-checkins] gh-101765: unicodeobject: use Py_XDECREF correctly (GH-102283) Message-ID: https://github.com/python/cpython/commit/4cc363611caa1b91e1c97800c5e18f4dd5966bdd commit: 4cc363611caa1b91e1c97800c5e18f4dd5966bdd branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-26T15:09:43-08:00 summary: gh-101765: unicodeobject: use Py_XDECREF correctly (GH-102283) (cherry picked from commit 8d0f09b1beafd95763a5da53acc58dac0bd63a53) Co-authored-by: Jelle Zijlstra files: M Objects/unicodeobject.c diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c index 16225ed8ac34..156c29bf212e 100644 --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -16081,7 +16081,7 @@ unicodeiter_reduce(unicodeiterobject *it, PyObject *Py_UNUSED(ignored)) } else { PyObject *u = (PyObject *)_PyUnicode_New(0); if (u == NULL) { - Py_DECREF(iter); + Py_XDECREF(iter); return NULL; } return Py_BuildValue("N(N)", iter, u); From webhook-mailer at python.org Sun Feb 26 21:10:41 2023 From: webhook-mailer at python.org (JelleZijlstra) Date: Mon, 27 Feb 2023 02:10:41 -0000 Subject: [Python-checkins] gh-91038: Change default argument value to `False` instead of `0` (#31621) Message-ID: https://github.com/python/cpython/commit/f3cb15c88afa2e87067de3c6106664b3f7cd4035 commit: f3cb15c88afa2e87067de3c6106664b3f7cd4035 branch: main author: Rotzbua committer: JelleZijlstra date: 2023-02-26T18:10:34-08:00 summary: gh-91038: Change default argument value to `False` instead of `0` (#31621) The argument is used as a switch and corresponds to a boolean logic. Therefore it is more intuitive to use the corresponding constant `False` as default value instead of the integer `0`. Co-authored-by: Shantanu <12621235+hauntsaninja at users.noreply.github.com> Co-authored-by: Oleg Iarygin files: A Misc/NEWS.d/next/Library/2023-02-26-12-37-17.gh-issue-91038.S4rFH_.rst M Doc/library/platform.rst M Lib/platform.py diff --git a/Doc/library/platform.rst b/Doc/library/platform.rst index a0c9f63ab9f9..69c4dfc422c9 100644 --- a/Doc/library/platform.rst +++ b/Doc/library/platform.rst @@ -63,7 +63,7 @@ Cross Platform string is returned if the value cannot be determined. -.. function:: platform(aliased=0, terse=0) +.. function:: platform(aliased=False, terse=False) Returns a single string identifying the underlying platform with as much useful information as possible. diff --git a/Lib/platform.py b/Lib/platform.py index 2dfaf76252db..f2b0d1d1bd3f 100755 --- a/Lib/platform.py +++ b/Lib/platform.py @@ -1246,7 +1246,7 @@ def python_compiler(): _platform_cache = {} -def platform(aliased=0, terse=0): +def platform(aliased=False, terse=False): """ Returns a single string identifying the underlying platform with as much useful information as possible (but no more :). 
diff --git a/Misc/NEWS.d/next/Library/2023-02-26-12-37-17.gh-issue-91038.S4rFH_.rst b/Misc/NEWS.d/next/Library/2023-02-26-12-37-17.gh-issue-91038.S4rFH_.rst new file mode 100644 index 000000000000..2667ff120fd4 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-26-12-37-17.gh-issue-91038.S4rFH_.rst @@ -0,0 +1 @@ +:meth:`platform.platform` now has boolean default arguments. From webhook-mailer at python.org Mon Feb 27 02:26:29 2023 From: webhook-mailer at python.org (AlexWaygood) Date: Mon, 27 Feb 2023 07:26:29 -0000 Subject: [Python-checkins] gh-101100: Fix sphinx warnings in `types` module (#102274) Message-ID: https://github.com/python/cpython/commit/101a12c5767a8c6ca6e32b8e24a462d2606d24ca commit: 101a12c5767a8c6ca6e32b8e24a462d2606d24ca branch: main author: Nikita Sobolev committer: AlexWaygood date: 2023-02-27T07:26:21Z summary: gh-101100: Fix sphinx warnings in `types` module (#102274) Co-authored-by: Alex Waygood files: M Doc/library/types.rst diff --git a/Doc/library/types.rst b/Doc/library/types.rst index 415413c4cd97..747ba58bb225 100644 --- a/Doc/library/types.rst +++ b/Doc/library/types.rst @@ -486,7 +486,7 @@ Coroutine Utility Functions The generator-based coroutine is still a :term:`generator iterator`, but is also considered to be a :term:`coroutine` object and is :term:`awaitable`. However, it may not necessarily implement - the :meth:`__await__` method. + the :meth:`~object.__await__` method. If *gen_func* is a generator function, it will be modified in-place. From webhook-mailer at python.org Mon Feb 27 02:33:16 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 27 Feb 2023 07:33:16 -0000 Subject: [Python-checkins] gh-101100: Fix sphinx warnings in `types` module (GH-102274) Message-ID: https://github.com/python/cpython/commit/bd4a7090385c52e3c88477cc3027f73257a9171e commit: bd4a7090385c52e3c88477cc3027f73257a9171e branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-26T23:33:05-08:00 summary: gh-101100: Fix sphinx warnings in `types` module (GH-102274) (cherry picked from commit 101a12c5767a8c6ca6e32b8e24a462d2606d24ca) Co-authored-by: Nikita Sobolev Co-authored-by: Alex Waygood files: M Doc/library/types.rst diff --git a/Doc/library/types.rst b/Doc/library/types.rst index aafba29b4cf6..3416871c0c1f 100644 --- a/Doc/library/types.rst +++ b/Doc/library/types.rst @@ -480,7 +480,7 @@ Coroutine Utility Functions The generator-based coroutine is still a :term:`generator iterator`, but is also considered to be a :term:`coroutine` object and is :term:`awaitable`. However, it may not necessarily implement - the :meth:`__await__` method. + the :meth:`~object.__await__` method. If *gen_func* is a generator function, it will be modified in-place. 
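The gh-101100 doc fix above only repairs cross-reference targets, but the sentence it touches is easy to check in practice: a generator-based coroutine created with types.coroutine is awaitable even though the underlying generator object normally exposes no __await__ method. A small sketch, assuming CPython's documented types.coroutine behaviour:

    import types

    @types.coroutine
    def gen_coro():
        yield

    c = gen_coro()
    # Awaitable thanks to the CO_ITERABLE_COROUTINE flag set on the code
    # object, yet typically without an __await__ method of its own.
    print(hasattr(c, '__await__'))
    c.close()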
From webhook-mailer at python.org Mon Feb 27 02:34:17 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 27 Feb 2023 07:34:17 -0000 Subject: [Python-checkins] gh-101100: Fix sphinx warnings in `types` module (GH-102274) Message-ID: https://github.com/python/cpython/commit/cba52ec0603406206af75edd2091922e353da91a commit: cba52ec0603406206af75edd2091922e353da91a branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-26T23:34:10-08:00 summary: gh-101100: Fix sphinx warnings in `types` module (GH-102274) (cherry picked from commit 101a12c5767a8c6ca6e32b8e24a462d2606d24ca) Co-authored-by: Nikita Sobolev Co-authored-by: Alex Waygood files: M Doc/library/types.rst diff --git a/Doc/library/types.rst b/Doc/library/types.rst index e0e77dfbfe7e..d2cb462eaa61 100644 --- a/Doc/library/types.rst +++ b/Doc/library/types.rst @@ -480,7 +480,7 @@ Coroutine Utility Functions The generator-based coroutine is still a :term:`generator iterator`, but is also considered to be a :term:`coroutine` object and is :term:`awaitable`. However, it may not necessarily implement - the :meth:`__await__` method. + the :meth:`~object.__await__` method. If *gen_func* is a generator function, it will be modified in-place. From webhook-mailer at python.org Mon Feb 27 05:47:16 2023 From: webhook-mailer at python.org (markshannon) Date: Mon, 27 Feb 2023 10:47:16 -0000 Subject: [Python-checkins] gh-102250: Fix double-decref in COMPARE_AND_BRANCH error case (GH-102287) Message-ID: https://github.com/python/cpython/commit/e3c3f9fec099fe78d2f98912be337d632f6fcdd1 commit: e3c3f9fec099fe78d2f98912be337d632f6fcdd1 branch: main author: Dennis Sweeney <36520290+sweeneyde at users.noreply.github.com> committer: markshannon date: 2023-02-27T10:46:40Z summary: gh-102250: Fix double-decref in COMPARE_AND_BRANCH error case (GH-102287) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-26-23-10-32.gh-issue-102250.7MUKoC.rst M Lib/test/test_bool.py M Python/bytecodes.c M Python/generated_cases.c.h diff --git a/Lib/test/test_bool.py b/Lib/test/test_bool.py index b711ffb9a3ec..916e22a527a8 100644 --- a/Lib/test/test_bool.py +++ b/Lib/test/test_bool.py @@ -319,6 +319,26 @@ def __len__(self): return -1 self.assertRaises(ValueError, bool, Eggs()) + def test_interpreter_convert_to_bool_raises(self): + class SymbolicBool: + def __bool__(self): + raise TypeError + + class Symbol: + def __gt__(self, other): + return SymbolicBool() + + x = Symbol() + + with self.assertRaises(TypeError): + if x > 0: + msg = "x > 0 was true" + else: + msg = "x > 0 was false" + + # This used to create negative refcounts, see gh-102250 + del x + def test_from_bytes(self): self.assertIs(bool.from_bytes(b'\x00'*8, 'big'), False) self.assertIs(bool.from_bytes(b'abcd', 'little'), True) diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-26-23-10-32.gh-issue-102250.7MUKoC.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-26-23-10-32.gh-issue-102250.7MUKoC.rst new file mode 100644 index 000000000000..17ab0cd43679 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-26-23-10-32.gh-issue-102250.7MUKoC.rst @@ -0,0 +1 @@ +Fixed a segfault occurring when the interpreter calls a ``__bool__`` method that raises. 
diff --git a/Python/bytecodes.c b/Python/bytecodes.c index ad68c794fe7a..7e9b36f69721 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -1754,9 +1754,7 @@ dummy_func( int offset = next_instr[1].op.arg; int err = PyObject_IsTrue(cond); Py_DECREF(cond); - if (err < 0) { - goto error; - } + ERROR_IF(err < 0, error); if (jump_on_true == (err != 0)) { JUMPBY(offset); } diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 2987adc3bba5..271ba26f4895 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -2205,9 +2205,7 @@ int offset = next_instr[1].op.arg; int err = PyObject_IsTrue(cond); Py_DECREF(cond); - if (err < 0) { - goto error; - } + if (err < 0) goto pop_2_error; if (jump_on_true == (err != 0)) { JUMPBY(offset); } From webhook-mailer at python.org Mon Feb 27 10:13:26 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 27 Feb 2023 15:13:26 -0000 Subject: [Python-checkins] gh-102296 Document that inspect.Parameter kinds support ordering (GH-102297) Message-ID: https://github.com/python/cpython/commit/0db6f442598a1994c37f24e704892a2bb71a0a1b commit: 0db6f442598a1994c37f24e704892a2bb71a0a1b branch: main author: Gouvernathor <44340603+Gouvernathor at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-27T07:13:18-08:00 summary: gh-102296 Document that inspect.Parameter kinds support ordering (GH-102297) Automerge-Triggered-By: GH:AlexWaygood files: M Doc/library/inspect.rst diff --git a/Doc/library/inspect.rst b/Doc/library/inspect.rst index 9c3be5a250a6..789e9839d22f 100644 --- a/Doc/library/inspect.rst +++ b/Doc/library/inspect.rst @@ -802,8 +802,9 @@ function. .. attribute:: Parameter.kind - Describes how argument values are bound to the parameter. Possible values - (accessible via :class:`Parameter`, like ``Parameter.KEYWORD_ONLY``): + Describes how argument values are bound to the parameter. The possible + values are accessible via :class:`Parameter` (like ``Parameter.KEYWORD_ONLY``), + and support comparison and ordering, in the following order: .. tabularcolumns:: |l|L| From webhook-mailer at python.org Mon Feb 27 10:21:34 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 27 Feb 2023 15:21:34 -0000 Subject: [Python-checkins] gh-102296 Document that inspect.Parameter kinds support ordering (GH-102297) Message-ID: https://github.com/python/cpython/commit/1ef6e45f963852d0a4363c1d2e180dc27758c0ee commit: 1ef6e45f963852d0a4363c1d2e180dc27758c0ee branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-27T07:21:27-08:00 summary: gh-102296 Document that inspect.Parameter kinds support ordering (GH-102297) (cherry picked from commit 0db6f442598a1994c37f24e704892a2bb71a0a1b) Co-authored-by: Gouvernathor <44340603+Gouvernathor at users.noreply.github.com> Automerge-Triggered-By: GH:AlexWaygood files: M Doc/library/inspect.rst diff --git a/Doc/library/inspect.rst b/Doc/library/inspect.rst index 6a696c977bf1..caca1050d7bf 100644 --- a/Doc/library/inspect.rst +++ b/Doc/library/inspect.rst @@ -780,8 +780,9 @@ function. .. attribute:: Parameter.kind - Describes how argument values are bound to the parameter. Possible values - (accessible via :class:`Parameter`, like ``Parameter.KEYWORD_ONLY``): + Describes how argument values are bound to the parameter. 
The possible + values are accessible via :class:`Parameter` (like ``Parameter.KEYWORD_ONLY``), + and support comparison and ordering, in the following order: .. tabularcolumns:: |l|L| From webhook-mailer at python.org Mon Feb 27 10:22:15 2023 From: webhook-mailer at python.org (miss-islington) Date: Mon, 27 Feb 2023 15:22:15 -0000 Subject: [Python-checkins] gh-102296 Document that inspect.Parameter kinds support ordering (GH-102297) Message-ID: https://github.com/python/cpython/commit/ca1cc1484f3045e7f4472be56383de2597437fc3 commit: ca1cc1484f3045e7f4472be56383de2597437fc3 branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-27T07:22:07-08:00 summary: gh-102296 Document that inspect.Parameter kinds support ordering (GH-102297) (cherry picked from commit 0db6f442598a1994c37f24e704892a2bb71a0a1b) Co-authored-by: Gouvernathor <44340603+Gouvernathor at users.noreply.github.com> Automerge-Triggered-By: GH:AlexWaygood files: M Doc/library/inspect.rst diff --git a/Doc/library/inspect.rst b/Doc/library/inspect.rst index cc9c22fe3d97..35aa99464ed0 100644 --- a/Doc/library/inspect.rst +++ b/Doc/library/inspect.rst @@ -753,8 +753,9 @@ function. .. attribute:: Parameter.kind - Describes how argument values are bound to the parameter. Possible values - (accessible via :class:`Parameter`, like ``Parameter.KEYWORD_ONLY``): + Describes how argument values are bound to the parameter. The possible + values are accessible via :class:`Parameter` (like ``Parameter.KEYWORD_ONLY``), + and support comparison and ordering, in the following order: .. tabularcolumns:: |l|L| From webhook-mailer at python.org Mon Feb 27 11:21:54 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Mon, 27 Feb 2023 16:21:54 -0000 Subject: [Python-checkins] gh-102251: Updates to test_imp Toward Fixing Some Refleaks (gh-102254) Message-ID: https://github.com/python/cpython/commit/bb0cf8fd60e71581a90179af63e60e8704c3814b commit: bb0cf8fd60e71581a90179af63e60e8704c3814b branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-27T09:21:18-07:00 summary: gh-102251: Updates to test_imp Toward Fixing Some Refleaks (gh-102254) This is related to fixing the refleaks introduced by commit 096d009. I haven't been able to find the leak yet, but these changes are a consequence of that effort. This includes some cleanup, some tweaks to the existing tests, and a bunch of new test cases. The only change here that might have impact outside the tests in question is in imp.py, where I update imp.load_dynamic() to use spec_from_file_location() instead of creating a ModuleSpec directly. Also note that I've updated the tests to only skip if we're checking for refleaks (regrtest's --huntrleaks), whereas in gh-101969 I had skipped the tests entirely. The tests will be useful for some upcoming work and I'd rather the refleaks not hold that up. (It isn't clear how quickly we'll be able to fix the leaking code, though it will certainly be done in the short term.) 
https://github.com/python/cpython/issues/102251 files: M Lib/imp.py M Lib/test/test_imp.py M Modules/_testsinglephase.c M Python/import.c diff --git a/Lib/imp.py b/Lib/imp.py index fc42c1576585..fe850f6a0018 100644 --- a/Lib/imp.py +++ b/Lib/imp.py @@ -338,8 +338,8 @@ def load_dynamic(name, path, file=None): # Issue #24748: Skip the sys.modules check in _load_module_shim; # always load new extension - spec = importlib.machinery.ModuleSpec( - name=name, loader=loader, origin=path) + spec = importlib.util.spec_from_file_location( + name, path, loader=loader) return _load(spec) else: diff --git a/Lib/test/test_imp.py b/Lib/test/test_imp.py index 2292bb209395..03e3adba221e 100644 --- a/Lib/test/test_imp.py +++ b/Lib/test/test_imp.py @@ -1,4 +1,5 @@ import gc +import json import importlib import importlib.util import os @@ -11,6 +12,7 @@ from test.support import script_helper from test.support import warnings_helper import textwrap +import types import unittest import warnings imp = warnings_helper.import_deprecated('imp') @@ -39,6 +41,169 @@ def requires_load_dynamic(meth): 'imp.load_dynamic() required')(meth) +class ModuleSnapshot(types.SimpleNamespace): + """A representation of a module for testing. + + Fields: + + * id - the module's object ID + * module - the actual module or an adequate substitute + * __file__ + * __spec__ + * name + * origin + * ns - a copy (dict) of the module's __dict__ (or None) + * ns_id - the object ID of the module's __dict__ + * cached - the sys.modules[mod.__spec__.name] entry (or None) + * cached_id - the object ID of the sys.modules entry (or None) + + In cases where the value is not available (e.g. due to serialization), + the value will be None. + """ + _fields = tuple('id module ns ns_id cached cached_id'.split()) + + @classmethod + def from_module(cls, mod): + name = mod.__spec__.name + cached = sys.modules.get(name) + return cls( + id=id(mod), + module=mod, + ns=types.SimpleNamespace(**mod.__dict__), + ns_id=id(mod.__dict__), + cached=cached, + cached_id=id(cached), + ) + + SCRIPT = textwrap.dedent(''' + {imports} + + name = {name!r} + + {prescript} + + mod = {name} + + {body} + + {postscript} + ''') + IMPORTS = textwrap.dedent(''' + import sys + ''').strip() + SCRIPT_BODY = textwrap.dedent(''' + # Capture the snapshot data. + cached = sys.modules.get(name) + snapshot = dict( + id=id(mod), + module=dict( + __file__=mod.__file__, + __spec__=dict( + name=mod.__spec__.name, + origin=mod.__spec__.origin, + ), + ), + ns=None, + ns_id=id(mod.__dict__), + cached=None, + cached_id=id(cached) if cached else None, + ) + ''').strip() + CLEANUP_SCRIPT = textwrap.dedent(''' + # Clean up the module. + sys.modules.pop(name, None) + ''').strip() + + @classmethod + def build_script(cls, name, *, + prescript=None, + import_first=False, + postscript=None, + postcleanup=False, + ): + if postcleanup is True: + postcleanup = cls.CLEANUP_SCRIPT + elif isinstance(postcleanup, str): + postcleanup = textwrap.dedent(postcleanup).strip() + postcleanup = cls.CLEANUP_SCRIPT + os.linesep + postcleanup + else: + postcleanup = '' + prescript = textwrap.dedent(prescript).strip() if prescript else '' + postscript = textwrap.dedent(postscript).strip() if postscript else '' + + if postcleanup: + if postscript: + postscript = postscript + os.linesep * 2 + postcleanup + else: + postscript = postcleanup + + if import_first: + prescript += textwrap.dedent(f''' + + # Now import the module. 
+ assert name not in sys.modules + import {name}''') + + return cls.SCRIPT.format( + imports=cls.IMPORTS.strip(), + name=name, + prescript=prescript.strip(), + body=cls.SCRIPT_BODY.strip(), + postscript=postscript, + ) + + @classmethod + def parse(cls, text): + raw = json.loads(text) + mod = raw['module'] + mod['__spec__'] = types.SimpleNamespace(**mod['__spec__']) + raw['module'] = types.SimpleNamespace(**mod) + return cls(**raw) + + @classmethod + def from_subinterp(cls, name, interpid=None, *, pipe=None, **script_kwds): + if pipe is not None: + return cls._from_subinterp(name, interpid, pipe, script_kwds) + pipe = os.pipe() + try: + return cls._from_subinterp(name, interpid, pipe, script_kwds) + finally: + r, w = pipe + os.close(r) + os.close(w) + + @classmethod + def _from_subinterp(cls, name, interpid, pipe, script_kwargs): + r, w = pipe + + # Build the script. + postscript = textwrap.dedent(f''' + # Send the result over the pipe. + import json + import os + os.write({w}, json.dumps(snapshot).encode()) + + ''') + _postscript = script_kwargs.get('postscript') + if _postscript: + _postscript = textwrap.dedent(_postscript).lstrip() + postscript += _postscript + script_kwargs['postscript'] = postscript.strip() + script = cls.build_script(name, **script_kwargs) + + # Run the script. + if interpid is None: + ret = support.run_in_subinterp(script) + if ret != 0: + raise AssertionError(f'{ret} != 0') + else: + _interpreters.run_string(interpid, script) + + # Parse the results. + text = os.read(r, 1000) + return cls.parse(text.decode()) + + class LockTests(unittest.TestCase): """Very basic test of import lock functions.""" @@ -263,288 +428,6 @@ def test_issue16421_multiple_modules_in_one_dll(self): with self.assertRaises(ImportError): imp.load_dynamic('nonexistent', pathname) - @unittest.skip('known refleak (temporarily skipping)') - @requires_subinterpreters - @requires_load_dynamic - def test_singlephase_multiple_interpreters(self): - # Currently, for every single-phrase init module loaded - # in multiple interpreters, those interpreters share a - # PyModuleDef for that object, which can be a problem. - - # This single-phase module has global state, which is shared - # by the interpreters. - import _testsinglephase - name = _testsinglephase.__name__ - filename = _testsinglephase.__file__ - - del sys.modules[name] - _testsinglephase._clear_globals() - _testinternalcapi.clear_extension(name, filename) - init_count = _testsinglephase.initialized_count() - assert init_count == -1, (init_count,) - - def clean_up(): - _testsinglephase._clear_globals() - _testinternalcapi.clear_extension(name, filename) - self.addCleanup(clean_up) - - interp1 = _interpreters.create(isolated=False) - self.addCleanup(_interpreters.destroy, interp1) - interp2 = _interpreters.create(isolated=False) - self.addCleanup(_interpreters.destroy, interp2) - - script = textwrap.dedent(f''' - import _testsinglephase - - expected = %d - init_count = _testsinglephase.initialized_count() - if init_count != expected: - raise Exception(init_count) - - lookedup = _testsinglephase.look_up_self() - if lookedup is not _testsinglephase: - raise Exception((_testsinglephase, lookedup)) - - # Attrs set in the module init func are in m_copy. - _initialized = _testsinglephase._initialized - initialized = _testsinglephase.initialized() - if _initialized != initialized: - raise Exception((_initialized, initialized)) - - # Attrs set after loading are not in m_copy. 
- if hasattr(_testsinglephase, 'spam'): - raise Exception(_testsinglephase.spam) - _testsinglephase.spam = expected - ''') - - # Use an interpreter that gets destroyed right away. - ret = support.run_in_subinterp(script % 1) - self.assertEqual(ret, 0) - - # The module's init func gets run again. - # The module's globals did not get destroyed. - _interpreters.run_string(interp1, script % 2) - - # The module's init func is not run again. - # The second interpreter copies the module's m_copy. - # However, globals are still shared. - _interpreters.run_string(interp2, script % 2) - - @unittest.skip('known refleak (temporarily skipping)') - @requires_load_dynamic - def test_singlephase_variants(self): - # Exercise the most meaningful variants described in Python/import.c. - self.maxDiff = None - - basename = '_testsinglephase' - fileobj, pathname, _ = imp.find_module(basename) - fileobj.close() - - def clean_up(): - import _testsinglephase - _testsinglephase._clear_globals() - self.addCleanup(clean_up) - - def add_ext_cleanup(name): - def clean_up(): - _testinternalcapi.clear_extension(name, pathname) - self.addCleanup(clean_up) - - modules = {} - def load(name): - assert name not in modules - module = imp.load_dynamic(name, pathname) - self.assertNotIn(module, modules.values()) - modules[name] = module - return module - - def re_load(name, module): - assert sys.modules[name] is module - before = type(module)(module.__name__) - before.__dict__.update(vars(module)) - - reloaded = imp.load_dynamic(name, pathname) - - return before, reloaded - - def check_common(name, module): - summed = module.sum(1, 2) - lookedup = module.look_up_self() - initialized = module.initialized() - cached = sys.modules[name] - - # module.__name__ might not match, but the spec will. - self.assertEqual(module.__spec__.name, name) - if initialized is not None: - self.assertIsInstance(initialized, float) - self.assertGreater(initialized, 0) - self.assertEqual(summed, 3) - self.assertTrue(issubclass(module.error, Exception)) - self.assertEqual(module.int_const, 1969) - self.assertEqual(module.str_const, 'something different') - self.assertIs(cached, module) - - return lookedup, initialized, cached - - def check_direct(name, module, lookedup): - # The module has its own PyModuleDef, with a matching name. - self.assertEqual(module.__name__, name) - self.assertIs(lookedup, module) - - def check_indirect(name, module, lookedup, orig): - # The module re-uses another's PyModuleDef, with a different name. - assert orig is not module - assert orig.__name__ != name - self.assertNotEqual(module.__name__, name) - self.assertIs(lookedup, module) - - def check_basic(module, initialized): - init_count = module.initialized_count() - - self.assertIsNot(initialized, None) - self.assertIsInstance(init_count, int) - self.assertGreater(init_count, 0) - - return init_count - - def check_common_reloaded(name, module, cached, before, reloaded): - recached = sys.modules[name] - - self.assertEqual(reloaded.__spec__.name, name) - self.assertEqual(reloaded.__name__, before.__name__) - self.assertEqual(before.__dict__, module.__dict__) - self.assertIs(recached, reloaded) - - def check_basic_reloaded(module, lookedup, initialized, init_count, - before, reloaded): - relookedup = reloaded.look_up_self() - reinitialized = reloaded.initialized() - reinit_count = reloaded.initialized_count() - - self.assertIs(reloaded, module) - self.assertIs(reloaded.__dict__, module.__dict__) - # It only happens to be the same but that's good enough here. 
- # We really just want to verify that the re-loaded attrs - # didn't change. - self.assertIs(relookedup, lookedup) - self.assertEqual(reinitialized, initialized) - self.assertEqual(reinit_count, init_count) - - def check_with_reinit_reloaded(module, lookedup, initialized, - before, reloaded): - relookedup = reloaded.look_up_self() - reinitialized = reloaded.initialized() - - self.assertIsNot(reloaded, module) - self.assertIsNot(reloaded, module) - self.assertNotEqual(reloaded.__dict__, module.__dict__) - self.assertIs(relookedup, reloaded) - if initialized is None: - self.assertIs(reinitialized, None) - else: - self.assertGreater(reinitialized, initialized) - - # Check the "basic" module. - - name = basename - add_ext_cleanup(name) - expected_init_count = 1 - with self.subTest(name): - mod = load(name) - lookedup, initialized, cached = check_common(name, mod) - check_direct(name, mod, lookedup) - init_count = check_basic(mod, initialized) - self.assertEqual(init_count, expected_init_count) - - before, reloaded = re_load(name, mod) - check_common_reloaded(name, mod, cached, before, reloaded) - check_basic_reloaded(mod, lookedup, initialized, init_count, - before, reloaded) - basic = mod - - # Check its indirect variants. - - name = f'{basename}_basic_wrapper' - add_ext_cleanup(name) - expected_init_count += 1 - with self.subTest(name): - mod = load(name) - lookedup, initialized, cached = check_common(name, mod) - check_indirect(name, mod, lookedup, basic) - init_count = check_basic(mod, initialized) - self.assertEqual(init_count, expected_init_count) - - before, reloaded = re_load(name, mod) - check_common_reloaded(name, mod, cached, before, reloaded) - check_basic_reloaded(mod, lookedup, initialized, init_count, - before, reloaded) - - # Currently PyState_AddModule() always replaces the cached module. - self.assertIs(basic.look_up_self(), mod) - self.assertEqual(basic.initialized_count(), expected_init_count) - - # The cached module shouldn't be changed after this point. - basic_lookedup = mod - - # Check its direct variant. - - name = f'{basename}_basic_copy' - add_ext_cleanup(name) - expected_init_count += 1 - with self.subTest(name): - mod = load(name) - lookedup, initialized, cached = check_common(name, mod) - check_direct(name, mod, lookedup) - init_count = check_basic(mod, initialized) - self.assertEqual(init_count, expected_init_count) - - before, reloaded = re_load(name, mod) - check_common_reloaded(name, mod, cached, before, reloaded) - check_basic_reloaded(mod, lookedup, initialized, init_count, - before, reloaded) - - # This should change the cached module for _testsinglephase. - self.assertIs(basic.look_up_self(), basic_lookedup) - self.assertEqual(basic.initialized_count(), expected_init_count) - - # Check the non-basic variant that has no state. - - name = f'{basename}_with_reinit' - add_ext_cleanup(name) - with self.subTest(name): - mod = load(name) - lookedup, initialized, cached = check_common(name, mod) - self.assertIs(initialized, None) - check_direct(name, mod, lookedup) - - before, reloaded = re_load(name, mod) - check_common_reloaded(name, mod, cached, before, reloaded) - check_with_reinit_reloaded(mod, lookedup, initialized, - before, reloaded) - - # This should change the cached module for _testsinglephase. - self.assertIs(basic.look_up_self(), basic_lookedup) - self.assertEqual(basic.initialized_count(), expected_init_count) - - # Check the basic variant that has state. 
- - name = f'{basename}_with_state' - add_ext_cleanup(name) - with self.subTest(name): - mod = load(name) - lookedup, initialized, cached = check_common(name, mod) - self.assertIsNot(initialized, None) - check_direct(name, mod, lookedup) - - before, reloaded = re_load(name, mod) - check_common_reloaded(name, mod, cached, before, reloaded) - check_with_reinit_reloaded(mod, lookedup, initialized, - before, reloaded) - - # This should change the cached module for _testsinglephase. - self.assertIs(basic.look_up_self(), basic_lookedup) - self.assertEqual(basic.initialized_count(), expected_init_count) - @requires_load_dynamic def test_load_dynamic_ImportError_path(self): # Issue #1559549 added `name` and `path` attributes to ImportError @@ -737,6 +620,669 @@ def check_get_builtins(): check_get_builtins() +class TestSinglePhaseSnapshot(ModuleSnapshot): + + @classmethod + def from_module(cls, mod): + self = super().from_module(mod) + self.summed = mod.sum(1, 2) + self.lookedup = mod.look_up_self() + self.lookedup_id = id(self.lookedup) + self.state_initialized = mod.state_initialized() + if hasattr(mod, 'initialized_count'): + self.init_count = mod.initialized_count() + return self + + SCRIPT_BODY = ModuleSnapshot.SCRIPT_BODY + textwrap.dedent(f''' + snapshot['module'].update(dict( + int_const=mod.int_const, + str_const=mod.str_const, + _module_initialized=mod._module_initialized, + )) + snapshot.update(dict( + summed=mod.sum(1, 2), + lookedup_id=id(mod.look_up_self()), + state_initialized=mod.state_initialized(), + init_count=mod.initialized_count(), + has_spam=hasattr(mod, 'spam'), + spam=getattr(mod, 'spam', None), + )) + ''').rstrip() + + @classmethod + def parse(cls, text): + self = super().parse(text) + if not self.has_spam: + del self.spam + del self.has_spam + return self + + + at requires_load_dynamic +class SinglephaseInitTests(unittest.TestCase): + + NAME = '_testsinglephase' + + @classmethod + def setUpClass(cls): + if '-R' in sys.argv or '--huntrleaks' in sys.argv: + # https://github.com/python/cpython/issues/102251 + raise unittest.SkipTest('unresolved refleaks (see gh-102251)') + fileobj, filename, _ = imp.find_module(cls.NAME) + fileobj.close() + cls.FILE = filename + + # Start fresh. + cls.clean_up() + + def tearDown(self): + # Clean up the module. + self.clean_up() + + @classmethod + def clean_up(cls): + name = cls.NAME + filename = cls.FILE + if name in sys.modules: + if hasattr(sys.modules[name], '_clear_globals'): + assert sys.modules[name].__file__ == filename + sys.modules[name]._clear_globals() + del sys.modules[name] + # Clear all internally cached data for the extension. + _testinternalcapi.clear_extension(name, filename) + + ######################### + # helpers + + def add_module_cleanup(self, name): + def clean_up(): + # Clear all internally cached data for the extension. 
+ _testinternalcapi.clear_extension(name, self.FILE) + self.addCleanup(clean_up) + + def load(self, name): + try: + already_loaded = self.already_loaded + except AttributeError: + already_loaded = self.already_loaded = {} + assert name not in already_loaded + mod = imp.load_dynamic(name, self.FILE) + self.assertNotIn(mod, already_loaded.values()) + already_loaded[name] = mod + return types.SimpleNamespace( + name=name, + module=mod, + snapshot=TestSinglePhaseSnapshot.from_module(mod), + ) + + def re_load(self, name, mod): + assert sys.modules[name] is mod + assert mod.__dict__ == mod.__dict__ + reloaded = imp.load_dynamic(name, self.FILE) + return types.SimpleNamespace( + name=name, + module=reloaded, + snapshot=TestSinglePhaseSnapshot.from_module(reloaded), + ) + + # subinterpreters + + def add_subinterpreter(self): + interpid = _interpreters.create(isolated=False) + _interpreters.run_string(interpid, textwrap.dedent(''' + import sys + import _testinternalcapi + ''')) + def clean_up(): + _interpreters.run_string(interpid, textwrap.dedent(f''' + name = {self.NAME!r} + if name in sys.modules: + sys.modules[name]._clear_globals() + _testinternalcapi.clear_extension(name, {self.FILE!r}) + ''')) + _interpreters.destroy(interpid) + self.addCleanup(clean_up) + return interpid + + def import_in_subinterp(self, interpid=None, *, + postscript=None, + postcleanup=False, + ): + name = self.NAME + + if postcleanup: + import_ = 'import _testinternalcapi' if interpid is None else '' + postcleanup = f''' + {import_} + mod._clear_globals() + _testinternalcapi.clear_extension(name, {self.FILE!r}) + ''' + + try: + pipe = self._pipe + except AttributeError: + r, w = pipe = self._pipe = os.pipe() + self.addCleanup(os.close, r) + self.addCleanup(os.close, w) + + snapshot = TestSinglePhaseSnapshot.from_subinterp( + name, + interpid, + pipe=pipe, + import_first=True, + postscript=postscript, + postcleanup=postcleanup, + ) + + return types.SimpleNamespace( + name=name, + module=None, + snapshot=snapshot, + ) + + # checks + + def check_common(self, loaded): + isolated = False + + mod = loaded.module + if not mod: + # It came from a subinterpreter. + isolated = True + mod = loaded.snapshot.module + # mod.__name__ might not match, but the spec will. + self.assertEqual(mod.__spec__.name, loaded.name) + self.assertEqual(mod.__file__, self.FILE) + self.assertEqual(mod.__spec__.origin, self.FILE) + if not isolated: + self.assertTrue(issubclass(mod.error, Exception)) + self.assertEqual(mod.int_const, 1969) + self.assertEqual(mod.str_const, 'something different') + self.assertIsInstance(mod._module_initialized, float) + self.assertGreater(mod._module_initialized, 0) + + snap = loaded.snapshot + self.assertEqual(snap.summed, 3) + if snap.state_initialized is not None: + self.assertIsInstance(snap.state_initialized, float) + self.assertGreater(snap.state_initialized, 0) + if isolated: + # The "looked up" module is interpreter-specific + # (interp->imports.modules_by_index was set for the module). + self.assertEqual(snap.lookedup_id, snap.id) + self.assertEqual(snap.cached_id, snap.id) + with self.assertRaises(AttributeError): + snap.spam + else: + self.assertIs(snap.lookedup, mod) + self.assertIs(snap.cached, mod) + + def check_direct(self, loaded): + # The module has its own PyModuleDef, with a matching name. 
+ self.assertEqual(loaded.module.__name__, loaded.name) + self.assertIs(loaded.snapshot.lookedup, loaded.module) + + def check_indirect(self, loaded, orig): + # The module re-uses another's PyModuleDef, with a different name. + assert orig is not loaded.module + assert orig.__name__ != loaded.name + self.assertNotEqual(loaded.module.__name__, loaded.name) + self.assertIs(loaded.snapshot.lookedup, loaded.module) + + def check_basic(self, loaded, expected_init_count): + # m_size == -1 + # The module loads fresh the first time and copies m_copy after. + snap = loaded.snapshot + self.assertIsNot(snap.state_initialized, None) + self.assertIsInstance(snap.init_count, int) + self.assertGreater(snap.init_count, 0) + self.assertEqual(snap.init_count, expected_init_count) + + def check_with_reinit(self, loaded): + # m_size >= 0 + # The module loads fresh every time. + pass + + def check_fresh(self, loaded): + """ + The module had not been loaded before (at least since fully reset). + """ + snap = loaded.snapshot + # The module's init func was run. + # A copy of the module's __dict__ was stored in def->m_base.m_copy. + # The previous m_copy was deleted first. + # _PyRuntime.imports.extensions was set. + self.assertEqual(snap.init_count, 1) + # The global state was initialized. + # The module attrs were initialized from that state. + self.assertEqual(snap.module._module_initialized, + snap.state_initialized) + + def check_semi_fresh(self, loaded, base, prev): + """ + The module had been loaded before and then reset + (but the module global state wasn't). + """ + snap = loaded.snapshot + # The module's init func was run again. + # A copy of the module's __dict__ was stored in def->m_base.m_copy. + # The previous m_copy was deleted first. + # The module globals did not get reset. + self.assertNotEqual(snap.id, base.snapshot.id) + self.assertNotEqual(snap.id, prev.snapshot.id) + self.assertEqual(snap.init_count, prev.snapshot.init_count + 1) + # The global state was updated. + # The module attrs were initialized from that state. + self.assertEqual(snap.module._module_initialized, + snap.state_initialized) + self.assertNotEqual(snap.state_initialized, + base.snapshot.state_initialized) + self.assertNotEqual(snap.state_initialized, + prev.snapshot.state_initialized) + + def check_copied(self, loaded, base): + """ + The module had been loaded before and never reset. + """ + snap = loaded.snapshot + # The module's init func was not run again. + # The interpreter copied m_copy, as set by the other interpreter, + # with objects owned by the other interpreter. + # The module globals did not get reset. + self.assertNotEqual(snap.id, base.snapshot.id) + self.assertEqual(snap.init_count, base.snapshot.init_count) + # The global state was not updated since the init func did not run. + # The module attrs were not directly initialized from that state. + # The state and module attrs still match the previous loading. 
+ self.assertEqual(snap.module._module_initialized, + snap.state_initialized) + self.assertEqual(snap.state_initialized, + base.snapshot.state_initialized) + + ######################### + # the tests + + def test_cleared_globals(self): + loaded = self.load(self.NAME) + _testsinglephase = loaded.module + init_before = _testsinglephase.state_initialized() + + _testsinglephase._clear_globals() + init_after = _testsinglephase.state_initialized() + init_count = _testsinglephase.initialized_count() + + self.assertGreater(init_before, 0) + self.assertEqual(init_after, 0) + self.assertEqual(init_count, -1) + + def test_variants(self): + # Exercise the most meaningful variants described in Python/import.c. + self.maxDiff = None + + # Check the "basic" module. + + name = self.NAME + expected_init_count = 1 + with self.subTest(name): + loaded = self.load(name) + + self.check_common(loaded) + self.check_direct(loaded) + self.check_basic(loaded, expected_init_count) + basic = loaded.module + + # Check its indirect variants. + + name = f'{self.NAME}_basic_wrapper' + self.add_module_cleanup(name) + expected_init_count += 1 + with self.subTest(name): + loaded = self.load(name) + + self.check_common(loaded) + self.check_indirect(loaded, basic) + self.check_basic(loaded, expected_init_count) + + # Currently PyState_AddModule() always replaces the cached module. + self.assertIs(basic.look_up_self(), loaded.module) + self.assertEqual(basic.initialized_count(), expected_init_count) + + # The cached module shouldn't change after this point. + basic_lookedup = loaded.module + + # Check its direct variant. + + name = f'{self.NAME}_basic_copy' + self.add_module_cleanup(name) + expected_init_count += 1 + with self.subTest(name): + loaded = self.load(name) + + self.check_common(loaded) + self.check_direct(loaded) + self.check_basic(loaded, expected_init_count) + + # This should change the cached module for _testsinglephase. + self.assertIs(basic.look_up_self(), basic_lookedup) + self.assertEqual(basic.initialized_count(), expected_init_count) + + # Check the non-basic variant that has no state. + + name = f'{self.NAME}_with_reinit' + self.add_module_cleanup(name) + with self.subTest(name): + loaded = self.load(name) + + self.check_common(loaded) + self.assertIs(loaded.snapshot.state_initialized, None) + self.check_direct(loaded) + self.check_with_reinit(loaded) + + # This should change the cached module for _testsinglephase. + self.assertIs(basic.look_up_self(), basic_lookedup) + self.assertEqual(basic.initialized_count(), expected_init_count) + + # Check the basic variant that has state. + + name = f'{self.NAME}_with_state' + self.add_module_cleanup(name) + with self.subTest(name): + loaded = self.load(name) + + self.check_common(loaded) + self.assertIsNot(loaded.snapshot.state_initialized, None) + self.check_direct(loaded) + self.check_with_reinit(loaded) + + # This should change the cached module for _testsinglephase. + self.assertIs(basic.look_up_self(), basic_lookedup) + self.assertEqual(basic.initialized_count(), expected_init_count) + + def test_basic_reloaded(self): + # m_copy is copied into the existing module object. + # Global state is not changed. 
+ self.maxDiff = None + + for name in [ + self.NAME, # the "basic" module + f'{self.NAME}_basic_wrapper', # the indirect variant + f'{self.NAME}_basic_copy', # the direct variant + ]: + self.add_module_cleanup(name) + with self.subTest(name): + loaded = self.load(name) + reloaded = self.re_load(name, loaded.module) + + self.check_common(loaded) + self.check_common(reloaded) + + # Make sure the original __dict__ did not get replaced. + self.assertEqual(id(loaded.module.__dict__), + loaded.snapshot.ns_id) + self.assertEqual(loaded.snapshot.ns.__dict__, + loaded.module.__dict__) + + self.assertEqual(reloaded.module.__spec__.name, reloaded.name) + self.assertEqual(reloaded.module.__name__, + reloaded.snapshot.ns.__name__) + + self.assertIs(reloaded.module, loaded.module) + self.assertIs(reloaded.module.__dict__, loaded.module.__dict__) + # It only happens to be the same but that's good enough here. + # We really just want to verify that the re-loaded attrs + # didn't change. + self.assertIs(reloaded.snapshot.lookedup, + loaded.snapshot.lookedup) + self.assertEqual(reloaded.snapshot.state_initialized, + loaded.snapshot.state_initialized) + self.assertEqual(reloaded.snapshot.init_count, + loaded.snapshot.init_count) + + self.assertIs(reloaded.snapshot.cached, reloaded.module) + + def test_with_reinit_reloaded(self): + # The module's m_init func is run again. + self.maxDiff = None + + # Keep a reference around. + basic = self.load(self.NAME) + + for name in [ + f'{self.NAME}_with_reinit', # m_size == 0 + f'{self.NAME}_with_state', # m_size > 0 + ]: + self.add_module_cleanup(name) + with self.subTest(name): + loaded = self.load(name) + reloaded = self.re_load(name, loaded.module) + + self.check_common(loaded) + self.check_common(reloaded) + + # Make sure the original __dict__ did not get replaced. + self.assertEqual(id(loaded.module.__dict__), + loaded.snapshot.ns_id) + self.assertEqual(loaded.snapshot.ns.__dict__, + loaded.module.__dict__) + + self.assertEqual(reloaded.module.__spec__.name, reloaded.name) + self.assertEqual(reloaded.module.__name__, + reloaded.snapshot.ns.__name__) + + self.assertIsNot(reloaded.module, loaded.module) + self.assertNotEqual(reloaded.module.__dict__, + loaded.module.__dict__) + self.assertIs(reloaded.snapshot.lookedup, reloaded.module) + if loaded.snapshot.state_initialized is None: + self.assertIs(reloaded.snapshot.state_initialized, None) + else: + self.assertGreater(reloaded.snapshot.state_initialized, + loaded.snapshot.state_initialized) + + self.assertIs(reloaded.snapshot.cached, reloaded.module) + + # Currently, for every single-phrase init module loaded + # in multiple interpreters, those interpreters share a + # PyModuleDef for that object, which can be a problem. + # Also, we test with a single-phase module that has global state, + # which is shared by all interpreters. + + @requires_subinterpreters + def test_basic_multiple_interpreters_main_no_reset(self): + # without resetting; already loaded in main interpreter + + # At this point: + # * alive in 0 interpreters + # * module def may or may not be loaded already + # * module def not in _PyRuntime.imports.extensions + # * mod init func has not run yet (since reset, at least) + # * m_copy not set (hasn't been loaded yet or already cleared) + # * module's global state has not been initialized yet + # (or already cleared) + + main_loaded = self.load(self.NAME) + _testsinglephase = main_loaded.module + # Attrs set after loading are not in m_copy. 
+ _testsinglephase.spam = 'spam, spam, spam, spam, eggs, and spam' + + self.check_common(main_loaded) + self.check_fresh(main_loaded) + + interpid1 = self.add_subinterpreter() + interpid2 = self.add_subinterpreter() + + # At this point: + # * alive in 1 interpreter (main) + # * module def in _PyRuntime.imports.extensions + # * mod init func ran for the first time (since reset, at least) + # * m_copy was copied from the main interpreter (was NULL) + # * module's global state was initialized + + # Use an interpreter that gets destroyed right away. + loaded = self.import_in_subinterp() + self.check_common(loaded) + self.check_copied(loaded, main_loaded) + + # At this point: + # * alive in 1 interpreter (main) + # * module def still in _PyRuntime.imports.extensions + # * mod init func ran again + # * m_copy is NULL (claered when the interpreter was destroyed) + # (was from main interpreter) + # * module's global state was updated, not reset + + # Use a subinterpreter that sticks around. + loaded = self.import_in_subinterp(interpid1) + self.check_common(loaded) + self.check_copied(loaded, main_loaded) + + # At this point: + # * alive in 2 interpreters (main, interp1) + # * module def still in _PyRuntime.imports.extensions + # * mod init func ran again + # * m_copy was copied from interp1 + # * module's global state was updated, not reset + + # Use a subinterpreter while the previous one is still alive. + loaded = self.import_in_subinterp(interpid2) + self.check_common(loaded) + self.check_copied(loaded, main_loaded) + + # At this point: + # * alive in 3 interpreters (main, interp1, interp2) + # * module def still in _PyRuntime.imports.extensions + # * mod init func ran again + # * m_copy was copied from interp2 (was from interp1) + # * module's global state was updated, not reset + + @requires_subinterpreters + def test_basic_multiple_interpreters_deleted_no_reset(self): + # without resetting; already loaded in a deleted interpreter + + # At this point: + # * alive in 0 interpreters + # * module def may or may not be loaded already + # * module def not in _PyRuntime.imports.extensions + # * mod init func has not run yet (since reset, at least) + # * m_copy not set (hasn't been loaded yet or already cleared) + # * module's global state has not been initialized yet + # (or already cleared) + + interpid1 = self.add_subinterpreter() + interpid2 = self.add_subinterpreter() + + # First, load in the main interpreter but then completely clear it. + loaded_main = self.load(self.NAME) + loaded_main.module._clear_globals() + _testinternalcapi.clear_extension(self.NAME, self.FILE) + + # At this point: + # * alive in 0 interpreters + # * module def loaded already + # * module def was in _PyRuntime.imports.extensions, but cleared + # * mod init func ran for the first time (since reset, at least) + # * m_copy was set, but cleared (was NULL) + # * module's global state was initialized but cleared + + # Start with an interpreter that gets destroyed right away. + base = self.import_in_subinterp(postscript=''' + # Attrs set after loading are not in m_copy. + mod.spam = 'spam, spam, mash, spam, eggs, and spam' + ''') + self.check_common(base) + self.check_fresh(base) + + # At this point: + # * alive in 0 interpreters + # * module def in _PyRuntime.imports.extensions + # * mod init func ran again + # * m_copy is NULL (claered when the interpreter was destroyed) + # * module's global state was initialized, not reset + + # Use a subinterpreter that sticks around. 
+ loaded_interp1 = self.import_in_subinterp(interpid1) + self.check_common(loaded_interp1) + self.check_semi_fresh(loaded_interp1, loaded_main, base) + + # At this point: + # * alive in 1 interpreter (interp1) + # * module def still in _PyRuntime.imports.extensions + # * mod init func ran again + # * m_copy was copied from interp1 (was NULL) + # * module's global state was updated, not reset + + # Use a subinterpreter while the previous one is still alive. + loaded_interp2 = self.import_in_subinterp(interpid2) + self.check_common(loaded_interp2) + self.check_copied(loaded_interp2, loaded_interp1) + + # At this point: + # * alive in 2 interpreters (interp1, interp2) + # * module def still in _PyRuntime.imports.extensions + # * mod init func ran again + # * m_copy was copied from interp2 (was from interp1) + # * module's global state was updated, not reset + + @requires_subinterpreters + @requires_load_dynamic + def test_basic_multiple_interpreters_reset_each(self): + # resetting between each interpreter + + # At this point: + # * alive in 0 interpreters + # * module def may or may not be loaded already + # * module def not in _PyRuntime.imports.extensions + # * mod init func has not run yet (since reset, at least) + # * m_copy not set (hasn't been loaded yet or already cleared) + # * module's global state has not been initialized yet + # (or already cleared) + + interpid1 = self.add_subinterpreter() + interpid2 = self.add_subinterpreter() + + # Use an interpreter that gets destroyed right away. + loaded = self.import_in_subinterp( + postscript=''' + # Attrs set after loading are not in m_copy. + mod.spam = 'spam, spam, mash, spam, eggs, and spam' + ''', + postcleanup=True, + ) + self.check_common(loaded) + self.check_fresh(loaded) + + # At this point: + # * alive in 0 interpreters + # * module def in _PyRuntime.imports.extensions + # * mod init func ran for the first time (since reset, at least) + # * m_copy is NULL (claered when the interpreter was destroyed) + # * module's global state was initialized, not reset + + # Use a subinterpreter that sticks around. + loaded = self.import_in_subinterp(interpid1, postcleanup=True) + self.check_common(loaded) + self.check_fresh(loaded) + + # At this point: + # * alive in 1 interpreter (interp1) + # * module def still in _PyRuntime.imports.extensions + # * mod init func ran again + # * m_copy was copied from interp1 (was NULL) + # * module's global state was initialized, not reset + + # Use a subinterpreter while the previous one is still alive. 
+ loaded = self.import_in_subinterp(interpid2, postcleanup=True) + self.check_common(loaded) + self.check_fresh(loaded) + + # At this point: + # * alive in 2 interpreters (interp2, interp2) + # * module def still in _PyRuntime.imports.extensions + # * mod init func ran again + # * m_copy was copied from interp2 (was from interp1) + # * module's global state was initialized, not reset + + class ReloadTests(unittest.TestCase): """Very basic tests to make sure that imp.reload() operates just like diff --git a/Modules/_testsinglephase.c b/Modules/_testsinglephase.c index 565221c887e5..a16157702ae7 100644 --- a/Modules/_testsinglephase.c +++ b/Modules/_testsinglephase.c @@ -140,7 +140,7 @@ init_module(PyObject *module, module_state *state) if (initialized == NULL) { return -1; } - if (PyModule_AddObjectRef(module, "_initialized", initialized) != 0) { + if (PyModule_AddObjectRef(module, "_module_initialized", initialized) != 0) { return -1; } @@ -148,13 +148,13 @@ init_module(PyObject *module, module_state *state) } -PyDoc_STRVAR(common_initialized_doc, -"initialized()\n\ +PyDoc_STRVAR(common_state_initialized_doc, +"state_initialized()\n\ \n\ -Return the seconds-since-epoch when the module was initialized."); +Return the seconds-since-epoch when the module state was initialized."); static PyObject * -common_initialized(PyObject *self, PyObject *Py_UNUSED(ignored)) +common_state_initialized(PyObject *self, PyObject *Py_UNUSED(ignored)) { module_state *state = get_module_state(self); if (state == NULL) { @@ -164,9 +164,9 @@ common_initialized(PyObject *self, PyObject *Py_UNUSED(ignored)) return PyFloat_FromDouble(d); } -#define INITIALIZED_METHODDEF \ - {"initialized", common_initialized, METH_NOARGS, \ - common_initialized_doc} +#define STATE_INITIALIZED_METHODDEF \ + {"state_initialized", common_state_initialized, METH_NOARGS, \ + common_state_initialized_doc} PyDoc_STRVAR(common_look_up_self_doc, @@ -265,7 +265,7 @@ basic__clear_globals(PyObject *self, PyObject *Py_UNUSED(ignored)) static PyMethodDef TestMethods_Basic[] = { LOOK_UP_SELF_METHODDEF, SUM_METHODDEF, - INITIALIZED_METHODDEF, + STATE_INITIALIZED_METHODDEF, INITIALIZED_COUNT_METHODDEF, _CLEAR_GLOBALS_METHODDEF, {NULL, NULL} /* sentinel */ @@ -360,7 +360,7 @@ PyInit__testsinglephase_basic_copy(void) static PyMethodDef TestMethods_Reinit[] = { LOOK_UP_SELF_METHODDEF, SUM_METHODDEF, - INITIALIZED_METHODDEF, + STATE_INITIALIZED_METHODDEF, {NULL, NULL} /* sentinel */ }; @@ -421,7 +421,7 @@ PyInit__testsinglephase_with_reinit(void) static PyMethodDef TestMethods_WithState[] = { LOOK_UP_SELF_METHODDEF, SUM_METHODDEF, - INITIALIZED_METHODDEF, + STATE_INITIALIZED_METHODDEF, {NULL, NULL} /* sentinel */ }; diff --git a/Python/import.c b/Python/import.c index fabf03b1c5d6..57d4eea14881 100644 --- a/Python/import.c +++ b/Python/import.c @@ -465,7 +465,7 @@ _modules_by_index_set(PyInterpreterState *interp, } static int -_modules_by_index_clear(PyInterpreterState *interp, PyModuleDef *def) +_modules_by_index_clear_one(PyInterpreterState *interp, PyModuleDef *def) { Py_ssize_t index = def->m_base.m_index; const char *err = _modules_by_index_check(interp, index); @@ -546,7 +546,7 @@ PyState_RemoveModule(PyModuleDef* def) "PyState_RemoveModule called on module with slots"); return -1; } - return _modules_by_index_clear(tstate->interp, def); + return _modules_by_index_clear_one(tstate->interp, def); } @@ -584,6 +584,109 @@ _PyImport_ClearModulesByIndex(PyInterpreterState *interp) /* extension modules */ /*********************/ +/* + It may help to have 
a big picture view of what happens + when an extension is loaded. This includes when it is imported + for the first time or via imp.load_dynamic(). + + Here's a summary, using imp.load_dynamic() as the starting point: + + 1. imp.load_dynamic() -> importlib._bootstrap._load() + 2. _load(): acquire import lock + 3. _load() -> importlib._bootstrap._load_unlocked() + 4. _load_unlocked() -> importlib._bootstrap.module_from_spec() + 5. module_from_spec() -> ExtensionFileLoader.create_module() + 6. create_module() -> _imp.create_dynamic() + (see below) + 7. module_from_spec() -> importlib._bootstrap._init_module_attrs() + 8. _load_unlocked(): sys.modules[name] = module + 9. _load_unlocked() -> ExtensionFileLoader.exec_module() + 10. exec_module() -> _imp.exec_dynamic() + (see below) + 11. _load(): release import lock + + + ...for single-phase init modules, where m_size == -1: + + (6). first time (not found in _PyRuntime.imports.extensions): + 1. _imp_create_dynamic_impl() -> import_find_extension() + 2. _imp_create_dynamic_impl() -> _PyImport_LoadDynamicModuleWithSpec() + 3. _PyImport_LoadDynamicModuleWithSpec(): load + 4. _PyImport_LoadDynamicModuleWithSpec(): call + 5. -> PyModule_Create() -> PyModule_Create2() -> PyModule_CreateInitialized() + 6. PyModule_CreateInitialized() -> PyModule_New() + 7. PyModule_CreateInitialized(): allocate mod->md_state + 8. PyModule_CreateInitialized() -> PyModule_AddFunctions() + 9. PyModule_CreateInitialized() -> PyModule_SetDocString() + 10. PyModule_CreateInitialized(): set mod->md_def + 11. : initialize the module + 12. _PyImport_LoadDynamicModuleWithSpec() -> _PyImport_CheckSubinterpIncompatibleExtensionAllowed() + 13. _PyImport_LoadDynamicModuleWithSpec(): set def->m_base.m_init + 14. _PyImport_LoadDynamicModuleWithSpec(): set __file__ + 15. _PyImport_LoadDynamicModuleWithSpec() -> _PyImport_FixupExtensionObject() + 16. _PyImport_FixupExtensionObject(): add it to interp->imports.modules_by_index + 17. _PyImport_FixupExtensionObject(): copy __dict__ into def->m_base.m_copy + 18. _PyImport_FixupExtensionObject(): add it to _PyRuntime.imports.extensions + + (6). subsequent times (found in _PyRuntime.imports.extensions): + 1. _imp_create_dynamic_impl() -> import_find_extension() + 2. import_find_extension() -> import_add_module() + 3. if name in sys.modules: use that module + 4. else: + 1. import_add_module() -> PyModule_NewObject() + 2. import_add_module(): set it on sys.modules + 5. import_find_extension(): copy the "m_copy" dict into __dict__ + 6. _imp_create_dynamic_impl() -> _PyImport_CheckSubinterpIncompatibleExtensionAllowed() + + (10). (every time): + 1. noop + + + ...for single-phase init modules, where m_size >= 0: + + (6). not main interpreter and never loaded there - every time (not found in _PyRuntime.imports.extensions): + 1-16. (same as for m_size == -1) + + (6). main interpreter - first time (not found in _PyRuntime.imports.extensions): + 1-16. (same as for m_size == -1) + 17. _PyImport_FixupExtensionObject(): add it to _PyRuntime.imports.extensions + + (6). previously loaded in main interpreter (found in _PyRuntime.imports.extensions): + 1. _imp_create_dynamic_impl() -> import_find_extension() + 2. import_find_extension(): call def->m_base.m_init + 3. import_find_extension(): add the module to sys.modules + + (10). every time: + 1. noop + + + ...for multi-phase init modules: + + (6). every time: + 1. _imp_create_dynamic_impl() -> import_find_extension() (not found) + 2. _imp_create_dynamic_impl() -> _PyImport_LoadDynamicModuleWithSpec() + 3. 
_PyImport_LoadDynamicModuleWithSpec(): load module init func + 4. _PyImport_LoadDynamicModuleWithSpec(): call module init func + 5. _PyImport_LoadDynamicModuleWithSpec() -> PyModule_FromDefAndSpec() + 6. PyModule_FromDefAndSpec(): gather/check moduledef slots + 7. if there's a Py_mod_create slot: + 1. PyModule_FromDefAndSpec(): call its function + 8. else: + 1. PyModule_FromDefAndSpec() -> PyModule_NewObject() + 9: PyModule_FromDefAndSpec(): set mod->md_def + 10. PyModule_FromDefAndSpec() -> _add_methods_to_object() + 11. PyModule_FromDefAndSpec() -> PyModule_SetDocString() + + (10). every time: + 1. _imp_exec_dynamic_impl() -> exec_builtin_or_dynamic() + 2. if mod->md_state == NULL (including if m_size == 0): + 1. exec_builtin_or_dynamic() -> PyModule_ExecDef() + 2. PyModule_ExecDef(): allocate mod->md_state + 3. if there's a Py_mod_exec slot: + 1. PyModule_ExecDef(): call its function + */ + + /* Make sure name is fully qualified. This is a bit of a hack: when the shared library is loaded, @@ -1007,13 +1110,17 @@ clear_singlephase_extension(PyInterpreterState *interp, /* Clear the PyState_*Module() cache entry. */ if (_modules_by_index_check(interp, def->m_base.m_index) == NULL) { - if (_modules_by_index_clear(interp, def) < 0) { + if (_modules_by_index_clear_one(interp, def) < 0) { return -1; } } /* Clear the cached module def. */ - return _extensions_cache_delete(filename, name); + if (_extensions_cache_delete(filename, name) < 0) { + return -1; + } + + return 0; } From webhook-mailer at python.org Mon Feb 27 13:53:36 2023 From: webhook-mailer at python.org (mdickinson) Date: Mon, 27 Feb 2023 18:53:36 -0000 Subject: [Python-checkins] gh-101773: Optimize creation of Fractions in private methods (#101780) Message-ID: https://github.com/python/cpython/commit/4f3786b7616dd464242b88ad6914053d409fe9d2 commit: 4f3786b7616dd464242b88ad6914053d409fe9d2 branch: main author: Sergey B Kirpichev committer: mdickinson date: 2023-02-27T18:53:22Z summary: gh-101773: Optimize creation of Fractions in private methods (#101780) This PR adds a private `Fraction._from_coprime_ints` classmethod for internal creations of `Fraction` objects, replacing the use of `_normalize=False` in the existing constructor. This speeds up creation of `Fraction` objects arising from calculations. The `_normalize` argument to the `Fraction` constructor has been removed. Co-authored-by: Pieter Eendebak Co-authored-by: Mark Dickinson files: A Misc/NEWS.d/next/Library/2023-02-10-11-59-13.gh-issue-101773.J_kI7y.rst M Lib/fractions.py M Lib/test/test_fractions.py M Lib/test/test_numeric_tower.py diff --git a/Lib/fractions.py b/Lib/fractions.py index 49a3f2841a2e..f718b35639be 100644 --- a/Lib/fractions.py +++ b/Lib/fractions.py @@ -183,7 +183,7 @@ class Fraction(numbers.Rational): __slots__ = ('_numerator', '_denominator') # We're immutable, so use __new__ not __init__ - def __new__(cls, numerator=0, denominator=None, *, _normalize=True): + def __new__(cls, numerator=0, denominator=None): """Constructs a Rational. 
Takes a string like '3/2' or '1.5', another Rational instance, a @@ -279,12 +279,11 @@ def __new__(cls, numerator=0, denominator=None, *, _normalize=True): if denominator == 0: raise ZeroDivisionError('Fraction(%s, 0)' % numerator) - if _normalize: - g = math.gcd(numerator, denominator) - if denominator < 0: - g = -g - numerator //= g - denominator //= g + g = math.gcd(numerator, denominator) + if denominator < 0: + g = -g + numerator //= g + denominator //= g self._numerator = numerator self._denominator = denominator return self @@ -301,7 +300,7 @@ def from_float(cls, f): elif not isinstance(f, float): raise TypeError("%s.from_float() only takes floats, not %r (%s)" % (cls.__name__, f, type(f).__name__)) - return cls(*f.as_integer_ratio()) + return cls._from_coprime_ints(*f.as_integer_ratio()) @classmethod def from_decimal(cls, dec): @@ -313,7 +312,19 @@ def from_decimal(cls, dec): raise TypeError( "%s.from_decimal() only takes Decimals, not %r (%s)" % (cls.__name__, dec, type(dec).__name__)) - return cls(*dec.as_integer_ratio()) + return cls._from_coprime_ints(*dec.as_integer_ratio()) + + @classmethod + def _from_coprime_ints(cls, numerator, denominator, /): + """Convert a pair of ints to a rational number, for internal use. + + The ratio of integers should be in lowest terms and the denominator + should be positive. + """ + obj = super(Fraction, cls).__new__(cls) + obj._numerator = numerator + obj._denominator = denominator + return obj def is_integer(self): """Return True if the Fraction is an integer.""" @@ -380,9 +391,9 @@ def limit_denominator(self, max_denominator=1000000): # the distance from p1/q1 to self is d/(q1*self._denominator). So we # need to compare 2*(q0+k*q1) with self._denominator/d. if 2*d*(q0+k*q1) <= self._denominator: - return Fraction(p1, q1, _normalize=False) + return Fraction._from_coprime_ints(p1, q1) else: - return Fraction(p0+k*p1, q0+k*q1, _normalize=False) + return Fraction._from_coprime_ints(p0+k*p1, q0+k*q1) @property def numerator(a): @@ -703,13 +714,13 @@ def _add(a, b): nb, db = b._numerator, b._denominator g = math.gcd(da, db) if g == 1: - return Fraction(na * db + da * nb, da * db, _normalize=False) + return Fraction._from_coprime_ints(na * db + da * nb, da * db) s = da // g t = na * (db // g) + nb * s g2 = math.gcd(t, g) if g2 == 1: - return Fraction(t, s * db, _normalize=False) - return Fraction(t // g2, s * (db // g2), _normalize=False) + return Fraction._from_coprime_ints(t, s * db) + return Fraction._from_coprime_ints(t // g2, s * (db // g2)) __add__, __radd__ = _operator_fallbacks(_add, operator.add) @@ -719,13 +730,13 @@ def _sub(a, b): nb, db = b._numerator, b._denominator g = math.gcd(da, db) if g == 1: - return Fraction(na * db - da * nb, da * db, _normalize=False) + return Fraction._from_coprime_ints(na * db - da * nb, da * db) s = da // g t = na * (db // g) - nb * s g2 = math.gcd(t, g) if g2 == 1: - return Fraction(t, s * db, _normalize=False) - return Fraction(t // g2, s * (db // g2), _normalize=False) + return Fraction._from_coprime_ints(t, s * db) + return Fraction._from_coprime_ints(t // g2, s * (db // g2)) __sub__, __rsub__ = _operator_fallbacks(_sub, operator.sub) @@ -741,15 +752,17 @@ def _mul(a, b): if g2 > 1: nb //= g2 da //= g2 - return Fraction(na * nb, db * da, _normalize=False) + return Fraction._from_coprime_ints(na * nb, db * da) __mul__, __rmul__ = _operator_fallbacks(_mul, operator.mul) def _div(a, b): """a / b""" # Same as _mul(), with inversed b. 
- na, da = a._numerator, a._denominator nb, db = b._numerator, b._denominator + if nb == 0: + raise ZeroDivisionError('Fraction(%s, 0)' % db) + na, da = a._numerator, a._denominator g1 = math.gcd(na, nb) if g1 > 1: na //= g1 @@ -761,7 +774,7 @@ def _div(a, b): n, d = na * db, nb * da if d < 0: n, d = -n, -d - return Fraction(n, d, _normalize=False) + return Fraction._from_coprime_ints(n, d) __truediv__, __rtruediv__ = _operator_fallbacks(_div, operator.truediv) @@ -798,17 +811,17 @@ def __pow__(a, b): if b.denominator == 1: power = b.numerator if power >= 0: - return Fraction(a._numerator ** power, - a._denominator ** power, - _normalize=False) - elif a._numerator >= 0: - return Fraction(a._denominator ** -power, - a._numerator ** -power, - _normalize=False) + return Fraction._from_coprime_ints(a._numerator ** power, + a._denominator ** power) + elif a._numerator > 0: + return Fraction._from_coprime_ints(a._denominator ** -power, + a._numerator ** -power) + elif a._numerator == 0: + raise ZeroDivisionError('Fraction(%s, 0)' % + a._denominator ** -power) else: - return Fraction((-a._denominator) ** -power, - (-a._numerator) ** -power, - _normalize=False) + return Fraction._from_coprime_ints((-a._denominator) ** -power, + (-a._numerator) ** -power) else: # A fractional power will generally produce an # irrational number. @@ -832,15 +845,15 @@ def __rpow__(b, a): def __pos__(a): """+a: Coerces a subclass instance to Fraction""" - return Fraction(a._numerator, a._denominator, _normalize=False) + return Fraction._from_coprime_ints(a._numerator, a._denominator) def __neg__(a): """-a""" - return Fraction(-a._numerator, a._denominator, _normalize=False) + return Fraction._from_coprime_ints(-a._numerator, a._denominator) def __abs__(a): """abs(a)""" - return Fraction(abs(a._numerator), a._denominator, _normalize=False) + return Fraction._from_coprime_ints(abs(a._numerator), a._denominator) def __int__(a, _index=operator.index): """int(a)""" diff --git a/Lib/test/test_fractions.py b/Lib/test/test_fractions.py index 3bc6b409e05d..e112f49d2e79 100644 --- a/Lib/test/test_fractions.py +++ b/Lib/test/test_fractions.py @@ -488,6 +488,7 @@ def testArithmetic(self): self.assertEqual(F(5, 6), F(2, 3) * F(5, 4)) self.assertEqual(F(1, 4), F(1, 10) / F(2, 5)) self.assertEqual(F(-15, 8), F(3, 4) / F(-2, 5)) + self.assertRaises(ZeroDivisionError, operator.truediv, F(1), F(0)) self.assertTypedEquals(2, F(9, 10) // F(2, 5)) self.assertTypedEquals(10**23, F(10**23, 1) // F(1)) self.assertEqual(F(5, 6), F(7, 3) % F(3, 2)) diff --git a/Lib/test/test_numeric_tower.py b/Lib/test/test_numeric_tower.py index 9cd85e13634c..337682d6bac9 100644 --- a/Lib/test/test_numeric_tower.py +++ b/Lib/test/test_numeric_tower.py @@ -145,7 +145,7 @@ def test_fractions(self): # The numbers ABC doesn't enforce that the "true" division # of integers produces a float. This tests that the # Rational.__float__() method has required type conversions. - x = F(DummyIntegral(1), DummyIntegral(2), _normalize=False) + x = F._from_coprime_ints(DummyIntegral(1), DummyIntegral(2)) self.assertRaises(TypeError, lambda: x.numerator/x.denominator) self.assertEqual(float(x), 0.5) diff --git a/Misc/NEWS.d/next/Library/2023-02-10-11-59-13.gh-issue-101773.J_kI7y.rst b/Misc/NEWS.d/next/Library/2023-02-10-11-59-13.gh-issue-101773.J_kI7y.rst new file mode 100644 index 000000000000..b577d93d28c2 --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-10-11-59-13.gh-issue-101773.J_kI7y.rst @@ -0,0 +1,2 @@ +Optimize :class:`fractions.Fraction` for small components. 
The private argument +``_normalize`` of the :class:`fractions.Fraction` constructor has been removed. From webhook-mailer at python.org Mon Feb 27 14:11:39 2023 From: webhook-mailer at python.org (mdickinson) Date: Mon, 27 Feb 2023 19:11:39 -0000 Subject: [Python-checkins] gh-101825: Clarify that as_integer_ratio() output is always normalized (#101843) Message-ID: https://github.com/python/cpython/commit/4624987b296108c2dc1e6e3a24e65d2de7afd451 commit: 4624987b296108c2dc1e6e3a24e65d2de7afd451 branch: main author: Sergey B Kirpichev committer: mdickinson date: 2023-02-27T19:11:28Z summary: gh-101825: Clarify that as_integer_ratio() output is always normalized (#101843) Make docstrings for `as_integer_ratio` consistent across types, and document that the returned pair is always normalized (coprime integers, with positive denominator). --------- Co-authored-by: Owain Davies <116417456+OTheDev at users.noreply.github.com> Co-authored-by: Mark Dickinson files: M Doc/library/fractions.rst M Doc/library/stdtypes.rst M Lib/fractions.py M Objects/clinic/floatobject.c.h M Objects/clinic/longobject.c.h M Objects/floatobject.c M Objects/longobject.c diff --git a/Doc/library/fractions.rst b/Doc/library/fractions.rst index c61bbac892e0..fe2e8ab655ed 100644 --- a/Doc/library/fractions.rst +++ b/Doc/library/fractions.rst @@ -118,7 +118,8 @@ another rational number, or from a string. .. method:: as_integer_ratio() Return a tuple of two integers, whose ratio is equal - to the Fraction and with a positive denominator. + to the original Fraction. The ratio is in lowest terms + and has a positive denominator. .. versionadded:: 3.8 diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst index 41947d66c151..1240f80b0f11 100644 --- a/Doc/library/stdtypes.rst +++ b/Doc/library/stdtypes.rst @@ -602,8 +602,8 @@ class`. In addition, it provides a few more methods: .. method:: int.as_integer_ratio() - Return a pair of integers whose ratio is exactly equal to the original - integer and with a positive denominator. The integer ratio of integers + Return a pair of integers whose ratio is equal to the original + integer and has a positive denominator. The integer ratio of integers (whole numbers) is always the integer as the numerator and ``1`` as the denominator. @@ -624,7 +624,7 @@ class`. float also has the following additional methods. .. method:: float.as_integer_ratio() Return a pair of integers whose ratio is exactly equal to the - original float and with a positive denominator. Raises + original float. The ratio is in lowest terms and has a positive denominator. Raises :exc:`OverflowError` on infinities and a :exc:`ValueError` on NaNs. diff --git a/Lib/fractions.py b/Lib/fractions.py index f718b35639be..c95db0730e5b 100644 --- a/Lib/fractions.py +++ b/Lib/fractions.py @@ -331,10 +331,9 @@ def is_integer(self): return self._denominator == 1 def as_integer_ratio(self): - """Return the integer ratio as a tuple. + """Return a pair of integers, whose ratio is equal to the original Fraction. - Return a tuple of two integers, whose ratio is equal to the - Fraction and with a positive denominator. + The ratio is in lowest terms and has a positive denominator. 
""" return (self._numerator, self._denominator) diff --git a/Objects/clinic/floatobject.c.h b/Objects/clinic/floatobject.c.h index 6bc25a0a409f..a99fd74e4b62 100644 --- a/Objects/clinic/floatobject.c.h +++ b/Objects/clinic/floatobject.c.h @@ -173,12 +173,10 @@ PyDoc_STRVAR(float_as_integer_ratio__doc__, "as_integer_ratio($self, /)\n" "--\n" "\n" -"Return integer ratio.\n" +"Return a pair of integers, whose ratio is exactly equal to the original float.\n" "\n" -"Return a pair of integers, whose ratio is exactly equal to the original float\n" -"and with a positive denominator.\n" -"\n" -"Raise OverflowError on infinities and a ValueError on NaNs.\n" +"The ratio is in lowest terms and has a positive denominator. Raise\n" +"OverflowError on infinities and a ValueError on NaNs.\n" "\n" ">>> (10.0).as_integer_ratio()\n" "(10, 1)\n" @@ -327,4 +325,4 @@ float___format__(PyObject *self, PyObject *arg) exit: return return_value; } -/*[clinic end generated code: output=74bc91bb44014df9 input=a9049054013a1b77]*/ +/*[clinic end generated code: output=ea329577074911b9 input=a9049054013a1b77]*/ diff --git a/Objects/clinic/longobject.c.h b/Objects/clinic/longobject.c.h index 206bffdd086a..c26ceafbc2be 100644 --- a/Objects/clinic/longobject.c.h +++ b/Objects/clinic/longobject.c.h @@ -231,10 +231,9 @@ PyDoc_STRVAR(int_as_integer_ratio__doc__, "as_integer_ratio($self, /)\n" "--\n" "\n" -"Return integer ratio.\n" +"Return a pair of integers, whose ratio is equal to the original int.\n" "\n" -"Return a pair of integers, whose ratio is exactly equal to the original int\n" -"and with a positive denominator.\n" +"The ratio is in lowest terms and has a positive denominator.\n" "\n" ">>> (10).as_integer_ratio()\n" "(10, 1)\n" @@ -485,4 +484,4 @@ int_is_integer(PyObject *self, PyObject *Py_UNUSED(ignored)) { return int_is_integer_impl(self); } -/*[clinic end generated code: output=e518fe2b5d519322 input=a9049054013a1b77]*/ +/*[clinic end generated code: output=cfdf35d916158d4f input=a9049054013a1b77]*/ diff --git a/Objects/floatobject.c b/Objects/floatobject.c index 912b742f797d..d641311f1126 100644 --- a/Objects/floatobject.c +++ b/Objects/floatobject.c @@ -1546,12 +1546,10 @@ float_fromhex(PyTypeObject *type, PyObject *string) /*[clinic input] float.as_integer_ratio -Return integer ratio. +Return a pair of integers, whose ratio is exactly equal to the original float. -Return a pair of integers, whose ratio is exactly equal to the original float -and with a positive denominator. - -Raise OverflowError on infinities and a ValueError on NaNs. +The ratio is in lowest terms and has a positive denominator. Raise +OverflowError on infinities and a ValueError on NaNs. >>> (10.0).as_integer_ratio() (10, 1) @@ -1563,7 +1561,7 @@ Raise OverflowError on infinities and a ValueError on NaNs. static PyObject * float_as_integer_ratio_impl(PyObject *self) -/*[clinic end generated code: output=65f25f0d8d30a712 input=e21d08b4630c2e44]*/ +/*[clinic end generated code: output=65f25f0d8d30a712 input=d5ba7765655d75bd]*/ { double self_double; double float_part; diff --git a/Objects/longobject.c b/Objects/longobject.c index 8293f133bed2..51655cd0bad9 100644 --- a/Objects/longobject.c +++ b/Objects/longobject.c @@ -6020,10 +6020,9 @@ int_bit_count_impl(PyObject *self) /*[clinic input] int.as_integer_ratio -Return integer ratio. +Return a pair of integers, whose ratio is equal to the original int. -Return a pair of integers, whose ratio is exactly equal to the original int -and with a positive denominator. 
+The ratio is in lowest terms and has a positive denominator. >>> (10).as_integer_ratio() (10, 1) @@ -6035,7 +6034,7 @@ and with a positive denominator. static PyObject * int_as_integer_ratio_impl(PyObject *self) -/*[clinic end generated code: output=e60803ae1cc8621a input=55ce3058e15de393]*/ +/*[clinic end generated code: output=e60803ae1cc8621a input=384ff1766634bec2]*/ { PyObject *ratio_tuple; PyObject *numerator = long_long(self); From webhook-mailer at python.org Mon Feb 27 16:16:38 2023 From: webhook-mailer at python.org (JelleZijlstra) Date: Mon, 27 Feb 2023 21:16:38 -0000 Subject: [Python-checkins] gh-101561: Add typing.override decorator (#101564) Message-ID: https://github.com/python/cpython/commit/0f89acf6cc4d4790f7b7a82165d0a6e7e84e4b72 commit: 0f89acf6cc4d4790f7b7a82165d0a6e7e84e4b72 branch: main author: Steven Troxler committer: JelleZijlstra date: 2023-02-27T13:16:11-08:00 summary: gh-101561: Add typing.override decorator (#101564) Co-authored-by: Jelle Zijlstra Co-authored-by: Alex Waygood files: A Misc/NEWS.d/next/Library/2023-02-04-16-35-46.gh-issue-101561.Xo6pIZ.rst M Doc/library/typing.rst M Doc/whatsnew/3.12.rst M Lib/test/test_typing.py M Lib/typing.py M Misc/ACKS diff --git a/Doc/library/typing.rst b/Doc/library/typing.rst index bbbf6920ddec..3395e4bfb95c 100644 --- a/Doc/library/typing.rst +++ b/Doc/library/typing.rst @@ -91,6 +91,8 @@ annotations. These include: *Introducing* :data:`LiteralString` * :pep:`681`: Data Class Transforms *Introducing* the :func:`@dataclass_transform` decorator +* :pep:`698`: Adding an override decorator to typing + *Introducing* the :func:`@override` decorator .. _type-aliases: @@ -2722,6 +2724,42 @@ Functions and decorators This wraps the decorator with something that wraps the decorated function in :func:`no_type_check`. + +.. decorator:: override + + A decorator for methods that indicates to type checkers that this method + should override a method or attribute with the same name on a base class. + This helps prevent bugs that may occur when a base class is changed without + an equivalent change to a child class. + + For example:: + + class Base: + def log_status(self) + + class Sub(Base): + @override + def log_status(self) -> None: # Okay: overrides Base.log_status + ... + + @override + def done(self) -> None: # Error reported by type checker + ... + + There is no runtime checking of this property. + + The decorator will set the ``__override__`` attribute to ``True`` on + the decorated object. Thus, a check like + ``if getattr(obj, "__override__", False)`` can be used at runtime to determine + whether an object ``obj`` has been marked as an override. If the decorated object + does not support setting attributes, the decorator returns the object unchanged + without raising an exception. + + See :pep:`698` for more details. + + .. versionadded:: 3.12 + + .. decorator:: type_check_only Decorator to mark a class or function to be unavailable at runtime. diff --git a/Doc/whatsnew/3.12.rst b/Doc/whatsnew/3.12.rst index e551c5b4fd06..1a25ec6b7061 100644 --- a/Doc/whatsnew/3.12.rst +++ b/Doc/whatsnew/3.12.rst @@ -350,6 +350,14 @@ tempfile The :class:`tempfile.NamedTemporaryFile` function has a new optional parameter *delete_on_close* (Contributed by Evgeny Zorin in :gh:`58451`.) +typing +------ + +* Add :func:`typing.override`, an override decorator telling to static type + checkers to verify that a method overrides some method or attribute of the + same name on a base class, as per :pep:`698`. 
(Contributed by Steven Troxler in + :gh:`101564`.) + sys --- diff --git a/Lib/test/test_typing.py b/Lib/test/test_typing.py index 7a460d94469f..d61dc6e2fbd7 100644 --- a/Lib/test/test_typing.py +++ b/Lib/test/test_typing.py @@ -23,6 +23,7 @@ from typing import assert_type, cast, runtime_checkable from typing import get_type_hints from typing import get_origin, get_args +from typing import override from typing import is_typeddict from typing import reveal_type from typing import dataclass_transform @@ -4166,6 +4167,43 @@ def cached(self): ... self.assertIs(True, Methods.cached.__final__) +class OverrideDecoratorTests(BaseTestCase): + def test_override(self): + class Base: + def normal_method(self): ... + @staticmethod + def static_method_good_order(): ... + @staticmethod + def static_method_bad_order(): ... + @staticmethod + def decorator_with_slots(): ... + + class Derived(Base): + @override + def normal_method(self): + return 42 + + @staticmethod + @override + def static_method_good_order(): + return 42 + + @override + @staticmethod + def static_method_bad_order(): + return 42 + + + self.assertIsSubclass(Derived, Base) + instance = Derived() + self.assertEqual(instance.normal_method(), 42) + self.assertIs(True, instance.normal_method.__override__) + self.assertEqual(Derived.static_method_good_order(), 42) + self.assertIs(True, Derived.static_method_good_order.__override__) + self.assertEqual(Derived.static_method_bad_order(), 42) + self.assertIs(False, hasattr(Derived.static_method_bad_order, "__override__")) + + class CastTests(BaseTestCase): def test_basics(self): diff --git a/Lib/typing.py b/Lib/typing.py index bdf51bb5f415..8d40e923bb1d 100644 --- a/Lib/typing.py +++ b/Lib/typing.py @@ -138,6 +138,7 @@ def _idfunc(_, x): 'NoReturn', 'NotRequired', 'overload', + 'override', 'ParamSpecArgs', 'ParamSpecKwargs', 'Required', @@ -2657,6 +2658,7 @@ class Other(Leaf): # Error reported by type checker # Internal type variable used for Type[]. CT_co = TypeVar('CT_co', covariant=True, bound=type) + # A useful type variable with constraints. This represents string types. # (This one *is* for export!) AnyStr = TypeVar('AnyStr', bytes, str) @@ -2748,6 +2750,8 @@ def new_user(user_class: Type[U]) -> U: At this point the type checker knows that joe has type BasicUser. """ +# Internal type variable for callables. Not for export. +F = TypeVar("F", bound=Callable[..., Any]) @runtime_checkable class SupportsInt(Protocol): @@ -3448,3 +3452,40 @@ def decorator(cls_or_fn): } return cls_or_fn return decorator + + + +def override(method: F, /) -> F: + """Indicate that a method is intended to override a method in a base class. + + Usage: + + class Base: + def method(self) -> None: ... + pass + + class Child(Base): + @override + def method(self) -> None: + super().method() + + When this decorator is applied to a method, the type checker will + validate that it overrides a method or attribute with the same name on a + base class. This helps prevent bugs that may occur when a base class is + changed without an equivalent change to a child class. + + There is no runtime checking of this property. The decorator sets the + ``__override__`` attribute to ``True`` on the decorated object to allow + runtime introspection. + + See PEP 698 for details. + + """ + try: + method.__override__ = True + except (AttributeError, TypeError): + # Skip the attribute silently if it is not writable. + # AttributeError happens if the object has __slots__ or a + # read-only property, TypeError if it's a builtin class. 
+ pass + return method diff --git a/Misc/ACKS b/Misc/ACKS index 3403aee4cc78..2da3d0ab29b8 100644 --- a/Misc/ACKS +++ b/Misc/ACKS @@ -1848,6 +1848,7 @@ Tom Tromey John Tromp Diane Trout Jason Trowbridge +Steven Troxler Brent Tubbs Anthony Tuininga Erno Tukia diff --git a/Misc/NEWS.d/next/Library/2023-02-04-16-35-46.gh-issue-101561.Xo6pIZ.rst b/Misc/NEWS.d/next/Library/2023-02-04-16-35-46.gh-issue-101561.Xo6pIZ.rst new file mode 100644 index 000000000000..2f6a4153062e --- /dev/null +++ b/Misc/NEWS.d/next/Library/2023-02-04-16-35-46.gh-issue-101561.Xo6pIZ.rst @@ -0,0 +1 @@ +Add a new decorator :func:`typing.override`. See :pep:`698` for details. Patch by Steven Troxler. From webhook-mailer at python.org Tue Feb 28 01:12:32 2023 From: webhook-mailer at python.org (terryjreedy) Date: Tue, 28 Feb 2023 06:12:32 -0000 Subject: [Python-checkins] IDLE: Simplify DynOptionsMenu __init__code (#101371) Message-ID: https://github.com/python/cpython/commit/c41af812c948ec3cb07b2ef7a46a238f5cab3dc2 commit: c41af812c948ec3cb07b2ef7a46a238f5cab3dc2 branch: main author: JosephSBoyle <48555120+JosephSBoyle at users.noreply.github.com> committer: terryjreedy date: 2023-02-28T01:11:52-05:00 summary: IDLE: Simplify DynOptionsMenu __init__code (#101371) Refactor DynOptionMenu's initializer to not copy kwargs dict and use subscripting; improve its htest. Co-authored-by: Terry Jan Reedy files: M Lib/idlelib/dynoption.py diff --git a/Lib/idlelib/dynoption.py b/Lib/idlelib/dynoption.py index 9c6ffa435a10..d5dfc3eda13f 100644 --- a/Lib/idlelib/dynoption.py +++ b/Lib/idlelib/dynoption.py @@ -2,24 +2,19 @@ OptionMenu widget modified to allow dynamic menu reconfiguration and setting of highlightthickness """ -import copy - from tkinter import OptionMenu, _setit, StringVar, Button class DynOptionMenu(OptionMenu): - """ - unlike OptionMenu, our kwargs can include highlightthickness + """Add SetMenu and highlightthickness to OptionMenu. + + Highlightthickness adds space around menu button. """ def __init__(self, master, variable, value, *values, **kwargs): - # TODO copy value instead of whole dict - kwargsCopy=copy.copy(kwargs) - if 'highlightthickness' in list(kwargs.keys()): - del(kwargs['highlightthickness']) + highlightthickness = kwargs.pop('highlightthickness', None) OptionMenu.__init__(self, master, variable, value, *values, **kwargs) - self.config(highlightthickness=kwargsCopy.get('highlightthickness')) - #self.menu=self['menu'] - self.variable=variable - self.command=kwargs.get('command') + self['highlightthickness'] = highlightthickness + self.variable = variable + self.command = kwargs.get('command') def SetMenu(self,valueList,value=None): """ @@ -38,14 +33,15 @@ def _dyn_option_menu(parent): # htest # from tkinter import Toplevel # + StringVar, Button top = Toplevel(parent) - top.title("Tets dynamic option menu") + top.title("Test dynamic option menu") x, y = map(int, parent.geometry().split('+')[1:]) top.geometry("200x100+%d+%d" % (x + 250, y + 175)) top.focus_set() var = StringVar(top) var.set("Old option set") #Set the default value - dyn = DynOptionMenu(top,var, "old1","old2","old3","old4") + dyn = DynOptionMenu(top, var, "old1","old2","old3","old4", + highlightthickness=5) dyn.pack() def update(): @@ -54,5 +50,6 @@ def update(): button.pack() if __name__ == '__main__': + # Only module without unittests because of intention to replace. 
from idlelib.idle_test.htest import run run(_dyn_option_menu) From webhook-mailer at python.org Tue Feb 28 01:36:41 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 28 Feb 2023 06:36:41 -0000 Subject: [Python-checkins] IDLE: Simplify DynOptionsMenu __init__code (GH-101371) Message-ID: https://github.com/python/cpython/commit/2701a49df2a4ce3284f500bf8ababb172216f8cc commit: 2701a49df2a4ce3284f500bf8ababb172216f8cc branch: 3.10 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-27T22:36:34-08:00 summary: IDLE: Simplify DynOptionsMenu __init__code (GH-101371) Refactor DynOptionMenu's initializer to not copy kwargs dict and use subscripting; improve its htest. (cherry picked from commit c41af812c948ec3cb07b2ef7a46a238f5cab3dc2) Co-authored-by: JosephSBoyle <48555120+JosephSBoyle at users.noreply.github.com> Co-authored-by: Terry Jan Reedy files: M Lib/idlelib/dynoption.py diff --git a/Lib/idlelib/dynoption.py b/Lib/idlelib/dynoption.py index 9c6ffa435a10..d5dfc3eda13f 100644 --- a/Lib/idlelib/dynoption.py +++ b/Lib/idlelib/dynoption.py @@ -2,24 +2,19 @@ OptionMenu widget modified to allow dynamic menu reconfiguration and setting of highlightthickness """ -import copy - from tkinter import OptionMenu, _setit, StringVar, Button class DynOptionMenu(OptionMenu): - """ - unlike OptionMenu, our kwargs can include highlightthickness + """Add SetMenu and highlightthickness to OptionMenu. + + Highlightthickness adds space around menu button. """ def __init__(self, master, variable, value, *values, **kwargs): - # TODO copy value instead of whole dict - kwargsCopy=copy.copy(kwargs) - if 'highlightthickness' in list(kwargs.keys()): - del(kwargs['highlightthickness']) + highlightthickness = kwargs.pop('highlightthickness', None) OptionMenu.__init__(self, master, variable, value, *values, **kwargs) - self.config(highlightthickness=kwargsCopy.get('highlightthickness')) - #self.menu=self['menu'] - self.variable=variable - self.command=kwargs.get('command') + self['highlightthickness'] = highlightthickness + self.variable = variable + self.command = kwargs.get('command') def SetMenu(self,valueList,value=None): """ @@ -38,14 +33,15 @@ def _dyn_option_menu(parent): # htest # from tkinter import Toplevel # + StringVar, Button top = Toplevel(parent) - top.title("Tets dynamic option menu") + top.title("Test dynamic option menu") x, y = map(int, parent.geometry().split('+')[1:]) top.geometry("200x100+%d+%d" % (x + 250, y + 175)) top.focus_set() var = StringVar(top) var.set("Old option set") #Set the default value - dyn = DynOptionMenu(top,var, "old1","old2","old3","old4") + dyn = DynOptionMenu(top, var, "old1","old2","old3","old4", + highlightthickness=5) dyn.pack() def update(): @@ -54,5 +50,6 @@ def update(): button.pack() if __name__ == '__main__': + # Only module without unittests because of intention to replace. 
from idlelib.idle_test.htest import run run(_dyn_option_menu) From webhook-mailer at python.org Tue Feb 28 01:41:27 2023 From: webhook-mailer at python.org (miss-islington) Date: Tue, 28 Feb 2023 06:41:27 -0000 Subject: [Python-checkins] IDLE: Simplify DynOptionsMenu __init__code (GH-101371) Message-ID: https://github.com/python/cpython/commit/d01cf5072be5511595b6d0c35ace6c1b07716f8d commit: d01cf5072be5511595b6d0c35ace6c1b07716f8d branch: 3.11 author: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com> committer: miss-islington <31488909+miss-islington at users.noreply.github.com> date: 2023-02-27T22:41:20-08:00 summary: IDLE: Simplify DynOptionsMenu __init__code (GH-101371) Refactor DynOptionMenu's initializer to not copy kwargs dict and use subscripting; improve its htest. (cherry picked from commit c41af812c948ec3cb07b2ef7a46a238f5cab3dc2) Co-authored-by: JosephSBoyle <48555120+JosephSBoyle at users.noreply.github.com> Co-authored-by: Terry Jan Reedy files: M Lib/idlelib/dynoption.py diff --git a/Lib/idlelib/dynoption.py b/Lib/idlelib/dynoption.py index 9c6ffa435a10..d5dfc3eda13f 100644 --- a/Lib/idlelib/dynoption.py +++ b/Lib/idlelib/dynoption.py @@ -2,24 +2,19 @@ OptionMenu widget modified to allow dynamic menu reconfiguration and setting of highlightthickness """ -import copy - from tkinter import OptionMenu, _setit, StringVar, Button class DynOptionMenu(OptionMenu): - """ - unlike OptionMenu, our kwargs can include highlightthickness + """Add SetMenu and highlightthickness to OptionMenu. + + Highlightthickness adds space around menu button. """ def __init__(self, master, variable, value, *values, **kwargs): - # TODO copy value instead of whole dict - kwargsCopy=copy.copy(kwargs) - if 'highlightthickness' in list(kwargs.keys()): - del(kwargs['highlightthickness']) + highlightthickness = kwargs.pop('highlightthickness', None) OptionMenu.__init__(self, master, variable, value, *values, **kwargs) - self.config(highlightthickness=kwargsCopy.get('highlightthickness')) - #self.menu=self['menu'] - self.variable=variable - self.command=kwargs.get('command') + self['highlightthickness'] = highlightthickness + self.variable = variable + self.command = kwargs.get('command') def SetMenu(self,valueList,value=None): """ @@ -38,14 +33,15 @@ def _dyn_option_menu(parent): # htest # from tkinter import Toplevel # + StringVar, Button top = Toplevel(parent) - top.title("Tets dynamic option menu") + top.title("Test dynamic option menu") x, y = map(int, parent.geometry().split('+')[1:]) top.geometry("200x100+%d+%d" % (x + 250, y + 175)) top.focus_set() var = StringVar(top) var.set("Old option set") #Set the default value - dyn = DynOptionMenu(top,var, "old1","old2","old3","old4") + dyn = DynOptionMenu(top, var, "old1","old2","old3","old4", + highlightthickness=5) dyn.pack() def update(): @@ -54,5 +50,6 @@ def update(): button.pack() if __name__ == '__main__': + # Only module without unittests because of intention to replace. 
from idlelib.idle_test.htest import run run(_dyn_option_menu) From webhook-mailer at python.org Tue Feb 28 03:31:09 2023 From: webhook-mailer at python.org (encukou) Date: Tue, 28 Feb 2023 08:31:09 -0000 Subject: [Python-checkins] gh-101101: Unstable C API tier (PEP 689) (GH-101102) Message-ID: https://github.com/python/cpython/commit/6b2d7c0ddb4d2afe298ee7c8f6a23c400f1465fd commit: 6b2d7c0ddb4d2afe298ee7c8f6a23c400f1465fd branch: main author: Petr Viktorin committer: encukou date: 2023-02-28T09:31:01+01:00 summary: gh-101101: Unstable C API tier (PEP 689) (GH-101102) files: A Misc/NEWS.d/next/C API/2022-04-21-17-25-22.gh-issue-91744.FgvaMi.rst A Modules/_testcapi/code.c M Doc/c-api/code.rst M Doc/c-api/stable.rst M Doc/tools/extensions/c_annotations.py M Doc/whatsnew/3.12.rst M Include/README.rst M Include/cpython/ceval.h M Include/cpython/code.h M Include/pyport.h M Lib/test/test_code.py M Modules/Setup.stdlib.in M Modules/_testcapi/parts.h M Modules/_testcapimodule.c M Objects/codeobject.c M PCbuild/_testcapi.vcxproj M PCbuild/_testcapi.vcxproj.filters M Python/ceval.c diff --git a/Doc/c-api/code.rst b/Doc/c-api/code.rst index ae75d68901d7..062ef3a1fea9 100644 --- a/Doc/c-api/code.rst +++ b/Doc/c-api/code.rst @@ -33,28 +33,47 @@ bound into a function. Return the number of free variables in *co*. -.. c:function:: PyCodeObject* PyCode_New(int argcount, int kwonlyargcount, int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars, PyObject *cellvars, PyObject *filename, PyObject *name, int firstlineno, PyObject *linetable, PyObject *exceptiontable) +.. c:function:: PyCodeObject* PyUnstable_Code_New(int argcount, int kwonlyargcount, int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars, PyObject *cellvars, PyObject *filename, PyObject *name, int firstlineno, PyObject *linetable, PyObject *exceptiontable) Return a new code object. If you need a dummy code object to create a frame, - use :c:func:`PyCode_NewEmpty` instead. Calling :c:func:`PyCode_New` directly - will bind you to a precise Python version since the definition of the bytecode - changes often. The many arguments of this function are inter-dependent in complex + use :c:func:`PyCode_NewEmpty` instead. + + Since the definition of the bytecode changes often, calling + :c:func:`PyCode_New` directly can bind you to a precise Python version. + + The many arguments of this function are inter-dependent in complex ways, meaning that subtle changes to values are likely to result in incorrect execution or VM crashes. Use this function only with extreme care. .. versionchanged:: 3.11 Added ``exceptiontable`` parameter. -.. c:function:: PyCodeObject* PyCode_NewWithPosOnlyArgs(int argcount, int posonlyargcount, int kwonlyargcount, int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars, PyObject *cellvars, PyObject *filename, PyObject *name, int firstlineno, PyObject *linetable, PyObject *exceptiontable) + .. index:: single: PyCode_New + + .. versionchanged:: 3.12 + + Renamed from ``PyCode_New`` as part of :ref:`unstable-c-api`. + The old name is deprecated, but will remain available until the + signature changes again. + +.. 
c:function:: PyCodeObject* PyUnstable_Code_NewWithPosOnlyArgs(int argcount, int posonlyargcount, int kwonlyargcount, int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars, PyObject *cellvars, PyObject *filename, PyObject *name, int firstlineno, PyObject *linetable, PyObject *exceptiontable) Similar to :c:func:`PyCode_New`, but with an extra "posonlyargcount" for positional-only arguments. The same caveats that apply to ``PyCode_New`` also apply to this function. - .. versionadded:: 3.8 + .. index:: single: PyCode_NewWithPosOnlyArgs + + .. versionadded:: 3.8 as ``PyCode_NewWithPosOnlyArgs`` .. versionchanged:: 3.11 Added ``exceptiontable`` parameter. + .. versionchanged:: 3.12 + + Renamed to ``PyUnstable_Code_NewWithPosOnlyArgs``. + The old name is deprecated, but will remain available until the + signature changes again. + .. c:function:: PyCodeObject* PyCode_NewEmpty(const char *filename, const char *funcname, int firstlineno) Return a new empty code object with the specified filename, @@ -165,3 +184,70 @@ bound into a function. :c:func:`PyErr_WriteUnraisable`. Otherwise it should return ``0``. .. versionadded:: 3.12 + + +Extra information +----------------- + +To support low-level extensions to frame evaluation, such as external +just-in-time compilers, it is possible to attach arbitrary extra data to +code objects. + +These functions are part of the unstable C API tier: +this functionality is a CPython implementation detail, and the API +may change without deprecation warnings. + +.. c:function:: Py_ssize_t PyUnstable_Eval_RequestCodeExtraIndex(freefunc free) + + Return a new an opaque index value used to adding data to code objects. + + You generally call this function once (per interpreter) and use the result + with ``PyCode_GetExtra`` and ``PyCode_SetExtra`` to manipulate + data on individual code objects. + + If *free* is not ``NULL``: when a code object is deallocated, + *free* will be called on non-``NULL`` data stored under the new index. + Use :c:func:`Py_DecRef` when storing :c:type:`PyObject`. + + .. index:: single: _PyEval_RequestCodeExtraIndex + + .. versionadded:: 3.6 as ``_PyEval_RequestCodeExtraIndex`` + + .. versionchanged:: 3.12 + + Renamed to ``PyUnstable_Eval_RequestCodeExtraIndex``. + The old private name is deprecated, but will be available until the API + changes. + +.. c:function:: int PyUnstable_Code_GetExtra(PyObject *code, Py_ssize_t index, void **extra) + + Set *extra* to the extra data stored under the given index. + Return 0 on success. Set an exception and return -1 on failure. + + If no data was set under the index, set *extra* to ``NULL`` and return + 0 without setting an exception. + + .. index:: single: _PyCode_GetExtra + + .. versionadded:: 3.6 as ``_PyCode_GetExtra`` + + .. versionchanged:: 3.12 + + Renamed to ``PyUnstable_Code_GetExtra``. + The old private name is deprecated, but will be available until the API + changes. + +.. c:function:: int PyUnstable_Code_SetExtra(PyObject *code, Py_ssize_t index, void *extra) + + Set the extra data stored under the given index to *extra*. + Return 0 on success. Set an exception and return -1 on failure. + + .. index:: single: _PyCode_SetExtra + + .. versionadded:: 3.6 as ``_PyCode_SetExtra`` + + .. versionchanged:: 3.12 + + Renamed to ``PyUnstable_Code_SetExtra``. + The old private name is deprecated, but will be available until the API + changes. 
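As a rough illustration of the renamed code-extra API documented above, the sketch below stores and reads back a pointer-sized tag on a code object. It uses only the three entry points shown in this diff (PyUnstable_Eval_RequestCodeExtraIndex, PyUnstable_Code_SetExtra, PyUnstable_Code_GetExtra); the helper name, the static index variable and the integer payload are illustrative assumptions, not part of the commit.

#include <Python.h>
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch only: the index is kept in a static for brevity;
   a real extension would store it per interpreter, as the new
   Modules/_testcapi/code.c added in this commit does. */
static Py_ssize_t marker_index = -1;

static int
attach_and_check_marker(PyObject *code_object)  /* must be a code object */
{
    if (marker_index == -1) {
        /* NULL free function: the payload stored below is not a heap object. */
        marker_index = PyUnstable_Eval_RequestCodeExtraIndex(NULL);
        if (marker_index < 0) {
            return -1;
        }
    }
    /* Store a small tag under the reserved index... */
    if (PyUnstable_Code_SetExtra(code_object, marker_index,
                                 (void *)(uintptr_t)42) < 0) {
        return -1;
    }
    /* ...and read it back. */
    void *extra = NULL;
    if (PyUnstable_Code_GetExtra(code_object, marker_index, &extra) < 0) {
        return -1;
    }
    assert((uintptr_t)extra == 42);
    return 0;
}

Under the old spellings the same code would have called _PyEval_RequestCodeExtraIndex, _PyCode_SetExtra and _PyCode_GetExtra; the deprecation shims added to Include/cpython/ceval.h and Include/cpython/code.h further down in this commit keep both spellings working for now.
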
diff --git a/Doc/c-api/stable.rst b/Doc/c-api/stable.rst index 4ae20e93e367..3721fc0697f5 100644 --- a/Doc/c-api/stable.rst +++ b/Doc/c-api/stable.rst @@ -6,9 +6,9 @@ C API Stability *************** -Python's C API is covered by the Backwards Compatibility Policy, :pep:`387`. -While the C API will change with every minor release (e.g. from 3.9 to 3.10), -most changes will be source-compatible, typically by only adding new API. +Unless documented otherwise, Python's C API is covered by the Backwards +Compatibility Policy, :pep:`387`. +Most changes to it are source-compatible (typically by only adding new API). Changing existing API or removing API is only done after a deprecation period or to fix serious issues. @@ -18,8 +18,38 @@ way; see :ref:`stable-abi-platform` below). So, code compiled for Python 3.10.0 will work on 3.10.8 and vice versa, but will need to be compiled separately for 3.9.x and 3.10.x. +There are two tiers of C API with different stability exepectations: + +- *Unstable API*, may change in minor versions without a deprecation period. + It is marked by the ``PyUnstable`` prefix in names. +- *Limited API*, is compatible across several minor releases. + When :c:macro:`Py_LIMITED_API` is defined, only this subset is exposed + from ``Python.h``. + +These are discussed in more detail below. + Names prefixed by an underscore, such as ``_Py_InternalState``, are private API that can change without notice even in patch releases. +If you need to use this API, consider reaching out to +`CPython developers `_ +to discuss adding public API for your use case. + +.. _unstable-c-api: + +Unstable C API +============== + +.. index:: single: PyUnstable + +Any API named with the ``PyUnstable`` prefix exposes CPython implementation +details, and may change in every minor release (e.g. from 3.9 to 3.10) without +any deprecation warnings. +However, it will not change in a bugfix release (e.g. from 3.10.0 to 3.10.1). + +It is generally intended for specialized, low-level tools like debuggers. + +Projects that use this API are expected to follow +CPython development and spend extra effort adjusting to changes. Stable Application Binary Interface diff --git a/Doc/tools/extensions/c_annotations.py b/Doc/tools/extensions/c_annotations.py index 9defb24a0331..e5dc82cf0dc2 100644 --- a/Doc/tools/extensions/c_annotations.py +++ b/Doc/tools/extensions/c_annotations.py @@ -143,6 +143,22 @@ def add_annotations(self, app, doctree): ' (Only some members are part of the stable ABI.)') node.insert(0, emph_node) + # Unstable API annotation. + if name.startswith('PyUnstable'): + warn_node = nodes.admonition( + classes=['unstable-c-api', 'warning']) + message = 'This is ' + emph_node = nodes.emphasis(message, message) + ref_node = addnodes.pending_xref( + 'Unstable API', refdomain="std", + reftarget='unstable-c-api', + reftype='ref', refexplicit="False") + ref_node += nodes.Text('Unstable API') + emph_node += ref_node + emph_node += nodes.Text('. It may change without warning in minor releases.') + warn_node += emph_node + node.insert(0, warn_node) + # Return value annotation if objtype != 'function': continue diff --git a/Doc/whatsnew/3.12.rst b/Doc/whatsnew/3.12.rst index 1a25ec6b7061..c0c021c67914 100644 --- a/Doc/whatsnew/3.12.rst +++ b/Doc/whatsnew/3.12.rst @@ -810,6 +810,29 @@ C API Changes New Features ------------ + +* :pep:`697`: Introduced the :ref:`Unstable C API tier `, + intended for low-level tools like debuggers and JIT compilers. 
+ This API may change in each minor release of CPython without deprecation + warnings. + Its contents are marked by the ``PyUnstable_`` prefix in names. + + Code object constructors: + + - ``PyUnstable_Code_New()`` (renamed from ``PyCode_New``) + - ``PyUnstable_Code_NewWithPosOnlyArgs()`` (renamed from ``PyCode_NewWithPosOnlyArgs``) + + Extra storage for code objects (:pep:`523`): + + - ``PyUnstable_Eval_RequestCodeExtraIndex()`` (renamed from ``_PyEval_RequestCodeExtraIndex``) + - ``PyUnstable_Code_GetExtra()`` (renamed from ``_PyCode_GetExtra``) + - ``PyUnstable_Code_SetExtra()`` (renamed from ``_PyCode_SetExtra``) + + The original names will continue to be available until the respective + API changes. + + (Contributed by Petr Viktorin in :gh:`101101`.) + * Added the new limited C API function :c:func:`PyType_FromMetaclass`, which generalizes the existing :c:func:`PyType_FromModuleAndSpec` using an additional metaclass argument. diff --git a/Include/README.rst b/Include/README.rst index f52e690eac9a..531f09692f78 100644 --- a/Include/README.rst +++ b/Include/README.rst @@ -1,11 +1,13 @@ The Python C API ================ -The C API is divided into three sections: +The C API is divided into these sections: 1. ``Include/``: Limited API 2. ``Include/cpython/``: CPython implementation details -3. ``Include/internal/``: The internal API +3. ``Include/cpython/``, names with the ``PyUnstable_`` prefix: API that can + change between minor releases +4. ``Include/internal/``, and any name with ``_`` prefix: The internal API Information on changing the C API is available `in the developer guide`_ diff --git a/Include/cpython/ceval.h b/Include/cpython/ceval.h index 74665c9fa105..0fbbee10c2ed 100644 --- a/Include/cpython/ceval.h +++ b/Include/cpython/ceval.h @@ -22,7 +22,12 @@ PyAPI_FUNC(PyObject *) _PyEval_EvalFrameDefault(PyThreadState *tstate, struct _P PyAPI_FUNC(void) _PyEval_SetSwitchInterval(unsigned long microseconds); PyAPI_FUNC(unsigned long) _PyEval_GetSwitchInterval(void); -PyAPI_FUNC(Py_ssize_t) _PyEval_RequestCodeExtraIndex(freefunc); +PyAPI_FUNC(Py_ssize_t) PyUnstable_Eval_RequestCodeExtraIndex(freefunc); +// Old name -- remove when this API changes: +_Py_DEPRECATED_EXTERNALLY(3.12) static inline Py_ssize_t +_PyEval_RequestCodeExtraIndex(freefunc f) { + return PyUnstable_Eval_RequestCodeExtraIndex(f); +} PyAPI_FUNC(int) _PyEval_SliceIndex(PyObject *, Py_ssize_t *); PyAPI_FUNC(int) _PyEval_SliceIndexNotNone(PyObject *, Py_ssize_t *); diff --git a/Include/cpython/code.h b/Include/cpython/code.h index fba9296aedc9..0e4bd8a58c16 100644 --- a/Include/cpython/code.h +++ b/Include/cpython/code.h @@ -178,19 +178,40 @@ static inline int PyCode_GetFirstFree(PyCodeObject *op) { #define _PyCode_CODE(CO) _Py_RVALUE((_Py_CODEUNIT *)(CO)->co_code_adaptive) #define _PyCode_NBYTES(CO) (Py_SIZE(CO) * (Py_ssize_t)sizeof(_Py_CODEUNIT)) -/* Public interface */ -PyAPI_FUNC(PyCodeObject *) PyCode_New( +/* Unstable public interface */ +PyAPI_FUNC(PyCodeObject *) PyUnstable_Code_New( int, int, int, int, int, PyObject *, PyObject *, PyObject *, PyObject *, PyObject *, PyObject *, PyObject *, PyObject *, PyObject *, int, PyObject *, PyObject *); -PyAPI_FUNC(PyCodeObject *) PyCode_NewWithPosOnlyArgs( +PyAPI_FUNC(PyCodeObject *) PyUnstable_Code_NewWithPosOnlyArgs( int, int, int, int, int, int, PyObject *, PyObject *, PyObject *, PyObject *, PyObject *, PyObject *, PyObject *, PyObject *, PyObject *, int, PyObject *, PyObject *); /* same as struct above */ +// Old names -- remove when this API changes: 
+_Py_DEPRECATED_EXTERNALLY(3.12) static inline PyCodeObject * +PyCode_New( + int a, int b, int c, int d, int e, PyObject *f, PyObject *g, + PyObject *h, PyObject *i, PyObject *j, PyObject *k, + PyObject *l, PyObject *m, PyObject *n, int o, PyObject *p, + PyObject *q) +{ + return PyUnstable_Code_New( + a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q); +} +_Py_DEPRECATED_EXTERNALLY(3.12) static inline PyCodeObject * +PyCode_NewWithPosOnlyArgs( + int a, int poac, int b, int c, int d, int e, PyObject *f, PyObject *g, + PyObject *h, PyObject *i, PyObject *j, PyObject *k, + PyObject *l, PyObject *m, PyObject *n, int o, PyObject *p, + PyObject *q) +{ + return PyUnstable_Code_NewWithPosOnlyArgs( + a, poac, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q); +} /* Creates a new empty code object with the specified source location. */ PyAPI_FUNC(PyCodeObject *) @@ -269,11 +290,21 @@ PyAPI_FUNC(PyObject*) _PyCode_ConstantKey(PyObject *obj); PyAPI_FUNC(PyObject*) PyCode_Optimize(PyObject *code, PyObject* consts, PyObject *names, PyObject *lnotab); - -PyAPI_FUNC(int) _PyCode_GetExtra(PyObject *code, Py_ssize_t index, - void **extra); -PyAPI_FUNC(int) _PyCode_SetExtra(PyObject *code, Py_ssize_t index, - void *extra); +PyAPI_FUNC(int) PyUnstable_Code_GetExtra( + PyObject *code, Py_ssize_t index, void **extra); +PyAPI_FUNC(int) PyUnstable_Code_SetExtra( + PyObject *code, Py_ssize_t index, void *extra); +// Old names -- remove when this API changes: +_Py_DEPRECATED_EXTERNALLY(3.12) static inline int +_PyCode_GetExtra(PyObject *code, Py_ssize_t index, void **extra) +{ + return PyUnstable_Code_GetExtra(code, index, extra); +} +_Py_DEPRECATED_EXTERNALLY(3.12) static inline int +_PyCode_SetExtra(PyObject *code, Py_ssize_t index, void *extra) +{ + return PyUnstable_Code_SetExtra(code, index, extra); +} /* Equivalent to getattr(code, 'co_code') in Python. Returns a strong reference to a bytes object. */ diff --git a/Include/pyport.h b/Include/pyport.h index 40092c2f81ad..eef0fe1bfd71 100644 --- a/Include/pyport.h +++ b/Include/pyport.h @@ -323,6 +323,15 @@ extern "C" { #define Py_DEPRECATED(VERSION_UNUSED) #endif +// _Py_DEPRECATED_EXTERNALLY(version) +// Deprecated outside CPython core. 
+#ifdef Py_BUILD_CORE +#define _Py_DEPRECATED_EXTERNALLY(VERSION_UNUSED) +#else +#define _Py_DEPRECATED_EXTERNALLY(version) Py_DEPRECATED(version) +#endif + + #if defined(__clang__) #define _Py_COMP_DIAG_PUSH _Pragma("clang diagnostic push") #define _Py_COMP_DIAG_IGNORE_DEPR_DECLS \ diff --git a/Lib/test/test_code.py b/Lib/test/test_code.py index 9c2ac83e1b69..0cd1fb3f9728 100644 --- a/Lib/test/test_code.py +++ b/Lib/test/test_code.py @@ -752,15 +752,15 @@ def f(): py = ctypes.pythonapi freefunc = ctypes.CFUNCTYPE(None,ctypes.c_voidp) - RequestCodeExtraIndex = py._PyEval_RequestCodeExtraIndex + RequestCodeExtraIndex = py.PyUnstable_Eval_RequestCodeExtraIndex RequestCodeExtraIndex.argtypes = (freefunc,) RequestCodeExtraIndex.restype = ctypes.c_ssize_t - SetExtra = py._PyCode_SetExtra + SetExtra = py.PyUnstable_Code_SetExtra SetExtra.argtypes = (ctypes.py_object, ctypes.c_ssize_t, ctypes.c_voidp) SetExtra.restype = ctypes.c_int - GetExtra = py._PyCode_GetExtra + GetExtra = py.PyUnstable_Code_GetExtra GetExtra.argtypes = (ctypes.py_object, ctypes.c_ssize_t, ctypes.POINTER(ctypes.c_voidp)) GetExtra.restype = ctypes.c_int diff --git a/Misc/NEWS.d/next/C API/2022-04-21-17-25-22.gh-issue-91744.FgvaMi.rst b/Misc/NEWS.d/next/C API/2022-04-21-17-25-22.gh-issue-91744.FgvaMi.rst new file mode 100644 index 000000000000..20db25ddd0c1 --- /dev/null +++ b/Misc/NEWS.d/next/C API/2022-04-21-17-25-22.gh-issue-91744.FgvaMi.rst @@ -0,0 +1,3 @@ +Introduced the *Unstable C API tier*, marking APi that is allowed to change +in minor releases without a deprecation period. +See :pep:`689` for details. diff --git a/Modules/Setup.stdlib.in b/Modules/Setup.stdlib.in index 7551e5b34943..b12290d436cb 100644 --- a/Modules/Setup.stdlib.in +++ b/Modules/Setup.stdlib.in @@ -169,7 +169,7 @@ @MODULE__XXTESTFUZZ_TRUE at _xxtestfuzz _xxtestfuzz/_xxtestfuzz.c _xxtestfuzz/fuzzer.c @MODULE__TESTBUFFER_TRUE at _testbuffer _testbuffer.c @MODULE__TESTINTERNALCAPI_TRUE at _testinternalcapi _testinternalcapi.c - at MODULE__TESTCAPI_TRUE@_testcapi _testcapimodule.c _testcapi/vectorcall.c _testcapi/vectorcall_limited.c _testcapi/heaptype.c _testcapi/unicode.c _testcapi/getargs.c _testcapi/pytime.c _testcapi/datetime.c _testcapi/docstring.c _testcapi/mem.c _testcapi/watchers.c _testcapi/long.c _testcapi/float.c _testcapi/structmember.c _testcapi/exceptions.c + at MODULE__TESTCAPI_TRUE@_testcapi _testcapimodule.c _testcapi/vectorcall.c _testcapi/vectorcall_limited.c _testcapi/heaptype.c _testcapi/unicode.c _testcapi/getargs.c _testcapi/pytime.c _testcapi/datetime.c _testcapi/docstring.c _testcapi/mem.c _testcapi/watchers.c _testcapi/long.c _testcapi/float.c _testcapi/structmember.c _testcapi/exceptions.c _testcapi/code.c @MODULE__TESTCLINIC_TRUE at _testclinic _testclinic.c # Some testing modules MUST be built as shared libraries. diff --git a/Modules/_testcapi/code.c b/Modules/_testcapi/code.c new file mode 100644 index 000000000000..588dc67971ea --- /dev/null +++ b/Modules/_testcapi/code.c @@ -0,0 +1,115 @@ +#include "parts.h" + +static Py_ssize_t +get_code_extra_index(PyInterpreterState* interp) { + Py_ssize_t result = -1; + + static const char *key = "_testcapi.frame_evaluation.code_index"; + + PyObject *interp_dict = PyInterpreterState_GetDict(interp); // borrowed + assert(interp_dict); // real users would handle missing dict... 
somehow + + PyObject *index_obj = PyDict_GetItemString(interp_dict, key); // borrowed + Py_ssize_t index = 0; + if (!index_obj) { + if (PyErr_Occurred()) { + goto finally; + } + index = PyUnstable_Eval_RequestCodeExtraIndex(NULL); + if (index < 0 || PyErr_Occurred()) { + goto finally; + } + index_obj = PyLong_FromSsize_t(index); // strong ref + if (!index_obj) { + goto finally; + } + int res = PyDict_SetItemString(interp_dict, key, index_obj); + Py_DECREF(index_obj); + if (res < 0) { + goto finally; + } + } + else { + index = PyLong_AsSsize_t(index_obj); + if (index == -1 && PyErr_Occurred()) { + goto finally; + } + } + + result = index; +finally: + return result; +} + +static PyObject * +test_code_extra(PyObject* self, PyObject *Py_UNUSED(callable)) +{ + PyObject *result = NULL; + PyObject *test_module = NULL; + PyObject *test_func = NULL; + + // Get or initialize interpreter-specific code object storage index + PyInterpreterState *interp = PyInterpreterState_Get(); + if (!interp) { + return NULL; + } + Py_ssize_t code_extra_index = get_code_extra_index(interp); + if (PyErr_Occurred()) { + goto finally; + } + + // Get a function to test with + // This can be any Python function. Use `test.test_misc.testfunction`. + test_module = PyImport_ImportModule("test.test_capi.test_misc"); + if (!test_module) { + goto finally; + } + test_func = PyObject_GetAttrString(test_module, "testfunction"); + if (!test_func) { + goto finally; + } + PyObject *test_func_code = PyFunction_GetCode(test_func); // borrowed + if (!test_func_code) { + goto finally; + } + + // Check the value is initially NULL + void *extra; + int res = PyUnstable_Code_GetExtra(test_func_code, code_extra_index, &extra); + if (res < 0) { + goto finally; + } + assert (extra == NULL); + + // Set another code extra value + res = PyUnstable_Code_SetExtra(test_func_code, code_extra_index, (void*)(uintptr_t)77); + if (res < 0) { + goto finally; + } + // Assert it was set correctly + res = PyUnstable_Code_GetExtra(test_func_code, code_extra_index, &extra); + if (res < 0) { + goto finally; + } + assert ((uintptr_t)extra == 77); + + result = Py_NewRef(Py_None); +finally: + Py_XDECREF(test_module); + Py_XDECREF(test_func); + return result; +} + +static PyMethodDef TestMethods[] = { + {"test_code_extra", test_code_extra, METH_NOARGS}, + {NULL}, +}; + +int +_PyTestCapi_Init_Code(PyObject *m) { + if (PyModule_AddFunctions(m, TestMethods) < 0) { + return -1; + } + + return 0; +} diff --git a/Modules/_testcapi/parts.h b/Modules/_testcapi/parts.h index 1689f186b833..c8f31dc8e39f 100644 --- a/Modules/_testcapi/parts.h +++ b/Modules/_testcapi/parts.h @@ -37,6 +37,7 @@ int _PyTestCapi_Init_Long(PyObject *module); int _PyTestCapi_Init_Float(PyObject *module); int _PyTestCapi_Init_Structmember(PyObject *module); int _PyTestCapi_Init_Exceptions(PyObject *module); +int _PyTestCapi_Init_Code(PyObject *module); #ifdef LIMITED_API_AVAILABLE int _PyTestCapi_Init_VectorcallLimited(PyObject *module); diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index fc716a3564d3..10e507d6b481 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -4083,6 +4083,9 @@ PyInit__testcapi(void) if (_PyTestCapi_Init_Exceptions(m) < 0) { return NULL; } + if (_PyTestCapi_Init_Code(m) < 0) { + return NULL; + } #ifndef LIMITED_API_AVAILABLE PyModule_AddObjectRef(m, "LIMITED_API_AVAILABLE", Py_False); diff --git a/Objects/codeobject.c b/Objects/codeobject.c index a03b14edea8d..175bd57568f8 100644 --- a/Objects/codeobject.c +++ b/Objects/codeobject.c @@ 
-567,7 +567,8 @@ _PyCode_New(struct _PyCodeConstructor *con) ******************/ PyCodeObject * -PyCode_NewWithPosOnlyArgs(int argcount, int posonlyargcount, int kwonlyargcount, +PyUnstable_Code_NewWithPosOnlyArgs( + int argcount, int posonlyargcount, int kwonlyargcount, int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars, PyObject *cellvars, @@ -691,7 +692,7 @@ PyCode_NewWithPosOnlyArgs(int argcount, int posonlyargcount, int kwonlyargcount, } PyCodeObject * -PyCode_New(int argcount, int kwonlyargcount, +PyUnstable_Code_New(int argcount, int kwonlyargcount, int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars, PyObject *cellvars, @@ -1371,7 +1372,7 @@ typedef struct { int -_PyCode_GetExtra(PyObject *code, Py_ssize_t index, void **extra) +PyUnstable_Code_GetExtra(PyObject *code, Py_ssize_t index, void **extra) { if (!PyCode_Check(code)) { PyErr_BadInternalCall(); @@ -1392,7 +1393,7 @@ _PyCode_GetExtra(PyObject *code, Py_ssize_t index, void **extra) int -_PyCode_SetExtra(PyObject *code, Py_ssize_t index, void *extra) +PyUnstable_Code_SetExtra(PyObject *code, Py_ssize_t index, void *extra) { PyInterpreterState *interp = _PyInterpreterState_GET(); diff --git a/PCbuild/_testcapi.vcxproj b/PCbuild/_testcapi.vcxproj index 742eb3ed2d90..4cc184bfc1ac 100644 --- a/PCbuild/_testcapi.vcxproj +++ b/PCbuild/_testcapi.vcxproj @@ -108,6 +108,7 @@ + diff --git a/PCbuild/_testcapi.vcxproj.filters b/PCbuild/_testcapi.vcxproj.filters index ab5afc150c32..fbdaf04ce37c 100644 --- a/PCbuild/_testcapi.vcxproj.filters +++ b/PCbuild/_testcapi.vcxproj.filters @@ -54,6 +54,9 @@ Source Files + + Source Files + diff --git a/Python/ceval.c b/Python/ceval.c index b382d2109b93..5540c93d5e3d 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -2988,7 +2988,7 @@ format_awaitable_error(PyThreadState *tstate, PyTypeObject *type, int oparg) Py_ssize_t -_PyEval_RequestCodeExtraIndex(freefunc free) +PyUnstable_Eval_RequestCodeExtraIndex(freefunc free) { PyInterpreterState *interp = _PyInterpreterState_GET(); Py_ssize_t new_index; From webhook-mailer at python.org Tue Feb 28 06:29:48 2023 From: webhook-mailer at python.org (iritkatriel) Date: Tue, 28 Feb 2023 11:29:48 -0000 Subject: [Python-checkins] gh-87092: Make jump target label equal to the offset of the target in the instructions sequence (#102093) Message-ID: https://github.com/python/cpython/commit/9f799ab0200f07d762eb8fe822c606e1b4b4866e commit: 9f799ab0200f07d762eb8fe822c606e1b4b4866e branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-28T11:29:32Z summary: gh-87092: Make jump target label equal to the offset of the target in the instructions sequence (#102093) files: M Lib/test/support/bytecode_helper.py M Lib/test/test_compiler_codegen.py M Lib/test/test_peepholer.py M Python/compile.c diff --git a/Lib/test/support/bytecode_helper.py b/Lib/test/support/bytecode_helper.py index 65ae7a227baa..190fe8723b1f 100644 --- a/Lib/test/support/bytecode_helper.py +++ b/Lib/test/support/bytecode_helper.py @@ -50,18 +50,13 @@ class CompilationStepTestCase(unittest.TestCase): HAS_TARGET = set(dis.hasjrel + dis.hasjabs + dis.hasexc) HAS_ARG_OR_TARGET = HAS_ARG.union(HAS_TARGET) - def setUp(self): - self.last_label = 0 - - def Label(self): - self.last_label += 1 - return self.last_label + class Label: + pass def 
assertInstructionsMatch(self, actual_, expected_): # get two lists where each entry is a label or - # an instruction tuple. Compare them, while mapping - # each actual label to a corresponding expected label - # based on their locations. + # an instruction tuple. Normalize the labels to the + # instruction count of the target, and compare the lists. self.assertIsInstance(actual_, list) self.assertIsInstance(expected_, list) @@ -82,39 +77,35 @@ def assertInstructionsMatch(self, actual_, expected_): act = act[:len(exp)] self.assertEqual(exp, act) + def resolveAndRemoveLabels(self, insts): + idx = 0 + res = [] + for item in insts: + assert isinstance(item, (self.Label, tuple)) + if isinstance(item, self.Label): + item.value = idx + else: + idx += 1 + res.append(item) + + return res + def normalize_insts(self, insts): """ Map labels to instruction index. - Remove labels which are not used as jump targets. Map opcodes to opnames. """ - labels_map = {} - targets = set() - idx = 1 - for item in insts: - assert isinstance(item, (int, tuple)) - if isinstance(item, tuple): - opcode, oparg, *_ = item - if dis.opmap.get(opcode, opcode) in self.HAS_TARGET: - targets.add(oparg) - idx += 1 - elif isinstance(item, int): - assert item not in labels_map, "label reused" - labels_map[item] = idx - + insts = self.resolveAndRemoveLabels(insts) res = [] for item in insts: - if isinstance(item, int) and item in targets: - if not res or labels_map[item] != res[-1]: - res.append(labels_map[item]) - elif isinstance(item, tuple): - opcode, oparg, *loc = item - opcode = dis.opmap.get(opcode, opcode) - if opcode in self.HAS_TARGET: - arg = labels_map[oparg] - else: - arg = oparg if opcode in self.HAS_TARGET else None - opcode = dis.opname[opcode] - res.append((opcode, arg, *loc)) + assert isinstance(item, tuple) + opcode, oparg, *loc = item + opcode = dis.opmap.get(opcode, opcode) + if isinstance(oparg, self.Label): + arg = oparg.value + else: + arg = oparg if opcode in self.HAS_ARG else None + opcode = dis.opname[opcode] + res.append((opcode, arg, *loc)) return res @@ -129,20 +120,18 @@ class CfgOptimizationTestCase(CompilationStepTestCase): def complete_insts_info(self, insts): # fill in omitted fields in location, and oparg 0 for ops with no arg. 
- instructions = [] + res = [] for item in insts: - if isinstance(item, int): - instructions.append(item) - else: - assert isinstance(item, tuple) - inst = list(reversed(item)) - opcode = dis.opmap[inst.pop()] - oparg = inst.pop() if opcode in self.HAS_ARG_OR_TARGET else 0 - loc = inst + [-1] * (4 - len(inst)) - instructions.append((opcode, oparg, *loc)) - return instructions + assert isinstance(item, tuple) + inst = list(reversed(item)) + opcode = dis.opmap[inst.pop()] + oparg = inst.pop() if opcode in self.HAS_ARG_OR_TARGET else 0 + loc = inst + [-1] * (4 - len(inst)) + res.append((opcode, oparg, *loc)) + return res def get_optimized(self, insts, consts): + insts = self.normalize_insts(insts) insts = self.complete_insts_info(insts) insts = optimize_cfg(insts, consts) return insts, consts diff --git a/Lib/test/test_compiler_codegen.py b/Lib/test/test_compiler_codegen.py index f2e14c1e628c..022753e0c994 100644 --- a/Lib/test/test_compiler_codegen.py +++ b/Lib/test/test_compiler_codegen.py @@ -37,11 +37,11 @@ def test_for_loop(self): ('GET_ITER', None, 1), loop_lbl := self.Label(), ('FOR_ITER', exit_lbl := self.Label(), 1), - ('STORE_NAME', None, 1), + ('STORE_NAME', 1, 1), ('PUSH_NULL', None, 2), - ('LOAD_NAME', None, 2), - ('LOAD_NAME', None, 2), - ('CALL', None, 2), + ('LOAD_NAME', 2, 2), + ('LOAD_NAME', 1, 2), + ('CALL', 1, 2), ('POP_TOP', None), ('JUMP', loop_lbl), exit_lbl, diff --git a/Lib/test/test_peepholer.py b/Lib/test/test_peepholer.py index 707ff821b31a..aea234e38705 100644 --- a/Lib/test/test_peepholer.py +++ b/Lib/test/test_peepholer.py @@ -984,6 +984,7 @@ def cfg_optimization_test(self, insts, expected_insts, if expected_consts is None: expected_consts = consts opt_insts, opt_consts = self.get_optimized(insts, consts) + expected_insts = self.normalize_insts(expected_insts) self.assertInstructionsMatch(opt_insts, expected_insts) self.assertEqual(opt_consts, expected_consts) @@ -996,11 +997,11 @@ def test_conditional_jump_forward_non_const_condition(self): ('LOAD_CONST', 3, 14), ] expected = [ - ('LOAD_NAME', '1', 11), + ('LOAD_NAME', 1, 11), ('POP_JUMP_IF_TRUE', lbl := self.Label(), 12), - ('LOAD_CONST', '2', 13), + ('LOAD_CONST', 2, 13), lbl, - ('LOAD_CONST', '3', 14) + ('LOAD_CONST', 3, 14) ] self.cfg_optimization_test(insts, expected, consts=list(range(5))) @@ -1018,7 +1019,7 @@ def test_conditional_jump_forward_const_condition(self): expected = [ ('NOP', None, 11), ('NOP', None, 12), - ('LOAD_CONST', '3', 14) + ('LOAD_CONST', 3, 14) ] self.cfg_optimization_test(insts, expected, consts=list(range(5))) @@ -1031,9 +1032,9 @@ def test_conditional_jump_backward_non_const_condition(self): ] expected = [ lbl := self.Label(), - ('LOAD_NAME', '1', 11), + ('LOAD_NAME', 1, 11), ('POP_JUMP_IF_TRUE', lbl, 12), - ('LOAD_CONST', '2', 13) + ('LOAD_CONST', 2, 13) ] self.cfg_optimization_test(insts, expected, consts=list(range(5))) diff --git a/Python/compile.c b/Python/compile.c index 3f620beb0d02..2f1130e62ee1 100644 --- a/Python/compile.c +++ b/Python/compile.c @@ -9704,56 +9704,77 @@ instructions_to_cfg(PyObject *instructions, cfg_builder *g) { assert(PyList_Check(instructions)); - Py_ssize_t instr_size = PyList_GET_SIZE(instructions); - for (Py_ssize_t i = 0; i < instr_size; i++) { + Py_ssize_t num_insts = PyList_GET_SIZE(instructions); + bool *is_target = PyMem_Calloc(num_insts, sizeof(bool)); + if (is_target == NULL) { + return ERROR; + } + for (Py_ssize_t i = 0; i < num_insts; i++) { PyObject *item = PyList_GET_ITEM(instructions, i); - if (PyLong_Check(item)) { - int lbl_id = 
PyLong_AsLong(item); + if (!PyTuple_Check(item) || PyTuple_GET_SIZE(item) != 6) { + PyErr_SetString(PyExc_ValueError, "expected a 6-tuple"); + goto error; + } + int opcode = PyLong_AsLong(PyTuple_GET_ITEM(item, 0)); + if (PyErr_Occurred()) { + goto error; + } + if (HAS_TARGET(opcode)) { + int oparg = PyLong_AsLong(PyTuple_GET_ITEM(item, 1)); if (PyErr_Occurred()) { - return ERROR; + goto error; } - if (lbl_id <= 0 || lbl_id > instr_size) { - /* expect label in a reasonable range */ + if (oparg < 0 || oparg >= num_insts) { PyErr_SetString(PyExc_ValueError, "label out of range"); - return ERROR; + goto error; } - jump_target_label lbl = {lbl_id}; + is_target[oparg] = true; + } + } + + for (int i = 0; i < num_insts; i++) { + if (is_target[i]) { + jump_target_label lbl = {i}; RETURN_IF_ERROR(cfg_builder_use_label(g, lbl)); } - else { - if (!PyTuple_Check(item) || PyTuple_GET_SIZE(item) != 6) { - PyErr_SetString(PyExc_ValueError, "expected a 6-tuple"); - return ERROR; - } - int opcode = PyLong_AsLong(PyTuple_GET_ITEM(item, 0)); - if (PyErr_Occurred()) { - return ERROR; - } - int oparg = PyLong_AsLong(PyTuple_GET_ITEM(item, 1)); - if (PyErr_Occurred()) { - return ERROR; - } - location loc; - loc.lineno = PyLong_AsLong(PyTuple_GET_ITEM(item, 2)); - if (PyErr_Occurred()) { - return ERROR; - } - loc.end_lineno = PyLong_AsLong(PyTuple_GET_ITEM(item, 3)); - if (PyErr_Occurred()) { - return ERROR; - } - loc.col_offset = PyLong_AsLong(PyTuple_GET_ITEM(item, 4)); - if (PyErr_Occurred()) { - return ERROR; - } - loc.end_col_offset = PyLong_AsLong(PyTuple_GET_ITEM(item, 5)); - if (PyErr_Occurred()) { - return ERROR; - } - RETURN_IF_ERROR(cfg_builder_addop(g, opcode, oparg, loc)); + PyObject *item = PyList_GET_ITEM(instructions, i); + if (!PyTuple_Check(item) || PyTuple_GET_SIZE(item) != 6) { + PyErr_SetString(PyExc_ValueError, "expected a 6-tuple"); + goto error; + } + int opcode = PyLong_AsLong(PyTuple_GET_ITEM(item, 0)); + if (PyErr_Occurred()) { + goto error; + } + int oparg = PyLong_AsLong(PyTuple_GET_ITEM(item, 1)); + if (PyErr_Occurred()) { + goto error; + } + location loc; + loc.lineno = PyLong_AsLong(PyTuple_GET_ITEM(item, 2)); + if (PyErr_Occurred()) { + goto error; } + loc.end_lineno = PyLong_AsLong(PyTuple_GET_ITEM(item, 3)); + if (PyErr_Occurred()) { + goto error; + } + loc.col_offset = PyLong_AsLong(PyTuple_GET_ITEM(item, 4)); + if (PyErr_Occurred()) { + goto error; + } + loc.end_col_offset = PyLong_AsLong(PyTuple_GET_ITEM(item, 5)); + if (PyErr_Occurred()) { + goto error; + } + RETURN_IF_ERROR(cfg_builder_addop(g, opcode, oparg, loc)); } + + PyMem_Free(is_target); return SUCCESS; +error: + PyMem_Free(is_target); + return ERROR; } static PyObject * @@ -9763,20 +9784,12 @@ cfg_to_instructions(cfg_builder *g) if (instructions == NULL) { return NULL; } - int lbl = 1; + int lbl = 0; for (basicblock *b = g->g_entryblock; b != NULL; b = b->b_next) { - b->b_label = lbl++; + b->b_label = lbl; + lbl += b->b_iused; } for (basicblock *b = g->g_entryblock; b != NULL; b = b->b_next) { - PyObject *lbl = PyLong_FromLong(b->b_label); - if (lbl == NULL) { - goto error; - } - if (PyList_Append(instructions, lbl) != 0) { - Py_DECREF(lbl); - goto error; - } - Py_DECREF(lbl); for (int i = 0; i < b->b_iused; i++) { struct instr *instr = &b->b_instr[i]; location loc = instr->i_loc; From webhook-mailer at python.org Tue Feb 28 06:43:19 2023 From: webhook-mailer at python.org (hugovk) Date: Tue, 28 Feb 2023 11:43:19 -0000 Subject: [Python-checkins] GH-90744: Fix erroneous doc links in the sys module (#101319) 
Message-ID: https://github.com/python/cpython/commit/85b1fc1fc5a2e49d521382eaf5e7793175d00371 commit: 85b1fc1fc5a2e49d521382eaf5e7793175d00371 branch: main author: Furkan Onder committer: hugovk date: 2023-02-28T13:43:00+02:00 summary: GH-90744: Fix erroneous doc links in the sys module (#101319) Co-authored-by: Hugo van Kemenade files: M Doc/library/sys.rst diff --git a/Doc/library/sys.rst b/Doc/library/sys.rst index 605e2c9a6710..23c5bbed0c6f 100644 --- a/Doc/library/sys.rst +++ b/Doc/library/sys.rst @@ -568,55 +568,55 @@ always available. .. tabularcolumns:: |l|l|L| - +---------------------+----------------+--------------------------------------------------+ - | attribute | float.h macro | explanation | - +=====================+================+==================================================+ - | :const:`epsilon` | DBL_EPSILON | difference between 1.0 and the least value | - | | | greater than 1.0 that is representable as a float| - | | | | - | | | See also :func:`math.ulp`. | - +---------------------+----------------+--------------------------------------------------+ - | :const:`dig` | DBL_DIG | maximum number of decimal digits that can be | - | | | faithfully represented in a float; see below | - +---------------------+----------------+--------------------------------------------------+ - | :const:`mant_dig` | DBL_MANT_DIG | float precision: the number of base-``radix`` | - | | | digits in the significand of a float | - +---------------------+----------------+--------------------------------------------------+ - | :const:`max` | DBL_MAX | maximum representable positive finite float | - +---------------------+----------------+--------------------------------------------------+ - | :const:`max_exp` | DBL_MAX_EXP | maximum integer *e* such that ``radix**(e-1)`` is| - | | | a representable finite float | - +---------------------+----------------+--------------------------------------------------+ - | :const:`max_10_exp` | DBL_MAX_10_EXP | maximum integer *e* such that ``10**e`` is in the| - | | | range of representable finite floats | - +---------------------+----------------+--------------------------------------------------+ - | :const:`min` | DBL_MIN | minimum representable positive *normalized* float| - | | | | - | | | Use :func:`math.ulp(0.0) ` to get the | - | | | smallest positive *denormalized* representable | - | | | float. | - +---------------------+----------------+--------------------------------------------------+ - | :const:`min_exp` | DBL_MIN_EXP | minimum integer *e* such that ``radix**(e-1)`` is| - | | | a normalized float | - +---------------------+----------------+--------------------------------------------------+ - | :const:`min_10_exp` | DBL_MIN_10_EXP | minimum integer *e* such that ``10**e`` is a | - | | | normalized float | - +---------------------+----------------+--------------------------------------------------+ - | :const:`radix` | FLT_RADIX | radix of exponent representation | - +---------------------+----------------+--------------------------------------------------+ - | :const:`rounds` | FLT_ROUNDS | integer representing the rounding mode for | - | | | floating-point arithmetic. 
This reflects the | - | | | value of the system FLT_ROUNDS macro at | - | | | interpreter startup time: | - | | | ``-1`` indeterminable, | - | | | ``0`` toward zero, | - | | | ``1`` to nearest, | - | | | ``2`` toward positive infinity, | - | | | ``3`` toward negative infinity | - | | | | - | | | All other values for FLT_ROUNDS characterize | - | | | implementation-defined rounding behavior. | - +---------------------+----------------+--------------------------------------------------+ + +---------------------+---------------------+--------------------------------------------------+ + | attribute | float.h macro | explanation | + +=====================+=====================+==================================================+ + | ``epsilon`` | ``DBL_EPSILON`` | difference between 1.0 and the least value | + | | | greater than 1.0 that is representable as a float| + | | | | + | | | See also :func:`math.ulp`. | + +---------------------+---------------------+--------------------------------------------------+ + | ``dig`` | ``DBL_DIG`` | maximum number of decimal digits that can be | + | | | faithfully represented in a float; see below | + +---------------------+---------------------+--------------------------------------------------+ + | ``mant_dig`` | ``DBL_MANT_DIG`` | float precision: the number of base-``radix`` | + | | | digits in the significand of a float | + +---------------------+---------------------+--------------------------------------------------+ + | ``max`` | ``DBL_MAX`` | maximum representable positive finite float | + +---------------------+---------------------+--------------------------------------------------+ + | ``max_exp`` | ``DBL_MAX_EXP`` | maximum integer *e* such that ``radix**(e-1)`` is| + | | | a representable finite float | + +---------------------+---------------------+--------------------------------------------------+ + | ``max_10_exp`` | ``DBL_MAX_10_EXP`` | maximum integer *e* such that ``10**e`` is in the| + | | | range of representable finite floats | + +---------------------+---------------------+--------------------------------------------------+ + | ``min`` | ``DBL_MIN`` | minimum representable positive *normalized* float| + | | | | + | | | Use :func:`math.ulp(0.0) ` to get the | + | | | smallest positive *denormalized* representable | + | | | float. | + +---------------------+---------------------+--------------------------------------------------+ + | ``min_exp`` | ``DBL_MIN_EXP`` | minimum integer *e* such that ``radix**(e-1)`` is| + | | | a normalized float | + +---------------------+---------------------+--------------------------------------------------+ + | ``min_10_exp`` | ``DBL_MIN_10_EXP`` | minimum integer *e* such that ``10**e`` is a | + | | | normalized float | + +---------------------+---------------------+--------------------------------------------------+ + | ``radix`` | ``FLT_RADIX`` | radix of exponent representation | + +---------------------+---------------------+--------------------------------------------------+ + | ``rounds`` | ``FLT_ROUNDS`` | integer representing the rounding mode for | + | | | floating-point arithmetic. This reflects the | + | | | value of the system ``FLT_ROUNDS`` macro at | + | | | interpreter startup time: | + | | | ``-1`` indeterminable, | + | | | ``0`` toward zero, | + | | | ``1`` to nearest, | + | | | ``2`` toward positive infinity, | + | | | ``3`` toward negative infinity | + | | | | + | | | All other values for ``FLT_ROUNDS`` characterize | + | | | implementation-defined rounding behavior. 
| + +---------------------+---------------------+--------------------------------------------------+ The attribute :attr:`sys.float_info.dig` needs further explanation. If ``s`` is any string representing a decimal number with at most From webhook-mailer at python.org Tue Feb 28 06:51:00 2023 From: webhook-mailer at python.org (iritkatriel) Date: Tue, 28 Feb 2023 11:51:00 -0000 Subject: [Python-checkins] gh-102192: Replace PyErr_Fetch/Restore etc by more efficient alternatives (in Python/) (#102193) Message-ID: https://github.com/python/cpython/commit/4c87537efb5fd28b4e4ee9631076ed5953720156 commit: 4c87537efb5fd28b4e4ee9631076ed5953720156 branch: main author: Irit Katriel <1055913+iritkatriel at users.noreply.github.com> committer: iritkatriel <1055913+iritkatriel at users.noreply.github.com> date: 2023-02-28T11:50:52Z summary: gh-102192: Replace PyErr_Fetch/Restore etc by more efficient alternatives (in Python/) (#102193) files: M Python/bytecodes.c M Python/ceval.c M Python/ceval_gil.c M Python/compile.c M Python/frame.c M Python/initconfig.c M Python/modsupport.c M Python/sysmodule.c diff --git a/Python/bytecodes.c b/Python/bytecodes.c index 7e9b36f69721..63dbecad3b45 100644 --- a/Python/bytecodes.c +++ b/Python/bytecodes.c @@ -17,7 +17,7 @@ #include "pycore_object.h" // _PyObject_GC_TRACK() #include "pycore_moduleobject.h" // PyModuleObject #include "pycore_opcode.h" // EXTRA_CASES -#include "pycore_pyerrors.h" // _PyErr_Fetch() +#include "pycore_pyerrors.h" // _PyErr_GetRaisedException() #include "pycore_pymem.h" // _PyMem_IsPtrFreed() #include "pycore_pystate.h" // _PyInterpreterState_GET() #include "pycore_range.h" // _PyRangeIterObject diff --git a/Python/ceval.c b/Python/ceval.c index 5540c93d5e3d..b422d0ed34ed 100644 --- a/Python/ceval.c +++ b/Python/ceval.c @@ -13,7 +13,7 @@ #include "pycore_object.h" // _PyObject_GC_TRACK() #include "pycore_moduleobject.h" // PyModuleObject #include "pycore_opcode.h" // EXTRA_CASES -#include "pycore_pyerrors.h" // _PyErr_Fetch() +#include "pycore_pyerrors.h" // _PyErr_Fetch(), _PyErr_GetRaisedException() #include "pycore_pymem.h" // _PyMem_IsPtrFreed() #include "pycore_pystate.h" // _PyInterpreterState_GET() #include "pycore_range.h" // _PyRangeIterObject @@ -105,8 +105,7 @@ static void dump_stack(_PyInterpreterFrame *frame, PyObject **stack_pointer) { PyObject **stack_base = _PyFrame_Stackbase(frame); - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); + PyObject *exc = PyErr_GetRaisedException(); printf(" stack=["); for (PyObject **ptr = stack_base; ptr < stack_pointer; ptr++) { if (ptr != stack_base) { @@ -120,7 +119,7 @@ dump_stack(_PyInterpreterFrame *frame, PyObject **stack_pointer) } printf("]\n"); fflush(stdout); - PyErr_Restore(type, value, traceback); + PyErr_SetRaisedException(exc); } static void @@ -157,8 +156,7 @@ lltrace_resume_frame(_PyInterpreterFrame *frame) return; } PyFunctionObject *f = (PyFunctionObject *)fobj; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); + PyObject *exc = PyErr_GetRaisedException(); PyObject *name = f->func_qualname; if (name == NULL) { name = f->func_name; @@ -178,7 +176,7 @@ lltrace_resume_frame(_PyInterpreterFrame *frame) } printf("\n"); fflush(stdout); - PyErr_Restore(type, value, traceback); + PyErr_SetRaisedException(exc); } #endif static int call_trace(Py_tracefunc, PyObject *, @@ -1032,7 +1030,6 @@ _PyEval_EvalFrameDefault(PyThreadState *tstate, _PyInterpreterFrame *frame, int PyObject *v = POP(); Py_XDECREF(v); } - PyObject *exc, 
*val, *tb; if (lasti) { int frame_lasti = _PyInterpreterFrame_LASTI(frame); PyObject *lasti = PyLong_FromLong(frame_lasti); @@ -1041,19 +1038,12 @@ _PyEval_EvalFrameDefault(PyThreadState *tstate, _PyInterpreterFrame *frame, int } PUSH(lasti); } - _PyErr_Fetch(tstate, &exc, &val, &tb); + /* Make the raw exception data available to the handler, so a program can emulate the Python main loop. */ - _PyErr_NormalizeException(tstate, &exc, &val, &tb); - if (tb != NULL) - PyException_SetTraceback(val, tb); - else - PyException_SetTraceback(val, Py_None); - Py_XDECREF(tb); - Py_XDECREF(exc); - PUSH(val); + PUSH(_PyErr_GetRaisedException(tstate)); JUMPTO(handler); /* Resume normal execution */ DISPATCH(); @@ -2075,19 +2065,15 @@ call_trace_protected(Py_tracefunc func, PyObject *obj, PyThreadState *tstate, _PyInterpreterFrame *frame, int what, PyObject *arg) { - PyObject *type, *value, *traceback; - int err; - _PyErr_Fetch(tstate, &type, &value, &traceback); - err = call_trace(func, obj, tstate, frame, what, arg); + PyObject *exc = _PyErr_GetRaisedException(tstate); + int err = call_trace(func, obj, tstate, frame, what, arg); if (err == 0) { - _PyErr_Restore(tstate, type, value, traceback); + _PyErr_SetRaisedException(tstate, exc); return 0; } else { - Py_XDECREF(type); - Py_XDECREF(value); - Py_XDECREF(traceback); + Py_XDECREF(exc); return -1; } } @@ -2935,18 +2921,15 @@ format_exc_check_arg(PyThreadState *tstate, PyObject *exc, if (exc == PyExc_NameError) { // Include the name in the NameError exceptions to offer suggestions later. - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - PyErr_NormalizeException(&type, &value, &traceback); - if (PyErr_GivenExceptionMatches(value, PyExc_NameError)) { - PyNameErrorObject* exc = (PyNameErrorObject*) value; - if (exc->name == NULL) { + PyObject *exc = PyErr_GetRaisedException(); + if (PyErr_GivenExceptionMatches(exc, PyExc_NameError)) { + if (((PyNameErrorObject*)exc)->name == NULL) { // We do not care if this fails because we are going to restore the // NameError anyway. 
- (void)PyObject_SetAttr(value, &_Py_ID(name), obj); + (void)PyObject_SetAttr(exc, &_Py_ID(name), obj); } } - PyErr_Restore(type, value, traceback); + PyErr_SetRaisedException(exc); } } diff --git a/Python/ceval_gil.c b/Python/ceval_gil.c index 1bf223348d28..749d8144bf7a 100644 --- a/Python/ceval_gil.c +++ b/Python/ceval_gil.c @@ -2,7 +2,7 @@ #include "Python.h" #include "pycore_atomic.h" // _Py_atomic_int #include "pycore_ceval.h" // _PyEval_SignalReceived() -#include "pycore_pyerrors.h" // _PyErr_Fetch() +#include "pycore_pyerrors.h" // _PyErr_GetRaisedException() #include "pycore_pylifecycle.h" // _PyErr_Print() #include "pycore_initconfig.h" // _PyStatus_OK() #include "pycore_interp.h" // _Py_RunGC() @@ -870,10 +870,9 @@ _Py_FinishPendingCalls(PyThreadState *tstate) } if (make_pending_calls(tstate->interp) < 0) { - PyObject *exc, *val, *tb; - _PyErr_Fetch(tstate, &exc, &val, &tb); + PyObject *exc = _PyErr_GetRaisedException(tstate); PyErr_BadInternalCall(); - _PyErr_ChainExceptions(exc, val, tb); + _PyErr_ChainExceptions1(exc); _PyErr_Print(tstate); } } diff --git a/Python/compile.c b/Python/compile.c index 2f1130e62ee1..b14c637210ff 100644 --- a/Python/compile.c +++ b/Python/compile.c @@ -1633,8 +1633,7 @@ static void compiler_exit_scope(struct compiler *c) { // Don't call PySequence_DelItem() with an exception raised - PyObject *exc_type, *exc_val, *exc_tb; - PyErr_Fetch(&exc_type, &exc_val, &exc_tb); + PyObject *exc = PyErr_GetRaisedException(); c->c_nestlevel--; compiler_unit_free(c->u); @@ -1655,7 +1654,7 @@ compiler_exit_scope(struct compiler *c) c->u = NULL; } - PyErr_Restore(exc_type, exc_val, exc_tb); + PyErr_SetRaisedException(exc); } /* Search if variable annotations are present statically in a block. */ diff --git a/Python/frame.c b/Python/frame.c index b562709ce10f..c2c0be301139 100644 --- a/Python/frame.c +++ b/Python/frame.c @@ -29,17 +29,14 @@ PyFrameObject * _PyFrame_MakeAndSetFrameObject(_PyInterpreterFrame *frame) { assert(frame->frame_obj == NULL); - PyObject *error_type, *error_value, *error_traceback; - PyErr_Fetch(&error_type, &error_value, &error_traceback); + PyObject *exc = PyErr_GetRaisedException(); PyFrameObject *f = _PyFrame_New_NoTrack(frame->f_code); if (f == NULL) { - Py_XDECREF(error_type); - Py_XDECREF(error_value); - Py_XDECREF(error_traceback); + Py_XDECREF(exc); return NULL; } - PyErr_Restore(error_type, error_value, error_traceback); + PyErr_SetRaisedException(exc); if (frame->frame_obj) { // GH-97002: How did we get into this horrible situation? 
Most likely, // allocating f triggered a GC collection, which ran some code that diff --git a/Python/initconfig.c b/Python/initconfig.c index deec805a6b1c..db7f11e17d66 100644 --- a/Python/initconfig.c +++ b/Python/initconfig.c @@ -5,7 +5,7 @@ #include "pycore_interp.h" // _PyInterpreterState.runtime #include "pycore_long.h" // _PY_LONG_MAX_STR_DIGITS_THRESHOLD #include "pycore_pathconfig.h" // _Py_path_config -#include "pycore_pyerrors.h" // _PyErr_Fetch() +#include "pycore_pyerrors.h" // _PyErr_GetRaisedException() #include "pycore_pylifecycle.h" // _Py_PreInitializeFromConfig() #include "pycore_pymem.h" // _PyMem_SetDefaultAllocator() #include "pycore_pystate.h" // _PyThreadState_GET() diff --git a/Python/modsupport.c b/Python/modsupport.c index b9a10dc157e7..75698455c881 100644 --- a/Python/modsupport.c +++ b/Python/modsupport.c @@ -93,16 +93,12 @@ static PyObject *do_mkvalue(const char**, va_list *, int); static void do_ignore(const char **p_format, va_list *p_va, char endchar, Py_ssize_t n, int flags) { - PyObject *v; - Py_ssize_t i; assert(PyErr_Occurred()); - v = PyTuple_New(n); - for (i = 0; i < n; i++) { - PyObject *exception, *value, *tb, *w; - - PyErr_Fetch(&exception, &value, &tb); - w = do_mkvalue(p_format, p_va, flags); - PyErr_Restore(exception, value, tb); + PyObject *v = PyTuple_New(n); + for (Py_ssize_t i = 0; i < n; i++) { + PyObject *exc = PyErr_GetRaisedException(); + PyObject *w = do_mkvalue(p_format, p_va, flags); + PyErr_SetRaisedException(exc); if (w != NULL) { if (v != NULL) { PyTuple_SET_ITEM(v, i, w); diff --git a/Python/sysmodule.c b/Python/sysmodule.c index b69b80356092..207abb964bca 100644 --- a/Python/sysmodule.c +++ b/Python/sysmodule.c @@ -23,7 +23,7 @@ Data members: #include "pycore_namespace.h" // _PyNamespace_New() #include "pycore_object.h" // _PyObject_IS_GC() #include "pycore_pathconfig.h" // _PyPathConfig_ComputeSysPath0() -#include "pycore_pyerrors.h" // _PyErr_Fetch() +#include "pycore_pyerrors.h" // _PyErr_GetRaisedException() #include "pycore_pylifecycle.h" // _PyErr_WriteUnraisableDefaultHook() #include "pycore_pymath.h" // _PY_SHORT_FLOAT_REPR #include "pycore_pymem.h" // _PyMem_SetDefaultAllocator() @@ -89,12 +89,11 @@ PySys_GetObject(const char *name) { PyThreadState *tstate = _PyThreadState_GET(); - PyObject *exc_type, *exc_value, *exc_tb; - _PyErr_Fetch(tstate, &exc_type, &exc_value, &exc_tb); + PyObject *exc = _PyErr_GetRaisedException(tstate); PyObject *value = _PySys_GetObject(tstate->interp, name); /* XXX Suppress a new exception if it was raised and restore * the old one. 
*/ - _PyErr_Restore(tstate, exc_type, exc_value, exc_tb); + _PyErr_SetRaisedException(tstate, exc); return value; } @@ -203,8 +202,8 @@ sys_audit_tstate(PyThreadState *ts, const char *event, int dtrace = PyDTrace_AUDIT_ENABLED(); - PyObject *exc_type, *exc_value, *exc_tb; - _PyErr_Fetch(ts, &exc_type, &exc_value, &exc_tb); + + PyObject *exc = _PyErr_GetRaisedException(ts); /* Initialize event args now */ if (argFormat && argFormat[0]) { @@ -287,13 +286,11 @@ sys_audit_tstate(PyThreadState *ts, const char *event, Py_XDECREF(eventArgs); if (!res) { - _PyErr_Restore(ts, exc_type, exc_value, exc_tb); + _PyErr_SetRaisedException(ts, exc); } else { assert(_PyErr_Occurred(ts)); - Py_XDECREF(exc_type); - Py_XDECREF(exc_value); - Py_XDECREF(exc_tb); + Py_XDECREF(exc); } return res; @@ -3661,12 +3658,11 @@ static void sys_write(PyObject *key, FILE *fp, const char *format, va_list va) { PyObject *file; - PyObject *error_type, *error_value, *error_traceback; char buffer[1001]; int written; PyThreadState *tstate = _PyThreadState_GET(); - _PyErr_Fetch(tstate, &error_type, &error_value, &error_traceback); + PyObject *exc = _PyErr_GetRaisedException(tstate); file = _PySys_GetAttr(tstate, key); written = PyOS_vsnprintf(buffer, sizeof(buffer), format, va); if (sys_pyfile_write(buffer, file) != 0) { @@ -3678,7 +3674,7 @@ sys_write(PyObject *key, FILE *fp, const char *format, va_list va) if (sys_pyfile_write(truncated, file) != 0) fputs(truncated, fp); } - _PyErr_Restore(tstate, error_type, error_value, error_traceback); + _PyErr_SetRaisedException(tstate, exc); } void @@ -3708,7 +3704,7 @@ sys_format(PyObject *key, FILE *fp, const char *format, va_list va) const char *utf8; PyThreadState *tstate = _PyThreadState_GET(); - PyObject *error = _PyErr_GetRaisedException(tstate); + PyObject *exc = _PyErr_GetRaisedException(tstate); file = _PySys_GetAttr(tstate, key); message = PyUnicode_FromFormatV(format, va); if (message != NULL) { @@ -3720,7 +3716,7 @@ sys_format(PyObject *key, FILE *fp, const char *format, va_list va) } Py_DECREF(message); } - _PyErr_SetRaisedException(tstate, error); + _PyErr_SetRaisedException(tstate, exc); } void From webhook-mailer at python.org Tue Feb 28 08:23:52 2023 From: webhook-mailer at python.org (ewdurbin) Date: Tue, 28 Feb 2023 13:23:52 -0000 Subject: [Python-checkins] Migrate to new PSF mailgun account (#102284) Message-ID: https://github.com/python/cpython/commit/e1a90ec75cd0f5cab21b467a73404081eaa1077f commit: e1a90ec75cd0f5cab21b467a73404081eaa1077f branch: main author: Ee Durbin committer: ewdurbin date: 2023-02-28T08:23:39-05:00 summary: Migrate to new PSF mailgun account (#102284) Our legacy mailgun account is associated with a parent rackspace account that I am trying to decomission. The necessary secret has been added to the GitHub Actions Secrets already, so this is ready to go on approval. 
files: M .github/workflows/new-bugs-announce-notifier.yml diff --git a/.github/workflows/new-bugs-announce-notifier.yml b/.github/workflows/new-bugs-announce-notifier.yml index b2b63472d834..b2a76ef7d361 100644 --- a/.github/workflows/new-bugs-announce-notifier.yml +++ b/.github/workflows/new-bugs-announce-notifier.yml @@ -19,13 +19,13 @@ jobs: - name: Send notification uses: actions/github-script at v6 env: - MAILGUN_API_KEY: ${{ secrets.PSF_MAILGUN_KEY }} + MAILGUN_API_KEY: ${{ secrets.MAILGUN_PYTHON_ORG_MAILGUN_KEY }} with: script: | const Mailgun = require("mailgun.js"); const formData = require('form-data'); const mailgun = new Mailgun(formData); - const DOMAIN = "mg.python.org"; + const DOMAIN = "mailgun.python.org"; const mg = mailgun.client({username: 'api', key: process.env.MAILGUN_API_KEY}); github.rest.issues.get({ issue_number: context.issue.number, @@ -44,7 +44,7 @@ jobs: }; const data = { - from: "CPython Issues ", + from: "CPython Issues ", to: "new-bugs-announce at python.org", subject: `[Issue ${issue.data.number}] ${issue.data.title}`, template: "new-github-issue", From webhook-mailer at python.org Tue Feb 28 11:49:44 2023 From: webhook-mailer at python.org (gvanrossum) Date: Tue, 28 Feb 2023 16:49:44 -0000 Subject: [Python-checkins] GH-102305: Expand some macros in generated_cases.c.h (#102309) Message-ID: https://github.com/python/cpython/commit/b5ff38243355c06d665ba8245d461a0d82504581 commit: b5ff38243355c06d665ba8245d461a0d82504581 branch: main author: Guido van Rossum committer: gvanrossum date: 2023-02-28T08:49:35-08:00 summary: GH-102305: Expand some macros in generated_cases.c.h (#102309) * Emit straight stack_pointer[-i] instead of PEEK(i), POKE(i, ...) * Expand JUMPBY() and NEXTOPARG(), and fix a perf bug files: M Python/generated_cases.c.h M Tools/cases_generator/generate_cases.py diff --git a/Python/generated_cases.c.h b/Python/generated_cases.c.h index 271ba26f4895..f59f7c17451c 100644 --- a/Python/generated_cases.c.h +++ b/Python/generated_cases.c.h @@ -22,7 +22,7 @@ if (value == NULL) goto unbound_local_error; Py_INCREF(value); STACK_GROW(1); - POKE(1, value); + stack_pointer[-1] = value; DISPATCH(); } @@ -32,7 +32,7 @@ if (value == NULL) goto unbound_local_error; Py_INCREF(value); STACK_GROW(1); - POKE(1, value); + stack_pointer[-1] = value; DISPATCH(); } @@ -42,7 +42,7 @@ assert(value != NULL); Py_INCREF(value); STACK_GROW(1); - POKE(1, value); + stack_pointer[-1] = value; DISPATCH(); } @@ -52,12 +52,12 @@ value = GETITEM(consts, oparg); Py_INCREF(value); STACK_GROW(1); - POKE(1, value); + stack_pointer[-1] = value; DISPATCH(); } TARGET(STORE_FAST) { - PyObject *value = PEEK(1); + PyObject *value = stack_pointer[-1]; SETLOCAL(oparg, value); STACK_SHRINK(1); DISPATCH(); @@ -73,8 +73,7 @@ Py_INCREF(value); _tmp_2 = value; } - NEXTOPARG(); - JUMPBY(1); + oparg = (next_instr++)->op.arg; { PyObject *value; value = GETLOCAL(oparg); @@ -83,8 +82,8 @@ _tmp_1 = value; } STACK_GROW(2); - POKE(1, _tmp_1); - POKE(2, _tmp_2); + stack_pointer[-1] = _tmp_1; + stack_pointer[-2] = _tmp_2; DISPATCH(); } @@ -98,8 +97,7 @@ Py_INCREF(value); _tmp_2 = value; } - NEXTOPARG(); - JUMPBY(1); + oparg = (next_instr++)->op.arg; { PyObject *value; value = GETITEM(consts, oparg); @@ -107,19 +105,18 @@ _tmp_1 = value; } STACK_GROW(2); - POKE(1, _tmp_1); - POKE(2, _tmp_2); + stack_pointer[-1] = _tmp_1; + stack_pointer[-2] = _tmp_2; DISPATCH(); } TARGET(STORE_FAST__LOAD_FAST) { - PyObject *_tmp_1 = PEEK(1); + PyObject *_tmp_1 = stack_pointer[-1]; { PyObject *value = _tmp_1; 
SETLOCAL(oparg, value); } - NEXTOPARG(); - JUMPBY(1); + oparg = (next_instr++)->op.arg; { PyObject *value; value = GETLOCAL(oparg); @@ -127,19 +124,18 @@ Py_INCREF(value); _tmp_1 = value; } - POKE(1, _tmp_1); + stack_pointer[-1] = _tmp_1; DISPATCH(); } TARGET(STORE_FAST__STORE_FAST) { - PyObject *_tmp_1 = PEEK(1); - PyObject *_tmp_2 = PEEK(2); + PyObject *_tmp_1 = stack_pointer[-1]; + PyObject *_tmp_2 = stack_pointer[-2]; { PyObject *value = _tmp_1; SETLOCAL(oparg, value); } - NEXTOPARG(); - JUMPBY(1); + oparg = (next_instr++)->op.arg; { PyObject *value = _tmp_2; SETLOCAL(oparg, value); @@ -157,8 +153,7 @@ Py_INCREF(value); _tmp_2 = value; } - NEXTOPARG(); - JUMPBY(1); + oparg = (next_instr++)->op.arg; { PyObject *value; value = GETLOCAL(oparg); @@ -167,13 +162,13 @@ _tmp_1 = value; } STACK_GROW(2); - POKE(1, _tmp_1); - POKE(2, _tmp_2); + stack_pointer[-1] = _tmp_1; + stack_pointer[-2] = _tmp_2; DISPATCH(); } TARGET(POP_TOP) { - PyObject *value = PEEK(1); + PyObject *value = stack_pointer[-1]; Py_DECREF(value); STACK_SHRINK(1); DISPATCH(); @@ -183,13 +178,13 @@ PyObject *res; res = NULL; STACK_GROW(1); - POKE(1, res); + stack_pointer[-1] = res; DISPATCH(); } TARGET(END_FOR) { - PyObject *_tmp_1 = PEEK(1); - PyObject *_tmp_2 = PEEK(2); + PyObject *_tmp_1 = stack_pointer[-1]; + PyObject *_tmp_2 = stack_pointer[-2]; { PyObject *value = _tmp_1; Py_DECREF(value); @@ -203,17 +198,17 @@ } TARGET(UNARY_NEGATIVE) { - PyObject *value = PEEK(1); + PyObject *value = stack_pointer[-1]; PyObject *res; res = PyNumber_Negative(value); Py_DECREF(value); if (res == NULL) goto pop_1_error; - POKE(1, res); + stack_pointer[-1] = res; DISPATCH(); } TARGET(UNARY_NOT) { - PyObject *value = PEEK(1); + PyObject *value = stack_pointer[-1]; PyObject *res; int err = PyObject_IsTrue(value); Py_DECREF(value); @@ -225,23 +220,23 @@ res = Py_False; } Py_INCREF(res); - POKE(1, res); + stack_pointer[-1] = res; DISPATCH(); } TARGET(UNARY_INVERT) { - PyObject *value = PEEK(1); + PyObject *value = stack_pointer[-1]; PyObject *res; res = PyNumber_Invert(value); Py_DECREF(value); if (res == NULL) goto pop_1_error; - POKE(1, res); + stack_pointer[-1] = res; DISPATCH(); } TARGET(BINARY_OP_MULTIPLY_INT) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; PyObject *prod; assert(cframe.use_tracing == 0); DEOPT_IF(!PyLong_CheckExact(left), BINARY_OP); @@ -252,14 +247,14 @@ _Py_DECREF_SPECIALIZED(left, (destructor)PyObject_Free); if (prod == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, prod); - JUMPBY(1); + stack_pointer[-1] = prod; + next_instr += 1; DISPATCH(); } TARGET(BINARY_OP_MULTIPLY_FLOAT) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; PyObject *prod; assert(cframe.use_tracing == 0); DEOPT_IF(!PyFloat_CheckExact(left), BINARY_OP); @@ -272,14 +267,14 @@ _Py_DECREF_SPECIALIZED(left, _PyFloat_ExactDealloc); if (prod == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, prod); - JUMPBY(1); + stack_pointer[-1] = prod; + next_instr += 1; DISPATCH(); } TARGET(BINARY_OP_SUBTRACT_INT) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; PyObject *sub; assert(cframe.use_tracing == 0); DEOPT_IF(!PyLong_CheckExact(left), BINARY_OP); @@ -290,14 +285,14 @@ _Py_DECREF_SPECIALIZED(left, (destructor)PyObject_Free); if (sub == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, sub); - 
JUMPBY(1); + stack_pointer[-1] = sub; + next_instr += 1; DISPATCH(); } TARGET(BINARY_OP_SUBTRACT_FLOAT) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; PyObject *sub; assert(cframe.use_tracing == 0); DEOPT_IF(!PyFloat_CheckExact(left), BINARY_OP); @@ -309,14 +304,14 @@ _Py_DECREF_SPECIALIZED(left, _PyFloat_ExactDealloc); if (sub == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, sub); - JUMPBY(1); + stack_pointer[-1] = sub; + next_instr += 1; DISPATCH(); } TARGET(BINARY_OP_ADD_UNICODE) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; PyObject *res; assert(cframe.use_tracing == 0); DEOPT_IF(!PyUnicode_CheckExact(left), BINARY_OP); @@ -327,14 +322,14 @@ _Py_DECREF_SPECIALIZED(right, _PyUnicode_ExactDealloc); if (res == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, res); - JUMPBY(1); + stack_pointer[-1] = res; + next_instr += 1; DISPATCH(); } TARGET(BINARY_OP_INPLACE_ADD_UNICODE) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; assert(cframe.use_tracing == 0); DEOPT_IF(!PyUnicode_CheckExact(left), BINARY_OP); DEOPT_IF(Py_TYPE(right) != Py_TYPE(left), BINARY_OP); @@ -367,8 +362,8 @@ } TARGET(BINARY_OP_ADD_FLOAT) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; PyObject *sum; assert(cframe.use_tracing == 0); DEOPT_IF(!PyFloat_CheckExact(left), BINARY_OP); @@ -381,14 +376,14 @@ _Py_DECREF_SPECIALIZED(left, _PyFloat_ExactDealloc); if (sum == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, sum); - JUMPBY(1); + stack_pointer[-1] = sum; + next_instr += 1; DISPATCH(); } TARGET(BINARY_OP_ADD_INT) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; PyObject *sum; assert(cframe.use_tracing == 0); DEOPT_IF(!PyLong_CheckExact(left), BINARY_OP); @@ -399,16 +394,16 @@ _Py_DECREF_SPECIALIZED(left, (destructor)PyObject_Free); if (sum == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, sum); - JUMPBY(1); + stack_pointer[-1] = sum; + next_instr += 1; DISPATCH(); } TARGET(BINARY_SUBSCR) { PREDICTED(BINARY_SUBSCR); static_assert(INLINE_CACHE_ENTRIES_BINARY_SUBSCR == 4, "incorrect cache size"); - PyObject *sub = PEEK(1); - PyObject *container = PEEK(2); + PyObject *sub = stack_pointer[-1]; + PyObject *container = stack_pointer[-2]; PyObject *res; #if ENABLE_SPECIALIZATION _PyBinarySubscrCache *cache = (_PyBinarySubscrCache *)next_instr; @@ -426,15 +421,15 @@ Py_DECREF(sub); if (res == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; DISPATCH(); } TARGET(BINARY_SLICE) { - PyObject *stop = PEEK(1); - PyObject *start = PEEK(2); - PyObject *container = PEEK(3); + PyObject *stop = stack_pointer[-1]; + PyObject *start = stack_pointer[-2]; + PyObject *container = stack_pointer[-3]; PyObject *res; PyObject *slice = _PyBuildSlice_ConsumeRefs(start, stop); // Can't use ERROR_IF() here, because we haven't @@ -449,15 +444,15 @@ Py_DECREF(container); if (res == NULL) goto pop_3_error; STACK_SHRINK(2); - POKE(1, res); + stack_pointer[-1] = res; DISPATCH(); } TARGET(STORE_SLICE) { - PyObject *stop = PEEK(1); - PyObject *start = PEEK(2); - PyObject *container = PEEK(3); - PyObject *v = PEEK(4); + PyObject *stop = 
stack_pointer[-1]; + PyObject *start = stack_pointer[-2]; + PyObject *container = stack_pointer[-3]; + PyObject *v = stack_pointer[-4]; PyObject *slice = _PyBuildSlice_ConsumeRefs(start, stop); int err; if (slice == NULL) { @@ -475,8 +470,8 @@ } TARGET(BINARY_SUBSCR_LIST_INT) { - PyObject *sub = PEEK(1); - PyObject *list = PEEK(2); + PyObject *sub = stack_pointer[-1]; + PyObject *list = stack_pointer[-2]; PyObject *res; assert(cframe.use_tracing == 0); DEOPT_IF(!PyLong_CheckExact(sub), BINARY_SUBSCR); @@ -494,14 +489,14 @@ _Py_DECREF_SPECIALIZED(sub, (destructor)PyObject_Free); Py_DECREF(list); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; DISPATCH(); } TARGET(BINARY_SUBSCR_TUPLE_INT) { - PyObject *sub = PEEK(1); - PyObject *tuple = PEEK(2); + PyObject *sub = stack_pointer[-1]; + PyObject *tuple = stack_pointer[-2]; PyObject *res; assert(cframe.use_tracing == 0); DEOPT_IF(!PyLong_CheckExact(sub), BINARY_SUBSCR); @@ -519,14 +514,14 @@ _Py_DECREF_SPECIALIZED(sub, (destructor)PyObject_Free); Py_DECREF(tuple); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; DISPATCH(); } TARGET(BINARY_SUBSCR_DICT) { - PyObject *sub = PEEK(1); - PyObject *dict = PEEK(2); + PyObject *sub = stack_pointer[-1]; + PyObject *dict = stack_pointer[-2]; PyObject *res; assert(cframe.use_tracing == 0); DEOPT_IF(!PyDict_CheckExact(dict), BINARY_SUBSCR); @@ -544,14 +539,14 @@ Py_DECREF(dict); Py_DECREF(sub); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; DISPATCH(); } TARGET(BINARY_SUBSCR_GETITEM) { - PyObject *sub = PEEK(1); - PyObject *container = PEEK(2); + PyObject *sub = stack_pointer[-1]; + PyObject *container = stack_pointer[-2]; uint32_t type_version = read_u32(&next_instr[1].cache); uint16_t func_version = read_u16(&next_instr[3].cache); PyTypeObject *tp = Py_TYPE(container); @@ -575,8 +570,8 @@ } TARGET(LIST_APPEND) { - PyObject *v = PEEK(1); - PyObject *list = PEEK(2 + (oparg-1)); + PyObject *v = stack_pointer[-1]; + PyObject *list = stack_pointer[-(2 + (oparg-1))]; if (_PyList_AppendTakeRef((PyListObject *)list, v) < 0) goto pop_1_error; STACK_SHRINK(1); PREDICT(JUMP_BACKWARD); @@ -584,8 +579,8 @@ } TARGET(SET_ADD) { - PyObject *v = PEEK(1); - PyObject *set = PEEK(2 + (oparg-1)); + PyObject *v = stack_pointer[-1]; + PyObject *set = stack_pointer[-(2 + (oparg-1))]; int err = PySet_Add(set, v); Py_DECREF(v); if (err) goto pop_1_error; @@ -597,9 +592,9 @@ TARGET(STORE_SUBSCR) { PREDICTED(STORE_SUBSCR); static_assert(INLINE_CACHE_ENTRIES_STORE_SUBSCR == 1, "incorrect cache size"); - PyObject *sub = PEEK(1); - PyObject *container = PEEK(2); - PyObject *v = PEEK(3); + PyObject *sub = stack_pointer[-1]; + PyObject *container = stack_pointer[-2]; + PyObject *v = stack_pointer[-3]; uint16_t counter = read_u16(&next_instr[0].cache); #if ENABLE_SPECIALIZATION if (ADAPTIVE_COUNTER_IS_ZERO(counter)) { @@ -621,14 +616,14 @@ Py_DECREF(sub); if (err) goto pop_3_error; STACK_SHRINK(3); - JUMPBY(1); + next_instr += 1; DISPATCH(); } TARGET(STORE_SUBSCR_LIST_INT) { - PyObject *sub = PEEK(1); - PyObject *list = PEEK(2); - PyObject *value = PEEK(3); + PyObject *sub = stack_pointer[-1]; + PyObject *list = stack_pointer[-2]; + PyObject *value = stack_pointer[-3]; assert(cframe.use_tracing == 0); DEOPT_IF(!PyLong_CheckExact(sub), STORE_SUBSCR); DEOPT_IF(!PyList_CheckExact(list), STORE_SUBSCR); @@ -647,14 +642,14 @@ _Py_DECREF_SPECIALIZED(sub, (destructor)PyObject_Free); Py_DECREF(list); STACK_SHRINK(3); 
- JUMPBY(1); + next_instr += 1; DISPATCH(); } TARGET(STORE_SUBSCR_DICT) { - PyObject *sub = PEEK(1); - PyObject *dict = PEEK(2); - PyObject *value = PEEK(3); + PyObject *sub = stack_pointer[-1]; + PyObject *dict = stack_pointer[-2]; + PyObject *value = stack_pointer[-3]; assert(cframe.use_tracing == 0); DEOPT_IF(!PyDict_CheckExact(dict), STORE_SUBSCR); STAT_INC(STORE_SUBSCR, hit); @@ -662,13 +657,13 @@ Py_DECREF(dict); if (err) goto pop_3_error; STACK_SHRINK(3); - JUMPBY(1); + next_instr += 1; DISPATCH(); } TARGET(DELETE_SUBSCR) { - PyObject *sub = PEEK(1); - PyObject *container = PEEK(2); + PyObject *sub = stack_pointer[-1]; + PyObject *container = stack_pointer[-2]; /* del container[sub] */ int err = PyObject_DelItem(container, sub); Py_DECREF(container); @@ -679,19 +674,19 @@ } TARGET(CALL_INTRINSIC_1) { - PyObject *value = PEEK(1); + PyObject *value = stack_pointer[-1]; PyObject *res; assert(oparg <= MAX_INTRINSIC_1); res = _PyIntrinsics_UnaryFunctions[oparg](tstate, value); Py_DECREF(value); if (res == NULL) goto pop_1_error; - POKE(1, res); + stack_pointer[-1] = res; DISPATCH(); } TARGET(CALL_INTRINSIC_2) { - PyObject *value1 = PEEK(1); - PyObject *value2 = PEEK(2); + PyObject *value1 = stack_pointer[-1]; + PyObject *value2 = stack_pointer[-2]; PyObject *res; assert(oparg <= MAX_INTRINSIC_2); res = _PyIntrinsics_BinaryFunctions[oparg](tstate, value2, value1); @@ -699,12 +694,12 @@ Py_DECREF(value1); if (res == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, res); + stack_pointer[-1] = res; DISPATCH(); } TARGET(RAISE_VARARGS) { - PyObject **args = &PEEK(oparg); + PyObject **args = (stack_pointer - oparg); PyObject *cause = NULL, *exc = NULL; switch (oparg) { case 2: @@ -725,7 +720,7 @@ } TARGET(INTERPRETER_EXIT) { - PyObject *retval = PEEK(1); + PyObject *retval = stack_pointer[-1]; assert(frame == &entry_frame); assert(_PyFrame_IsIncomplete(frame)); STACK_SHRINK(1); // Since we're not going to DISPATCH() @@ -740,7 +735,7 @@ } TARGET(RETURN_VALUE) { - PyObject *retval = PEEK(1); + PyObject *retval = stack_pointer[-1]; STACK_SHRINK(1); assert(EMPTY()); _PyFrame_SetStackPointer(frame, stack_pointer); @@ -774,7 +769,7 @@ } TARGET(GET_AITER) { - PyObject *obj = PEEK(1); + PyObject *obj = stack_pointer[-1]; PyObject *iter; unaryfunc getter = NULL; PyTypeObject *type = Py_TYPE(obj); @@ -806,12 +801,12 @@ Py_DECREF(iter); if (true) goto pop_1_error; } - POKE(1, iter); + stack_pointer[-1] = iter; DISPATCH(); } TARGET(GET_ANEXT) { - PyObject *aiter = PEEK(1); + PyObject *aiter = stack_pointer[-1]; PyObject *awaitable; unaryfunc getter = NULL; PyObject *next_iter = NULL; @@ -857,14 +852,14 @@ } STACK_GROW(1); - POKE(1, awaitable); + stack_pointer[-1] = awaitable; PREDICT(LOAD_CONST); DISPATCH(); } TARGET(GET_AWAITABLE) { PREDICTED(GET_AWAITABLE); - PyObject *iterable = PEEK(1); + PyObject *iterable = stack_pointer[-1]; PyObject *iter; iter = _PyCoro_GetAwaitableIter(iterable); @@ -890,15 +885,15 @@ if (iter == NULL) goto pop_1_error; - POKE(1, iter); + stack_pointer[-1] = iter; PREDICT(LOAD_CONST); DISPATCH(); } TARGET(SEND) { PREDICTED(SEND); - PyObject *v = PEEK(1); - PyObject *receiver = PEEK(2); + PyObject *v = stack_pointer[-1]; + PyObject *receiver = stack_pointer[-2]; PyObject *retval; #if ENABLE_SPECIALIZATION _PySendCache *cache = (_PySendCache *)next_instr; @@ -935,14 +930,14 @@ assert(retval != NULL); } Py_DECREF(v); - POKE(1, retval); - JUMPBY(1); + stack_pointer[-1] = retval; + next_instr += 1; DISPATCH(); } TARGET(SEND_GEN) { - PyObject *v = PEEK(1); - PyObject *receiver = 
PEEK(2); + PyObject *v = stack_pointer[-1]; + PyObject *receiver = stack_pointer[-2]; assert(cframe.use_tracing == 0); PyGenObject *gen = (PyGenObject *)receiver; DEOPT_IF(Py_TYPE(gen) != &PyGen_Type && @@ -961,7 +956,7 @@ } TARGET(YIELD_VALUE) { - PyObject *retval = PEEK(1); + PyObject *retval = stack_pointer[-1]; // NOTE: It's important that YIELD_VALUE never raises an exception! // The compiler treats any exception raised here as a failed close() // or throw() call. @@ -983,7 +978,7 @@ } TARGET(POP_EXCEPT) { - PyObject *exc_value = PEEK(1); + PyObject *exc_value = stack_pointer[-1]; _PyErr_StackItem *exc_info = tstate->exc_info; Py_XSETREF(exc_info->exc_value, exc_value); STACK_SHRINK(1); @@ -991,8 +986,8 @@ } TARGET(RERAISE) { - PyObject *exc = PEEK(1); - PyObject **values = &PEEK(1 + oparg); + PyObject *exc = stack_pointer[-1]; + PyObject **values = (stack_pointer - (1 + oparg)); assert(oparg >= 0 && oparg <= 2); if (oparg) { PyObject *lasti = values[0]; @@ -1015,8 +1010,8 @@ } TARGET(END_ASYNC_FOR) { - PyObject *exc = PEEK(1); - PyObject *awaitable = PEEK(2); + PyObject *exc = stack_pointer[-1]; + PyObject *awaitable = stack_pointer[-2]; assert(exc && PyExceptionInstance_Check(exc)); if (PyErr_GivenExceptionMatches(exc, PyExc_StopAsyncIteration)) { Py_DECREF(awaitable); @@ -1034,9 +1029,9 @@ } TARGET(CLEANUP_THROW) { - PyObject *exc_value = PEEK(1); - PyObject *last_sent_val = PEEK(2); - PyObject *sub_iter = PEEK(3); + PyObject *exc_value = stack_pointer[-1]; + PyObject *last_sent_val = stack_pointer[-2]; + PyObject *sub_iter = stack_pointer[-3]; PyObject *none; PyObject *value; assert(throwflag); @@ -1053,8 +1048,8 @@ goto exception_unwind; } STACK_SHRINK(1); - POKE(1, value); - POKE(2, none); + stack_pointer[-1] = value; + stack_pointer[-2] = none; DISPATCH(); } @@ -1062,7 +1057,7 @@ PyObject *value; value = Py_NewRef(PyExc_AssertionError); STACK_GROW(1); - POKE(1, value); + stack_pointer[-1] = value; DISPATCH(); } @@ -1090,12 +1085,12 @@ } } STACK_GROW(1); - POKE(1, bc); + stack_pointer[-1] = bc; DISPATCH(); } TARGET(STORE_NAME) { - PyObject *v = PEEK(1); + PyObject *v = stack_pointer[-1]; PyObject *name = GETITEM(names, oparg); PyObject *ns = LOCALS(); int err; @@ -1138,7 +1133,7 @@ TARGET(UNPACK_SEQUENCE) { PREDICTED(UNPACK_SEQUENCE); static_assert(INLINE_CACHE_ENTRIES_UNPACK_SEQUENCE == 1, "incorrect cache size"); - PyObject *seq = PEEK(1); + PyObject *seq = stack_pointer[-1]; #if ENABLE_SPECIALIZATION _PyUnpackSequenceCache *cache = (_PyUnpackSequenceCache *)next_instr; if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { @@ -1156,12 +1151,12 @@ if (res == 0) goto pop_1_error; STACK_SHRINK(1); STACK_GROW(oparg); - JUMPBY(1); + next_instr += 1; DISPATCH(); } TARGET(UNPACK_SEQUENCE_TWO_TUPLE) { - PyObject *seq = PEEK(1); + PyObject *seq = stack_pointer[-1]; PyObject **values = stack_pointer - (1); DEOPT_IF(!PyTuple_CheckExact(seq), UNPACK_SEQUENCE); DEOPT_IF(PyTuple_GET_SIZE(seq) != 2, UNPACK_SEQUENCE); @@ -1172,12 +1167,12 @@ Py_DECREF(seq); STACK_SHRINK(1); STACK_GROW(oparg); - JUMPBY(1); + next_instr += 1; DISPATCH(); } TARGET(UNPACK_SEQUENCE_TUPLE) { - PyObject *seq = PEEK(1); + PyObject *seq = stack_pointer[-1]; PyObject **values = stack_pointer - (1); DEOPT_IF(!PyTuple_CheckExact(seq), UNPACK_SEQUENCE); DEOPT_IF(PyTuple_GET_SIZE(seq) != oparg, UNPACK_SEQUENCE); @@ -1189,12 +1184,12 @@ Py_DECREF(seq); STACK_SHRINK(1); STACK_GROW(oparg); - JUMPBY(1); + next_instr += 1; DISPATCH(); } TARGET(UNPACK_SEQUENCE_LIST) { - PyObject *seq = PEEK(1); + PyObject *seq = stack_pointer[-1]; 
PyObject **values = stack_pointer - (1); DEOPT_IF(!PyList_CheckExact(seq), UNPACK_SEQUENCE); DEOPT_IF(PyList_GET_SIZE(seq) != oparg, UNPACK_SEQUENCE); @@ -1206,12 +1201,12 @@ Py_DECREF(seq); STACK_SHRINK(1); STACK_GROW(oparg); - JUMPBY(1); + next_instr += 1; DISPATCH(); } TARGET(UNPACK_EX) { - PyObject *seq = PEEK(1); + PyObject *seq = stack_pointer[-1]; int totalargs = 1 + (oparg & 0xFF) + (oparg >> 8); PyObject **top = stack_pointer + totalargs - 1; int res = unpack_iterable(tstate, seq, oparg & 0xFF, oparg >> 8, top); @@ -1224,8 +1219,8 @@ TARGET(STORE_ATTR) { PREDICTED(STORE_ATTR); static_assert(INLINE_CACHE_ENTRIES_STORE_ATTR == 4, "incorrect cache size"); - PyObject *owner = PEEK(1); - PyObject *v = PEEK(2); + PyObject *owner = stack_pointer[-1]; + PyObject *v = stack_pointer[-2]; uint16_t counter = read_u16(&next_instr[0].cache); #if ENABLE_SPECIALIZATION if (ADAPTIVE_COUNTER_IS_ZERO(counter)) { @@ -1247,12 +1242,12 @@ Py_DECREF(owner); if (err) goto pop_2_error; STACK_SHRINK(2); - JUMPBY(4); + next_instr += 4; DISPATCH(); } TARGET(DELETE_ATTR) { - PyObject *owner = PEEK(1); + PyObject *owner = stack_pointer[-1]; PyObject *name = GETITEM(names, oparg); int err = PyObject_SetAttr(owner, name, (PyObject *)NULL); Py_DECREF(owner); @@ -1262,7 +1257,7 @@ } TARGET(STORE_GLOBAL) { - PyObject *v = PEEK(1); + PyObject *v = stack_pointer[-1]; PyObject *name = GETITEM(names, oparg); int err = PyDict_SetItem(GLOBALS(), name, v); Py_DECREF(v); @@ -1347,7 +1342,7 @@ } } STACK_GROW(1); - POKE(1, v); + stack_pointer[-1] = v; DISPATCH(); } @@ -1410,9 +1405,9 @@ null = NULL; STACK_GROW(1); STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, v); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), null); } - JUMPBY(5); + stack_pointer[-1] = v; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 1 : 0))] = null; } + next_instr += 5; DISPATCH(); } @@ -1434,9 +1429,9 @@ null = NULL; STACK_GROW(1); STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, res); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), null); } - JUMPBY(5); + stack_pointer[-1] = res; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 1 : 0))] = null; } + next_instr += 5; DISPATCH(); } @@ -1462,9 +1457,9 @@ null = NULL; STACK_GROW(1); STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, res); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), null); } - JUMPBY(5); + stack_pointer[-1] = res; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 
1 : 0))] = null; } + next_instr += 5; DISPATCH(); } @@ -1535,7 +1530,7 @@ Py_INCREF(value); } STACK_GROW(1); - POKE(1, value); + stack_pointer[-1] = value; DISPATCH(); } @@ -1549,12 +1544,12 @@ } Py_INCREF(value); STACK_GROW(1); - POKE(1, value); + stack_pointer[-1] = value; DISPATCH(); } TARGET(STORE_DEREF) { - PyObject *v = PEEK(1); + PyObject *v = stack_pointer[-1]; PyObject *cell = GETLOCAL(oparg); PyObject *oldobj = PyCell_GET(cell); PyCell_SET(cell, v); @@ -1578,7 +1573,7 @@ } TARGET(BUILD_STRING) { - PyObject **pieces = &PEEK(oparg); + PyObject **pieces = (stack_pointer - oparg); PyObject *str; str = _PyUnicode_JoinArray(&_Py_STR(empty), pieces, oparg); for (int i = 0; i < oparg; i++) { @@ -1587,35 +1582,35 @@ if (str == NULL) { STACK_SHRINK(oparg); goto error; } STACK_SHRINK(oparg); STACK_GROW(1); - POKE(1, str); + stack_pointer[-1] = str; DISPATCH(); } TARGET(BUILD_TUPLE) { - PyObject **values = &PEEK(oparg); + PyObject **values = (stack_pointer - oparg); PyObject *tup; tup = _PyTuple_FromArraySteal(values, oparg); if (tup == NULL) { STACK_SHRINK(oparg); goto error; } STACK_SHRINK(oparg); STACK_GROW(1); - POKE(1, tup); + stack_pointer[-1] = tup; DISPATCH(); } TARGET(BUILD_LIST) { - PyObject **values = &PEEK(oparg); + PyObject **values = (stack_pointer - oparg); PyObject *list; list = _PyList_FromArraySteal(values, oparg); if (list == NULL) { STACK_SHRINK(oparg); goto error; } STACK_SHRINK(oparg); STACK_GROW(1); - POKE(1, list); + stack_pointer[-1] = list; DISPATCH(); } TARGET(LIST_EXTEND) { - PyObject *iterable = PEEK(1); - PyObject *list = PEEK(2 + (oparg-1)); + PyObject *iterable = stack_pointer[-1]; + PyObject *list = stack_pointer[-(2 + (oparg-1))]; PyObject *none_val = _PyList_Extend((PyListObject *)list, iterable); if (none_val == NULL) { if (_PyErr_ExceptionMatches(tstate, PyExc_TypeError) && @@ -1636,8 +1631,8 @@ } TARGET(SET_UPDATE) { - PyObject *iterable = PEEK(1); - PyObject *set = PEEK(2 + (oparg-1)); + PyObject *iterable = stack_pointer[-1]; + PyObject *set = stack_pointer[-(2 + (oparg-1))]; int err = _PySet_Update(set, iterable); Py_DECREF(iterable); if (err < 0) goto pop_1_error; @@ -1646,7 +1641,7 @@ } TARGET(BUILD_SET) { - PyObject **values = &PEEK(oparg); + PyObject **values = (stack_pointer - oparg); PyObject *set; set = PySet_New(NULL); if (set == NULL) @@ -1664,12 +1659,12 @@ } STACK_SHRINK(oparg); STACK_GROW(1); - POKE(1, set); + stack_pointer[-1] = set; DISPATCH(); } TARGET(BUILD_MAP) { - PyObject **values = &PEEK(oparg*2); + PyObject **values = (stack_pointer - oparg*2); PyObject *map; map = _PyDict_FromItems( values, 2, @@ -1685,7 +1680,7 @@ if (map == NULL) { STACK_SHRINK(oparg*2); goto error; } STACK_SHRINK(oparg*2); STACK_GROW(1); - POKE(1, map); + stack_pointer[-1] = map; DISPATCH(); } @@ -1733,8 +1728,8 @@ } TARGET(BUILD_CONST_KEY_MAP) { - PyObject *keys = PEEK(1); - PyObject **values = &PEEK(1 + oparg); + PyObject *keys = stack_pointer[-1]; + PyObject **values = (stack_pointer - (1 + oparg)); PyObject *map; if (!PyTuple_CheckExact(keys) || PyTuple_GET_SIZE(keys) != (Py_ssize_t)oparg) { @@ -1751,12 +1746,12 @@ } if (map == NULL) { STACK_SHRINK(oparg); goto pop_1_error; } STACK_SHRINK(oparg); - POKE(1, map); + stack_pointer[-1] = map; DISPATCH(); } TARGET(DICT_UPDATE) { - PyObject *update = PEEK(1); + PyObject *update = stack_pointer[-1]; PyObject *dict = PEEK(oparg + 1); // update is still on the stack if (PyDict_Update(dict, update) < 0) { if (_PyErr_ExceptionMatches(tstate, PyExc_AttributeError)) { @@ -1773,7 +1768,7 @@ } TARGET(DICT_MERGE) { - 
PyObject *update = PEEK(1); + PyObject *update = stack_pointer[-1]; PyObject *dict = PEEK(oparg + 1); // update is still on the stack if (_PyDict_MergeEx(dict, update, 2) < 0) { @@ -1788,8 +1783,8 @@ } TARGET(MAP_ADD) { - PyObject *value = PEEK(1); - PyObject *key = PEEK(2); + PyObject *value = stack_pointer[-1]; + PyObject *key = stack_pointer[-2]; PyObject *dict = PEEK(oparg + 2); // key, value are still on the stack assert(PyDict_CheckExact(dict)); /* dict[key] = value */ @@ -1803,7 +1798,7 @@ TARGET(LOAD_ATTR) { PREDICTED(LOAD_ATTR); static_assert(INLINE_CACHE_ENTRIES_LOAD_ATTR == 9, "incorrect cache size"); - PyObject *owner = PEEK(1); + PyObject *owner = stack_pointer[-1]; PyObject *res2 = NULL; PyObject *res; #if ENABLE_SPECIALIZATION @@ -1853,14 +1848,14 @@ if (res == NULL) goto pop_1_error; } STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, res); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } - JUMPBY(9); + stack_pointer[-1] = res; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 1 : 0))] = res2; } + next_instr += 9; DISPATCH(); } TARGET(LOAD_ATTR_INSTANCE_VALUE) { - PyObject *owner = PEEK(1); + PyObject *owner = stack_pointer[-1]; PyObject *res2 = NULL; PyObject *res; uint32_t type_version = read_u32(&next_instr[1].cache); @@ -1880,14 +1875,14 @@ res2 = NULL; Py_DECREF(owner); STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, res); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } - JUMPBY(9); + stack_pointer[-1] = res; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 1 : 0))] = res2; } + next_instr += 9; DISPATCH(); } TARGET(LOAD_ATTR_MODULE) { - PyObject *owner = PEEK(1); + PyObject *owner = stack_pointer[-1]; PyObject *res2 = NULL; PyObject *res; uint32_t type_version = read_u32(&next_instr[1].cache); @@ -1907,14 +1902,14 @@ res2 = NULL; Py_DECREF(owner); STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, res); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } - JUMPBY(9); + stack_pointer[-1] = res; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 1 : 0))] = res2; } + next_instr += 9; DISPATCH(); } TARGET(LOAD_ATTR_WITH_HINT) { - PyObject *owner = PEEK(1); + PyObject *owner = stack_pointer[-1]; PyObject *res2 = NULL; PyObject *res; uint32_t type_version = read_u32(&next_instr[1].cache); @@ -1948,14 +1943,14 @@ res2 = NULL; Py_DECREF(owner); STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, res); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } - JUMPBY(9); + stack_pointer[-1] = res; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 1 : 0))] = res2; } + next_instr += 9; DISPATCH(); } TARGET(LOAD_ATTR_SLOT) { - PyObject *owner = PEEK(1); + PyObject *owner = stack_pointer[-1]; PyObject *res2 = NULL; PyObject *res; uint32_t type_version = read_u32(&next_instr[1].cache); @@ -1972,14 +1967,14 @@ res2 = NULL; Py_DECREF(owner); STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, res); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } - JUMPBY(9); + stack_pointer[-1] = res; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 1 : 0))] = res2; } + next_instr += 9; DISPATCH(); } TARGET(LOAD_ATTR_CLASS) { - PyObject *cls = PEEK(1); + PyObject *cls = stack_pointer[-1]; PyObject *res2 = NULL; PyObject *res; uint32_t type_version = read_u32(&next_instr[1].cache); @@ -1998,14 +1993,14 @@ Py_INCREF(res); Py_DECREF(cls); STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, res); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } - JUMPBY(9); + stack_pointer[-1] = res; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 
1 : 0))] = res2; } + next_instr += 9; DISPATCH(); } TARGET(LOAD_ATTR_PROPERTY) { - PyObject *owner = PEEK(1); + PyObject *owner = stack_pointer[-1]; uint32_t type_version = read_u32(&next_instr[1].cache); uint32_t func_version = read_u32(&next_instr[3].cache); PyObject *fget = read_obj(&next_instr[5].cache); @@ -2035,7 +2030,7 @@ } TARGET(LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN) { - PyObject *owner = PEEK(1); + PyObject *owner = stack_pointer[-1]; uint32_t type_version = read_u32(&next_instr[1].cache); uint32_t func_version = read_u32(&next_instr[3].cache); PyObject *getattribute = read_obj(&next_instr[5].cache); @@ -2067,8 +2062,8 @@ } TARGET(STORE_ATTR_INSTANCE_VALUE) { - PyObject *owner = PEEK(1); - PyObject *value = PEEK(2); + PyObject *owner = stack_pointer[-1]; + PyObject *value = stack_pointer[-2]; uint32_t type_version = read_u32(&next_instr[1].cache); uint16_t index = read_u16(&next_instr[3].cache); assert(cframe.use_tracing == 0); @@ -2090,13 +2085,13 @@ } Py_DECREF(owner); STACK_SHRINK(2); - JUMPBY(4); + next_instr += 4; DISPATCH(); } TARGET(STORE_ATTR_WITH_HINT) { - PyObject *owner = PEEK(1); - PyObject *value = PEEK(2); + PyObject *owner = stack_pointer[-1]; + PyObject *value = stack_pointer[-2]; uint32_t type_version = read_u32(&next_instr[1].cache); uint16_t hint = read_u16(&next_instr[3].cache); assert(cframe.use_tracing == 0); @@ -2139,13 +2134,13 @@ dict->ma_version_tag = new_version; Py_DECREF(owner); STACK_SHRINK(2); - JUMPBY(4); + next_instr += 4; DISPATCH(); } TARGET(STORE_ATTR_SLOT) { - PyObject *owner = PEEK(1); - PyObject *value = PEEK(2); + PyObject *owner = stack_pointer[-1]; + PyObject *value = stack_pointer[-2]; uint32_t type_version = read_u32(&next_instr[1].cache); uint16_t index = read_u16(&next_instr[3].cache); assert(cframe.use_tracing == 0); @@ -2159,13 +2154,13 @@ Py_XDECREF(old_value); Py_DECREF(owner); STACK_SHRINK(2); - JUMPBY(4); + next_instr += 4; DISPATCH(); } TARGET(COMPARE_OP) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; PyObject *res; STAT_INC(COMPARE_OP, deferred); assert((oparg >> 4) <= Py_GE); @@ -2174,15 +2169,15 @@ Py_DECREF(right); if (res == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, res); - JUMPBY(1); + stack_pointer[-1] = res; + next_instr += 1; DISPATCH(); } TARGET(COMPARE_AND_BRANCH) { PREDICTED(COMPARE_AND_BRANCH); - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; #if ENABLE_SPECIALIZATION _PyCompareOpCache *cache = (_PyCompareOpCache *)next_instr; if (ADAPTIVE_COUNTER_IS_ZERO(cache->counter)) { @@ -2210,13 +2205,13 @@ JUMPBY(offset); } STACK_SHRINK(2); - JUMPBY(2); + next_instr += 2; DISPATCH(); } TARGET(COMPARE_AND_BRANCH_FLOAT) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; assert(cframe.use_tracing == 0); DEOPT_IF(!PyFloat_CheckExact(left), COMPARE_AND_BRANCH); DEOPT_IF(!PyFloat_CheckExact(right), COMPARE_AND_BRANCH); @@ -2232,13 +2227,13 @@ JUMPBY(offset); } STACK_SHRINK(2); - JUMPBY(2); + next_instr += 2; DISPATCH(); } TARGET(COMPARE_AND_BRANCH_INT) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; assert(cframe.use_tracing == 0); DEOPT_IF(!PyLong_CheckExact(left), COMPARE_AND_BRANCH); DEOPT_IF(!PyLong_CheckExact(right), COMPARE_AND_BRANCH); @@ -2257,13 +2252,13 @@ JUMPBY(offset); } 
STACK_SHRINK(2); - JUMPBY(2); + next_instr += 2; DISPATCH(); } TARGET(COMPARE_AND_BRANCH_STR) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; assert(cframe.use_tracing == 0); DEOPT_IF(!PyUnicode_CheckExact(left), COMPARE_AND_BRANCH); DEOPT_IF(!PyUnicode_CheckExact(right), COMPARE_AND_BRANCH); @@ -2280,26 +2275,26 @@ JUMPBY(offset); } STACK_SHRINK(2); - JUMPBY(2); + next_instr += 2; DISPATCH(); } TARGET(IS_OP) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; PyObject *b; int res = Py_Is(left, right) ^ oparg; Py_DECREF(left); Py_DECREF(right); b = Py_NewRef(res ? Py_True : Py_False); STACK_SHRINK(1); - POKE(1, b); + stack_pointer[-1] = b; DISPATCH(); } TARGET(CONTAINS_OP) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; PyObject *b; int res = PySequence_Contains(right, left); Py_DECREF(left); @@ -2307,13 +2302,13 @@ if (res < 0) goto pop_2_error; b = Py_NewRef((res^oparg) ? Py_True : Py_False); STACK_SHRINK(1); - POKE(1, b); + stack_pointer[-1] = b; DISPATCH(); } TARGET(CHECK_EG_MATCH) { - PyObject *match_type = PEEK(1); - PyObject *exc_value = PEEK(2); + PyObject *match_type = stack_pointer[-1]; + PyObject *exc_value = stack_pointer[-2]; PyObject *rest; PyObject *match; if (check_except_star_type_valid(tstate, match_type) < 0) { @@ -2336,14 +2331,14 @@ if (!Py_IsNone(match)) { PyErr_SetExcInfo(NULL, Py_NewRef(match), NULL); } - POKE(1, match); - POKE(2, rest); + stack_pointer[-1] = match; + stack_pointer[-2] = rest; DISPATCH(); } TARGET(CHECK_EXC_MATCH) { - PyObject *right = PEEK(1); - PyObject *left = PEEK(2); + PyObject *right = stack_pointer[-1]; + PyObject *left = stack_pointer[-2]; PyObject *b; assert(PyExceptionInstance_Check(left)); if (check_except_type_valid(tstate, right) < 0) { @@ -2354,13 +2349,13 @@ int res = PyErr_GivenExceptionMatches(left, right); Py_DECREF(right); b = Py_NewRef(res ? 
Py_True : Py_False); - POKE(1, b); + stack_pointer[-1] = b; DISPATCH(); } TARGET(IMPORT_NAME) { - PyObject *fromlist = PEEK(1); - PyObject *level = PEEK(2); + PyObject *fromlist = stack_pointer[-1]; + PyObject *level = stack_pointer[-2]; PyObject *res; PyObject *name = GETITEM(names, oparg); res = import_name(tstate, frame, name, fromlist, level); @@ -2368,18 +2363,18 @@ Py_DECREF(fromlist); if (res == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, res); + stack_pointer[-1] = res; DISPATCH(); } TARGET(IMPORT_FROM) { - PyObject *from = PEEK(1); + PyObject *from = stack_pointer[-1]; PyObject *res; PyObject *name = GETITEM(names, oparg); res = import_from(tstate, from, name); if (res == NULL) goto error; STACK_GROW(1); - POKE(1, res); + stack_pointer[-1] = res; DISPATCH(); } @@ -2398,7 +2393,7 @@ TARGET(POP_JUMP_IF_FALSE) { PREDICTED(POP_JUMP_IF_FALSE); - PyObject *cond = PEEK(1); + PyObject *cond = stack_pointer[-1]; if (Py_IsTrue(cond)) { _Py_DECREF_NO_DEALLOC(cond); } @@ -2421,7 +2416,7 @@ } TARGET(POP_JUMP_IF_TRUE) { - PyObject *cond = PEEK(1); + PyObject *cond = stack_pointer[-1]; if (Py_IsFalse(cond)) { _Py_DECREF_NO_DEALLOC(cond); } @@ -2444,7 +2439,7 @@ } TARGET(POP_JUMP_IF_NOT_NONE) { - PyObject *value = PEEK(1); + PyObject *value = stack_pointer[-1]; if (!Py_IsNone(value)) { Py_DECREF(value); JUMPBY(oparg); @@ -2457,7 +2452,7 @@ } TARGET(POP_JUMP_IF_NONE) { - PyObject *value = PEEK(1); + PyObject *value = stack_pointer[-1]; if (Py_IsNone(value)) { _Py_DECREF_NO_DEALLOC(value); JUMPBY(oparg); @@ -2470,7 +2465,7 @@ } TARGET(JUMP_IF_FALSE_OR_POP) { - PyObject *cond = PEEK(1); + PyObject *cond = stack_pointer[-1]; bool jump = false; int err; if (Py_IsTrue(cond)) { @@ -2499,7 +2494,7 @@ } TARGET(JUMP_IF_TRUE_OR_POP) { - PyObject *cond = PEEK(1); + PyObject *cond = stack_pointer[-1]; bool jump = false; int err; if (Py_IsFalse(cond)) { @@ -2538,7 +2533,7 @@ } TARGET(GET_LEN) { - PyObject *obj = PEEK(1); + PyObject *obj = stack_pointer[-1]; PyObject *len_o; // PUSH(len(TOS)) Py_ssize_t len_i = PyObject_Length(obj); @@ -2546,14 +2541,14 @@ len_o = PyLong_FromSsize_t(len_i); if (len_o == NULL) goto error; STACK_GROW(1); - POKE(1, len_o); + stack_pointer[-1] = len_o; DISPATCH(); } TARGET(MATCH_CLASS) { - PyObject *names = PEEK(1); - PyObject *type = PEEK(2); - PyObject *subject = PEEK(3); + PyObject *names = stack_pointer[-1]; + PyObject *type = stack_pointer[-2]; + PyObject *subject = stack_pointer[-3]; PyObject *attrs; // Pop TOS and TOS1. Set TOS to a tuple of attributes on success, or // None on failure. @@ -2570,57 +2565,57 @@ attrs = Py_NewRef(Py_None); // Failure! } STACK_SHRINK(2); - POKE(1, attrs); + stack_pointer[-1] = attrs; DISPATCH(); } TARGET(MATCH_MAPPING) { - PyObject *subject = PEEK(1); + PyObject *subject = stack_pointer[-1]; PyObject *res; int match = Py_TYPE(subject)->tp_flags & Py_TPFLAGS_MAPPING; res = Py_NewRef(match ? Py_True : Py_False); STACK_GROW(1); - POKE(1, res); + stack_pointer[-1] = res; PREDICT(POP_JUMP_IF_FALSE); DISPATCH(); } TARGET(MATCH_SEQUENCE) { - PyObject *subject = PEEK(1); + PyObject *subject = stack_pointer[-1]; PyObject *res; int match = Py_TYPE(subject)->tp_flags & Py_TPFLAGS_SEQUENCE; res = Py_NewRef(match ? 
Py_True : Py_False); STACK_GROW(1); - POKE(1, res); + stack_pointer[-1] = res; PREDICT(POP_JUMP_IF_FALSE); DISPATCH(); } TARGET(MATCH_KEYS) { - PyObject *keys = PEEK(1); - PyObject *subject = PEEK(2); + PyObject *keys = stack_pointer[-1]; + PyObject *subject = stack_pointer[-2]; PyObject *values_or_none; // On successful match, PUSH(values). Otherwise, PUSH(None). values_or_none = match_keys(tstate, subject, keys); if (values_or_none == NULL) goto error; STACK_GROW(1); - POKE(1, values_or_none); + stack_pointer[-1] = values_or_none; DISPATCH(); } TARGET(GET_ITER) { - PyObject *iterable = PEEK(1); + PyObject *iterable = stack_pointer[-1]; PyObject *iter; /* before: [obj]; after [getiter(obj)] */ iter = PyObject_GetIter(iterable); Py_DECREF(iterable); if (iter == NULL) goto pop_1_error; - POKE(1, iter); + stack_pointer[-1] = iter; DISPATCH(); } TARGET(GET_YIELD_FROM_ITER) { - PyObject *iterable = PEEK(1); + PyObject *iterable = stack_pointer[-1]; PyObject *iter; /* before: [obj]; after [getiter(obj)] */ if (PyCoro_CheckExact(iterable)) { @@ -2646,7 +2641,7 @@ } Py_DECREF(iterable); } - POKE(1, iter); + stack_pointer[-1] = iter; PREDICT(LOAD_CONST); DISPATCH(); } @@ -2654,7 +2649,7 @@ TARGET(FOR_ITER) { PREDICTED(FOR_ITER); static_assert(INLINE_CACHE_ENTRIES_FOR_ITER == 1, "incorrect cache size"); - PyObject *iter = PEEK(1); + PyObject *iter = stack_pointer[-1]; PyObject *next; #if ENABLE_SPECIALIZATION _PyForIterCache *cache = (_PyForIterCache *)next_instr; @@ -2689,13 +2684,13 @@ } // Common case: no jump, leave it to the code generator STACK_GROW(1); - POKE(1, next); - JUMPBY(1); + stack_pointer[-1] = next; + next_instr += 1; DISPATCH(); } TARGET(FOR_ITER_LIST) { - PyObject *iter = PEEK(1); + PyObject *iter = stack_pointer[-1]; PyObject *next; assert(cframe.use_tracing == 0); DEOPT_IF(Py_TYPE(iter) != &PyListIter_Type, FOR_ITER); @@ -2718,13 +2713,13 @@ end_for_iter_list: // Common case: no jump, leave it to the code generator STACK_GROW(1); - POKE(1, next); - JUMPBY(1); + stack_pointer[-1] = next; + next_instr += 1; DISPATCH(); } TARGET(FOR_ITER_TUPLE) { - PyObject *iter = PEEK(1); + PyObject *iter = stack_pointer[-1]; PyObject *next; assert(cframe.use_tracing == 0); _PyTupleIterObject *it = (_PyTupleIterObject *)iter; @@ -2747,13 +2742,13 @@ end_for_iter_tuple: // Common case: no jump, leave it to the code generator STACK_GROW(1); - POKE(1, next); - JUMPBY(1); + stack_pointer[-1] = next; + next_instr += 1; DISPATCH(); } TARGET(FOR_ITER_RANGE) { - PyObject *iter = PEEK(1); + PyObject *iter = stack_pointer[-1]; PyObject *next; assert(cframe.use_tracing == 0); _PyRangeIterObject *r = (_PyRangeIterObject *)iter; @@ -2774,13 +2769,13 @@ goto error; } STACK_GROW(1); - POKE(1, next); - JUMPBY(1); + stack_pointer[-1] = next; + next_instr += 1; DISPATCH(); } TARGET(FOR_ITER_GEN) { - PyObject *iter = PEEK(1); + PyObject *iter = stack_pointer[-1]; assert(cframe.use_tracing == 0); PyGenObject *gen = (PyGenObject *)iter; DEOPT_IF(Py_TYPE(gen) != &PyGen_Type, FOR_ITER); @@ -2798,7 +2793,7 @@ } TARGET(BEFORE_ASYNC_WITH) { - PyObject *mgr = PEEK(1); + PyObject *mgr = stack_pointer[-1]; PyObject *exit; PyObject *res; PyObject *enter = _PyObject_LookupSpecial(mgr, &_Py_ID(__aenter__)); @@ -2831,14 +2826,14 @@ if (true) goto pop_1_error; } STACK_GROW(1); - POKE(1, res); - POKE(2, exit); + stack_pointer[-1] = res; + stack_pointer[-2] = exit; PREDICT(GET_AWAITABLE); DISPATCH(); } TARGET(BEFORE_WITH) { - PyObject *mgr = PEEK(1); + PyObject *mgr = stack_pointer[-1]; PyObject *exit; PyObject *res; /* pop the 
context manager, push its __exit__ and the @@ -2874,15 +2869,15 @@ if (true) goto pop_1_error; } STACK_GROW(1); - POKE(1, res); - POKE(2, exit); + stack_pointer[-1] = res; + stack_pointer[-2] = exit; DISPATCH(); } TARGET(WITH_EXCEPT_START) { - PyObject *val = PEEK(1); - PyObject *lasti = PEEK(3); - PyObject *exit_func = PEEK(4); + PyObject *val = stack_pointer[-1]; + PyObject *lasti = stack_pointer[-3]; + PyObject *exit_func = stack_pointer[-4]; PyObject *res; /* At the top of the stack are 4 values: - val: TOP = exc_info() @@ -2905,12 +2900,12 @@ 3 | PY_VECTORCALL_ARGUMENTS_OFFSET, NULL); if (res == NULL) goto error; STACK_GROW(1); - POKE(1, res); + stack_pointer[-1] = res; DISPATCH(); } TARGET(PUSH_EXC_INFO) { - PyObject *new_exc = PEEK(1); + PyObject *new_exc = stack_pointer[-1]; PyObject *prev_exc; _PyErr_StackItem *exc_info = tstate->exc_info; if (exc_info->exc_value != NULL) { @@ -2922,13 +2917,13 @@ assert(PyExceptionInstance_Check(new_exc)); exc_info->exc_value = Py_NewRef(new_exc); STACK_GROW(1); - POKE(1, new_exc); - POKE(2, prev_exc); + stack_pointer[-1] = new_exc; + stack_pointer[-2] = prev_exc; DISPATCH(); } TARGET(LOAD_ATTR_METHOD_WITH_VALUES) { - PyObject *self = PEEK(1); + PyObject *self = stack_pointer[-1]; PyObject *res2 = NULL; PyObject *res; uint32_t type_version = read_u32(&next_instr[1].cache); @@ -2952,14 +2947,14 @@ res = self; assert(oparg & 1); STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, res); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } - JUMPBY(9); + stack_pointer[-1] = res; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 1 : 0))] = res2; } + next_instr += 9; DISPATCH(); } TARGET(LOAD_ATTR_METHOD_NO_DICT) { - PyObject *self = PEEK(1); + PyObject *self = stack_pointer[-1]; PyObject *res2 = NULL; PyObject *res; uint32_t type_version = read_u32(&next_instr[1].cache); @@ -2975,14 +2970,14 @@ res = self; assert(oparg & 1); STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, res); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } - JUMPBY(9); + stack_pointer[-1] = res; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 1 : 0))] = res2; } + next_instr += 9; DISPATCH(); } TARGET(LOAD_ATTR_METHOD_LAZY_DICT) { - PyObject *self = PEEK(1); + PyObject *self = stack_pointer[-1]; PyObject *res2 = NULL; PyObject *res; uint32_t type_version = read_u32(&next_instr[1].cache); @@ -3002,9 +2997,9 @@ res = self; assert(oparg & 1); STACK_GROW(((oparg & 1) ? 1 : 0)); - POKE(1, res); - if (oparg & 1) { POKE(1 + ((oparg & 1) ? 1 : 0), res2); } - JUMPBY(9); + stack_pointer[-1] = res; + if (oparg & 1) { stack_pointer[-(1 + ((oparg & 1) ? 
1 : 0))] = res2; } + next_instr += 9; DISPATCH(); } @@ -3018,9 +3013,9 @@ TARGET(CALL) { PREDICTED(CALL); static_assert(INLINE_CACHE_ENTRIES_CALL == 4, "incorrect cache size"); - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *method = stack_pointer[-(2 + oparg)]; PyObject *res; int is_meth = method != NULL; int total_args = oparg; @@ -3095,15 +3090,15 @@ if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_BOUND_METHOD_EXACT_ARGS) { - PyObject *callable = PEEK(1 + oparg); - PyObject *method = PEEK(2 + oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *method = stack_pointer[-(2 + oparg)]; DEOPT_IF(method != NULL, CALL); DEOPT_IF(Py_TYPE(callable) != &PyMethod_Type, CALL); STAT_INC(CALL, hit); @@ -3117,9 +3112,9 @@ TARGET(CALL_PY_EXACT_ARGS) { PREDICTED(CALL_PY_EXACT_ARGS); - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *method = stack_pointer[-(2 + oparg)]; uint32_t func_version = read_u32(&next_instr[1].cache); assert(kwnames == NULL); DEOPT_IF(tstate->interp->eval_frame, CALL); @@ -3148,9 +3143,9 @@ } TARGET(CALL_PY_WITH_DEFAULTS) { - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *method = stack_pointer[-(2 + oparg)]; uint32_t func_version = read_u32(&next_instr[1].cache); uint16_t min_args = read_u16(&next_instr[3].cache); assert(kwnames == NULL); @@ -3185,9 +3180,9 @@ } TARGET(CALL_NO_KW_TYPE_1) { - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *null = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *null = stack_pointer[-(2 + oparg)]; PyObject *res; assert(kwnames == NULL); assert(cframe.use_tracing == 0); @@ -3201,15 +3196,15 @@ Py_DECREF(&PyType_Type); // I.e., callable STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; DISPATCH(); } TARGET(CALL_NO_KW_STR_1) { - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *null = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *null = stack_pointer[-(2 + oparg)]; PyObject *res; assert(kwnames == NULL); assert(cframe.use_tracing == 0); @@ -3224,16 +3219,16 @@ if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_TUPLE_1) { - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *null = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *null = stack_pointer[-(2 + oparg)]; PyObject *res; assert(kwnames == NULL); assert(oparg == 1); @@ -3247,16 +3242,16 @@ if (res == NULL) { STACK_SHRINK(oparg); goto 
pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_BUILTIN_CLASS) { - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *method = stack_pointer[-(2 + oparg)]; PyObject *res; int is_meth = method != NULL; int total_args = oparg; @@ -3281,16 +3276,16 @@ if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_BUILTIN_O) { - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *method = stack_pointer[-(2 + oparg)]; PyObject *res; assert(cframe.use_tracing == 0); /* Builtin METH_O functions */ @@ -3322,16 +3317,16 @@ if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_BUILTIN_FAST) { - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *method = stack_pointer[-(2 + oparg)]; PyObject *res; assert(cframe.use_tracing == 0); /* Builtin METH_FASTCALL functions, without keywords */ @@ -3367,16 +3362,16 @@ */ STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_BUILTIN_FAST_WITH_KEYWORDS) { - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *method = stack_pointer[-(2 + oparg)]; PyObject *res; assert(cframe.use_tracing == 0); /* Builtin METH_FASTCALL | METH_KEYWORDS functions */ @@ -3412,16 +3407,16 @@ if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_LEN) { - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *method = stack_pointer[-(2 + oparg)]; PyObject *res; assert(cframe.use_tracing == 0); assert(kwnames == NULL); @@ -3450,15 +3445,15 @@ if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; DISPATCH(); } TARGET(CALL_NO_KW_ISINSTANCE) { - PyObject **args = &PEEK(oparg); - PyObject *callable = PEEK(1 + oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *callable = stack_pointer[-(1 + oparg)]; + PyObject *method = stack_pointer[-(2 + oparg)]; PyObject *res; assert(cframe.use_tracing == 0); assert(kwnames == NULL); @@ -3489,15 +3484,15 @@ if (res == NULL) { 
STACK_SHRINK(oparg); goto pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; DISPATCH(); } TARGET(CALL_NO_KW_LIST_APPEND) { - PyObject **args = &PEEK(oparg); - PyObject *self = PEEK(1 + oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *self = stack_pointer[-(1 + oparg)]; + PyObject *method = stack_pointer[-(2 + oparg)]; assert(cframe.use_tracing == 0); assert(kwnames == NULL); assert(oparg == 1); @@ -3519,8 +3514,8 @@ } TARGET(CALL_NO_KW_METHOD_DESCRIPTOR_O) { - PyObject **args = &PEEK(oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *method = stack_pointer[-(2 + oparg)]; PyObject *res; assert(kwnames == NULL); int is_meth = method != NULL; @@ -3554,15 +3549,15 @@ if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS) { - PyObject **args = &PEEK(oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *method = stack_pointer[-(2 + oparg)]; PyObject *res; int is_meth = method != NULL; int total_args = oparg; @@ -3594,15 +3589,15 @@ if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS) { - PyObject **args = &PEEK(oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *method = stack_pointer[-(2 + oparg)]; PyObject *res; assert(kwnames == NULL); assert(oparg == 0 || oparg == 1); @@ -3634,15 +3629,15 @@ if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_NO_KW_METHOD_DESCRIPTOR_FAST) { - PyObject **args = &PEEK(oparg); - PyObject *method = PEEK(2 + oparg); + PyObject **args = (stack_pointer - oparg); + PyObject *method = stack_pointer[-(2 + oparg)]; PyObject *res; assert(kwnames == NULL); int is_meth = method != NULL; @@ -3673,17 +3668,17 @@ if (res == NULL) { STACK_SHRINK(oparg); goto pop_2_error; } STACK_SHRINK(oparg); STACK_SHRINK(1); - POKE(1, res); - JUMPBY(4); + stack_pointer[-1] = res; + next_instr += 4; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(CALL_FUNCTION_EX) { PREDICTED(CALL_FUNCTION_EX); - PyObject *kwargs = (oparg & 1) ? PEEK(((oparg & 1) ? 1 : 0)) : NULL; - PyObject *callargs = PEEK(1 + ((oparg & 1) ? 1 : 0)); - PyObject *func = PEEK(2 + ((oparg & 1) ? 1 : 0)); + PyObject *kwargs = (oparg & 1) ? stack_pointer[-(((oparg & 1) ? 1 : 0))] : NULL; + PyObject *callargs = stack_pointer[-(1 + ((oparg & 1) ? 1 : 0))]; + PyObject *func = stack_pointer[-(2 + ((oparg & 1) ? 1 : 0))]; PyObject *result; if (oparg & 1) { // DICT_MERGE is called before this opcode if there are kwargs. @@ -3711,17 +3706,17 @@ if (result == NULL) { STACK_SHRINK(((oparg & 1) ? 1 : 0)); goto pop_3_error; } STACK_SHRINK(((oparg & 1) ? 1 : 0)); STACK_SHRINK(2); - POKE(1, result); + stack_pointer[-1] = result; CHECK_EVAL_BREAKER(); DISPATCH(); } TARGET(MAKE_FUNCTION) { - PyObject *codeobj = PEEK(1); - PyObject *closure = (oparg & 0x08) ? PEEK(1 + ((oparg & 0x08) ? 
1 : 0)) : NULL; - PyObject *annotations = (oparg & 0x04) ? PEEK(1 + ((oparg & 0x08) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0)) : NULL; - PyObject *kwdefaults = (oparg & 0x02) ? PEEK(1 + ((oparg & 0x08) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0) + ((oparg & 0x02) ? 1 : 0)) : NULL; - PyObject *defaults = (oparg & 0x01) ? PEEK(1 + ((oparg & 0x08) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0) + ((oparg & 0x02) ? 1 : 0) + ((oparg & 0x01) ? 1 : 0)) : NULL; + PyObject *codeobj = stack_pointer[-1]; + PyObject *closure = (oparg & 0x08) ? stack_pointer[-(1 + ((oparg & 0x08) ? 1 : 0))] : NULL; + PyObject *annotations = (oparg & 0x04) ? stack_pointer[-(1 + ((oparg & 0x08) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0))] : NULL; + PyObject *kwdefaults = (oparg & 0x02) ? stack_pointer[-(1 + ((oparg & 0x08) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0) + ((oparg & 0x02) ? 1 : 0))] : NULL; + PyObject *defaults = (oparg & 0x01) ? stack_pointer[-(1 + ((oparg & 0x08) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0) + ((oparg & 0x02) ? 1 : 0) + ((oparg & 0x01) ? 1 : 0))] : NULL; PyObject *func; PyFunctionObject *func_obj = (PyFunctionObject *) @@ -3752,7 +3747,7 @@ func_obj->func_version = ((PyCodeObject *)codeobj)->co_version; func = (PyObject *)func_obj; STACK_SHRINK(((oparg & 0x01) ? 1 : 0) + ((oparg & 0x02) ? 1 : 0) + ((oparg & 0x04) ? 1 : 0) + ((oparg & 0x08) ? 1 : 0)); - POKE(1, func); + stack_pointer[-1] = func; DISPATCH(); } @@ -3780,9 +3775,9 @@ } TARGET(BUILD_SLICE) { - PyObject *step = (oparg == 3) ? PEEK(((oparg == 3) ? 1 : 0)) : NULL; - PyObject *stop = PEEK(1 + ((oparg == 3) ? 1 : 0)); - PyObject *start = PEEK(2 + ((oparg == 3) ? 1 : 0)); + PyObject *step = (oparg == 3) ? stack_pointer[-(((oparg == 3) ? 1 : 0))] : NULL; + PyObject *stop = stack_pointer[-(1 + ((oparg == 3) ? 1 : 0))]; + PyObject *start = stack_pointer[-(2 + ((oparg == 3) ? 1 : 0))]; PyObject *slice; slice = PySlice_New(start, stop, step); Py_DECREF(start); @@ -3791,13 +3786,13 @@ if (slice == NULL) { STACK_SHRINK(((oparg == 3) ? 1 : 0)); goto pop_2_error; } STACK_SHRINK(((oparg == 3) ? 1 : 0)); STACK_SHRINK(1); - POKE(1, slice); + stack_pointer[-1] = slice; DISPATCH(); } TARGET(FORMAT_VALUE) { - PyObject *fmt_spec = ((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? PEEK((((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 1 : 0)) : NULL; - PyObject *value = PEEK(1 + (((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 1 : 0)); + PyObject *fmt_spec = ((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? stack_pointer[-((((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 1 : 0))] : NULL; + PyObject *value = stack_pointer[-(1 + (((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 1 : 0))]; PyObject *result; /* Handles f-string value formatting. */ PyObject *(*conv_fn)(PyObject *); @@ -3845,25 +3840,25 @@ if (result == NULL) { STACK_SHRINK((((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 1 : 0)); goto pop_1_error; } } STACK_SHRINK((((oparg & FVS_MASK) == FVS_HAVE_SPEC) ? 
1 : 0)); - POKE(1, result); + stack_pointer[-1] = result; DISPATCH(); } TARGET(COPY) { - PyObject *bottom = PEEK(1 + (oparg-1)); + PyObject *bottom = stack_pointer[-(1 + (oparg-1))]; PyObject *top; assert(oparg > 0); top = Py_NewRef(bottom); STACK_GROW(1); - POKE(1, top); + stack_pointer[-1] = top; DISPATCH(); } TARGET(BINARY_OP) { PREDICTED(BINARY_OP); static_assert(INLINE_CACHE_ENTRIES_BINARY_OP == 1, "incorrect cache size"); - PyObject *rhs = PEEK(1); - PyObject *lhs = PEEK(2); + PyObject *rhs = stack_pointer[-1]; + PyObject *lhs = stack_pointer[-2]; PyObject *res; #if ENABLE_SPECIALIZATION _PyBinaryOpCache *cache = (_PyBinaryOpCache *)next_instr; @@ -3884,17 +3879,17 @@ Py_DECREF(rhs); if (res == NULL) goto pop_2_error; STACK_SHRINK(1); - POKE(1, res); - JUMPBY(1); + stack_pointer[-1] = res; + next_instr += 1; DISPATCH(); } TARGET(SWAP) { - PyObject *top = PEEK(1); - PyObject *bottom = PEEK(2 + (oparg-2)); + PyObject *top = stack_pointer[-1]; + PyObject *bottom = stack_pointer[-(2 + (oparg-2))]; assert(oparg >= 2); - POKE(1, bottom); - POKE(2 + (oparg-2), top); + stack_pointer[-1] = bottom; + stack_pointer[-(2 + (oparg-2))] = top; DISPATCH(); } diff --git a/Tools/cases_generator/generate_cases.py b/Tools/cases_generator/generate_cases.py index c7f52b55a5eb..b760172974c8 100644 --- a/Tools/cases_generator/generate_cases.py +++ b/Tools/cases_generator/generate_cases.py @@ -171,19 +171,17 @@ def declare(self, dst: StackEffect, src: StackEffect | None): def assign(self, dst: StackEffect, src: StackEffect): if src.name == UNUSED: return + if src.size: + # Don't write sized arrays -- it's up to the user code. + return cast = self.cast(dst, src) - if m := re.match(r"^PEEK\((.*)\)$", dst.name): - stmt = f"POKE({m.group(1)}, {cast}{src.name});" + if re.match(r"^REG\(oparg(\d+)\)$", dst.name): + self.emit(f"Py_XSETREF({dst.name}, {cast}{src.name});") + else: + stmt = f"{dst.name} = {cast}{src.name};" if src.cond: stmt = f"if ({src.cond}) {{ {stmt} }}" self.emit(stmt) - elif m := re.match(r"^&PEEK\(.*\)$", dst.name): - # The user code is responsible for writing to the output array. - pass - elif m := re.match(r"^REG\(oparg(\d+)\)$", dst.name): - self.emit(f"Py_XSETREF({dst.name}, {cast}{src.name});") - else: - self.emit(f"{dst.name} = {cast}{src.name};") def cast(self, dst: StackEffect, src: StackEffect) -> str: return f"({dst.type or 'PyObject *'})" if src.type != dst.type else "" @@ -292,11 +290,11 @@ def write(self, out: Formatter) -> None: list_effect_size([ieff for ieff in ieffects[: i + 1]]) ) if ieffect.size: - src = StackEffect(f"&PEEK({isize})", "PyObject **") + src = StackEffect(f"(stack_pointer - {maybe_parenthesize(isize)})", "PyObject **") elif ieffect.cond: - src = StackEffect(f"({ieffect.cond}) ? PEEK({isize}) : NULL", "") + src = StackEffect(f"({ieffect.cond}) ? 
stack_pointer[-{maybe_parenthesize(isize)}] : NULL", "") else: - src = StackEffect(f"PEEK({isize})", "") + src = StackEffect(f"stack_pointer[-{maybe_parenthesize(isize)}]", "") out.declare(ieffect, src) else: # Write input register variable declarations and initializations @@ -324,7 +322,7 @@ def write(self, out: Formatter) -> None: else: out.declare(oeffect, None) - # out.emit(f"JUMPBY(OPSIZE({self.inst.name}) - 1);") + # out.emit(f"next_instr += OPSIZE({self.inst.name}) - 1;") self.write_body(out, 0) @@ -349,9 +347,9 @@ def write(self, out: Formatter) -> None: list_effect_size([oeff for oeff in oeffects[: i + 1]]) ) if oeffect.size: - dst = StackEffect(f"&PEEK({osize})", "PyObject **") + dst = StackEffect(f"stack_pointer - {maybe_parenthesize(osize)}", "PyObject **") else: - dst = StackEffect(f"PEEK({osize})", "") + dst = StackEffect(f"stack_pointer[-{maybe_parenthesize(osize)}]", "") out.assign(dst, oeffect) else: # Write output register assignments @@ -361,7 +359,7 @@ def write(self, out: Formatter) -> None: # Write cache effect if self.cache_offset: - out.emit(f"JUMPBY({self.cache_offset});") + out.emit(f"next_instr += {self.cache_offset};") def write_body(self, out: Formatter, dedent: int, cache_adjust: int = 0) -> None: """Write the instruction body.""" @@ -1060,17 +1058,13 @@ def write_super(self, sup: SuperInstruction) -> None: with self.wrap_super_or_macro(sup): first = True for comp in sup.parts: - if first: - pass - # self.out.emit("JUMPBY(OPSIZE(opcode) - 1);") - else: - self.out.emit("NEXTOPARG();") - self.out.emit("JUMPBY(1);") - # self.out.emit("JUMPBY(OPSIZE(opcode));") + if not first: + self.out.emit("oparg = (next_instr++)->op.arg;") + # self.out.emit("next_instr += OPSIZE(opcode) - 1;") first = False comp.write_body(self.out, 0) if comp.instr.cache_offset: - self.out.emit(f"JUMPBY({comp.instr.cache_offset});") + self.out.emit(f"next_instr += {comp.instr.cache_offset};") def write_macro(self, mac: MacroInstruction) -> None: """Write code for a macro instruction.""" @@ -1087,7 +1081,7 @@ def write_macro(self, mac: MacroInstruction) -> None: cache_adjust += comp.instr.cache_offset if cache_adjust: - self.out.emit(f"JUMPBY({cache_adjust});") + self.out.emit(f"next_instr += {cache_adjust};") if ( last_instr @@ -1113,7 +1107,7 @@ def wrap_super_or_macro(self, up: SuperOrMacroInstruction): for i, var in reversed(list(enumerate(up.stack))): src = None if i < up.initial_sp: - src = StackEffect(f"PEEK({up.initial_sp - i})", "") + src = StackEffect(f"stack_pointer[-{up.initial_sp - i}]", "") self.out.declare(var, src) yield @@ -1122,7 +1116,7 @@ def wrap_super_or_macro(self, up: SuperOrMacroInstruction): self.out.stack_adjust(up.final_sp - up.initial_sp, [], []) for i, var in enumerate(reversed(up.stack[: up.final_sp]), 1): - dst = StackEffect(f"PEEK({i})", "") + dst = StackEffect(f"stack_pointer[-{i}]", "") self.out.assign(dst, var) self.out.emit(f"DISPATCH();") From webhook-mailer at python.org Tue Feb 28 15:14:48 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Tue, 28 Feb 2023 20:14:48 -0000 Subject: [Python-checkins] gh-100227: Move the dtoa State to PyInterpreterState (gh-102331) Message-ID: https://github.com/python/cpython/commit/f300a1fa4c121f7807cbda4fc8bb26240c69ea74 commit: f300a1fa4c121f7807cbda4fc8bb26240c69ea74 branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-28T13:14:40-07:00 summary: gh-100227: Move the dtoa State to PyInterpreterState (gh-102331) https://github.com/python/cpython/issues/100227 files: M 
Include/internal/pycore_dtoa.h M Include/internal/pycore_interp.h M Include/internal/pycore_runtime.h M Include/internal/pycore_runtime_init.h M Python/dtoa.c M Python/pystate.c diff --git a/Include/internal/pycore_dtoa.h b/Include/internal/pycore_dtoa.h index 67189cf0ade6..fb524770efed 100644 --- a/Include/internal/pycore_dtoa.h +++ b/Include/internal/pycore_dtoa.h @@ -24,10 +24,11 @@ Bigint { #ifdef Py_USING_MEMORY_DEBUGGER -struct _dtoa_runtime_state { +struct _dtoa_state { int _not_used; }; -#define _dtoa_runtime_state_INIT {0} +#define _dtoa_interp_state_INIT(INTERP) \ + {0} #else // !Py_USING_MEMORY_DEBUGGER @@ -40,7 +41,7 @@ struct _dtoa_runtime_state { #define Bigint_PREALLOC_SIZE \ ((PRIVATE_MEM+sizeof(double)-1)/sizeof(double)) -struct _dtoa_runtime_state { +struct _dtoa_state { /* p5s is a linked list of powers of 5 of the form 5**(2**i), i >= 2 */ // XXX This should be freed during runtime fini. struct Bigint *p5s; @@ -48,9 +49,9 @@ struct _dtoa_runtime_state { double preallocated[Bigint_PREALLOC_SIZE]; double *preallocated_next; }; -#define _dtoa_runtime_state_INIT(runtime) \ +#define _dtoa_state_INIT(INTERP) \ { \ - .preallocated_next = runtime.dtoa.preallocated, \ + .preallocated_next = (INTERP)->dtoa.preallocated, \ } #endif // !Py_USING_MEMORY_DEBUGGER diff --git a/Include/internal/pycore_interp.h b/Include/internal/pycore_interp.h index 60de31b336f6..7ef9c40153e4 100644 --- a/Include/internal/pycore_interp.h +++ b/Include/internal/pycore_interp.h @@ -16,6 +16,7 @@ extern "C" { #include "pycore_code.h" // struct callable_cache #include "pycore_context.h" // struct _Py_context_state #include "pycore_dict_state.h" // struct _Py_dict_state +#include "pycore_dtoa.h" // struct _dtoa_state #include "pycore_exceptions.h" // struct _Py_exc_state #include "pycore_floatobject.h" // struct _Py_float_state #include "pycore_function.h" // FUNC_MAX_WATCHERS @@ -139,6 +140,7 @@ struct _is { struct _Py_unicode_state unicode; struct _Py_float_state float_state; struct _Py_long_state long_state; + struct _dtoa_state dtoa; /* Using a cache is very effective since typically only a single slice is created and then deleted again. 
*/ PySliceObject *slice_cache; diff --git a/Include/internal/pycore_runtime.h b/Include/internal/pycore_runtime.h index 9ef270791576..2350eaab5976 100644 --- a/Include/internal/pycore_runtime.h +++ b/Include/internal/pycore_runtime.h @@ -11,7 +11,6 @@ extern "C" { #include "pycore_atomic.h" /* _Py_atomic_address */ #include "pycore_ceval_state.h" // struct _ceval_runtime_state #include "pycore_dict_state.h" // struct _Py_dict_runtime_state -#include "pycore_dtoa.h" // struct _dtoa_runtime_state #include "pycore_floatobject.h" // struct _Py_float_runtime_state #include "pycore_faulthandler.h" // struct _faulthandler_runtime_state #include "pycore_function.h" // struct _func_runtime_state @@ -141,7 +140,6 @@ typedef struct pyruntimestate { struct _ceval_runtime_state ceval; struct _gilstate_runtime_state gilstate; struct _getargs_runtime_state getargs; - struct _dtoa_runtime_state dtoa; struct _fileutils_state fileutils; struct _faulthandler_runtime_state faulthandler; struct _tracemalloc_runtime_state tracemalloc; diff --git a/Include/internal/pycore_runtime_init.h b/Include/internal/pycore_runtime_init.h index a8d5953ff98b..b54adf04761d 100644 --- a/Include/internal/pycore_runtime_init.h +++ b/Include/internal/pycore_runtime_init.h @@ -53,7 +53,6 @@ extern "C" { .gilstate = { \ .check_enabled = 1, \ }, \ - .dtoa = _dtoa_runtime_state_INIT(runtime), \ .fileutils = { \ .force_ascii = -1, \ }, \ @@ -94,10 +93,10 @@ extern "C" { }, \ }, \ }, \ - ._main_interpreter = _PyInterpreterState_INIT, \ + ._main_interpreter = _PyInterpreterState_INIT(runtime._main_interpreter), \ } -#define _PyInterpreterState_INIT \ +#define _PyInterpreterState_INIT(INTERP) \ { \ .id_refcount = -1, \ .imports = IMPORTS_INIT, \ @@ -113,6 +112,7 @@ extern "C" { { .threshold = 10, }, \ }, \ }, \ + .dtoa = _dtoa_state_INIT(&(INTERP)), \ .static_objects = { \ .singletons = { \ ._not_used = 1, \ diff --git a/Python/dtoa.c b/Python/dtoa.c index cff5f1b0658e..6ea60ac9946e 100644 --- a/Python/dtoa.c +++ b/Python/dtoa.c @@ -119,7 +119,7 @@ #include "Python.h" #include "pycore_dtoa.h" // _PY_SHORT_FLOAT_REPR -#include "pycore_runtime.h" // _PyRuntime +#include "pycore_pystate.h" // _PyInterpreterState_GET() #include // exit() /* if _PY_SHORT_FLOAT_REPR == 0, then don't even try to compile @@ -339,9 +339,9 @@ typedef struct Bigint Bigint; Bfree to PyMem_Free. Investigate whether this has any significant performance on impact. 
*/ -#define freelist _PyRuntime.dtoa.freelist -#define private_mem _PyRuntime.dtoa.preallocated -#define pmem_next _PyRuntime.dtoa.preallocated_next +#define freelist interp->dtoa.freelist +#define private_mem interp->dtoa.preallocated +#define pmem_next interp->dtoa.preallocated_next /* Allocate space for a Bigint with up to 1<next; @@ -385,6 +386,7 @@ Bfree(Bigint *v) if (v->k > Bigint_Kmax) FREE((void*)v); else { + PyInterpreterState *interp = _PyInterpreterState_GET(); v->next = freelist[v->k]; freelist[v->k] = v; } @@ -692,7 +694,8 @@ pow5mult(Bigint *b, int k) if (!(k >>= 2)) return b; - p5 = _PyRuntime.dtoa.p5s; + PyInterpreterState *interp = _PyInterpreterState_GET(); + p5 = interp->dtoa.p5s; if (!p5) { /* first time */ p5 = i2b(625); @@ -700,7 +703,7 @@ pow5mult(Bigint *b, int k) Bfree(b); return NULL; } - _PyRuntime.dtoa.p5s = p5; + interp->dtoa.p5s = p5; p5->next = 0; } for(;;) { diff --git a/Python/pystate.c b/Python/pystate.c index 3c655bf38958..28606e4f32f7 100644 --- a/Python/pystate.c +++ b/Python/pystate.c @@ -3,7 +3,8 @@ #include "Python.h" #include "pycore_ceval.h" -#include "pycore_code.h" // stats +#include "pycore_code.h" // stats +#include "pycore_dtoa.h" // _dtoa_state_INIT() #include "pycore_frame.h" #include "pycore_initconfig.h" #include "pycore_object.h" // _PyType_InitCache() @@ -618,6 +619,18 @@ free_interpreter(PyInterpreterState *interp) e.g. by PyMem_RawCalloc() or memset(), or otherwise pre-initialized. The runtime state is not manipulated. Instead it is assumed that the interpreter is getting added to the runtime. + + Note that the main interpreter was statically initialized as part + of the runtime and most state is already set properly. That leaves + a small number of fields to initialize dynamically, as well as some + that are initialized lazily. + + For subinterpreters we memcpy() the main interpreter in + PyInterpreterState_New(), leaving it in the same mostly-initialized + state. The only difference is that the interpreter has some + self-referential state that is statically initializexd to the + main interpreter. We fix those fields here, in addition + to the other dynamically initialized fields. */ static void @@ -645,6 +658,11 @@ init_interpreter(PyInterpreterState *interp, PyConfig_InitPythonConfig(&interp->config); _PyType_InitCache(interp); + if (interp != &runtime->_main_interpreter) { + /* Fix the self-referential, statically initialized fields. 
*/ + interp->dtoa = (struct _dtoa_state)_dtoa_state_INIT(interp); + } + interp->_initialized = 1; } From webhook-mailer at python.org Tue Feb 28 16:16:47 2023 From: webhook-mailer at python.org (ericsnowcurrently) Date: Tue, 28 Feb 2023 21:16:47 -0000 Subject: [Python-checkins] gh-100227: Move _str_replace_inf to PyInterpreterState (gh-102333) Message-ID: https://github.com/python/cpython/commit/880437d4ec65ef35d505eeaff9dad5c6654dbc1a commit: 880437d4ec65ef35d505eeaff9dad5c6654dbc1a branch: main author: Eric Snow committer: ericsnowcurrently date: 2023-02-28T14:16:39-07:00 summary: gh-100227: Move _str_replace_inf to PyInterpreterState (gh-102333) https://github.com/python/cpython/issues/100227 files: M Include/internal/pycore_global_objects.h M Parser/asdl_c.py M Python/Python-ast.c M Python/ast_unparse.c diff --git a/Include/internal/pycore_global_objects.h b/Include/internal/pycore_global_objects.h index d0461fa7e82e..30c7c4e3bbd0 100644 --- a/Include/internal/pycore_global_objects.h +++ b/Include/internal/pycore_global_objects.h @@ -27,8 +27,6 @@ extern "C" { _PyRuntime.cached_objects.NAME struct _Py_cached_objects { - PyObject *str_replace_inf; - PyObject *interned_strings; }; @@ -67,11 +65,14 @@ struct _Py_static_objects { (interp)->cached_objects.NAME struct _Py_interp_cached_objects { - int _not_set; + /* AST */ + PyObject *str_replace_inf; + /* object.__reduce__ */ PyObject *objreduce; PyObject *type_slots_pname; pytype_slotdef *type_slots_ptrs[MAX_EQUIV]; + }; #define _Py_INTERP_STATIC_OBJECT(interp, NAME) \ diff --git a/Parser/asdl_c.py b/Parser/asdl_c.py index db0e597b7a5a..b44e303ac259 100755 --- a/Parser/asdl_c.py +++ b/Parser/asdl_c.py @@ -1484,9 +1484,7 @@ def generate_ast_fini(module_state, f): for s in module_state: f.write(" Py_CLEAR(state->" + s + ');\n') f.write(textwrap.dedent(""" - if (_PyInterpreterState_Get() == _PyInterpreterState_Main()) { - Py_CLEAR(_Py_CACHED_OBJECT(str_replace_inf)); - } + Py_CLEAR(_Py_INTERP_CACHED_OBJECT(interp, str_replace_inf)); #if !defined(NDEBUG) state->initialized = -1; diff --git a/Python/Python-ast.c b/Python/Python-ast.c index d113c47b9539..6c878474afb1 100644 --- a/Python/Python-ast.c +++ b/Python/Python-ast.c @@ -263,9 +263,7 @@ void _PyAST_Fini(PyInterpreterState *interp) Py_CLEAR(state->vararg); Py_CLEAR(state->withitem_type); - if (_PyInterpreterState_Get() == _PyInterpreterState_Main()) { - Py_CLEAR(_Py_CACHED_OBJECT(str_replace_inf)); - } + Py_CLEAR(_Py_INTERP_CACHED_OBJECT(interp, str_replace_inf)); #if !defined(NDEBUG) state->initialized = -1; diff --git a/Python/ast_unparse.c b/Python/ast_unparse.c index 79b2e2f15ba2..8aff045101cc 100644 --- a/Python/ast_unparse.c +++ b/Python/ast_unparse.c @@ -1,5 +1,6 @@ #include "Python.h" #include "pycore_ast.h" // expr_ty +#include "pycore_pystate.h" // _PyInterpreterState_GET() #include "pycore_runtime.h" // _Py_ID() #include // DBL_MAX_10_EXP #include @@ -13,7 +14,10 @@ _Py_DECLARE_STR(open_br, "{"); _Py_DECLARE_STR(dbl_open_br, "{{"); _Py_DECLARE_STR(close_br, "}"); _Py_DECLARE_STR(dbl_close_br, "}}"); -#define _str_replace_inf _Py_CACHED_OBJECT(str_replace_inf) + +/* We would statically initialize this if doing so were simple enough. */ +#define _str_replace_inf(interp) \ + _Py_INTERP_CACHED_OBJECT(interp, str_replace_inf) /* Forward declarations for recursion via helper functions. 
*/ static PyObject * @@ -78,10 +82,11 @@ append_repr(_PyUnicodeWriter *writer, PyObject *obj) if ((PyFloat_CheckExact(obj) && Py_IS_INFINITY(PyFloat_AS_DOUBLE(obj))) || PyComplex_CheckExact(obj)) { + PyInterpreterState *interp = _PyInterpreterState_GET(); PyObject *new_repr = PyUnicode_Replace( repr, &_Py_ID(inf), - _str_replace_inf, + _str_replace_inf(interp), -1 ); Py_DECREF(repr); @@ -916,9 +921,13 @@ append_ast_expr(_PyUnicodeWriter *writer, expr_ty e, int level) static int maybe_init_static_strings(void) { - if (!_str_replace_inf && - !(_str_replace_inf = PyUnicode_FromFormat("1e%d", 1 + DBL_MAX_10_EXP))) { - return -1; + PyInterpreterState *interp = _PyInterpreterState_GET(); + if (_str_replace_inf(interp) == NULL) { + PyObject *tmp = PyUnicode_FromFormat("1e%d", 1 + DBL_MAX_10_EXP); + if (tmp == NULL) { + return -1; + } + _str_replace_inf(interp) = tmp; } return 0; } From webhook-mailer at python.org Tue Feb 28 16:34:15 2023 From: webhook-mailer at python.org (erlend-aasland) Date: Tue, 28 Feb 2023 21:34:15 -0000 Subject: [Python-checkins] gh-99108: Add missing md5/sha1 defines to Modules/Setup (#102308) Message-ID: https://github.com/python/cpython/commit/360ef843d8fc03aad38ed84f9f45026ab08ca4f4 commit: 360ef843d8fc03aad38ed84f9f45026ab08ca4f4 branch: main author: Anthony Sottile committer: erlend-aasland date: 2023-02-28T22:34:06+01:00 summary: gh-99108: Add missing md5/sha1 defines to Modules/Setup (#102308) files: M Modules/Setup diff --git a/Modules/Setup b/Modules/Setup index ff432e2929ec..1c6f2f7ea518 100644 --- a/Modules/Setup +++ b/Modules/Setup @@ -163,8 +163,8 @@ PYTHONPATH=$(COREPYTHONPATH) # hashing builtins #_blake2 _blake2/blake2module.c _blake2/blake2b_impl.c _blake2/blake2s_impl.c -#_md5 md5module.c -I$(srcdir)/Modules/_hacl/include _hacl/Hacl_Hash_MD5.c -#_sha1 sha1module.c -I$(srcdir)/Modules/_hacl/include _hacl/Hacl_Hash_SHA1.c +#_md5 md5module.c -I$(srcdir)/Modules/_hacl/include _hacl/Hacl_Hash_MD5.c -D_BSD_SOURCE -D_DEFAULT_SOURCE +#_sha1 sha1module.c -I$(srcdir)/Modules/_hacl/include _hacl/Hacl_Hash_SHA1.c -D_BSD_SOURCE -D_DEFAULT_SOURCE #_sha2 sha2module.c -I$(srcdir)/Modules/_hacl/include Modules/_hacl/libHacl_Streaming_SHA2.a #_sha3 _sha3/sha3module.c From webhook-mailer at python.org Tue Feb 28 19:31:28 2023 From: webhook-mailer at python.org (zooba) Date: Wed, 01 Mar 2023 00:31:28 -0000 Subject: [Python-checkins] gh-102336: Remove code specifically for handling Windows 7 (GH-102337) Message-ID: https://github.com/python/cpython/commit/938e36f824c5f834d6b77d47942ad81edd5491d0 commit: 938e36f824c5f834d6b77d47942ad81edd5491d0 branch: main author: Max Bachmann committer: zooba date: 2023-03-01T00:31:21Z summary: gh-102336: Remove code specifically for handling Windows 7 (GH-102337) files: A Misc/NEWS.d/next/Core and Builtins/2023-02-28-21-17-03.gh-issue-102336.-wL3Tm.rst M Modules/_winapi.c M Modules/getpath.c M Modules/mmapmodule.c M Modules/posixmodule.c M Modules/socketmodule.c M Python/fileutils.c diff --git a/Misc/NEWS.d/next/Core and Builtins/2023-02-28-21-17-03.gh-issue-102336.-wL3Tm.rst b/Misc/NEWS.d/next/Core and Builtins/2023-02-28-21-17-03.gh-issue-102336.-wL3Tm.rst new file mode 100644 index 000000000000..0c3e4bd4b860 --- /dev/null +++ b/Misc/NEWS.d/next/Core and Builtins/2023-02-28-21-17-03.gh-issue-102336.-wL3Tm.rst @@ -0,0 +1 @@ +Cleanup Windows 7 specific special handling. Patch by Max Bachmann. 
diff --git a/Modules/_winapi.c b/Modules/_winapi.c index f4d982b15d40..8c4c725b9df2 100644 --- a/Modules/_winapi.c +++ b/Modules/_winapi.c @@ -63,23 +63,6 @@ #define T_HANDLE T_POINTER -/* Grab CancelIoEx dynamically from kernel32 */ -static int has_CancelIoEx = -1; -static BOOL (CALLBACK *Py_CancelIoEx)(HANDLE, LPOVERLAPPED); - -static int -check_CancelIoEx() -{ - if (has_CancelIoEx == -1) - { - HINSTANCE hKernel32 = GetModuleHandle("KERNEL32"); - * (FARPROC *) &Py_CancelIoEx = GetProcAddress(hKernel32, - "CancelIoEx"); - has_CancelIoEx = (Py_CancelIoEx != NULL); - } - return has_CancelIoEx; -} - typedef struct { PyTypeObject *overlapped_type; } WinApiState; @@ -134,8 +117,7 @@ overlapped_dealloc(OverlappedObject *self) PyObject_GC_UnTrack(self); if (self->pending) { - if (check_CancelIoEx() && - Py_CancelIoEx(self->handle, &self->overlapped) && + if (CancelIoEx(self->handle, &self->overlapped) && GetOverlappedResult(self->handle, &self->overlapped, &bytes, TRUE)) { /* The operation is no longer pending -- nothing to do. */ @@ -306,10 +288,7 @@ _winapi_Overlapped_cancel_impl(OverlappedObject *self) if (self->pending) { Py_BEGIN_ALLOW_THREADS - if (check_CancelIoEx()) - res = Py_CancelIoEx(self->handle, &self->overlapped); - else - res = CancelIo(self->handle); + CancelIoEx(self->handle, &self->overlapped); Py_END_ALLOW_THREADS } @@ -655,8 +634,10 @@ _winapi_CreateJunction_impl(PyObject *module, LPCWSTR src_path, cleanup: ret = GetLastError(); - CloseHandle(token); - CloseHandle(junction); + if (token != NULL) + CloseHandle(token); + if (junction != NULL) + CloseHandle(junction); PyMem_RawFree(rdb); if (ret != 0) diff --git a/Modules/getpath.c b/Modules/getpath.c index 13db010649fe..c807a3c8e9a2 100644 --- a/Modules/getpath.c +++ b/Modules/getpath.c @@ -227,12 +227,11 @@ getpath_isxfile(PyObject *Py_UNUSED(self), PyObject *args) path = PyUnicode_AsWideCharString(pathobj, &cchPath); if (path) { #ifdef MS_WINDOWS - const wchar_t *ext; DWORD attr = GetFileAttributesW(path); r = (attr != INVALID_FILE_ATTRIBUTES) && !(attr & FILE_ATTRIBUTE_DIRECTORY) && - SUCCEEDED(PathCchFindExtension(path, cchPath + 1, &ext)) && - (CompareStringOrdinal(ext, -1, L".exe", -1, 1 /* ignore case */) == CSTR_EQUAL) + (cchPath >= 4) && + (CompareStringOrdinal(path + cchPath - 4, -1, L".exe", -1, 1 /* ignore case */) == CSTR_EQUAL) ? Py_True : Py_False; #else struct stat st; diff --git a/Modules/mmapmodule.c b/Modules/mmapmodule.c index a01e798265c5..6054c2853d7a 100644 --- a/Modules/mmapmodule.c +++ b/Modules/mmapmodule.c @@ -502,7 +502,7 @@ mmap_resize_method(mmap_object *self, CloseHandle(self->map_handle); /* if the file mapping still exists, it cannot be resized. */ if (self->tagname) { - self->map_handle = OpenFileMapping(FILE_MAP_WRITE, FALSE, + self->map_handle = OpenFileMappingA(FILE_MAP_WRITE, FALSE, self->tagname); if (self->map_handle) { PyErr_SetFromWindowsErr(ERROR_USER_MAPPED_FILE); @@ -531,7 +531,7 @@ mmap_resize_method(mmap_object *self, /* create a new file mapping and map a new view */ /* FIXME: call CreateFileMappingW with wchar_t tagname */ - self->map_handle = CreateFileMapping( + self->map_handle = CreateFileMappingA( self->file_handle, NULL, PAGE_READWRITE, @@ -1514,12 +1514,12 @@ new_mmap_object(PyTypeObject *type, PyObject *args, PyObject *kwdict) off_lo = (DWORD)(offset & 0xFFFFFFFF); /* For files, it would be sufficient to pass 0 as size. For anonymous maps, we have to pass the size explicitly. 
*/ - m_obj->map_handle = CreateFileMapping(m_obj->file_handle, - NULL, - flProtect, - size_hi, - size_lo, - m_obj->tagname); + m_obj->map_handle = CreateFileMappingA(m_obj->file_handle, + NULL, + flProtect, + size_hi, + size_lo, + m_obj->tagname); if (m_obj->map_handle != NULL) { m_obj->data = (char *) MapViewOfFile(m_obj->map_handle, dwDesiredAccess, diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c index 6ea216b3bf5e..937233a3bd07 100644 --- a/Modules/posixmodule.c +++ b/Modules/posixmodule.c @@ -10,16 +10,6 @@ #define PY_SSIZE_T_CLEAN #include "Python.h" -// Include before pycore internal headers. FSCTL_GET_REPARSE_POINT -// is not exported by if the WIN32_LEAN_AND_MEAN macro is defined, -// whereas pycore_condvar.h defines the WIN32_LEAN_AND_MEAN macro. -#ifdef MS_WINDOWS -# include -# include -# include // UNLEN -# include "osdefs.h" // SEP -# define HAVE_SYMLINK -#endif #ifdef __VXWORKS__ # include "pycore_bitutils.h" // _Py_popcount32() @@ -34,6 +24,15 @@ #include "pycore_pystate.h" // _PyInterpreterState_GET() #include "pycore_signal.h" // Py_NSIG +#ifdef MS_WINDOWS +# include +# include +# include +# include // UNLEN +# include "osdefs.h" // SEP +# define HAVE_SYMLINK +#endif + #include "structmember.h" // PyMemberDef #ifndef MS_WINDOWS # include "posixmodule.h" @@ -1507,32 +1506,6 @@ _Py_Sigset_Converter(PyObject *obj, void *addr) } #endif /* HAVE_SIGSET_T */ -#ifdef MS_WINDOWS - -static int -win32_get_reparse_tag(HANDLE reparse_point_handle, ULONG *reparse_tag) -{ - char target_buffer[_Py_MAXIMUM_REPARSE_DATA_BUFFER_SIZE]; - _Py_REPARSE_DATA_BUFFER *rdb = (_Py_REPARSE_DATA_BUFFER *)target_buffer; - DWORD n_bytes_returned; - - if (0 == DeviceIoControl( - reparse_point_handle, - FSCTL_GET_REPARSE_POINT, - NULL, 0, /* in buffer */ - target_buffer, sizeof(target_buffer), - &n_bytes_returned, - NULL)) /* we're not using OVERLAPPED_IO */ - return FALSE; - - if (reparse_tag) - *reparse_tag = rdb->ReparseTag; - - return TRUE; -} - -#endif /* MS_WINDOWS */ - /* Return a dictionary corresponding to the POSIX environment table */ #if defined(WITH_NEXT_FRAMEWORK) || (defined(__APPLE__) && defined(Py_ENABLE_SHARED)) /* On Darwin/MacOSX a shared library or framework has no access to @@ -8263,42 +8236,32 @@ os_setpgrp_impl(PyObject *module) #ifdef HAVE_GETPPID #ifdef MS_WINDOWS -#include +#include static PyObject* win32_getppid() { - HANDLE snapshot; + DWORD error; PyObject* result = NULL; - BOOL have_record; - PROCESSENTRY32 pe; - - DWORD mypid = GetCurrentProcessId(); /* This function never fails */ - - snapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0); - if (snapshot == INVALID_HANDLE_VALUE) - return PyErr_SetFromWindowsErr(GetLastError()); + HANDLE process = GetCurrentProcess(); - pe.dwSize = sizeof(pe); - have_record = Process32First(snapshot, &pe); - while (have_record) { - if (mypid == pe.th32ProcessID) { - /* We could cache the ulong value in a static variable. */ - result = PyLong_FromUnsignedLong(pe.th32ParentProcessID); - break; - } - - have_record = Process32Next(snapshot, &pe); + HPSS snapshot = NULL; + error = PssCaptureSnapshot(process, PSS_CAPTURE_NONE, 0, &snapshot); + if (error != ERROR_SUCCESS) { + return PyErr_SetFromWindowsErr(error); } - /* If our loop exits and our pid was not found (result will be NULL) - * then GetLastError will return ERROR_NO_MORE_FILES. This is an - * error anyway, so let's raise it. 
*/ - if (!result) - result = PyErr_SetFromWindowsErr(GetLastError()); - - CloseHandle(snapshot); + PSS_PROCESS_INFORMATION info; + error = PssQuerySnapshot(snapshot, PSS_QUERY_PROCESS_INFORMATION, &info, + sizeof(info)); + if (error == ERROR_SUCCESS) { + result = PyLong_FromUnsignedLong(info.ParentProcessId); + } + else { + result = PyErr_SetFromWindowsErr(error); + } + PssFreeSnapshot(process, snapshot); return result; } #endif /*MS_WINDOWS*/ @@ -15067,9 +15030,6 @@ os_getrandom_impl(PyObject *module, Py_ssize_t size, int flags) * on win32 */ -typedef DLL_DIRECTORY_COOKIE (WINAPI *PAddDllDirectory)(PCWSTR newDirectory); -typedef BOOL (WINAPI *PRemoveDllDirectory)(DLL_DIRECTORY_COOKIE cookie); - /*[clinic input] os._add_dll_directory @@ -15089,8 +15049,6 @@ static PyObject * os__add_dll_directory_impl(PyObject *module, path_t *path) /*[clinic end generated code: output=80b025daebb5d683 input=1de3e6c13a5808c8]*/ { - HMODULE hKernel32; - PAddDllDirectory AddDllDirectory; DLL_DIRECTORY_COOKIE cookie = 0; DWORD err = 0; @@ -15098,14 +15056,8 @@ os__add_dll_directory_impl(PyObject *module, path_t *path) return NULL; } - /* For Windows 7, we have to load this. As this will be a fairly - infrequent operation, just do it each time. Kernel32 is always - loaded. */ Py_BEGIN_ALLOW_THREADS - if (!(hKernel32 = GetModuleHandleW(L"kernel32")) || - !(AddDllDirectory = (PAddDllDirectory)GetProcAddress( - hKernel32, "AddDllDirectory")) || - !(cookie = (*AddDllDirectory)(path->wide))) { + if (!(cookie = AddDllDirectory(path->wide))) { err = GetLastError(); } Py_END_ALLOW_THREADS @@ -15134,8 +15086,6 @@ static PyObject * os__remove_dll_directory_impl(PyObject *module, PyObject *cookie) /*[clinic end generated code: output=594350433ae535bc input=c1d16a7e7d9dc5dc]*/ { - HMODULE hKernel32; - PRemoveDllDirectory RemoveDllDirectory; DLL_DIRECTORY_COOKIE cookieValue; DWORD err = 0; @@ -15148,14 +15098,8 @@ os__remove_dll_directory_impl(PyObject *module, PyObject *cookie) cookieValue = (DLL_DIRECTORY_COOKIE)PyCapsule_GetPointer( cookie, "DLL directory cookie"); - /* For Windows 7, we have to load this. As this will be a fairly - infrequent operation, just do it each time. Kernel32 is always - loaded. */ Py_BEGIN_ALLOW_THREADS - if (!(hKernel32 = GetModuleHandleW(L"kernel32")) || - !(RemoveDllDirectory = (PRemoveDllDirectory)GetProcAddress( - hKernel32, "RemoveDllDirectory")) || - !(*RemoveDllDirectory)(cookieValue)) { + if (!RemoveDllDirectory(cookieValue)) { err = GetLastError(); } Py_END_ALLOW_THREADS diff --git a/Modules/socketmodule.c b/Modules/socketmodule.c index 00608be38f61..43a0cc0f963f 100644 --- a/Modules/socketmodule.c +++ b/Modules/socketmodule.c @@ -606,11 +606,6 @@ select_error(void) # define SUPPRESS_DEPRECATED_CALL #endif -#ifdef MS_WINDOWS -/* Does WSASocket() support the WSA_FLAG_NO_HANDLE_INHERIT flag? */ -static int support_wsa_no_inherit = -1; -#endif - /* Convenience function to raise an error according to errno and return a NULL pointer from a function. 
*/ @@ -5342,6 +5337,13 @@ sock_initobj_impl(PySocketSockObject *self, int family, int type, int proto, set_error(); return -1; } + + if (!SetHandleInformation((HANDLE)fd, HANDLE_FLAG_INHERIT, 0)) { + closesocket(fd); + PyErr_SetFromWindowsErr(0); + return -1; + } + family = info.iAddressFamily; type = info.iSocketType; proto = info.iProtocol; @@ -5438,33 +5440,15 @@ sock_initobj_impl(PySocketSockObject *self, int family, int type, int proto, #endif Py_BEGIN_ALLOW_THREADS - if (support_wsa_no_inherit) { - fd = WSASocketW(family, type, proto, - NULL, 0, - WSA_FLAG_OVERLAPPED | WSA_FLAG_NO_HANDLE_INHERIT); - if (fd == INVALID_SOCKET) { - /* Windows 7 or Windows 2008 R2 without SP1 or the hotfix */ - support_wsa_no_inherit = 0; - fd = socket(family, type, proto); - } - } - else { - fd = socket(family, type, proto); - } + fd = WSASocketW(family, type, proto, + NULL, 0, + WSA_FLAG_OVERLAPPED | WSA_FLAG_NO_HANDLE_INHERIT); Py_END_ALLOW_THREADS if (fd == INVALID_SOCKET) { set_error(); return -1; } - - if (!support_wsa_no_inherit) { - if (!SetHandleInformation((HANDLE)fd, HANDLE_FLAG_INHERIT, 0)) { - closesocket(fd); - PyErr_SetFromWindowsErr(0); - return -1; - } - } #else /* UNIX */ Py_BEGIN_ALLOW_THREADS @@ -7340,12 +7324,6 @@ PyInit__socket(void) if (!os_init()) return NULL; -#ifdef MS_WINDOWS - if (support_wsa_no_inherit == -1) { - support_wsa_no_inherit = IsWindows7SP1OrGreater(); - } -#endif - Py_SET_TYPE(&sock_type, &PyType_Type); m = PyModule_Create(&socketmodule); if (m == NULL) diff --git a/Python/fileutils.c b/Python/fileutils.c index 897c2f9f4ea1..93bee9ee007c 100644 --- a/Python/fileutils.c +++ b/Python/fileutils.c @@ -1362,17 +1362,11 @@ set_inheritable(int fd, int inheritable, int raise, int *atomic_flag_works) else flags = 0; - /* This check can be removed once support for Windows 7 ends. 
*/ -#define CONSOLE_PSEUDOHANDLE(handle) (((ULONG_PTR)(handle) & 0x3) == 0x3 && \ - GetFileType(handle) == FILE_TYPE_CHAR) - - if (!CONSOLE_PSEUDOHANDLE(handle) && - !SetHandleInformation(handle, HANDLE_FLAG_INHERIT, flags)) { + if (!SetHandleInformation(handle, HANDLE_FLAG_INHERIT, flags)) { if (raise) PyErr_SetFromWindowsErr(0); return -1; } -#undef CONSOLE_PSEUDOHANDLE return 0; #else From webhook-mailer at python.org Tue Feb 28 19:48:33 2023 From: webhook-mailer at python.org (methane) Date: Wed, 01 Mar 2023 00:48:33 -0000 Subject: [Python-checkins] Doc: Fix minor error in ePub (GH-100614) Message-ID: https://github.com/python/cpython/commit/7d1d66341838d7d1963c9ee7ffca2950d3a751fd commit: 7d1d66341838d7d1963c9ee7ffca2950d3a751fd branch: main author: Inada Naoki committer: methane date: 2023-03-01T09:48:15+09:00 summary: Doc: Fix minor error in ePub (GH-100614) Fix issue reported https://mail.python.org/archives/list/docs at python.org/message/KE7OIAO53P4XRC4ZOWPDHA63ZQJCHEC3/ files: M Doc/conf.py diff --git a/Doc/conf.py b/Doc/conf.py index b3da8fa9ec44..29fb63cbcc86 100644 --- a/Doc/conf.py +++ b/Doc/conf.py @@ -268,7 +268,7 @@ ogp_site_name = 'Python documentation' ogp_image = '_static/og-image.png' ogp_custom_meta_tags = [ - '', - '', - '', + '', + '', + '', ] From webhook-mailer at python.org Tue Feb 28 20:19:25 2023 From: webhook-mailer at python.org (methane) Date: Wed, 01 Mar 2023 01:19:25 -0000 Subject: [Python-checkins] Doc: Fix minor error in ePub (GH-100614) Message-ID: https://github.com/python/cpython/commit/90ec292ab8167c1886a53b7d9cbe7a255c3cf764 commit: 90ec292ab8167c1886a53b7d9cbe7a255c3cf764 branch: 3.11 author: Inada Naoki committer: methane date: 2023-03-01T10:19:16+09:00 summary: Doc: Fix minor error in ePub (GH-100614) Fix issue reported https://mail.python.org/archives/list/docs at python.org/message/KE7OIAO53P4XRC4ZOWPDHA63ZQJCHEC3/ (cherry picked from commit 7d1d66341838d7d1963c9ee7ffca2950d3a751fd) files: M Doc/conf.py diff --git a/Doc/conf.py b/Doc/conf.py index 98b076c0ee57..982d6b40f66e 100644 --- a/Doc/conf.py +++ b/Doc/conf.py @@ -256,7 +256,7 @@ ogp_site_name = 'Python documentation' ogp_image = '_static/og-image.png' ogp_custom_meta_tags = [ - '', - '', - '', + '', + '', + '', ]
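A note on the `Doc/conf.py` hunk above: the actual `<meta ...>` strings inside `ogp_custom_meta_tags` do not survive in this archive (the raw HTML markup is stripped by the mailer), so the before/after values of GH-100614 cannot be read from the diff itself. One plausible reading, given that the change targets the ePub build, is that each tag was made XHTML-safe (for example self-closed) for the EPUB serializer, but that is an inference, not something visible here. As a hedged sketch only, the settings being edited belong to the sphinxext-opengraph extension and have roughly this shape; the three tag strings below are illustrative placeholders, not the values from the commit:

```python
# Doc/conf.py (sketch) -- Open Graph settings read by sphinxext-opengraph.
# The ogp_custom_meta_tags entries are placeholders: the real strings changed
# by GH-100614 are not recoverable from this archive.
ogp_site_url = 'https://docs.python.org/3/'   # assumed; not shown in the hunk
ogp_site_name = 'Python documentation'
ogp_image = '_static/og-image.png'
ogp_custom_meta_tags = [
    '<meta property="og:image:width" content="200" />',   # placeholder
    '<meta property="og:image:height" content="200" />',  # placeholder
    '<meta name="theme-color" content="#3776ab" />',      # placeholder
]
```

Because `conf.py` is plain Python executed by Sphinx, a change like this takes effect on the next `make html` / `make epub` run with no other wiring; the list is simply injected verbatim into each generated page's `<head>`.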