[Python-ideas] Proposal to extend PEP 484 (gradual typing) to support Python 2.7

Guido van Rossum guido at python.org
Thu Jan 21 13:44:07 EST 2016


On Thu, Jan 21, 2016 at 10:14 AM, Agustín Herranz Cecilia <
agustin.herranz at gmail.com> wrote:

> On 2016/01/21 at 1:11, Guido van Rossum wrote:
> [...]
>
>
>>
>> > - Since this is intended to gradually type Python 2 code in order to
>> port it to Python 3, I think it's convenient to add some sort of import
>> that is only used for type checking, and is only processed by the type
>> analyzer, not at runtime. This could be achieved by prepending "#type: "
>> to the normal import statement, something like:
>> >    # type: import module
>> >    # type: from package import module
>>
>> That sounds like a bad idea. If the typing module shadows some global,
>> you won't get any errors, but your code will be misleading to a reader (and
>> even worse if you from package.module import t). If the cost of the import
>> is too high for Python 2, surely it's also too high for Python 3. And what
>> other reason do you have for skipping it?
>>
>
> Exactly. Even though (when using Python 2) all type annotations are in
> comments, you still must write real imports. (This causes minor annoyances
> with linters that warn about unused imports, but there are ways to teach
> them.)
>
> These type comment 'imports' are not intended to shadow the current
> namespace; they are intended to tell the analyzer where it can find the
> types used in type comments that are not in the current namespace, without
> importing them into it. This surely complicates the analyzer's task, but it
> helps avoid namespace pollution and also saves memory at runtime.
>
> The typical case I've found is using a third-party library (that doesn't
> have type information) where you create objects with a factory. The class
> of the objects isn't needed anywhere else, so it's not imported into the
> current namespace; it's needed only for type analysis and autocomplete.
>

You're describing a case I have also encountered: we have a module with a
function foo

# foo_mod.py
def foo(a):
    return a.bar()

and the intention is that a is an instance of a class A defined in another
module, which is not imported.

If we add annotations we have to add an import

from a_mod import A

def foo(a: A) -> str:
    return a.bar()

But the code that calls foo() is already importing A from a_mod somewhere,
so there's not really any time wasted -- the import is just done at a
different time.

At least, that's the theory.

In practice, indeed there are some unpleasant cases. For example, adding
the explicit import might create an import cycle, and A may not yet be
defined when foo_mod is loaded. We can't use the usual subterfuge, since we
really need the definition of A:

import a_mod

def foo(a: a_mod.A) -> str:
    return a.bar()

This will still fail if a_mod hasn't defined A yet because we reference
a_mod.A at load time (annotations are evaluated when the function
definition is executed).

So we end up with this:

import a_mod

def foo(a: 'a_mod.A') -> str:
    return a.bar()

This is both hard to read and probably wastes a lot of developer time while
people figure out that they have to do this.

And there are other issues, e.g. some folks have tricks to work around
their start-up time by importing modules late (e.g. do the import inside
the function that needs that module).

In mypy there's another hack possible: it doesn't care if an import is
inside "if False". So you can write:

if False:
    from a_mod import A

def foo(a: 'A') -> str:
    return a.bar()

You still have to quote 'A' because A isn't actually defined at run time,
but it's the best we can do. When using type comments you can skip the
quotes:

if False:
    from a_mod import A

def foo(a):
    # type: (A) -> str
    return a.bar()

All of this is unpleasant but not unbearable -- the big constraint here is
that we don't want to add extra syntax (beyond PEP 3107, i.e. function
annotations), so that we can use mypy for Python 3.2 and up. And with the
type comments we even support 2.7.
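To make the mechanics concrete, here is a self-contained sketch of the trick (a_mod and A are hypothetical names from the examples above, so a stand-in class exercises foo() at runtime):

```python
if False:
    # Never executed at runtime, so a_mod need not even exist;
    # a type checker such as mypy still sees the import.
    from a_mod import A

def foo(a: 'A') -> str:
    # 'A' is quoted: the annotation is stored as a plain string and is
    # never evaluated, so no NameError at function-definition time.
    return a.bar()

# Any object with a .bar() method works at runtime -- a stand-in here:
class Fake:
    def bar(self) -> str:
        return 'ok'

assert foo(Fake()) == 'ok'
assert foo.__annotations__['a'] == 'A'  # stored verbatim, unevaluated
```

This shows why the quotes matter: unquoted, the annotation expression `A` would be evaluated when `def foo` executes and would raise NameError.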


>
>
>> > - It must also be addressed how this works in a Python 2 to Python 3
>> environment, as there are types with the same name, str for example, that
>> work differently in each Python version. If the code is for only one
>> version, it uses the type names of that version.
>>
>> That's the same problem that exists at runtime, and people (and tools)
>> already know how to deal with it: use bytes when you mean bytes, unicode
>> when you mean unicode, and str when you mean whatever is "native" to the
>> version you're running under and are willing to deal with it. So now you
>> just have to do the same thing in type hints that you're already doing in
>> constructors, isinstance checks, etc.
>>
>
> This is actually still a real problem. But it has no bearing on the choice
> of syntax for annotations in Python 2 or straddling code.
>
>
> Yes, this is not directly related to the choice of syntax for annotations.
> It is intended to help in the process of porting Python 2 code to Python 3,
> and it's outside the PEP's scope but related to the original problem. What
> I have in mind is some type aliases, so you could annotate a
> version-specific type to avoid ambiguity in code that is used on different
> versions. In the end, what I originally tried to say is that it's good to
> have a conventional way to name these type aliases.
>

Yes, this is a useful thing to discuss.

Maybe we can standardize on the types defined by the 'six' package, which
is commonly used for 2-3 straddling code:

six.text_type (unicode in PY2, str in PY3)
six.binary_type (str in PY2, bytes in PY3)

Actually for the latter we might as well use bytes.
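For instance, such aliases can be spelled without depending on six at all; this is a sketch in the spirit of six.text_type / six.binary_type, not six's actual implementation:

```python
import sys

# Version-dependent aliases: 'text' always means unicode text,
# 'binary' always means raw bytes, whatever the running interpreter.
if sys.version_info[0] >= 3:
    text_type = str        # corresponds to six.text_type on PY3
    binary_type = bytes    # corresponds to six.binary_type on PY3
else:
    text_type = unicode    # noqa: F821 -- PY2 branch only
    binary_type = str

def decode_if_needed(value):
    # type: (binary_type) -> text_type
    """Decode a binary payload to text, assuming UTF-8."""
    return value.decode('utf-8')

assert decode_if_needed(b'caf\xc3\xa9') == u'caf\xe9'
```

The type comment refers to the aliases rather than to str directly, so the declared intent (bytes in, text out) is the same under both interpreters.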


> These are intended for use during the porting process, to help automated
> tools during a period of transition between versions. It's a way to tell
> the analyzer that a type has a behavior, perhaps different, from the same
> type in the running Python version.
>
> For example: you start with some working Python 2 code that you want to
> keep working. A code analysis tool can infer the types and annotate the
> code. It can also check which parts are py2/py3 compatible and which are
> not, and mark those types with the mentioned type aliases. With this, plus
> test suites, it could be calculated how much code needs to be ported. Then
> refactor to adapt the code to Python 3 while keeping it running on Python
> 2 (the compatibility code could be marked for automated deletion), and
> when that's done, drop all the Python 2 code.
>

Yes, that's the kind of process we're trying to develop. It's still early
days though -- people have gotten different workflows already using six and
tests and the notion of straddling code, __future__ imports, and PyPI
backports of some PY3 stdlib packages (e.g. contextlib2).

There's also a healthy set of tools that convert PY2 code to (approximately)
straddling code (e.g. futurize and modernize). What's missing (as you
point out) is tools that help automating a larger part of the conversion
once PY2 code has been annotated.

But first we need to agree on how to annotate PY2 code.


>
> Of course many people use libraries like six to help them deal with this,
>> which means that those libraries have to be type-hinted appropriately for
>> both versions (maybe using different stubs for py2 and py3, with the right
>> one selected at pip install time?), but if that's taken care of, user code
>> should just work.
>>
>
> Yeah, we could use help. There are some very rudimentary stubs for a few
> things defined by six (
> https://github.com/python/typeshed/tree/master/third_party/3/six,
> https://github.com/python/typeshed/tree/master/third_party/2.7/six) but
> we need more. There's a PR but it's of bewildering size (
> https://github.com/python/typeshed/pull/21).
>
> I think the process of porting is different from the process of adapting
> code to work on Python 2/3. Code that uses bytes, unicode, and native str
> (whichever it happens to be) is neither Python 2 code nor Python 3 code.
> Lots of libraries that are 2/3 compatible are just Python 2 code minimally
> adapted to run on Python 3 with six, and are still developed in a Python 2
> style. When the time to drop Python 2 arrives, the refactoring needed will
> be huge. There is also a recent article claiming "Stop writing code that
> breaks on Python 4", showing code that treats Python 3 as the special case.
>
> PS. I have a hard time following the rest of Agustin's comments. The
> comment-based syntax I proposed for Python 2.7 does support exactly the
> same functionality as the official PEP 484 syntax; the only thing it
> doesn't allow is selectively leaving out types for some arguments -- you
> must use 'Any' to fill those positions. It's not a problem in practice, and
> it doesn't reduce functionality (omitted argument types are assumed to be
> Any in PEP 484 too). I should also remark that mypy supports the
> comment-based syntax in Python 2 mode as well as in Python 3 mode; but when
> writing Python 3 only code, the non-comment version is strongly preferred.
> (We plan to eventually produce a tool that converts the comments to
> standard PEP 484 syntax).
>
> --
> --Guido van Rossum (python.org/~guido)
>
>
> My original point is that if comment-based function annotations are going
> to be added, they should be added to Python 3 too, not only for the special
> case of "Python 2.7 and straddling code", even though, in Python 3, type
> annotations are preferred.
>

The text I added to the end of PEP 484 already says so:

"""
- Tools that support this syntax should support it regardless of the
  Python version being checked.  This is necessary in order to support
  code that straddles Python 2 and Python 3.
"""


>
> I think that having the alternative of defining the types of a function in
> a type comment is a good thing, because annotations can become a mess,
> especially with complex types and default parameters, and I don't feel that
> the optional part of gradual typing should come at the cost of readability.
> Some examples of my own code:
>
> class Field:
>     def __init__(self, name: str,
>                  extract: Callable[[str], str],
>                  validate: Callable[[str], bool]=bool_test,
>                  transform: Callable[[str], Any]=identity) -> 'Field':
>
> class RepeatableField:
>     def __init__(self,
>                  extract: Callable[[str], str],
>                  size: int,
>                  fields: List[Field],
>                  index_label: str,
>                  index_transform: Callable[[int], str]=lambda x: str(x)
>                  ) -> 'RepeatableField':
>
> def filter_by(field_gen: Iterable[Dict[str, Any]],
>               **kwargs) -> Generator[Dict[str, Any], Any, Any]:
>
>
> So, to define a comment-based function annotation, two kinds of syntax
> should be accepted:
> - one 'explicit', marking the type of the function according to the PEP 484
> syntax:
>
>     def embezzle(self, account, funds=1000000, *fake_receipts):
>         # type: Callable[[str, int, *str], None]
>         """Embezzle funds from account using fake receipts."""
>         <code goes here>
>
>   as if it were a normal type comment:
>
>     embezzle = get_embezzle_function()  # type: Callable[[str, int, *str], None]
>
>
> - and another one that 'implicitly' defines the type of the function as
> Callable:
>
>     def embezzle(self, account, funds=1000000, *fake_receipts):
>         # type: (str, int, *str) -> None
>         """Embezzle funds from account using fake receipts."""
>         <code goes here>
>
> Both ways are easily translated to and from Python 3 annotations.
>

I don't see what adding support for

# type: Callable[[str, int, *str], None]

adds. It's more verbose, and when the 'implicit' notation is used, the type
checker already knows that embezzle is a function with that signature. You
can already do this (except for the *str part):

from typing import Callable

def embezzle(account, funds=1000000):
    # type: (str, int) -> None
    """Embezzle funds from account using fake receipts."""
    pass

f = None  # type: Callable[[str, int], None]

f = embezzle

f('a', 42)

However, note that no matter which notation you use, there's no way in PEP
484 to write the type of the original embezzle() function object using
Callable -- Callable does not have support for varargs like *fake_receipts.
If you want that the best place to bring it up is the typehinting tracker (
https://github.com/ambv/typehinting/issues). But it's going to be a tough
nut to crack, and the majority of places where Callable is needed (mostly
higher-order functions like filter/map) don't need it -- their function
arguments have purely positional arguments.
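One partial escape hatch that PEP 484 does provide is the ellipsis form of Callable, which leaves the argument list entirely unspecified while still pinning the return type. A minimal sketch, reusing the embezzle() example:

```python
from typing import Callable

def embezzle(account, funds=1000000, *fake_receipts):
    # type: (str, int, *str) -> None
    """Embezzle funds from account using fake receipts."""

# Callable[..., None] says "some callable returning None" without
# committing to a parameter list, so varargs signatures are acceptable.
f = None  # type: Callable[..., None]
f = embezzle
f('a', 42, 'receipt-1', 'receipt-2')
```

The cost is that the checker then accepts any call shape through f, so this trades precision on the arguments for the ability to name the type at all.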


> Also, comment-based function annotations easily run past one line's
> length, so the syntax used to break the line should be defined, as
> discussed at https://github.com/JukkaL/mypy/issues/1102
>
> These things should be in a PEP as a standard way to implement this, not
> only for mypy but also for other tools.
> Accepting comment-based function annotations in Python 3 is good for
> migrating Python 2/3 code, as it helps with refactoring and use (better
> autocomplete), but making it a Python 2 feature and not a Python 3 one
> increases the gap between versions.
>

Consider it done. The time machine strikes again. :-)

>
>
> I hope I expressed myself better this time; if not, sorry about that.
>

It's perfectly fine this time!


>
> Agustín Herranz
>
>


-- 
--Guido van Rossum (python.org/~guido)

