[issue12943] tokenize: add python -m tokenize support back
Meador Inge
report at bugs.python.org
Fri Sep 9 05:11:07 CEST 2011
New submission from Meador Inge <meadori at gmail.com>:
In 2.x, 'python -m tokenize' worked great:
[meadori at motherbrain cpython]$ python2.7 -m tokenize test.py
1,0-1,5: NAME 'print'
1,6-1,21: STRING '"Hello, World!"'
1,21-1,22: NEWLINE '\n'
2,0-2,0: ENDMARKER ''
In 3.x, however, the functionality has been removed and replaced with
some hard-wired test code:
[meadori at motherbrain cpython]$ python3 -m tokenize test.py
TokenInfo(type=57 (ENCODING), string='utf-8', start=(0, 0), end=(0, 0), line='')
TokenInfo(type=1 (NAME), string='def', start=(1, 0), end=(1, 3), line='def parseline(self, line):')
TokenInfo(type=1 (NAME), string='parseline', start=(1, 4), end=(1, 13), line='def parseline(self, line):')
TokenInfo(type=53 (OP), string='(', start=(1, 13), end=(1, 14), line='def parseline(self, line):')
...
The functionality was removed in [1], with no explanation. Let's add it back, and document it this time around.
[1] http://hg.python.org/cpython/rev/51e24512e305/
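For reference, a minimal sketch of what such a command-line entry point could look like. This is not the removed code or any actual patch; the helper names (`print_tokens`, `main`) are hypothetical. It uses the documented `tokenize.tokenize()` API and mimics the 2.x output format:

```python
import sys
import tokenize

def print_tokens(readline):
    # Hypothetical helper: print each token in the 2.x style,
    # "srow,scol-erow,ecol:  TYPE  'string'"
    for tok in tokenize.tokenize(readline):
        (srow, scol), (erow, ecol) = tok.start, tok.end
        print("%d,%d-%d,%d:\t%s\t%r" %
              (srow, scol, erow, ecol,
               tokenize.tok_name[tok.type], tok.string))

def main():
    # tokenize.tokenize() wants a readline callable over bytes,
    # so the file must be opened in binary mode.
    with open(sys.argv[1], "rb") as f:
        print_tokens(f.readline)

if __name__ == "__main__" and len(sys.argv) > 1:
    main()
```

The only 3.x-visible difference from 2.x would be the extra ENCODING token that `tokenize.tokenize()` emits first.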
----------
components: Library (Lib)
messages: 143752
nosy: meadori
priority: normal
severity: normal
stage: needs patch
status: open
title: tokenize: add python -m tokenize support back
type: feature request
versions: Python 3.3
_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue12943>
_______________________________________