[pypy-commit] pypy json-decoder-maps: address two comments
cfbolz
pypy.commits at gmail.com
Thu Sep 19 14:08:54 EDT 2019
Author: Carl Friedrich Bolz-Tereick <cfbolz at gmx.de>
Branch: json-decoder-maps
Changeset: r97546:d2524657b1d4
Date: 2019-09-19 20:08 +0200
http://bitbucket.org/pypy/pypy/changeset/d2524657b1d4/
Log: address two comments
(the "obscure hack to help the cpu cache" in withintprebuilt
doesn't apply here, because the json decoder *doesn't* touch the
boxed ints again)
diff --git a/pypy/module/_pypyjson/interp_decoder.py b/pypy/module/_pypyjson/interp_decoder.py
--- a/pypy/module/_pypyjson/interp_decoder.py
+++ b/pypy/module/_pypyjson/interp_decoder.py
@@ -25,19 +25,15 @@
return x * NEG_POW_10[exp]
-# <antocuni> This is basically the same logic that we use to implement
-# objspace.std.withintprebuilt. On one hand, it would be nice to have only a
-# single implementation. On the other hand, since it is disabled by default,
-# it doesn't change much at runtime. However, in intobject.wrapint there is an
-# "obscure hack to help the CPU cache": it might be useful here as well?
-#
-# <antocuni> this is more a feature than a review but: I wonder whether it is
-# worth to also have a per-decoder int cache which caches all the ints, not
-# only the small ones. I suppose it might be useful in case you have a big
-# json file with e.g. unique ids which might be repeated here and there.
class IntCache(object):
""" A cache for wrapped ints between START and END """
+ # I also tried various combinations of having an LRU cache for ints as
+ # well, didn't really help.
+
+ # XXX one thing to do would be to use withintprebuilt in general again,
+ # hidden behind a 'we_are_jitted'
+
START = -10
END = 256
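
The small-int cache that the diff above introduces can be sketched in plain Python. This is an illustrative stand-in, not PyPy's actual implementation: the real `IntCache` stores ints wrapped by the object space, while here a boxed int is faked as a one-element list, and the `newint` method name is an assumption.

```python
class IntCache(object):
    """A cache of boxed ints between START (inclusive) and END (exclusive).

    Minimal sketch of the idea from the diff: pre-build one box per small
    int so that repeated decodes of the same value reuse the same object.
    """
    START = -10
    END = 256

    def __init__(self):
        # One prebuilt box per cached value, indexed by (value - START).
        self.cache = [self._box(i) for i in range(self.START, self.END)]

    @staticmethod
    def _box(value):
        # Stand-in for wrapping an int in the object space.
        return [value]

    def newint(self, intval):
        # Small ints hit the prebuilt table; everything else is boxed fresh.
        if self.START <= intval < self.END:
            return self.cache[intval - self.START]
        return self._box(intval)
```

A decoder would hold one such cache per decode and call `newint` for every integer it parses; only values in [-10, 256) are deduplicated, which matches the comment in the diff that a larger LRU cache for all ints did not help.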