From tom.augspurger88 at gmail.com Sat Aug 4 14:50:34 2018
From: tom.augspurger88 at gmail.com (Tom Augspurger)
Date: Sat, 4 Aug 2018 13:50:34 -0500
Subject: [Pandas-dev] ANN: Pandas 0.23.4 Released
Message-ID:

Hi all,

I'm happy to announce that pandas 0.23.4 has been released.
This is a minor bug-fix release in the 0.23.x series and includes some
regression fixes, bug fixes, and performance improvements. We recommend
that all users upgrade to this version.

See the full whatsnew for a list of all the changes.

The release can be installed with conda from the default channel and
conda-forge::

    conda install pandas

Or via PyPI:

    python -m pip install --upgrade pandas

A total of 4 people contributed to this release. People with a "+" by their
names contributed a patch for the first time.

* Jeff Reback
* Tom Augspurger
* chris-b1
* h-vetinari
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From me at pietrobattiston.it Sun Aug 5 03:05:54 2018
From: me at pietrobattiston.it (Pietro Battiston)
Date: Sun, 05 Aug 2018 09:05:54 +0200
Subject: [Pandas-dev] [pydata] ANN: Pandas 0.23.4 Released
In-Reply-To:
References:
Message-ID: <1533452754.2350.35.camel@pietrobattiston.it>

Hi Tom,

just wanted to signal that the page
https://pandas.pydata.org/pandas-docs/version/0.23.4/whatsnew.html
has a "pandas 0.23.3+8.g4aa80b6d6 documentation" header (while for instance
https://pandas.pydata.org/pandas-docs/version/0.23.3/whatsnew.html
has the expected "pandas 0.23.3 documentation").

Pietro

Il giorno sab, 04/08/2018 alle 13.50 -0500, Tom Augspurger ha scritto:
> Hi all,
> I'm happy to announce that pandas 0.23.4 has been released.
> This is a minor bug-fix release in the 0.23.x series and includes
> some regression fixes, bug fixes, and performance improvements. We
> recommend that all users upgrade to this version.
> See the full whatsnew for a list of all the changes.
> The release can be installed with conda from the default channel and
> conda-forge::
>     conda install pandas
> Or via PyPI:
>     python -m pip install --upgrade pandas
>
> A total of 4 people contributed to this release. People with a "+"
> by their names contributed a patch for the first time.
>
> * Jeff Reback
> * Tom Augspurger
> * chris-b1
> * h-vetinari
>
> --
> You received this message because you are subscribed to the Google
> Groups "PyData" group.
> To unsubscribe from this group and stop receiving emails from it,
> send an email to pydata+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
From jazzviewer at gmail.com Mon Aug 6 06:37:06 2018
From: jazzviewer at gmail.com (John Paul)
Date: Mon, 6 Aug 2018 06:37:06 -0400
Subject: [Pandas-dev] columns names issues using pd.read_csv
Message-ID:

Hi list,

I am pretty new to Python. I am trying to import the SMS Spam Collection
data using pandas read_csv. The import went well, but as the file does not
have a header I tried to include column names (variable names "status" and
"message") and ended up with an empty file.

Here is my code:

<<
import numpy as np
import pandas as pd
file_loc="C:\\Users\\User\Documents\\JP\\SMSCollection.txt"
df=pd.read_csv(file_loc,sep='\t')
>>

The above code works well; I got the [5571 rows x 2 columns]. But when I
add columns using the following line of code

df.columns=["status","message"]

I ended up with an empty df.

Any help on this?

Thanks
John
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
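For reference, a minimal sketch of the usual way to read a headerless, tab-separated file while assigning column names up front (the file name below is hypothetical; `header=None` tells `read_csv` not to treat the first data row as a header, and `names=` sets the labels at read time):

    import pandas as pd

    # Hypothetical path to the tab-separated SMS Spam Collection file.
    file_loc = "SMSSpamCollection.txt"

    # header=None: the file has no header row, so no data row is consumed as one.
    # names=[...]: assign the column labels while reading.
    df = pd.read_csv(file_loc, sep="\t", header=None, names=["status", "message"])

    print(df.shape)   # expected: (number of messages, 2)
    print(df.head())

Renaming afterwards with `df.columns = ["status", "message"]` also works as long as the list length matches the number of columns, so an empty result at that point most likely comes from something else in the surrounding code.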
From me at pietrobattiston.it Tue Aug 14 04:59:39 2018
From: me at pietrobattiston.it (Pietro Battiston)
Date: Tue, 14 Aug 2018 10:59:39 +0200
Subject: [Pandas-dev] Datetime (with timezone?) as extension array?
Message-ID: <1534237179.2549.26.camel@pietrobattiston.it>

Hi all,

I assumed that Datetime (with timezone, or maybe in general?) was also
planned to follow the extension array interface, which is related to
issue https://github.com/pandas-dev/pandas/issues/19041 , to the
annoying fact that datetimeindexwithtz._values returns the index
itself, and also to the fact that
https://pandas.pydata.org/pandas-docs/stable/extending.html
currently states "Pandas itself uses the extension system for some
types that aren't built into NumPy (categorical, period, interval,
datetime with timezone).", which is false.

... but I didn't find an issue for this? Did I miss it? Should I create
it? Or was there a decision to leave datetimeindextz as it is, maybe
for better compatibility with numpy?

Pietro

From tom.augspurger88 at gmail.com Tue Aug 14 07:13:54 2018
From: tom.augspurger88 at gmail.com (Tom Augspurger)
Date: Tue, 14 Aug 2018 06:13:54 -0500
Subject: [Pandas-dev] Datetime (with timezone?) as extension array?
In-Reply-To: <1534237179.2549.26.camel@pietrobattiston.it>
References: <1534237179.2549.26.camel@pietrobattiston.it>
Message-ID:

The discussion on datetime with timezone has been a bit scattered. I don't
think there's a single issue with everyone's thoughts.

There will be a DatetimeWithTZ array that implements the EA interface.
Anywhere we're internally using a DatetimeIndex as a
container for datetimes with timezones will use the new EA.

The unclear part is what `Series[datetime_with_tz].values` should be.
Currently, we convert to UTC, strip the timezone, and return
a datetime64[ns] ndarray. Changing that would be disruptive, jarringly
different from `Series[datetime].values` (no tz), and of little
value, I think.

Tom

On Tue, Aug 14, 2018 at 4:07 AM Pietro Battiston wrote:

> Hi all,
>
> I assumed that Datetime (with timezone, or maybe in general?) was also
> planned to follow the extension array interface, which is related to
> issue https://github.com/pandas-dev/pandas/issues/19041 , to the
> annoying fact that datetimeindexwithtz._values returns the index
> itself, and also to the fact that
> https://pandas.pydata.org/pandas-docs/stable/extending.html
> currently states "Pandas itself uses the extension system for some
> types that aren't built into NumPy (categorical, period, interval,
> datetime with timezone).", which is false.
>
> ... but I didn't find an issue for this? Did I miss it? Should I create
> it? Or was there a decision to leave datetimeindextz as it is, maybe
> for better compatibility with numpy?
>
> Pietro
> _______________________________________________
> Pandas-dev mailing list
> Pandas-dev at python.org
> https://mail.python.org/mailman/listinfo/pandas-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jbrockmendel at gmail.com Tue Aug 14 11:32:50 2018
From: jbrockmendel at gmail.com (Brock Mendel)
Date: Tue, 14 Aug 2018 08:32:50 -0700
Subject: [Pandas-dev] Datetime (with timezone?) as extension array?
In-Reply-To:
References: <1534237179.2549.26.camel@pietrobattiston.it>
Message-ID:

`DatetimeArray` is close to ready if you want to bring it over the finish
line. Pretty much all that has to be done is having `DatetimeArrayMixin`
subclass `ExtensionArray` (and, uh, implement the relevant EA methods).
If no one else picks this up, my current plan is to do this _after_ updating all of the relevant arithmetic tests to test DatetimeArrayMixin. > The unclear part is what `Series[datetime_with_tz].values` should be. I thought the conclusion was that `.values` should be non-lossy, in which case it would have to be the EA. My preference would be for the EA to be returned for non-tz datetime64[ns] Series too. For that matter, I'd like it if `Series.values` _always_ returned an EA, but we're not there yet. On Tue, Aug 14, 2018 at 4:13 AM, Tom Augspurger wrote: > The discussion on datetime with timezone has been a bit scattered. I don't > think there's a single issue with everyone's thoughts. > > There will be a DatetimeWithTZ array that implements the EA interface. > Anywhere we're internally using a DatetimeIndex as a > container for datetimes with timezones will use the new EA. > > The unclear part is what `Series[datetime_with_tz].values` should be. > Currently, we convert to UTC, strip the timezone, and return > a datetime64[ns] ndarray. Changing that would be disruptive, jarringly > different from `Series[datetime].values` (no tz) and of little > value I think. > > Tom > > On Tue, Aug 14, 2018 at 4:07 AM Pietro Battiston > wrote: > >> Hi all, >> >> I assumed that Datetime (with timezone, or maybe in general?) was also >> planned to follow the extension array interface, which is related to >> issue https://github.com/pandas-dev/pandas/issues/19041 , to the >> annoying fact that datetimeindexwithtz._values returns the index >> itself, and also to the fact that >> https://pandas.pydata.org/pandas-docs/stable/extending.html >> currently states "Pandas itself uses the extension system for some >> types that aren?t built into NumPy (categorical, period, interval, >> datetime with timezone).", which is false. >> >> ... but I didn't find an issue for this? Did I miss it? Should I create >> it? Or was there a decision to leave datetimeindextz as it is, maybe >> for better compatibility with numpy? >> >> Pietro >> _______________________________________________ >> Pandas-dev mailing list >> Pandas-dev at python.org >> https://mail.python.org/mailman/listinfo/pandas-dev >> > > _______________________________________________ > Pandas-dev mailing list > Pandas-dev at python.org > https://mail.python.org/mailman/listinfo/pandas-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.augspurger88 at gmail.com Tue Aug 14 11:43:09 2018 From: tom.augspurger88 at gmail.com (Tom Augspurger) Date: Tue, 14 Aug 2018 10:43:09 -0500 Subject: [Pandas-dev] Datetime (with timezone?) as extension array? In-Reply-To: References: <1534237179.2549.26.camel@pietrobattiston.it> Message-ID: I'm currently focused on Sparse, and after that maybe Period. We'll see who gets to DatetimeArray first :) > I thought the conclusion was that `.values` should be non-lossy I don't recall coming to a firm conclusion, but I could easily be mis-remembering. On Tue, Aug 14, 2018 at 10:32 AM Brock Mendel wrote: > `DatetimeArray` is close to ready if you want to bring it over the finish > line. Pretty much all that has to be done is having `DatetimeArrayMixin` > subclass `ExtensionArray` (and, uh, implement the relevant EA methods). If > no one else picks this up, my current plan is to do this _after_ updating > all of the relevant arithmetic tests to test DatetimeArrayMixin. > > > The unclear part is what `Series[datetime_with_tz].values` should be. 
> > I thought the conclusion was that `.values` should be non-lossy, in which > case it would have to be the EA. My preference would be for the EA to be > returned for non-tz datetime64[ns] Series too. > > For that matter, I'd like it if `Series.values` _always_ returned an EA, > but we're not there yet. > > > On Tue, Aug 14, 2018 at 4:13 AM, Tom Augspurger < > tom.augspurger88 at gmail.com> wrote: > >> The discussion on datetime with timezone has been a bit scattered. I >> don't think there's a single issue with everyone's thoughts. >> >> There will be a DatetimeWithTZ array that implements the EA interface. >> Anywhere we're internally using a DatetimeIndex as a >> container for datetimes with timezones will use the new EA. >> >> The unclear part is what `Series[datetime_with_tz].values` should be. >> Currently, we convert to UTC, strip the timezone, and return >> a datetime64[ns] ndarray. Changing that would be disruptive, jarringly >> different from `Series[datetime].values` (no tz) and of little >> value I think. >> >> Tom >> >> On Tue, Aug 14, 2018 at 4:07 AM Pietro Battiston >> wrote: >> >>> Hi all, >>> >>> I assumed that Datetime (with timezone, or maybe in general?) was also >>> planned to follow the extension array interface, which is related to >>> issue https://github.com/pandas-dev/pandas/issues/19041 , to the >>> annoying fact that datetimeindexwithtz._values returns the index >>> itself, and also to the fact that >>> https://pandas.pydata.org/pandas-docs/stable/extending.html >>> currently states "Pandas itself uses the extension system for some >>> types that aren?t built into NumPy (categorical, period, interval, >>> datetime with timezone).", which is false. >>> >>> ... but I didn't find an issue for this? Did I miss it? Should I create >>> it? Or was there a decision to leave datetimeindextz as it is, maybe >>> for better compatibility with numpy? >>> >>> Pietro >>> _______________________________________________ >>> Pandas-dev mailing list >>> Pandas-dev at python.org >>> https://mail.python.org/mailman/listinfo/pandas-dev >>> >> >> _______________________________________________ >> Pandas-dev mailing list >> Pandas-dev at python.org >> https://mail.python.org/mailman/listinfo/pandas-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at pietrobattiston.it Tue Aug 14 12:21:29 2018 From: me at pietrobattiston.it (Pietro Battiston) Date: Tue, 14 Aug 2018 18:21:29 +0200 Subject: [Pandas-dev] Datetime (with timezone?) as extension array? In-Reply-To: References: <1534237179.2549.26.camel@pietrobattiston.it> Message-ID: <1534263689.2549.30.camel@pietrobattiston.it> Il giorno mar, 14/08/2018 alle 08.32 -0700, Brock Mendel ha scritto: > `DatetimeArray` is close to ready if you want to bring it over the > finish line.? Pretty much all that has to be done is having > `DatetimeArrayMixin` subclass `ExtensionArray` (and, uh, implement > the relevant EA methods).? If no one else picks this up, my current > plan is to do this _after_ updating all of the relevant arithmetic > tests to test DatetimeArrayMixin. > > >?The unclear part is what `Series[datetime_with_tz].values` should > be. > > I thought the conclusion was that `.values` should be non-lossy, in > which case it would have to be the EA.? My preference would be for > the EA to be returned for non-tz datetime64[ns] Series too. Thanks for the clarifying comments. 
I just wanted to stress that my concern is not just about the
(problematic) issue of whether ``.values`` should drop the tz, but
first and foremost that

pd.Series([pd.Timestamp('2018-10-10', tz='utc')])._values

returns a (Datetime)Index.
That this is wrong is, I think, not controversial (right?), and
decoupling the datetime storage from the index interface should not per
se be a source of compatibility problems (and is, as far as I
understand, a required step towards using DatetimeArray - and removing
some hacks in the codebase).

... but maybe there is no issue for this just because it is a natural
part of the migration to DatetimeArray?

Pietro

From tom.augspurger88 at gmail.com Tue Aug 14 12:35:45 2018
From: tom.augspurger88 at gmail.com (Tom Augspurger)
Date: Tue, 14 Aug 2018 11:35:45 -0500
Subject: [Pandas-dev] Datetime (with timezone?) as extension array?
In-Reply-To: <1534263689.2549.30.camel@pietrobattiston.it>
References: <1534237179.2549.26.camel@pietrobattiston.it> <1534263689.2549.30.camel@pietrobattiston.it>
Message-ID:

On Tue, Aug 14, 2018 at 11:21 AM Pietro Battiston wrote:

> Il giorno mar, 14/08/2018 alle 08.32 -0700, Brock Mendel ha scritto:
> > `DatetimeArray` is close to ready if you want to bring it over the
> > finish line. Pretty much all that has to be done is having
> > `DatetimeArrayMixin` subclass `ExtensionArray` (and, uh, implement
> > the relevant EA methods). If no one else picks this up, my current
> > plan is to do this _after_ updating all of the relevant arithmetic
> > tests to test DatetimeArrayMixin.
> >
> > > The unclear part is what `Series[datetime_with_tz].values` should
> > > be.
> >
> > I thought the conclusion was that `.values` should be non-lossy, in
> > which case it would have to be the EA. My preference would be for
> > the EA to be returned for non-tz datetime64[ns] Series too.
>
> Thanks for the clarifying comments.
>
> I just wanted to stress that my concern is not just about the
> (problematic) issue of whether ``.values`` should drop the tz, but
> first and foremost that
>
> pd.Series([pd.Timestamp('2018-10-10', tz='utc')])._values
>
> returns a (Datetime)Index.
> That this is wrong is, I think, not controversial (right?), and
> decoupling the datetime storage from the index interface should not per
> se be a source of compatibility problems (and is, as far as I
> understand, a required step towards using DatetimeArray - and removing
> some hacks in the codebase).

Right, that's what I meant by "Anywhere we're internally using a
DatetimeIndex as a container for datetimes with timezones will use the new
EA." earlier. So `._values` will change from DTI to DatetimeArray.

> ... but maybe there is no issue for this just because it is a natural
> part of the migration to DatetimeArray?
>
> Pietro
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From garcia.marc at gmail.com Fri Aug 17 11:20:56 2018
From: garcia.marc at gmail.com (Marc Garcia)
Date: Fri, 17 Aug 2018 16:20:56 +0100
Subject: [Pandas-dev] pandas types
Message-ID:

I was thinking that it could be a good idea to start using pandas types
before pandas 1.0 (I think this change was assumed to happen sooner or
later).

Meaning that instead of something like `df.astype(numpy.uint8)` or
`df.astype('category')` users would have to use `df.astype(pandas.uint8)`
or `df.astype(pandas.category)`.

I see 3 advantages on doing it before 1.0:
- The API would be clearer and more consistent for users (and creating new
extension types will be more controlled).
- IMO users will be excited about migrating to pandas 1.0, and as the
change will be quite trivial for them, I think the adoption of the new
syntax will be faster than if left until later.
- I think it should allow us to make some internal changes transparently
(e.g. replacing numpy).

I think as a first version, the change could be almost as simple as
implementing the pandas types as classes extending a base class, with an
attribute that maps the current type. And then in every function/method
that receives a dtype, check if the type is a pandas type, do the lookup if
it is, and show a deprecation warning if it's not.

Does this make sense? Am I missing something?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From tom.augspurger88 at gmail.com Sat Aug 18 15:24:48 2018
From: tom.augspurger88 at gmail.com (Tom Augspurger)
Date: Sat, 18 Aug 2018 14:24:48 -0500
Subject: [Pandas-dev] pandas types
In-Reply-To:
References:
Message-ID:

Your third advantage is the most compelling to me.

I don't think we really have the developer bandwidth or expertise to
develop our own type system. And I don't think it'd be good
from an ecosystem perspective either, as we want fundamental things like
dtypes to be shared across projects. Currently that's
NumPy's dtype system. But I could maybe see the advantage of a very simple
system that wraps NumPy's (or someday Arrow's or some other library).

Wasn't there a dtypes BoF at SciPy this year? Did anything come of that?

On Fri, Aug 17, 2018 at 10:21 AM Marc Garcia wrote:

> I was thinking that it could be a good idea to start using pandas types
> before pandas 1.0 (I think this change was assumed to happen sooner or
> later).
>
> Meaning that instead of something like `df.astype(numpy.uint8)` or
> `df.astype('category')` users would have to use `df.astype(pandas.uint8)`
> or `df.astype(pandas.category)`.
>
> I see 3 advantages on doing it before 1.0:
> - The API would be clearer and more consistent for users (and creating
> new extension types will be more controlled).
> - IMO users will be excited about migrating to pandas 1.0, and as the
> change will be quite trivial for them, I think the adoption of the new
> syntax will be faster than if left until later.
> - I think it should allow us to make some internal changes transparently
> (e.g. replacing numpy).
>
> I think as a first version, the change could be almost as simple as
> implementing the pandas types as classes extending a base class, with an
> attribute that maps the current type. And then in every function/method
> that receives a dtype, check if the type is a pandas type, do the lookup
> if it is, and show a deprecation warning if it's not.
>
> Does this make sense? Am I missing something?
> _______________________________________________
> Pandas-dev mailing list
> Pandas-dev at python.org
> https://mail.python.org/mailman/listinfo/pandas-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From garcia.marc at gmail.com Sat Aug 18 15:39:03 2018
From: garcia.marc at gmail.com (Marc Garcia)
Date: Sat, 18 Aug 2018 20:39:03 +0100
Subject: [Pandas-dev] pandas types
In-Reply-To:
References:
Message-ID:

Sorry for the lack of context in my first email. A wrapper around numpy,
arrow (and possibly others) is what I had in mind, as well as a way to
abstract away from the user whether the type has a direct physical
representation (int, float) or not (category...).
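A rough sketch of the mechanism described in the first message of this thread: wrapper classes that carry the physical dtype currently backing them, plus a lookup that still accepts legacy dtypes but emits a deprecation warning. Every name here (`PandasType`, `resolve_dtype`, the registry) is made up for illustration and is not an existing pandas API:

    import warnings
    import numpy as np

    class PandasType:
        # Illustrative base class: a logical pandas type that records which
        # physical representation (NumPy today, maybe Arrow later) backs it.
        def __init__(self, name, physical):
            self.name = name
            self.physical = physical

        def __repr__(self):
            return "pandas." + self.name

    # Hypothetical instances; 'category' has no direct NumPy representation.
    uint8 = PandasType("uint8", np.dtype("uint8"))
    category = PandasType("category", None)

    # Mapping from the dtypes users pass today to the new wrapper types.
    _LEGACY = {np.uint8: uint8, np.dtype("uint8"): uint8, "category": category}

    def resolve_dtype(dtype):
        # What a function/method receiving a dtype could do: accept the new
        # wrappers directly, translate legacy ones with a warning, reject the rest.
        if isinstance(dtype, PandasType):
            return dtype
        if dtype in _LEGACY:
            warnings.warn("passing %r is deprecated, use %r instead"
                          % (dtype, _LEGACY[dtype]), FutureWarning)
            return _LEGACY[dtype]
        raise TypeError("unknown dtype: %r" % (dtype,))

    print(resolve_dtype(uint8))       # pandas.uint8
    print(resolve_dtype(np.uint8))    # pandas.uint8, after a FutureWarning
    print(resolve_dtype("category"))  # pandas.category, after a FutureWarning

Whether the wrapper stores a NumPy dtype, an Arrow type, or nothing at all then becomes an internal detail that can change without touching user code.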
This document (I guess Wes wrote it), is why I was assuming this was already in the agenda: https://pandas-dev.github.io/pandas2/internal-architecture.html#high-level-logical-type-proposal My proposal wasn't anything else besides what the document says. I was just proposing to make the change (at least the API part) sooner rather than later. IMO ideally before pandas 1.0, for the reasons I mentioned. On Sat, Aug 18, 2018 at 8:25 PM Tom Augspurger wrote: > Your third advantage is the most compelling to me. > > I don't think we really have the developer bandwidth or expertise to > develop our own type system. And I don't think it'd be a good > from an ecosystem perspective either, as we want fundamental things like > dtypes to be shared across projects. Currently that's > NumPy's dtype system. But I could maybe see the advantage of a very simple > system that wraps NumPy's (or someday Arrow's > or some other library). > > Wasn't there a dtypes BoF at SciPy this year? Did anything come of that? > > > > > On Fri, Aug 17, 2018 at 10:21 AM Marc Garcia > wrote: > >> I was thinking that it could be a good idea to start using pandas types >> before pandas 1.0 (I think this change was assumed to happen sooner or >> later). >> >> Meaning that instead of something like `df.astype(numpy.uint8)` or >> `df.astype('category')` users would have to use `df.astype(pandas.uint8)` >> or `df.astype(pandas.category)`. >> >> I see 3 advantages on doing it before 1.0: >> - The API would be clearer and more consistent for users (and creating >> new extension types will be more controlled). >> - IMO users will be excited about migrating to pandas 1.0, and as the >> change will be quite trivial for them, I think the adoption of the new >> syntax will be faster, than if left until later. >> - I think it should allow us to make some internal changes transparently >> (e.g. replacing numpy). >> >> I think as a first version, the change could be almost as simple as >> implementing the pandas types as classes extending a base class, with an >> attribute that maps the current type. And then in every function/method >> that receives a dtype, check if the type is a pandas type, do the lookup if >> it is, and show a deprecation warning if it's not. >> >> Does this make sense? Am I missing something? >> _______________________________________________ >> Pandas-dev mailing list >> Pandas-dev at python.org >> https://mail.python.org/mailman/listinfo/pandas-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jorisvandenbossche at gmail.com Wed Aug 29 04:35:46 2018 From: jorisvandenbossche at gmail.com (Joris Van den Bossche) Date: Wed, 29 Aug 2018 10:35:46 +0200 Subject: [Pandas-dev] pandas types In-Reply-To: References: Message-ID: Op za 18 aug. 2018 om 21:39 schreef Marc Garcia : > Sorry for the lack of context in my first email. A wapper around numpy, > arrow (and possibly others) is what I had in mind. As well as a way to > abstract the user on whether the type has a direct physical representation > (int, float) or not (category...). > I am not fully sure how possible this is in practice with current numpy. Eg a custom dtype class can never be compared to a numpy dtype (it will always raise a TypeError if numpy does not recognize it) due to the way numpy has implemented dtype comparisons. To actually write a dtype object that is compatible with numpy, I think this can currently only be done in C by writing an actual new numpy dtype (but I might be wrong here). 
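To make the comparison problem concrete, a tiny sketch with a made-up dtype class (`MyLogicalDtype` is purely illustrative). As described above, the NumPy of that era raised TypeError when its dtype's `==` received an object it could not coerce to a dtype; later NumPy versions relaxed this to a warning and/or False, which is why the second comparison is wrapped in try/except:

    import numpy as np

    class MyLogicalDtype:
        # Stand-in for a custom, non-NumPy dtype object.
        name = "my_logical_int64"

        def __eq__(self, other):
            return isinstance(other, MyLogicalDtype)

        def __hash__(self):
            return hash(self.name)

    custom = MyLogicalDtype()

    # Our own __eq__ runs first here, so this is simply False.
    print(custom == np.dtype("int64"))

    # The painful direction: np.dtype.__eq__ tries to coerce `custom` to a
    # dtype before comparing, and when that coercion fails the comparison
    # can blow up instead of returning False (exact behaviour depends on
    # the NumPy version).
    try:
        print(np.dtype("int64") == custom)
    except TypeError as exc:
        print("TypeError:", exc)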
So I am not sure that a simple system that wraps numpy's dtypes is actually possible. I agree with the points you raise for why we would want our own dtype objects, and I also think we should do this in the long term. But I doubt that it can currently be done without a big backwards compatibility break (even the light wrapping to provide a consistent experience to our users). And if that is the case, I don't think we should consider that for pandas 1.0 Joris > This document (I guess Wes wrote it), is why I was assuming this was > already in the agenda: > https://pandas-dev.github.io/pandas2/internal-architecture.html#high-level-logical-type-proposal > > My proposal wasn't anything else besides what the document says. I was > just proposing to make the change (at least the API part) sooner rather > than later. IMO ideally before pandas 1.0, for the reasons I mentioned. > > On Sat, Aug 18, 2018 at 8:25 PM Tom Augspurger > wrote: > >> Your third advantage is the most compelling to me. >> >> I don't think we really have the developer bandwidth or expertise to >> develop our own type system. And I don't think it'd be a good >> from an ecosystem perspective either, as we want fundamental things like >> dtypes to be shared across projects. Currently that's >> NumPy's dtype system. But I could maybe see the advantage of a very >> simple system that wraps NumPy's (or someday Arrow's >> or some other library). >> >> Wasn't there a dtypes BoF at SciPy this year? Did anything come of that? >> >> >> >> >> On Fri, Aug 17, 2018 at 10:21 AM Marc Garcia >> wrote: >> >>> I was thinking that it could be a good idea to start using pandas types >>> before pandas 1.0 (I think this change was assumed to happen sooner or >>> later). >>> >>> Meaning that instead of something like `df.astype(numpy.uint8)` or >>> `df.astype('category')` users would have to use `df.astype(pandas.uint8)` >>> or `df.astype(pandas.category)`. >>> >>> I see 3 advantages on doing it before 1.0: >>> - The API would be clearer and more consistent for users (and creating >>> new extension types will be more controlled). >>> - IMO users will be excited about migrating to pandas 1.0, and as the >>> change will be quite trivial for them, I think the adoption of the new >>> syntax will be faster, than if left until later. >>> - I think it should allow us to make some internal changes transparently >>> (e.g. replacing numpy). >>> >>> I think as a first version, the change could be almost as simple as >>> implementing the pandas types as classes extending a base class, with an >>> attribute that maps the current type. And then in every function/method >>> that receives a dtype, check if the type is a pandas type, do the lookup if >>> it is, and show a deprecation warning if it's not. >>> >>> Does this make sense? Am I missing something? >>> _______________________________________________ >>> Pandas-dev mailing list >>> Pandas-dev at python.org >>> https://mail.python.org/mailman/listinfo/pandas-dev >>> >> _______________________________________________ > Pandas-dev mailing list > Pandas-dev at python.org > https://mail.python.org/mailman/listinfo/pandas-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL:
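Circling back to the earlier datetime-with-timezone thread: the behaviour discussed there can be seen directly on the 0.23.x series with a couple of lines. This is only an illustration of what Tom and Pietro describe above (`.values` converting to UTC and dropping the timezone, `._values` returning an index); the exact return types are version-dependent and were expected to change with the DatetimeArray work:

    import pandas as pd

    s = pd.Series([pd.Timestamp("2018-10-10", tz="UTC")])
    print(s.dtype)  # datetime64[ns, UTC]

    # .values: converted to UTC, timezone stripped, plain NumPy array,
    # as described by Tom above.
    print(type(s.values), s.values.dtype)

    # ._values: on 0.23.x this is a DatetimeIndex rather than an array-like,
    # which is the point Pietro raises.
    print(type(s._values))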