From garcia.marc at gmail.com Sat Nov 5 10:23:39 2022
From: garcia.marc at gmail.com (Marc Garcia)
Date: Sat, 5 Nov 2022 21:23:39 +0700
Subject: [Pandas-dev] pandas new infrastructure (OVH donation)
Message-ID:

Hi all,

pandas has received a donation from OVHcloud to support the project infrastructure, in the form of OVHcloud public cloud credits (an initial amount of 10,000 EUR for a period of one year). OVH is open to sponsoring longer term, and also other projects of the ecosystem (or NumFOCUS as a whole), but we started with this to get feedback at a smaller scale first.

The credits will be used initially for:
- Hosting of the pandas website
- Running the pandas benchmarks
- Speeding up the project CI

I detail next what I have in mind to set up for each. If anyone is interested in getting involved, or has ideas, comments... please let me know. I'll publish updates here as there is progress on this.

- Website: I'm planning to experiment with splitting the website in two (it'll be transparent for users). The website and the stable docs, which receive most of the traffic, can probably be stored in Cloudflare Pages. We're already using Cloudflare as a CDN, so instead of using it as a cache, we can publish the documents there. The rest of the docs (old versions and the dev version) can be hosted in OVHcloud bucket storage. Response times may be a bit slower, but our website is bigger than the Cloudflare quota, and keeping rarely accessed old docs in a CDN seems unnecessary anyway.

- Benchmarks: OVHcloud instances have guaranteed hardware, and we'll be checking whether this is enough for the benchmark results to be consistent over runs, or whether there is too much variability and we need dedicated hardware. If consistency is good enough that would be great, since our benchmarks mostly use one core, and dedicated hardware would likely be a decent waste of resources, since most servers will have 16 cores or more. We'll discuss with OVH whether dedicated hardware is needed, as at the moment their public cloud doesn't offer it (there is an alpha for providing dedicated instances, but we need to check with them).

- Faster CI: Our GitHub runners are small, and most builds take around one hour or more to finish. We should be able to use bigger OVH instances for our existing CI pretty easily, via their OpenStack API and CIrun.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jorisvandenbossche at gmail.com Tue Nov 8 18:11:05 2022
From: jorisvandenbossche at gmail.com (Joris Van den Bossche)
Date: Wed, 9 Nov 2022 00:11:05 +0100
Subject: [Pandas-dev] November 2022 monthly community meeting (Wednesday November 9, UTC 18:00)
In-Reply-To:
References:
Message-ID:

Hi all,

A reminder that the next monthly dev call is tomorrow (Wednesday, November 9) at 18:00 UTC (note we kept the same time in UTC, which means it will typically have shifted one hour in your local time zone!). Check our calendar at https://pandas.pydata.org/docs/development/meeting.html#calendar for your local time.

All are welcome to attend!
Video Call: https://us06web.zoom.us/j/84484803210?pwd=TjUxNmcyNHcvcG9SNGJvbE53Y21GZz09 Meeting notes: https://docs.google.com/document/u/1/d/1tGbTiYORHiSPgVMXawiweGJlBw5dOkVJLY-licoBmBU/edit?ouid=102771015311436394588&usp=docs_home&ths=true Joris -------------- next part -------------- An HTML attachment was scrubbed... URL: From jorisvandenbossche at gmail.com Wed Nov 9 12:43:08 2022 From: jorisvandenbossche at gmail.com (Joris Van den Bossche) Date: Wed, 9 Nov 2022 18:43:08 +0100 Subject: [Pandas-dev] pandas new infrastructure (OVH donation) In-Reply-To: References: Message-ID: On Sat, 5 Nov 2022 at 15:24, Marc Garcia wrote: > Hi all, > > pandas has received a donation from OVHcloud > to support the project infrastructure, with OVHcloud public cloud credits > (an initial amount of 10,000 EUR for a period of one year). OVH is open to > sponsor longer term and also other projects of the ecosystem (or NumFOCUS > as a whole), but we started with this to have feedback at a smaller scale > first. > > The credits will be used initially for: > - Hosting of the pandas website > - Running the pandas benchmarks > - Speeding up the project CI > > I detail next what I have in mind to set up for each. If anyone is > interested in getting involved, or has ideas, comments... please let me > know. I'll publish updates here as there is progress on this. > > > Website: I'm planning to experiment on splitting the website in two (it'll > be transparent for users). The website and the stable docs which receive > most of the traffic can probably be stored in Cloudflare pages. We're > already using Cloudflare as a CDN, so instead of using it as a cache, we > can publish the documents there. The rest of the docs (old versions and the > dev version) can be hosted in bucket storage of the OVHcloud. Response > times may be a bit slower, but our website is bigger than the Cloudflare > quota, and having old docs rarely accessed in a CDN seems unnecessary > anyway. > Splitting like that makes sense! (_if_ it is within quota, we could maybe consider keeping the dev docs, and only move old docs to bucket storage?) > > - Benchmarks: OVHcloud instances have guaranteed hardware, and we'll be > checking if this is enough for the results of the benchmarks to be > consistent over runs, or if there is too much variability and we need > dedicated hardware. If consistency is good enough that would be great, > since our benchmarks mostly use one core, and using dedicated hardware is > likely to be a decent waste of resources, since most servers will likely > have 16 cores or more. We'll discuss with OVH if dedicated hardware is > needed, as at the moment their public cloud doesn't offer it (there is an > alpha for providing dedicated instances, but we need to check with them). > > - Faster CI: Our GitHub runners are small, and most builds take around one > hour or more to finish. We should be able to use bigger OVH instances for > our existing CI pretty easily, via their OpenStack API and CIrun. > I am not familiar with CIrun, but quickly checking it, that would basically be using our current github actions but through their "self-hosted" runner feature? > _______________________________________________ > Pandas-dev mailing list > Pandas-dev at python.org > https://mail.python.org/mailman/listinfo/pandas-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From garcia.marc at gmail.com Wed Nov 9 23:50:21 2022
From: garcia.marc at gmail.com (Marc Garcia)
Date: Thu, 10 Nov 2022 11:50:21 +0700
Subject: [Pandas-dev] pandas new infrastructure (OVH donation)
In-Reply-To:
References:
Message-ID:

Some updates (the ones shared in yesterday's call, and some new ones).

The cloud (bucket) storage didn't seem convenient for several reasons, so I moved forward with a regular Ubuntu instance (the cheapest: 2 cores, 7 GB RAM, 24 EUR/month). I have now moved all the traffic to the new instance, and since we only serve static files, the instance seems to be more than enough to handle our traffic (I didn't see CPU or RAM exceed 4% usage in the time I've been monitoring the resources). I've got a PR open (#49614) to start syncing our web/docs with the new server. In a few hours I'll stop nginx on the old server (I confirmed there is no traffic there anymore; since we use Cloudflare, our DNS changes are immediate). And in a few days I'll switch off the instance at Rackspace.

Besides the open PR, the only missing piece is the benchmarks at (pandas.pydata.org/speed). The link is not working now, since I didn't move the benchmarks yet. But before moving this, we should also make the changes in the benchmarks repo, so benchmark results start to synchronize with the new server. Can someone with access to that server take care of it, please? (DM me for the new server info.)

On running the benchmarks on OVH, the VM instances don't seem to be stable enough to keep track of performance over time, as seemed likely. Full results of the tests I did are in this repo: https://gitlab.com/datapythonista/pandas_ovh_benchmarks . OVH is checking the best way to give us access to dedicated hardware, and I will continue with that once we've got it. In parallel to that, I'm planning to do some tests to see if it could be feasible to use valgrind's cachegrind (or an equivalent) to monitor CPU cycles instead of wall time. That should make benchmarking much easier and faster, as any hardware would work, and benchmarks could be run in parallel. With a dedicated server we're likely to only be able to use a single core to get stable results, which means that we can only run one benchmark suite per server every 3 hours. But implementing it can be tricky.

About CIrun, as you say Joris, it's like a middleman between our hardware (the OVH OpenStack API to create/delete instances) and GitHub Actions. We need to add an extra YAML file with the CIrun configuration, and other than that we should be able to use OVH hardware directly from our current CI jobs without changes (except, I assume, one entry to say which instance to use for the jobs running on OVH).

Please let me know of any feedback, in particular if you see any problem with our website that could be caused by the migration.

Cheers,

On Thu, Nov 10, 2022 at 12:43 AM Joris Van den Bossche < jorisvandenbossche at gmail.com> wrote: > > > On Sat, 5 Nov 2022 at 15:24, Marc Garcia wrote: > >> Hi all, >> >> pandas has received a donation from OVHcloud >> to support the project infrastructure, with OVHcloud public cloud credits >> (an initial amount of 10,000 EUR for a period of one year). OVH is open to >> sponsor longer term and also other projects of the ecosystem (or NumFOCUS >> as a whole), but we started with this to have feedback at a smaller scale >> first.
>> >> The credits will be used initially for: >> - Hosting of the pandas website >> - Running the pandas benchmarks >> - Speeding up the project CI >> >> I detail next what I have in mind to set up for each. If anyone is >> interested in getting involved, or has ideas, comments... please let me >> know. I'll publish updates here as there is progress on this. >> >> >> Website: I'm planning to experiment on splitting the website in two >> (it'll be transparent for users). The website and the stable docs which >> receive most of the traffic can probably be stored in Cloudflare pages. >> We're already using Cloudflare as a CDN, so instead of using it as a cache, >> we can publish the documents there. The rest of the docs (old versions and >> the dev version) can be hosted in bucket storage of the OVHcloud. Response >> times may be a bit slower, but our website is bigger than the Cloudflare >> quota, and having old docs rarely accessed in a CDN seems unnecessary >> anyway. >> > > Splitting like that makes sense! (_if_ it is within quota, we could maybe > consider keeping the dev docs, and only move old docs to bucket storage?) > > >> >> - Benchmarks: OVHcloud instances have guaranteed hardware, and we'll be >> checking if this is enough for the results of the benchmarks to be >> consistent over runs, or if there is too much variability and we need >> dedicated hardware. If consistency is good enough that would be great, >> since our benchmarks mostly use one core, and using dedicated hardware is >> likely to be a decent waste of resources, since most servers will likely >> have 16 cores or more. We'll discuss with OVH if dedicated hardware is >> needed, as at the moment their public cloud doesn't offer it (there is an >> alpha for providing dedicated instances, but we need to check with them). >> >> - Faster CI: Our GitHub runners are small, and most builds take around >> one hour or more to finish. We should be able to use bigger OVH instances >> for our existing CI pretty easily, via their OpenStack API and CIrun. >> > > I am not familiar with CIrun, but quickly checking it, that would > basically be using our current github actions but through their > "self-hosted" runner feature? > > >> _______________________________________________ >> Pandas-dev mailing list >> Pandas-dev at python.org >> https://mail.python.org/mailman/listinfo/pandas-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rhshadrach at gmail.com Thu Nov 10 09:18:23 2022 From: rhshadrach at gmail.com (Richard Shadrach) Date: Thu, 10 Nov 2022 09:18:23 -0500 Subject: [Pandas-dev] pandas new infrastructure (OVH donation) In-Reply-To: References: Message-ID: > Besides the open PR, the only missing thing are the benchmarks at ( pandas.pydata.org/speed). The link is not working now, since I didn't move the benchmarks yet. But before moving this, we should also make the changes in the benchmarks repo, so benchmark results start to synchronize with the new server. Can someone with access to the server take care of it please (DM for the new server info). The link https://asv-runner.github.io/asv-collection/pandas/ is being automatically updated. Can we point to this URL for now, given that we may be changing how the benchmarks are run? If it's desirable to have the benchmarks results on the docs server and our current solution is deemed to be the long term one, I can work on the synchronization. 
However I'm resistant to putting in that work if it's just going to go away given the easier solution. Best, Richard On Wed, Nov 9, 2022 at 11:50 PM Marc Garcia wrote: > Some updates (the ones shared in yesterday's call, and some new ones. > > The cloud (bucket) storage didn't seem convenient for different reasons, > so I moved forward with a regular Ubuntu instance (the cheapest, 2 cores, > 7Gb ram, 24 EUR/month). I moved now all the traffic to the new instance, > and since we've just got static file serving, the instance seems to be more > than enough to handle our traffic (I didn't see CPU or RAM exceed 4% usage > in the time I've been monitoring the resources). I've got a PR open > (#49614) to start syncing our web/docs with the new server. In few hours > I'll stop the nginx in the old server (I confirmed there is no traffic > already, since we use cloudflare our dns changes are immediate). And in few > days I'll switch off the instance in rackspace. > > Besides the open PR, the only missing thing are the benchmarks at ( > pandas.pydata.org/speed). The link is not working now, since I didn't > move the benchmarks yet. But before moving this, we should also make the > changes in the benchmarks repo, so benchmark results start to synchronize > with the new server. Can someone with access to the server take care of it > please (DM for the new server info). > > On running the benchmarks in OVH, the VM instances don't seem to be stable > enough to keep track of performance over time, as it was likely. Full > results of the tests I did are in this repo: > https://gitlab.com/datapythonista/pandas_ovh_benchmarks . OVH is checking > the best way to give us access to dedicated hardware, will continue with > that once we've got it. In parallel to that, I'm planning to do some tests > to see if it could be feasible to use valgrind's cachegrind (or equivalent) > to instead of monitor time, we monitor CPU cycles. That should make > benchmarking much easier and faster, as any hardware would work, and > benchmarks could be run in parallel. With a dedicated server we're likely > to only be able to use a single core to have stable results, which means > that we can only run one benchmark suite per server every 3 hours. But > implementing it can be tricky. > > About CIrun, as you say Joris, it's like a middle man between our hardware > (the OVH openstack API to create/delete instances) and GitHub actions. We > need to add an extra yaml file with the CIrun configuration, and other than > that we should be able to use OVH hardware directly from our current CI > jobs without changes (except one entry to say what instance we want to use > for the jobs running in OVH I assume). > > Please let me know of any feedback. In particular if you see any problem > with our website that could be caused by the migration. > > Cheers, > > On Thu, Nov 10, 2022 at 12:43 AM Joris Van den Bossche < > jorisvandenbossche at gmail.com> wrote: > >> >> >> On Sat, 5 Nov 2022 at 15:24, Marc Garcia wrote: >> >>> Hi all, >>> >>> pandas has received a donation from OVHcloud >>> to support the project infrastructure, with OVHcloud public cloud credits >>> (an initial amount of 10,000 EUR for a period of one year). OVH is open to >>> sponsor longer term and also other projects of the ecosystem (or NumFOCUS >>> as a whole), but we started with this to have feedback at a smaller scale >>> first. 
>>> >>> The credits will be used initially for: >>> - Hosting of the pandas website >>> - Running the pandas benchmarks >>> - Speeding up the project CI >>> >>> I detail next what I have in mind to set up for each. If anyone is >>> interested in getting involved, or has ideas, comments... please let me >>> know. I'll publish updates here as there is progress on this. >>> >>> >>> Website: I'm planning to experiment on splitting the website in two >>> (it'll be transparent for users). The website and the stable docs which >>> receive most of the traffic can probably be stored in Cloudflare pages. >>> We're already using Cloudflare as a CDN, so instead of using it as a cache, >>> we can publish the documents there. The rest of the docs (old versions and >>> the dev version) can be hosted in bucket storage of the OVHcloud. Response >>> times may be a bit slower, but our website is bigger than the Cloudflare >>> quota, and having old docs rarely accessed in a CDN seems unnecessary >>> anyway. >>> >> >> Splitting like that makes sense! (_if_ it is within quota, we could maybe >> consider keeping the dev docs, and only move old docs to bucket storage?) >> >> >>> >>> - Benchmarks: OVHcloud instances have guaranteed hardware, and we'll be >>> checking if this is enough for the results of the benchmarks to be >>> consistent over runs, or if there is too much variability and we need >>> dedicated hardware. If consistency is good enough that would be great, >>> since our benchmarks mostly use one core, and using dedicated hardware is >>> likely to be a decent waste of resources, since most servers will likely >>> have 16 cores or more. We'll discuss with OVH if dedicated hardware is >>> needed, as at the moment their public cloud doesn't offer it (there is an >>> alpha for providing dedicated instances, but we need to check with them). >>> >>> - Faster CI: Our GitHub runners are small, and most builds take around >>> one hour or more to finish. We should be able to use bigger OVH instances >>> for our existing CI pretty easily, via their OpenStack API and CIrun. >>> >> >> I am not familiar with CIrun, but quickly checking it, that would >> basically be using our current github actions but through their >> "self-hosted" runner feature? >> >> >>> _______________________________________________ >>> Pandas-dev mailing list >>> Pandas-dev at python.org >>> https://mail.python.org/mailman/listinfo/pandas-dev >>> >> _______________________________________________ > Pandas-dev mailing list > Pandas-dev at python.org > https://mail.python.org/mailman/listinfo/pandas-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From garcia.marc at gmail.com Thu Nov 10 09:31:39 2022 From: garcia.marc at gmail.com (Marc Garcia) Date: Thu, 10 Nov 2022 21:31:39 +0700 Subject: [Pandas-dev] pandas new infrastructure (OVH donation) In-Reply-To: References: Message-ID: Oh, I forgot we were not using the rendered asv website from the old server. We're using nginx, so I can easily make pandas.pydata.org/speed show the content from that url. But I guess we can also check them directly in the github pages url, not sure if it makes a difference. Let me know if it's useful, and I'll set it up. Thanks for the info! On Thu, Nov 10, 2022, 21:18 Richard Shadrach wrote: > > Besides the open PR, the only missing thing are the benchmarks at ( > pandas.pydata.org/speed). The link is not working now, since I didn't > move the benchmarks yet. 
But before moving this, we should also make the > changes in the benchmarks repo, so benchmark results start to synchronize > with the new server. Can someone with access to the server take care of it > please (DM for the new server info). > > The link https://asv-runner.github.io/asv-collection/pandas/ is being > automatically updated. Can we point to this URL for now, given that we may > be changing how the benchmarks are run? If it's desirable to have the > benchmarks results on the docs server and our current solution is deemed to > be the long term one, I can work on the synchronization. However I'm > resistant to putting in that work if it's just going to go away given the > easier solution. > > Best, > Richard > > > On Wed, Nov 9, 2022 at 11:50 PM Marc Garcia wrote: > >> Some updates (the ones shared in yesterday's call, and some new ones. >> >> The cloud (bucket) storage didn't seem convenient for different reasons, >> so I moved forward with a regular Ubuntu instance (the cheapest, 2 cores, >> 7Gb ram, 24 EUR/month). I moved now all the traffic to the new instance, >> and since we've just got static file serving, the instance seems to be more >> than enough to handle our traffic (I didn't see CPU or RAM exceed 4% usage >> in the time I've been monitoring the resources). I've got a PR open >> (#49614) to start syncing our web/docs with the new server. In few hours >> I'll stop the nginx in the old server (I confirmed there is no traffic >> already, since we use cloudflare our dns changes are immediate). And in few >> days I'll switch off the instance in rackspace. >> >> Besides the open PR, the only missing thing are the benchmarks at ( >> pandas.pydata.org/speed). The link is not working now, since I didn't >> move the benchmarks yet. But before moving this, we should also make the >> changes in the benchmarks repo, so benchmark results start to synchronize >> with the new server. Can someone with access to the server take care of it >> please (DM for the new server info). >> >> On running the benchmarks in OVH, the VM instances don't seem to be >> stable enough to keep track of performance over time, as it was likely. >> Full results of the tests I did are in this repo: >> https://gitlab.com/datapythonista/pandas_ovh_benchmarks . OVH is >> checking the best way to give us access to dedicated hardware, will >> continue with that once we've got it. In parallel to that, I'm planning to >> do some tests to see if it could be feasible to use valgrind's cachegrind >> (or equivalent) to instead of monitor time, we monitor CPU cycles. That >> should make benchmarking much easier and faster, as any hardware would >> work, and benchmarks could be run in parallel. With a dedicated server >> we're likely to only be able to use a single core to have stable results, >> which means that we can only run one benchmark suite per server every 3 >> hours. But implementing it can be tricky. >> >> About CIrun, as you say Joris, it's like a middle man between our >> hardware (the OVH openstack API to create/delete instances) and GitHub >> actions. We need to add an extra yaml file with the CIrun configuration, >> and other than that we should be able to use OVH hardware directly from our >> current CI jobs without changes (except one entry to say what instance we >> want to use for the jobs running in OVH I assume). >> >> Please let me know of any feedback. In particular if you see any problem >> with our website that could be caused by the migration. 
>> >> Cheers, >> >> On Thu, Nov 10, 2022 at 12:43 AM Joris Van den Bossche < >> jorisvandenbossche at gmail.com> wrote: >> >>> >>> >>> On Sat, 5 Nov 2022 at 15:24, Marc Garcia wrote: >>> >>>> Hi all, >>>> >>>> pandas has received a donation from OVHcloud >>>> to support the project infrastructure, >>>> with OVHcloud public cloud credits (an initial amount of 10,000 EUR for a >>>> period of one year). OVH is open to sponsor longer term and also other >>>> projects of the ecosystem (or NumFOCUS as a whole), but we started with >>>> this to have feedback at a smaller scale first. >>>> >>>> The credits will be used initially for: >>>> - Hosting of the pandas website >>>> - Running the pandas benchmarks >>>> - Speeding up the project CI >>>> >>>> I detail next what I have in mind to set up for each. If anyone is >>>> interested in getting involved, or has ideas, comments... please let me >>>> know. I'll publish updates here as there is progress on this. >>>> >>>> >>>> Website: I'm planning to experiment on splitting the website in two >>>> (it'll be transparent for users). The website and the stable docs which >>>> receive most of the traffic can probably be stored in Cloudflare pages. >>>> We're already using Cloudflare as a CDN, so instead of using it as a cache, >>>> we can publish the documents there. The rest of the docs (old versions and >>>> the dev version) can be hosted in bucket storage of the OVHcloud. Response >>>> times may be a bit slower, but our website is bigger than the Cloudflare >>>> quota, and having old docs rarely accessed in a CDN seems unnecessary >>>> anyway. >>>> >>> >>> Splitting like that makes sense! (_if_ it is within quota, we could >>> maybe consider keeping the dev docs, and only move old docs to bucket >>> storage?) >>> >>> >>>> >>>> - Benchmarks: OVHcloud instances have guaranteed hardware, and we'll be >>>> checking if this is enough for the results of the benchmarks to be >>>> consistent over runs, or if there is too much variability and we need >>>> dedicated hardware. If consistency is good enough that would be great, >>>> since our benchmarks mostly use one core, and using dedicated hardware is >>>> likely to be a decent waste of resources, since most servers will likely >>>> have 16 cores or more. We'll discuss with OVH if dedicated hardware is >>>> needed, as at the moment their public cloud doesn't offer it (there is an >>>> alpha for providing dedicated instances, but we need to check with them). >>>> >>>> - Faster CI: Our GitHub runners are small, and most builds take around >>>> one hour or more to finish. We should be able to use bigger OVH instances >>>> for our existing CI pretty easily, via their OpenStack API and CIrun. >>>> >>> >>> I am not familiar with CIrun, but quickly checking it, that would >>> basically be using our current github actions but through their >>> "self-hosted" runner feature? >>> >>> >>>> _______________________________________________ >>>> Pandas-dev mailing list >>>> Pandas-dev at python.org >>>> https://mail.python.org/mailman/listinfo/pandas-dev >>>> >>> _______________________________________________ >> Pandas-dev mailing list >> Pandas-dev at python.org >> https://mail.python.org/mailman/listinfo/pandas-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.e.gorelli at gmail.com Fri Nov 11 10:08:17 2022 From: m.e.gorelli at gmail.com (Marco Gorelli) Date: Fri, 11 Nov 2022 15:08:17 +0000 Subject: [Pandas-dev] Is it time for a bi-weekly pandas call? 
In-Reply-To:
References:
Message-ID:

Cool stuff. Unless there are any objections, then, let's add a call for the 4th week of each month (same day of the week and time as the current one).

On Wed, Oct 19, 2022 at 1:46 PM Richard Shadrach wrote: > I would also like bi-weekly. > > Best, > Richard > > On Tue, Oct 18, 2022, 12:34 Matthew Roeschke > wrote: > >> I would be interested in meeting twice a month. >> >> On Sun, Oct 16, 2022 at 9:35 PM Marc Garcia >> wrote: >> >>> I'm personally happy with both frequencies, no preference. >>> >>> Couple of related things: >>> - Since this call is where a lot of the decision making happens, would >>> it make sense to discuss this as part of the governance discussions? >>> - Maybe worth also discussing the time of the call? I think the current >>> time is quite reasonable for many time zones, and it necessarily needs to be >>> night time in some places during the call. But I wonder if it'd make a >>> difference to contributors or potential >>> contributors in India, China... if we have the call a bit later or move it one or two hours >>> earlier, and how this affects people in California or Hawaii. This shows >>> the current time in different time zones: >>> https://www.timeanddate.com/worldclock/converter.html?iso=20221107T180000&p1=103&p2=224&p3=179&p4=233&p5=1440&p6=125&p7=166&p8=776&p9=176&p10=28&p11=237&p12=240 >>> >>> On Fri, Oct 14, 2022 at 5:13 PM Marco Gorelli >>> wrote: >>> >>>> Currently, there's a monthly pandas call, which has been in place for >>>> several years. Seeing as there are now more people working on pandas as part >>>> of their jobs, might it be time to increase the frequency? E.g. to meet >>>> every 2 weeks, instead of once a month? >>>> _______________________________________________ >>>> Pandas-dev mailing list >>>> Pandas-dev at python.org >>>> https://mail.python.org/mailman/listinfo/pandas-dev >>>> >>> _______________________________________________ >>> Pandas-dev mailing list >>> Pandas-dev at python.org >>> https://mail.python.org/mailman/listinfo/pandas-dev >>> >> >> >> -- >> Matthew Roeschke >> _______________________________________________ >> Pandas-dev mailing list >> Pandas-dev at python.org >> https://mail.python.org/mailman/listinfo/pandas-dev >> >
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From garcia.marc at gmail.com Tue Nov 15 03:05:34 2022
From: garcia.marc at gmail.com (Marc Garcia)
Date: Tue, 15 Nov 2022 15:05:34 +0700
Subject: [Pandas-dev] pandas new infrastructure (OVH donation)
In-Reply-To:
References:
Message-ID:

Quick update about the new infrastructure.

- New hosting for the website seems to be working just fine, no issues detected. I just stopped nginx on the old server, so that if anything there is still being used we will hopefully notice. If there are no issues and no objections, I'll be switching off the server in a few days.

- We should be able to start using dedicated hardware for the benchmarks from our OVH cloud account in December. It'll work like regular cloud instances, but on dedicated servers. We'll be doing some tests to try to get more stability in the benchmarks, and hopefully we can get something even better than what we had until now, once the OVH hardware is ready.

On Thu, Nov 10, 2022 at 9:31 PM Marc Garcia wrote: > Oh, I forgot we were not using the rendered asv website from the old > server. We're using nginx, so I can easily make pandas.pydata.org/speed > show the content from that url.
But I guess we can also check them directly > in the github pages url, not sure if it makes a difference. > > Let me know if it's useful, and I'll set it up. Thanks for the info! > > On Thu, Nov 10, 2022, 21:18 Richard Shadrach wrote: > >> > Besides the open PR, the only missing thing are the benchmarks at ( >> pandas.pydata.org/speed). The link is not working now, since I didn't >> move the benchmarks yet. But before moving this, we should also make the >> changes in the benchmarks repo, so benchmark results start to synchronize >> with the new server. Can someone with access to the server take care of it >> please (DM for the new server info). >> >> The link https://asv-runner.github.io/asv-collection/pandas/ is being >> automatically updated. Can we point to this URL for now, given that we may >> be changing how the benchmarks are run? If it's desirable to have the >> benchmarks results on the docs server and our current solution is deemed to >> be the long term one, I can work on the synchronization. However I'm >> resistant to putting in that work if it's just going to go away given the >> easier solution. >> >> Best, >> Richard >> >> >> On Wed, Nov 9, 2022 at 11:50 PM Marc Garcia >> wrote: >> >>> Some updates (the ones shared in yesterday's call, and some new ones. >>> >>> The cloud (bucket) storage didn't seem convenient for different reasons, >>> so I moved forward with a regular Ubuntu instance (the cheapest, 2 cores, >>> 7Gb ram, 24 EUR/month). I moved now all the traffic to the new instance, >>> and since we've just got static file serving, the instance seems to be more >>> than enough to handle our traffic (I didn't see CPU or RAM exceed 4% usage >>> in the time I've been monitoring the resources). I've got a PR open >>> (#49614) to start syncing our web/docs with the new server. In few hours >>> I'll stop the nginx in the old server (I confirmed there is no traffic >>> already, since we use cloudflare our dns changes are immediate). And in few >>> days I'll switch off the instance in rackspace. >>> >>> Besides the open PR, the only missing thing are the benchmarks at ( >>> pandas.pydata.org/speed). The link is not working now, since I didn't >>> move the benchmarks yet. But before moving this, we should also make the >>> changes in the benchmarks repo, so benchmark results start to synchronize >>> with the new server. Can someone with access to the server take care of it >>> please (DM for the new server info). >>> >>> On running the benchmarks in OVH, the VM instances don't seem to be >>> stable enough to keep track of performance over time, as it was likely. >>> Full results of the tests I did are in this repo: >>> https://gitlab.com/datapythonista/pandas_ovh_benchmarks . OVH is >>> checking the best way to give us access to dedicated hardware, will >>> continue with that once we've got it. In parallel to that, I'm planning to >>> do some tests to see if it could be feasible to use valgrind's cachegrind >>> (or equivalent) to instead of monitor time, we monitor CPU cycles. That >>> should make benchmarking much easier and faster, as any hardware would >>> work, and benchmarks could be run in parallel. With a dedicated server >>> we're likely to only be able to use a single core to have stable results, >>> which means that we can only run one benchmark suite per server every 3 >>> hours. But implementing it can be tricky. 
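(As a side note on the cachegrind idea quoted above, this is roughly the kind of measurement meant; the pandas snippet is only a placeholder workload, and the integration with asv is not worked out here:

    # Run a small workload under cachegrind and read the instruction count
    # from the summary, instead of timing it.
    valgrind --tool=cachegrind --cachegrind-out-file=cg.out \
        python -c "import pandas as pd; pd.DataFrame({'a': range(10000)}).groupby('a').sum()"

    # The summary printed by valgrind ends with a line like
    #   ==12345== I   refs:   1,234,567,890
    # and "cg_annotate cg.out" gives a per-function breakdown.

Instruction counts are largely independent of what else the machine is doing, which is why this kind of measurement could run on shared or virtualized hardware.)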
>>> >>> About CIrun, as you say Joris, it's like a middle man between our >>> hardware (the OVH openstack API to create/delete instances) and GitHub >>> actions. We need to add an extra yaml file with the CIrun configuration, >>> and other than that we should be able to use OVH hardware directly from our >>> current CI jobs without changes (except one entry to say what instance we >>> want to use for the jobs running in OVH I assume). >>> >>> Please let me know of any feedback. In particular if you see any problem >>> with our website that could be caused by the migration. >>> >>> Cheers, >>> >>> On Thu, Nov 10, 2022 at 12:43 AM Joris Van den Bossche < >>> jorisvandenbossche at gmail.com> wrote: >>> >>>> >>>> >>>> On Sat, 5 Nov 2022 at 15:24, Marc Garcia wrote: >>>> >>>>> Hi all, >>>>> >>>>> pandas has received a donation from OVHcloud >>>>> to support the project infrastructure, >>>>> with OVHcloud public cloud credits (an initial amount of 10,000 EUR for a >>>>> period of one year). OVH is open to sponsor longer term and also other >>>>> projects of the ecosystem (or NumFOCUS as a whole), but we started with >>>>> this to have feedback at a smaller scale first. >>>>> >>>>> The credits will be used initially for: >>>>> - Hosting of the pandas website >>>>> - Running the pandas benchmarks >>>>> - Speeding up the project CI >>>>> >>>>> I detail next what I have in mind to set up for each. If anyone is >>>>> interested in getting involved, or has ideas, comments... please let me >>>>> know. I'll publish updates here as there is progress on this. >>>>> >>>>> >>>>> Website: I'm planning to experiment on splitting the website in two >>>>> (it'll be transparent for users). The website and the stable docs which >>>>> receive most of the traffic can probably be stored in Cloudflare pages. >>>>> We're already using Cloudflare as a CDN, so instead of using it as a cache, >>>>> we can publish the documents there. The rest of the docs (old versions and >>>>> the dev version) can be hosted in bucket storage of the OVHcloud. Response >>>>> times may be a bit slower, but our website is bigger than the Cloudflare >>>>> quota, and having old docs rarely accessed in a CDN seems unnecessary >>>>> anyway. >>>>> >>>> >>>> Splitting like that makes sense! (_if_ it is within quota, we could >>>> maybe consider keeping the dev docs, and only move old docs to bucket >>>> storage?) >>>> >>>> >>>>> >>>>> - Benchmarks: OVHcloud instances have guaranteed hardware, and we'll >>>>> be checking if this is enough for the results of the benchmarks to be >>>>> consistent over runs, or if there is too much variability and we need >>>>> dedicated hardware. If consistency is good enough that would be great, >>>>> since our benchmarks mostly use one core, and using dedicated hardware is >>>>> likely to be a decent waste of resources, since most servers will likely >>>>> have 16 cores or more. We'll discuss with OVH if dedicated hardware is >>>>> needed, as at the moment their public cloud doesn't offer it (there is an >>>>> alpha for providing dedicated instances, but we need to check with them). >>>>> >>>>> - Faster CI: Our GitHub runners are small, and most builds take around >>>>> one hour or more to finish. We should be able to use bigger OVH instances >>>>> for our existing CI pretty easily, via their OpenStack API and CIrun. >>>>> >>>> >>>> I am not familiar with CIrun, but quickly checking it, that would >>>> basically be using our current github actions but through their >>>> "self-hosted" runner feature? 
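(For context, and only as a sketch: CIrun is driven by a .cirun.yml file in the repository, and individual GitHub Actions jobs then opt in to the self-hosted runners it provisions via their runs-on label. The OpenStack-specific keys and the flavor/image names below are assumptions that would need to be checked against the CIrun documentation, not a tested configuration:

    # .cirun.yml (illustrative only)
    runners:
      - name: ovh-runner
        cloud: openstack            # assumed key/value for the OpenStack backend
        instance_type: b2-30        # hypothetical OVH flavor name
        machine_image: ubuntu-22.04 # hypothetical image name
        labels:
          - cirun-ovh-runner

The existing workflow files would stay as they are, apart from pointing the relevant jobs at that runner label.)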
>>>> >>>> >>>>> _______________________________________________ >>>>> Pandas-dev mailing list >>>>> Pandas-dev at python.org >>>>> https://mail.python.org/mailman/listinfo/pandas-dev >>>>> >>>> _______________________________________________ >>> Pandas-dev mailing list >>> Pandas-dev at python.org >>> https://mail.python.org/mailman/listinfo/pandas-dev >>> >>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hello at noatamir.com Wed Nov 16 11:03:44 2022
From: hello at noatamir.com (Noa Tamir)
Date: Wed, 16 Nov 2022 16:03:44 +0000
Subject: [Pandas-dev] The new contributors meeting is today 🎉
Message-ID:

Hi folks,

Join us today at 6:00 PM UTC for the pandas New Contributors Meeting. In today's meeting Richard Shadrach will do a debugging demo.

These meetings are great if you:
- have ever wondered how to contribute to pandas
- are trying to contribute but are stuck on something
- want a friendly chat with maintainers

To prepare, here's the contributing guide: https://pandas.pydata.org/docs/dev/development/contributing.html#where-to-start

6pm UTC in your local time: https://dateful.com/convert/utc?t=6pm
Our meeting calendar: https://pandas.pydata.org/docs/dev/development/community.html#calendar
Agenda + Zoom link: https://hackmd.io/@pandas-dev/HJgQt1Tei

Cheers,
Noa
she/they/sie
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From garcia.marc at gmail.com Wed Nov 23 01:09:04 2022
From: garcia.marc at gmail.com (Marc Garcia)
Date: Wed, 23 Nov 2022 13:09:04 +0700
Subject: [Pandas-dev] ANN: pandas v1.5.2
Message-ID:

We are pleased to announce the release of pandas v1.5.2. This is a patch release in the 1.5.x series and includes some regression fixes and bug fixes. We recommend that all users in the 1.5.x series upgrade to this version. See the release notes for a list of all the changes.

The release can be installed from PyPI:

    python -m pip install --upgrade pandas==1.5.2

Or from conda-forge:

    mamba install -c conda-forge pandas==1.5.2

Please report any issues with the release on the pandas issue tracker. Thanks to all the contributors who made this release possible.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jorisvandenbossche at gmail.com Wed Nov 30 10:26:28 2022
From: jorisvandenbossche at gmail.com (Joris Van den Bossche)
Date: Wed, 30 Nov 2022 16:26:28 +0100
Subject: [Pandas-dev] PDEPs: pandas enhancement proposals
In-Reply-To:
References: <94BC0B63-20D8-4341-A440-675DC9F82D4E@gmail.com>
Message-ID:

Hi all,

In the last meeting on governance, we were discussing the current workflow around PDEPs, including "When and how to notify about new PDEPs" (or, how to ensure that people are aware of new PDEPs and ongoing discussions). In that context, it came up again that it could help to have those discussions in a separate repo (for people who cannot easily handle the large stream of notifications in the main repo). So I would like to bring forward this proposal once more. Thoughts about this, now that we have a bit of experience with the process?

If we decide to do this, I am happy to look at the necessary changes to still include the PDEP texts in the website build in the main repo.
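For reference, pulling a second repository into the existing website workflow could be as small as one extra checkout step; the repository name and path below are hypothetical, just to illustrate the idea:

    # additional step in the website build job (sketch)
    - uses: actions/checkout@v3
      with:
        repository: pandas-dev/pdeps   # hypothetical name for the separate PDEP repo
        path: pdeps
    # the build script would then also pick up the PDEP markdown files from ./pdeps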
Best, Joris On Fri, 5 Aug 2022 at 17:41, Simon Hawkins wrote: > > > On Fri, 5 Aug 2022 at 16:29, Joris Van den Bossche < > jorisvandenbossche at gmail.com> wrote: > >> >> On Wed, 20 Jul 2022 at 16:26, Marc Garcia wrote: >> >>> Ok, correct me if I'm wrong, but for what you say the options to >>> consider are: >>> >>> 1) Keep everything as is (in the main pandas repo), and maybe improve >>> notifications (send emails to a list where people can subscribe, rss feed, >>> telegram messages...) >>> >>> 2) Use a new repo for PRs, and use the main pandas website to display >>> them >>> 2.a) On the website build, fetch the PDEP docs from the other repo >>> 2.b) On the PDEP repo CI, push the PDEP docs to the main pandas repo >>> >>> 3) Use a new repo for the PDEP PRs, and have a separate website for PDEPs >>> >>> Does these options sound reasonable as the ones to discuss? Or am I >>> missing something? >>> >>> My preference is 1, as I think it's the simplest, and adding >>> notifications that allow following PDEPs separate from all pandas activity >>> doesn't seem complex. >>> >>> I'm also fine with 2.a if more people have a strong opinion about >>> keeping PDEP discussions/PRs in a separate repo. I personally don't see >>> advantages in 2.b and 3. >>> >> >> Thanks, that's indeed a good summary of the options. I also think 2.a is >> the easiest of the alternatives, so I would indeed only consider 1 and 2.a. >> >> My preference is to go with 2 (separate repo), for the reasons mentioned >> before. I think Tom also mentioned this as his preference, and Jeff being >> OK with it, while Marc/Matthew prefer the main repo. But it would be good >> to hear from others as well whether they have a (strong) preference. >> > > maybe another quick poll with just those 2 options, it seemed to get a > swift resolution on the PDEP name. > > if not a clear majority, then we would need further discussion. > > if a majority, then maybe only a few pain points to resolve. > > >> >> If we decide to do that, I am happy to do a PR to update the publishing >> workflow to handle a separate repo. >> >> Joris >> >> >>> >>> On Mon, Jul 18, 2022, 18:34 Joris Van den Bossche < >>> jorisvandenbossche at gmail.com> wrote: >>> >>>> Thanks Marc for the detailed answer. >>>> In general, I personally think that the added complexity is not that >>>> big, and we can still have a nice publishing workflow to the website with a >>>> separate repo (some more detailed responses inline below). >>>> >>>> For someone who wants to follow the PDEPs (and I hope with this new >>>> PDEP process we can engage more people in the pandas community), but >>>> doesn't have the time follow all of pandas (eg a maintainer of a dependent >>>> package, ..), my hunch is that a separate repo is a more accessible way to >>>> do this. >>>> You can indeed list all related PRs based on a label filter, but you >>>> still need to know this (we can of course document that on the roadmap >>>> page) and it's not an automatic notification. And for email notifications >>>> you can indeed set up an email filter (although I don't think you have a >>>> good option if using github notifications?). >>>> >>>> For someone as myself, if we end up using the main repo, I can for sure >>>> set up those filters, that is not a problem. But in general I think that is >>>> not a very accessible way to have people follow those discussions. 
Having >>>> it as a separate repo provides a clear home and gives you all the tools >>>> that github has to manage and customize the notifications however you want >>>> (eg watch one repo and not the other). >>>> >>>> Sidenote: I do (or did) this for other projects, such as numpy or >>>> python. I don't follow either of their issue trackers, but I do (somewhat) >>>> follow NEP or PEP discussions, and both give me a way to do that without >>>> having to follow their main issue trackers. >>>> >>>> The last point that you raise about "forgetting about a separate repo" >>>> is certainly a valid concern. It's true that the other separate repos that >>>> we have (had) were no success, so we don't have a good track record on this >>>> front. But I do think it is a matter of habit (and >>>> documentation/communication! we never really publicized any of the other >>>> repos, nor actively used them at any point), and if we ensure we have >>>> steady activity in such a separate repo for a while, I think that will grow >>>> naturally. >>>> >>>> On Sat, 25 Jun 2022 at 21:19, Matthew Roeschke >>>> wrote: >>>> >>>>> I find Marc's arguments regarding general simplicity of PDEP flow >>>>> (publishing to website & integration to the main repo) a strong argument to >>>>> keep these in the main repo. >>>>> >>>>> Since there is a dependency between PDEP development and the >>>>> pandas-dev repo development, having them separated may lead to similar >>>>> workflow challenges with the MacPython/pandas-wheels repo for example >>>>> (where cibuildwheel being integrated into the main repo >>>>> is considered a >>>>> benefit due to tighter integration). >>>>> >>>> >>>> I think an important difference here is that building wheels is defined >>>> in the pandas repo (packaging setup) and often needs fixes in pandas, and >>>> so here it indeed makes that workflow much easier to have that in the same >>>> repo. For PDEPs that is much less of an issue. >>>> >>>>> >>>>> I agree PDEP visibility from notifications is important, but >>>>> notification priority and channels can differ person-to-person. For >>>>> example, I just manage my GitHub notifications in GitHub, not email. >>>>> >>>>> I don't think there is fundamentally a difference between both. Also >>>> if I was using github notifications, seeing a specific subset of issues in >>>> those is challenging (while when using email I could at least set up some >>>> automatic filters). >>>> (but I don't know github notifications well, so I might be wrong) >>>> >>>> >>>>> On Sat, Jun 25, 2022 at 10:50 AM Tom Augspurger < >>>>> tom.w.augspurger at gmail.com> wrote: >>>>> >>>>>> For me, notifications are the big thing. Having the emails come from >>>>>> a separate repo would make following things much easier for those who can't >>>>>> keep up with the main repo. >>>>>> >>>>>> Tom >>>>>> >>>>>> On Jun 25, 2022, at 12:04 PM, Marc Garcia >>>>>> wrote: >>>>>> >>>>>> >>>>>> Thanks for the feedback. I understand your point about using a >>>>>> different repo, but I see several advantages in the current approach, so >>>>>> it's maybe worth discussing a bit further what the exact pain points are, to see >>>>>> if a separate repo is really the best solution. >>>>>> >>>>>> Let me know if I miss something, but I see three different ways in >>>>>> which we'll be interacting with PDEPs: >>>>>> >>>>>> a) Via their rendered version.
Not sure if you checked it, but the >>>>>> current rendered page from the PDEP PR (attached) is equivalent to the home >>>>>> of the scikit-learn SLEP proposals [1]. The main difference is that with >>>>>> the current approach we have it integrated with the website, which I >>>>>> personally think is an advantage. >>>>>> >>>>>> I am assuming that also with a separate repo we will have an >>>> identical web page (which will be very useful!). >>>> >>>>> >>>>>> b) Via the list of PDEP PRs to review. In this case, to see only PDEP >>>>>> PRs, if we use the main pandas repo, this is just a label filter [2]. To me >>>>>> personally it's quicker than having to go to another repo, but no big difference >>>>>> about one or the other. >>>>>> >>>>>> c) Notifications. I guess this is the main thing. I think one concern >>>>>> is that notifications from PDEPs get lost in the rest of the repo >>>>>> notifications. I assume you're using your email client filters, and if the >>>>>> notifications come from another repo, you can change the rules easily. I >>>>>> guess the solution here would be to use something like PDEP in the title >>>>>> and use that as a rule. Or we can try to find something more reliable, if >>>>>> that's the main concern. >>>>>> >>>>>> Personally, I don't see the advantages of having the proposals in a >>>>>> separate repo as very significant. And by keeping things the way they're >>>>>> implemented in the PR, I do see some advantages: >>>>>> - No need to maintain a separate repo, CI workflow, jobs to publish >>>>>> the build, sphinx (or equivalent) project... Nothing too complex, but why >>>>>> implement and maintain all that if our website is already >>>>>> prepared to handle it. And in particular, with Sphinx it is not as easy as >>>>>> with our website to fetch the open PRs and render them. >>>>>> - Integrated UX of the PDEPs into our website. I think this gives it >>>>>> more visibility, and a better user experience than having to jump from one >>>>>> website to another. >>>>>> >>>>>> I think it should certainly be possible to keep the website UX as you >>>> implemented it with a separate repo as well. >>>> There are for sure multiple options, but one (maybe simplest) option >>>> would be to keep the publishing in the main repo as you have now (since the >>>> website publishing lives there): for example the separate repo could >>>> additionally be cloned in the website workflow, and then that content is >>>> available as well (requiring only a small change to the path in the script). >>>> >>>> The PDEP repo itself could further have only very limited CI? >>>> >>>> >>>>> - One of my concerns is that being in a separate repo we forget about >>>>>> them. We're used to checking PRs in the pandas repo, and we'll keep coming >>>>>> back to PRs about PDEPs until they're merged if they are in the main repo, >>>>>> but it feels like being in a separate repo makes it easier to forget them when there >>>>>> is no recent activity or notifications. >>>>>> >>>>>> It would be good to know if I missed any of your concerns. If I didn't, >>>>>> I'd say we can start with what's already implemented, which is almost ready >>>>>> to get merged, and if in the future you still think we can do better by >>>>>> using a separate repo, you can implement it, we have a discussion about it, >>>>>> and we move PDEPs to a separate repo if that makes sense. What do you think? >>>>>> >>>>>> Cheers, >>>>>> Marc >>>>>> >>>>>> 1. >>>>>> https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/ >>>>>> 2.
>>>>>> https://github.com/pandas-dev/pandas/pulls?q=is%3Aopen+is%3Apr+label%3APDEP >>>>>> >>>>>> >>>>>> On Sat, Jun 25, 2022 at 7:05 AM Jeff Reback >>>>>> wrote: >>>>>> >>>>>>> +1 in using a separate repo (under pandas-dev) for this >>>>>>> >>>>>>> >>>>>>> On Jun 24, 2022, at 5:05 PM, Joris Van den Bossche < >>>>>>> jorisvandenbossche at gmail.com> wrote: >>>>>>> >>>>>>> >>>>>>> Thanks for starting this proposal, Marc! >>>>>>> >>>>>>> I have already been doing this in some ad-hoc way with eg the >>>>>>> Copy/View proposal (writing an actual proposal document), so I am very much >>>>>>> in favor of formalizing this a bit more. >>>>>>> >>>>>>> Personally, I would prefer that we use a more dedicated home for >>>>>>> this instead of using the existing pandas repo (e.g. a separate repo in the >>>>>>> pandas-dev org). The main pandas repo has nowadays such a high volume of >>>>>>> issue and PR comments that it becomes difficult to follow this or notice >>>>>>> specific issues. While there are certainly ways to deal with this (e.g. >>>>>>> consistently using a specific label and title, ensuring we always notify >>>>>>> the mailing list as well, ...), IMO it would make it more accessible to >>>>>>> follow and have an overview of those discussions in e.g. a separate repo. >>>>>>> >>>>>>> (there are examples of both in other projects, for example >>>>>>> scikit-learn has a separate repo, while numpy uses the main repo I think) >>>>>>> >>>>>>> Joris >>>>>>> >>>>>>> On Tue, 21 Jun 2022 at 09:46, Marc Garcia >>>>>> > wrote: >>>>>>> >>>>>>>> We're in the process of implementing PDEPs, equivalent to Python's >>>>>>>> PEPs and NumPy's NEPs, but for pandas. This should help build the roadmap, >>>>>>>> make discussions more efficient, obtain more structured feedback from the >>>>>>>> community, and add visibility to agreed future plans for pandas. >>>>>>>> >>>>>>>> The initial implementation (workflow) is a bit simpler than PEP or >>>>>>>> NEP, but we'll iterate in the future as convenient. >>>>>>>> >>>>>>>> You can see the PR for PDEP-1 with the purpose, scope and >>>>>>>> guidelines here: https://github.com/pandas-dev/pandas/pull/47444 >>>>>>>> >>>>>>>> Feedback is very welcome.
>>>>>>>> _______________________________________________ >>>>>>>> Pandas-dev mailing list >>>>>>>> Pandas-dev at python.org >>>>>>>> https://mail.python.org/mailman/listinfo/pandas-dev >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> Pandas-dev mailing list >>>>>>> Pandas-dev at python.org >>>>>>> https://mail.python.org/mailman/listinfo/pandas-dev >>>>>>> >>>>>>> _______________________________________________ >>>>>> Pandas-dev mailing list >>>>>> Pandas-dev at python.org >>>>>> https://mail.python.org/mailman/listinfo/pandas-dev >>>>>> >>>>>> _______________________________________________ >>>>>> Pandas-dev mailing list >>>>>> Pandas-dev at python.org >>>>>> https://mail.python.org/mailman/listinfo/pandas-dev >>>>>> >>>>> >>>>> >>>>> -- >>>>> Matthew Roeschke >>>>> _______________________________________________ >>>>> Pandas-dev mailing list >>>>> Pandas-dev at python.org >>>>> https://mail.python.org/mailman/listinfo/pandas-dev >>>>> >>>> _______________________________________________ >>>> Pandas-dev mailing list >>>> Pandas-dev at python.org >>>> https://mail.python.org/mailman/listinfo/pandas-dev >>>> >>> _______________________________________________ >> Pandas-dev mailing list >> Pandas-dev at python.org >> https://mail.python.org/mailman/listinfo/pandas-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: