From garcia.marc at gmail.com  Fri Mar 17 08:43:12 2023
From: garcia.marc at gmail.com (Marc Garcia)
Date: Fri, 17 Mar 2023 13:43:12 +0100
Subject: [Pandas-dev] ANN: pandas 2.0.0 RC1
Message-ID:

We are happy to announce the *second release candidate* of pandas 2.0.0.
It can be installed from our conda-forge and PyPI packages via mamba,
conda and pip, for example:

    mamba install -c conda-forge/label/pandas_rc pandas==2.0.0rc1
    python -m pip install --upgrade --pre pandas==2.0.0rc1

Users who have pandas code in production and maintainers of libraries
with pandas as a dependency are *strongly* encouraged to run their test
suites with the release candidate and report any problems to our issue
tracker before the official 2.0.0 release.

You can find the documentation of pandas 2.0.0 here, and the list of
changes in 2.0.0 in the release notes page.

We expect to release the final version of pandas 2.0.0 in around two
weeks, but the final date will depend on the issues reported against the
release candidate.

From m.e.gorelli at gmail.com  Thu Mar 23 07:14:52 2023
From: m.e.gorelli at gmail.com (Marco Gorelli)
Date: Thu, 23 Mar 2023 11:14:52 +0000
Subject: [Pandas-dev] PDEP6: open for discussion / request for comments
Message-ID:

Hello everybody,

I've opened an enhancement proposal to ban upcasting in setitem-like
operations. You can find the text here.

Anyone is welcome to comment - assuming no major discussion points are
raised, the core team will hold a vote on it in a couple of weeks' time.

From kuennew21 at gmail.com  Wed Mar 22 19:09:05 2023
From: kuennew21 at gmail.com (William Kuenne)
Date: Wed, 22 Mar 2023 16:09:05 -0700
Subject: [Pandas-dev] Error got multiple values for argument 'Schema'
Message-ID:

Hi Mike and Pandas team,

I hope you are well. I am trying to run a script that uses SQLAlchemy.
It runs a very simple test query of a single table from a single schema,
and I get an error at:

    df = pd.read_sql(query, engine)

I have confirmed that the query itself runs fine and that there is only
one schema name, but I am still getting this error:

    TypeError: __init__() got multiple values for argument 'Schema'

I am using Python 3.11, SQLAlchemy 1.4.46, and pandas 1.5.3.

Do you have any advice on how to work around this error? Thank you for
your time and consideration; I really appreciate it.

Best,
Bill Kuenne
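For reference, a minimal sketch of the documented pandas/SQLAlchemy call
pattern for reading one table from a named schema follows. It is not a
diagnosis of the error above; the connection URL, table name and schema
name are placeholders rather than values taken from the report:

    # Sketch only: connection URL, table and schema names are placeholders.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:password@localhost:5432/mydb")

    # read_sql_table exposes the schema through its lowercase `schema` keyword.
    df = pd.read_sql_table("my_table", con=engine, schema="my_schema")

    # Alternatively, qualify the schema directly in the SQL passed to read_sql,
    # which itself takes no schema argument.
    df = pd.read_sql("SELECT * FROM my_schema.my_table", con=engine)
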
From mk1853387 at gmail.com  Mon Mar 27 18:41:53 2023
From: mk1853387 at gmail.com (marc nicole)
Date: Tue, 28 Mar 2023 00:41:53 +0200
Subject: [Pandas-dev] (no subject)
Message-ID:

SUBSCRIBE

From mk1853387 at gmail.com  Wed Mar 29 10:33:33 2023
From: mk1853387 at gmail.com (marc nicole)
Date: Wed, 29 Mar 2023 16:33:33 +0200
Subject: [Pandas-dev] combinations of all rows and cols from a dataframe
Message-ID:

Hello everyone,

Given a dataframe like this:

    2  6
    8  5

I want to yield the following list of lists:

    [
      [[2], [6, 5]], [[2], [6]], [[2], [5]],
      [[8], [6, 5]], [[8], [6]], [[8], [5]],
      [[6], [2, 8]], [[6], [8]], [[6], [2]],
      [[5], [2, 8]], [[5], [2]], [[5], [8]],
      [[6, 5], [2, 8]]
    ]

I have written the following (which doesn't yield the expected results):

    import pandas as pd
    from itertools import combinations
    import numpy as np

    resList = []
    resListTmp = []
    resListTmp2 = []
    dataframe = pd.read_excel("C:\\Users\\user\\Desktop\\testData.xlsx",
                              index_col=False, header=None)
    for i in range(0, len(dataframe) + 1):
        for j in range(0, len(dataframe.columns)):
            for k in range(0, len(dataframe) + 1):
                for xVals in list(combinations(dataframe.iloc[k:i, j], i)):
                    if list(xVals) not in resListTmp:
                        resListTmp.append(list(xVals))
                        resListTmp2.append(resListTmp)
                        resList.append(resListTmp2)
    print(resList)

What is wrong with my code?
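Below is a minimal sketch of one possible reading of the request above:
pairing every non-empty subset of the first column's values with every
non-empty subset of the second column's values. This interpretation is an
assumption rather than a confirmed fix, and the inline DataFrame stands in
for the Excel file used in the original script:

    # Sketch under one interpretation: pair every non-empty subset of the
    # first column's values with every non-empty subset of the second
    # column's values. The inline DataFrame replaces the Excel file above.
    import pandas as pd
    from itertools import combinations, product

    dataframe = pd.DataFrame([[2, 6], [8, 5]])

    def non_empty_subsets(values):
        # Every non-empty subset of `values`, as lists.
        return [list(subset)
                for size in range(1, len(values) + 1)
                for subset in combinations(values, size)]

    col0 = dataframe[0].tolist()  # [2, 8]
    col1 = dataframe[1].tolist()  # [6, 5]

    pairs = [[left, right]
             for left, right in product(non_empty_subsets(col0),
                                        non_empty_subsets(col1))]
    print(pairs)

Up to ordering and duplicate orderings, this yields the nine distinct
pairs that appear in the expected output listed in the question.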