From matsaueressig at hotmail.com  Fri Dec  2 14:51:15 2022
From: matsaueressig at hotmail.com (Matheus Saueressig)
Date: Fri, 2 Dec 2022 19:51:15 +0000
Subject: [Pandas-dev] Filtering and writing csv of dataframe's mean
Message-ID:

Hi,

So I'm creating a Python program that reads a CSV file containing some
columns with strings and numbers.

What I want to do is to select the rows whose Type column corresponds to a
certain value. For example, I want all the rows with the Type "Red".

So far so good; I came up with this:

data1 = dataFrame[dataFrame['Type'].str.contains('red')]

Okay, now what I want to do is a bit more complicated. I want to select all
rows with a specific Destination (for example, West), then take the mean of
each numerical column and end up with one single row for Type Red and
Destination West, where each column (number of apples, number of bananas)
is the mean of that numerical column.

For example:

Input:

Plane  Type   Destination  Bananas  Apples  Pineapples
A      Red    West         1        2       3
B      Blue   West         4        7       3
C      Red    East         4        5       6
D      Green  North        7        8       9
E      Red    East         6        5       7

Output:

Type  Destination  Bananas  Apples  Pineapples
Red   West         1        2       3
Red   East         5        5       5

Does anyone know how I can proceed with the coding?


From rhshadrach at gmail.com  Sat Dec  3 10:53:24 2022
From: rhshadrach at gmail.com (Richard Shadrach)
Date: Sat, 3 Dec 2022 10:53:24 -0500
Subject: [Pandas-dev] Filtering and writing csv of dataframe's mean
In-Reply-To:
References:
Message-ID:

Hi Matheus,

This mailing list is for pandas developers working on pandas. Stack
Overflow is a great site where you can ask usage questions - I recommend
asking this question there. If your question doesn't get any answers after
several days, it may then be appropriate to ask the question on the pandas
GitHub issue tracker.

Best,
Richard
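For readers of the archive, here is a minimal sketch of the kind of filter-then-aggregate pattern the question above describes: filter on Type, group by Type and Destination, take the mean of the numeric columns, and write the result to a CSV. The DataFrame name df, the case-insensitive match, and the output file name are illustrative assumptions, not part of the original thread.

import pandas as pd

# Example data taken from the question above.
df = pd.DataFrame({
    "Plane": ["A", "B", "C", "D", "E"],
    "Type": ["Red", "Blue", "Red", "Green", "Red"],
    "Destination": ["West", "West", "East", "North", "East"],
    "Bananas": [1, 4, 4, 7, 6],
    "Apples": [2, 7, 5, 8, 5],
    "Pineapples": [3, 3, 6, 9, 7],
})

# Keep only rows whose Type matches "red", ignoring case.
red = df[df["Type"].str.contains("red", case=False)]

# One row per (Type, Destination) pair, each numeric column averaged.
means = red.groupby(["Type", "Destination"], as_index=False)[
    ["Bananas", "Apples", "Pineapples"]].mean()

# Write the aggregated rows out (file name is just an example).
means.to_csv("red_means.csv", index=False)

Passing as_index=False keeps Type and Destination as regular columns in the result rather than turning them into an index, so the CSV comes out with the same column layout as the example output in the question.

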
From jorisvandenbossche at gmail.com  Tue Dec 13 15:45:04 2022
From: jorisvandenbossche at gmail.com (Joris Van den Bossche)
Date: Tue, 13 Dec 2022 21:45:04 +0100
Subject: [Pandas-dev] December 2022 monthly community meeting (Wednesday December 14, UTC 18:00)
Message-ID:

Hi all,

A reminder that the next monthly dev call is tomorrow (Wednesday, December
14) at 18:00 UTC (12:00 central time). Our calendar is at
https://pandas.pydata.org/docs/development/meeting.html#calendar to check
your local time.

The pandas Community Meeting is a regular sync meeting for the project's
maintainers which is open to the community. All are welcome to attend!

Video Call: https://us06web.zoom.us/j/84484803210?pwd=TjUxNmcyNHcvcG9SNGJvbE53Y21GZz09

Meeting notes: https://docs.google.com/document/u/1/d/1tGbTiYORHiSPgVMXawiweGJlBw5dOkVJLY-licoBmBU/edit?ouid=102771015311436394588&usp=docs_home&ths=true

Joris