From noonslists at gmail.com Wed Oct 2 03:46:07 2013 From: noonslists at gmail.com (Noon Silk) Date: Wed, 2 Oct 2013 11:46:07 +1000 Subject: [melbourne-pug] melbourne-pug Digest, Vol 86, Issue 7 In-Reply-To: References: Message-ID: Hm, thanks Damian, I missed this when you originally posted it! Have you used it at all? Glancing at the tutorials[1] it looks promising but doesn't seem to be that complete? [1] http://sourceforge.net/apps/trac/cake-build/wiki/Tutorials On Tue, Aug 27, 2013 at 8:16 PM, Damian Heard wrote: > Hi Noon, > > Another one to throw into the mix: 'cake' > > http://sourceforge.net/projects/cake-build/ > > a python build system designed to replace make. It's built with > multiprocessing in mind and definitely worth a look. > > Regards, > Damian > > > Sent from my iPhone > > On 27/08/2013, at 8:00 PM, melbourne-pug-request at python.org wrote: > > Send melbourne-pug mailing list submissions to > melbourne-pug at python.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.python.org/mailman/listinfo/melbourne-pug > or, via email, send a message with subject or body 'help' to > melbourne-pug-request at python.org > > You can reach the person managing the list at > melbourne-pug-owner at python.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of melbourne-pug digest..." > > > Today's Topics: > > 1. Tool to script builds and other such things (Noon Silk) > 2. Re: Tool to script builds and other such things (Mike Dewhirst) > 3. Re: Tool to script builds and other such things (Noon Silk) > 4. 
Re: Tool to script builds and other such things (Mike Dewhirst) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 27 Aug 2013 10:03:18 +1000 > From: Noon Silk > To: Melbourne Python Users Group > Subject: [melbourne-pug] Tool to script builds and other such things > Message-ID: > > Content-Type: text/plain; charset="windows-1252" > > What are people using for this? > > Suppose I'd like to do things like: > - Run python tests > - Create python exes > - Build arbitrary languages (say C++/C#/etc) > - Perform arbitrary tasks. > > SCons is good for perhaps the first one, but bad for the rest. NAnt is what > I use currently. A quick search leads me to: > http://paver.github.io/paver/ > > I know I could also do things in perhaps make, cmake, or rake, to varying > degrees of goodness. > > Notably, I want to be able to do this primarily on Windows, and optionally > on linux. > > -- > Noon Silk > > Fancy a quantum lunch? https://sites.google.com/site/quantumlunch/ > > "Every morning when I wake up, I experience an exquisite joy – the joy > of being this signature." > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://mail.python.org/pipermail/melbourne-pug/attachments/20130827/7898ad52/attachment-0001.html > > > > ------------------------------ > > Message: 2 > Date: Tue, 27 Aug 2013 10:54:48 +1000 > From: Mike Dewhirst > To: Melbourne Python Users Group > Subject: Re: [melbourne-pug] Tool to script builds and other such > things > Message-ID: <521BF8D8.7060801 at dewhirst.com.au> > Content-Type: text/plain; charset=windows-1252; format=flowed > > On 27/08/2013 10:03am, Noon Silk wrote: > > What are people using for this? > > > Suppose I'd like to do things like: > > - Run python tests > > > Windows: batch commands > Linux: Buildbot > > - Create python exes > > > Windows: distutils and py2exe > > - Build arbitrary languages (say C++/C#/etc) > > > Nah. Not since Python. 
> > - Perform arbitrary tasks. > > > Windows: Python scripts and batch commands > Linux: Python scripts, shell scripts and Buildbot > > > SCons is good for perhaps the first one, but bad for the rest. NAnt is > > what I use currently. A quick searching leads me to: > > http://paver.github.io/paver/ > > > I know I could also do things in perhaps make, cmake, or rake, to > > varying degrees of goodness. > > > Notably, I want to be able to do this primarily on Windows, and > > optionally on linux. > > > -- > > Noon Silk > > > Fancy a quantum lunch? https://sites.google.com/site/quantumlunch/ > > > "Every morning when I wake up, I experience an exquisite joy ? the joy > > of being this signature." > > > > _______________________________________________ > > melbourne-pug mailing list > > melbourne-pug at python.org > > http://mail.python.org/mailman/listinfo/melbourne-pug > > > > > > ------------------------------ > > Message: 3 > Date: Tue, 27 Aug 2013 11:11:46 +1000 > From: Noon Silk > To: Melbourne Python Users Group > Subject: Re: [melbourne-pug] Tool to script builds and other such > things > Message-ID: > > Content-Type: text/plain; charset="windows-1252" > > On Tue, Aug 27, 2013 at 10:54 AM, Mike Dewhirst >wrote: > > On 27/08/2013 10:03am, Noon Silk wrote: > > > What are people using for this? > > > Suppose I'd like to do things like: > > - Run python tests > > > > Windows: batch commands > > Linux: Buildbot > > > > I should've mentioned that we're using jenkins to *run* the NAnt, > currently. The question is not how to replace arbitrary execution of build > scripts, but what system to write such build scripts (gluing build scripts) > in. > > -- > Noon Silk > > Fancy a quantum lunch? https://sites.google.com/site/quantumlunch/ > > "Every morning when I wake up, I experience an exquisite joy ? the joy > of being this signature." > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: < > http://mail.python.org/pipermail/melbourne-pug/attachments/20130827/dde57bb8/attachment-0001.html > > > > ------------------------------ > > Message: 4 > Date: Tue, 27 Aug 2013 11:45:18 +1000 > From: Mike Dewhirst > To: Melbourne Python Users Group > Subject: Re: [melbourne-pug] Tool to script builds and other such > things > Message-ID: <521C04AE.8060400 at dewhirst.com.au> > Content-Type: text/plain; charset=windows-1252; format=flowed > > On 27/08/2013 11:11am, Noon Silk wrote: > > On Tue, Aug 27, 2013 at 10:54 AM, Mike Dewhirst > >> wrote: > > > On 27/08/2013 10:03am, Noon Silk wrote: > > > What are people using for this? > > > Suppose I'd like to do things like: > > - Run python tests > > > > Windows: batch commands > > Linux: Buildbot > > > > I should've mentioned that we're using jenkins to *run* the NAnt, > > currently. The question is not how to replace arbitrary execution of > > build scripts, but what system to write such build scripts (gluing build > > scripts) in. > > > I should've also mentioned that Buildbot also works on Windows. It is > Python all the way down - not that I'd look too deeply of course - but > Jacob Kaplan-Moss did and here is a quote from his blog ... > > "I'm treating Buildbot as a CI framework, not a CI server that I've > configured. Instead of just tweaking and tuning things, I'm subclassing > liberally, overriding the parts that I don't want and adding extra bits > that I do. > > And it's working brilliantly." > > http://jacobian.org/writing/buildbot/configuration-and-architecture/ > > > -- > > Noon Silk > > > Fancy a quantum lunch? https://sites.google.com/site/quantumlunch/ > > > "Every morning when I wake up, I experience an exquisite joy – the joy > > of being this signature." 
> > > > _______________________________________________ > > melbourne-pug mailing list > > melbourne-pug at python.org > > http://mail.python.org/mailman/listinfo/melbourne-pug > > > > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > http://mail.python.org/mailman/listinfo/melbourne-pug > > > ------------------------------ > > End of melbourne-pug Digest, Vol 86, Issue 7 > ******************************************** > > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > http://mail.python.org/mailman/listinfo/melbourne-pug > > -- Noon Silk Fancy a quantum lunch? https://sites.google.com/site/quantumlunch/ "Every morning when I wake up, I experience an exquisite joy ? the joy of being this signature." -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier at candeira.com Sun Oct 6 08:37:33 2013 From: javier at candeira.com (Javier Candeira) Date: Sun, 6 Oct 2013 17:37:33 +1100 Subject: [melbourne-pug] Devops with Salt, Bioinformatics and LCA - 7 October, 6PM - Inspire 9, 41 Stewart St. Message-ID: Dear Melbourne Pythonistas, This is a reminder that tomorrow 7 October we're holding our usual Melbourne Python Users Group meeting. The venue is Inspire 9, 41 Stewart St. The time is 6pm. This is the program: # Clare Slogget -- Python for Bioinformatics. # Lex Hider -- Salt: How to be truly lazy. # Bianca Gibson -- Linux.conf.au and Linux Australia Grants. See you all there, -- The MPUG organisers From javier at candeira.com Sun Oct 6 08:44:59 2013 From: javier at candeira.com (Javier Candeira) Date: Sun, 6 Oct 2013 17:44:59 +1100 Subject: [melbourne-pug] Devops with Salt, Bioinformatics and LCA - 7 October, 6PM - Inspire 9, 41 Stewart St. In-Reply-To: References: Message-ID: Oh, and my apologies for the cut-and-paste. 
It's Clare Sloggett, I keep spelling that wrong. J out. On Sun, Oct 6, 2013 at 5:37 PM, Javier Candeira wrote: > Dear Melbourne Pythonistas, > > This is a reminder that tomorrow 7 October we're holding our usual > Melbourne Python Users Group meeting. The venue is Inspire 9, 41 > Stewart St. The time is 6pm. > > This is the program: > > # Clare Slogget -- Python for Bioinformatics. > > # Lex Hider -- Salt: How to be truly lazy. > > # Bianca Gibson -- Linux.conf.au and Linux Australia Grants. > > See you all there, > > -- The MPUG organisers From mark.angrish at innerloop.io Tue Oct 8 04:08:01 2013 From: mark.angrish at innerloop.io (Mark Angrish) Date: Tue, 8 Oct 2013 13:08:01 +1100 Subject: [melbourne-pug] Looking for a startup cofounder to shake up the IT recruitment industry. Message-ID: Hello everyone, Unfortunately I always end up missing these events but I am going to do my best to make the November one! I've been looking for a cofounder for several months and I thought since I * _am_* using python I may as well ask you guys (and girls!) if any of you are interested in joining me in trying to revolutionise the IT recruitment industry! My product basically works like a giant semantic graph which matches a candidate's likes, dislikes, interests and existing skills etc. with jobs. However it does this in a way where candidate anonymity is protected and all without CV's or job descriptions. Think of it like eHarmony but for IT jobs. And all of it is free. I'm looking for someone to lead more on the technical side while I focus more on the product (but i'd still be coding too) with the view of going to raise funds in San Francisco in the next month or two and then to shift the company there within 3-4 months. If you are interested in the stack I'm using it's: - AngularJs - Flask - Neo4J all deployed on to Heroku. At this stage you would only get sweat equity with salary once we raise funds. 
Australia isn't that well versed in startups and most people in established jobs think doing one is kind of crazy (http://blog.innerloop.io/) so that's exactly what I'm looking for; someone who is slightly crazy! If you think you might be up for it and fancy hacking on some code to see how we would gel just hit me back or if you know someone who might be interested please feel free to pass it on! Ultimately I'm just looking for a good technologist.. so people from other languages are also welcome! If you want to know more about me you can stalk me at: www.linkedin.com/in/mark.angrish. Thanks for reading! ::mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasjidw at openminddev.net Tue Oct 8 15:23:47 2013 From: rasjidw at openminddev.net (Rasjid Wilcox) Date: Wed, 09 Oct 2013 00:23:47 +1100 Subject: [melbourne-pug] Looking for a startup cofounder to shake up the IT recruitment industry. In-Reply-To: References: Message-ID: <52540763.80209@openminddev.net> On 8/10/2013 1:08 PM, Mark Angrish wrote: > If you are interested in the stack I'm using it's: > - AngularJs > - Flask > - Neo4J > > all deployed on to Heroku. > I've almost finished a project built with AngularJS + RapydScript for the frontend, and Bottle with Postgres (via sqlalchemy) for the backend, and I must say I've been pretty happy with the result. Neo4J looks really interesting, although not immediately relevant to my current crop of projects. Cheers, Rasjid. From bianca.rachel.gibson at gmail.com Wed Oct 9 05:06:10 2013 From: bianca.rachel.gibson at gmail.com (Bianca Gibson) Date: Wed, 9 Oct 2013 14:06:10 +1100 Subject: [melbourne-pug] An unconference in the sticks: StixCampGembrook opens registrations (Nov 1-3) In-Reply-To: <5254778D.10108@dechrai.com> References: <5254778D.10108@dechrai.com> Message-ID: >From Ben Dechrai: > Registrations for StixCampGembrook are open! > > StixCampGembrook is a weekend unconference event running from the 1st - 3rd November 2013. 
Participants will spend 2 full days networking with like-minded others, sharing their knowledge, and learning from others. The venue is a scout camp in the Dandenong Ranges and accommodation is provided (subject to availability - camping spots are available too) and will provide a change of atmosphere to the typical city-based conferences. > > Participation extends from the usual unconference environment, with the option for communal meals and other social activities available at the camp. > > There are two ticket types: a regular participant ticket that we'll try and refund, subject to surplus sponsorship funds; and a supporting participant ticket, which won't be refunded, allowing, instead, for regular participants to get more of a refund. Both cost $67.50 + GST. > > Register Today > > StixCampGembrook is organised by the BarCampMelbourne team and is a Linux Australia Event. Any surplus funds after the regular participant tickets have been refunded, will go back to Linux Australia to be used for further helping the open source community in Australia. > > Please forward this email to your colleagues. > > Cheers! > Ben > > > -- > Ben Dechrai > Internet Technology Consultant > Mentor, Presenter, and Hard-Core Privacy Nut > > phone > +415 127 120 > im / email > ben at dechrai.com > website > https://bendechrai.com/ > twitter > @bendechrai -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andy.larrymite at gmail.com Fri Oct 11 01:00:45 2013 From: andy.larrymite at gmail.com (Andrew Jones) Date: Fri, 11 Oct 2013 10:00:45 +1100 Subject: [melbourne-pug] Medibank Health Solutions is after a senior python dev Message-ID: *About the role* - Roll out health and well-being websites that actually make a difference to their users - Maintain and potentially migrate a legacy twisted/nevow/storm product - Setup new greenfield django/flask projects in hours - Continuously improve and maintain said projects - Ace our buildbot CI environment - Help us get continuous delivery up and running - Improve our operations infrastructure - Steer the direction of our tech stack - Be a key member of a highly collaborative cross functional agile team We use all the good stuff: fabric, puppet, AngularJS, django, south, flask, git, postgresql, ubuntu, AWS and python of course. *Essential* - At least 5 years development experience with python in Linux environments - In-depth experience in web service development using Django and/or Flask - A github (or similar) account and projects - Experience developing database schema particularly with PostgreSQL - Programming experience with SOA, SaaS & Web Services - Understanding of web technologies and protocols *Desirable* - Twisted - Experience of scalability and related design patterns - Puppet - Fabric Apply via seek: http://www.seek.com.au/job/25365272 Cheers Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From acalcium at yahoo.com.au Wed Oct 16 02:44:58 2013 From: acalcium at yahoo.com.au (Chai Ang) Date: Tue, 15 Oct 2013 17:44:58 -0700 (PDT) Subject: [melbourne-pug] Free ebook - Plone 3 Products Development Cookbook Message-ID: <1381884298.72741.YahooMailNeo@web162805.mail.bf1.yahoo.com> In case anyone might find a book like this handy. https://app.packtpub.com/# Should be available till about 10am Thursday. packtpub gives an ebook away daily. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at microcomaustralia.com.au Wed Oct 16 03:27:22 2013 From: brian at microcomaustralia.com.au (Brian May) Date: Wed, 16 Oct 2013 12:27:22 +1100 Subject: [melbourne-pug] django db race conditions Message-ID: Hello All,

I have a reasonable amount of Django code that follows this general model:

    try:
        object = Model.objects.get(name="woof")
    except Model.DoesNotExist:
        object = Model()
        init_object(object)
        object.save()

Or, in some cases:

    object, created = Model.objects.get_or_create(name="woof")

In both cases the resultant code is very similar.

In both cases there is a race condition. Depending on the flow of execution, I can end up with two or more db objects with name="woof". There are many forum posts discussing this race condition.

As an example, the first case happens when displaying a webpage. Let's assume init_object() is relatively slow. As the web page takes a while to load, the user clicks reload. This results in two (or more) objects being created with name="woof" in error.

Another example, for the second case, occurs when a JavaScript app makes concurrent calls to the web service.

Some people have suggested that if I want name to be unique, I should make it a database constraint. However, it is not always the case that I want these values to be strictly unique; I just want to reuse an existing entry or create it if it doesn't exist. Also, the database constraint would mean the code fails instead of committing two objects, which is not really helpful.

Other people have suggested locking the db table while doing the get_or_create. This seems to require possibly db-specific SQL code, so I am a bit reluctant to do it.

Django's select_for_update method is interesting; however, as the object doesn't actually exist yet, it is not really applicable.

Another solution I have considered, at least for some cases, is moving init_object to a celery task.
This would provide the user with faster feedback as to what is happening, and for some slow tasks is probably a good thing. Ideally I would only want one task to initialize the object, not sure how I would check this without introducing new race conditions very similar to the one I am trying to remove. e.g.: if task not created: create task In theory create task could be called multiple times. Another solution, that would work in some places is to make sure that the object exists by some other means beforehand. So I can safely do a get instead of a get_or_create. Any other ideas? Quite possibly I will have to try and find a solution on a case by case basis :-(. Shame we didn't realize this before we wrote this code. -- Brian May -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Wed Oct 16 04:13:42 2013 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Wed, 16 Oct 2013 13:13:42 +1100 Subject: [melbourne-pug] django db race conditions In-Reply-To: References: Message-ID: Have you considered using a database constraint? Sent from my mobile - please excuse any brevity. On 16 Oct 2013 12:27, "Brian May" wrote: > Hello All, > > I have a reasonable amount of Django code that follows this general model: > > try: > object = Model.objects.get(name="woof") > except Model.DoesNotExist: > object = Model() > init_object(update) > object.save() > > Or, in some cases: > > object = Model.objects.get_or_create(name="woof") > > In both cases the resultant code is very similar. > > In both cases there is a race condition. Depending on the flow of > execution, I can end up with two or more db objects with name="woof". There > are many forum posts discussing this race condition. > > As an example, for the first case happens when displaying a webpage. Lets > assume init_object() is relatively slow. As the web page takes a while to > load, the user clicks reload. 
This results in two (or more) objects being > created with name="woof" in error. > > [quoted text trimmed] 
> -- > Brian May > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Wed Oct 16 04:36:15 2013 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Wed, 16 Oct 2013 13:36:15 +1100 Subject: [melbourne-pug] django db race conditions In-Reply-To: References: Message-ID: Ahem, my apologies. On re-reading your email not on a tiny phone I see that's an option you looked into. I'd still advocate use of database constraints, even if it means modifying your data model to make a multiple column uniqueness constraint if one column is not enough (because if it's not enough then something else in your data model should say why you're not allowed to have two values created simultaneously.) Richard On 16 October 2013 13:13, Richard Jones wrote: > Have you considered using a database constraint? > > Sent from my mobile - please excuse any brevity. > On 16 Oct 2013 12:27, "Brian May" wrote: > >> Hello All, >> >> I have a reasonable amount of Django code that follows this general model: >> >> try: >> object = Model.objects.get(name="woof") >> except Model.DoesNotExist: >> object = Model() >> init_object(update) >> object.save() >> >> Or, in some cases: >> >> object = Model.objects.get_or_create(name="woof") >> >> In both cases the resultant code is very similar. >> >> In both cases there is a race condition. Depending on the flow of >> execution, I can end up with two or more db objects with name="woof". There >> are many forum posts discussing this race condition. >> >> As an example, for the first case happens when displaying a webpage. Lets >> assume init_object() is relatively slow. As the web page takes a while to >> load, the user clicks reload. This results in two (or more) objects being >> created with name="woof" in error. 
>> [quoted text trimmed] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rasjid at familywilcox.net Wed Oct 16 05:01:11 2013 From: rasjid at familywilcox.net (Rasjid Wilcox) Date: Wed, 16 Oct 2013 14:01:11 +1100 Subject: [melbourne-pug] django db race conditions In-Reply-To: References: Message-ID: <525E0177.8010308@familywilcox.net> On 16/10/2013 12:27 PM, Brian May wrote: > > As an example, for the first case happens when displaying a webpage. > Lets assume init_object() is relatively slow. As the web page takes a > while to load, the user clicks reload. This results in two (or more) > objects being created with name="woof" in error. For this, I would use a nonce so that you can pick up the duplicate posts. Where to generate the nonce (client side or server side) and when to generate the nonce will depend on your actual workflow. But the point of the nonce is that it should allow you know that the second post is a 'duplicate' and discard it. > > Another example, for the second case occurs when a JavaScript app > makes concurrent calls to the web service. > > Some people have suggested that if I I want name to be unique, I > should make it a database constraint. However that is not always the > case that I want these values to be strictly unique, I just want to > reuse an existing entry or create it if it doesn't exist. Also, the > database constraint would mean the code fails instead of committing > two objects, which is not really helpful. A nonce might possibly work with this too, although (and I'm reading between the line here a little), I'm guessing you have two requests that really should be called in sequence. If in the javascript you have call_one, followed by call_two, and call_two should be done after call_one on the server, you either need wait for call_one to complete before calling call_two (but this will make things slow since you get the full network latency showing), or have a new back-end method (multicall) where you essentially tell the server to do call_one, then call_two (but sent as a single request). 
This may be able to be generalised into a 'batch' call method that could be re-used in various situations. Anther idea could be that you sequence each request from the client (1, 2, 3, ...) for a given session, with the server effectively placing them in a queue (relates to your celery idea). The only issue I can see with the this option is what happens if a request gets lost. Suppose the client sends request 1, 2 and 3, but number 2 gets lost. It will need some timeout on, so that upon getting request 3, it will wait for request 2 for a little while, but not too long, and either return an error (request 2 missing) or just process after a certain delay. That is all my idea for the moment. :-) Cheers, Rasjid. From noonslists at gmail.com Wed Oct 16 05:14:17 2013 From: noonslists at gmail.com (Noon Silk) Date: Wed, 16 Oct 2013 14:14:17 +1100 Subject: [melbourne-pug] django db race conditions In-Reply-To: References: Message-ID: Practically the celery thing you mention is probably objectively good and you should do that. But more interestingly, suppose you have: 1. ask for thing 2. if no thing, create thing, do time consuming activity, 3. add or update thing if thing has been added in the meantime. It's now pretty obvious that you just check again, after the time-consuming activity. Yeah, there is still a race-condition here, but no more than there would normally be, I think. On Wed, Oct 16, 2013 at 12:27 PM, Brian May wrote: > Hello All, > > I have a reasonable amount of Django code that follows this general model: > > try: > object = Model.objects.get(name="woof") > except Model.DoesNotExist: > object = Model() > init_object(update) > object.save() > > Or, in some cases: > > object = Model.objects.get_or_create(name="woof") > > In both cases the resultant code is very similar. > > In both cases there is a race condition. Depending on the flow of > execution, I can end up with two or more db objects with name="woof". 
There > are many forum posts discussing this race condition. > > [quoted text trimmed] 
> > Shame we didn't realize this before we wrote this code. > -- > Brian May > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > > -- Noon Silk Fancy a quantum lunch? https://sites.google.com/site/quantumlunch/ "Every morning when I wake up, I experience an exquisite joy - the joy of being this signature." -------------- next part -------------- An HTML attachment was scrubbed... URL: From samuel.lai at gmail.com Thu Oct 17 02:31:36 2013 From: samuel.lai at gmail.com (Sam Lai) Date: Thu, 17 Oct 2013 11:31:36 +1100 Subject: [melbourne-pug] Why can't two dicts be added together? Message-ID: It's almost Friday, so I have a question where I'm pretty sure I'm missing something obvious. Given, d1 = { 'a' : 'b' } d2 = { 'c' : 'd' } ... why isn't d3 = d1 + d2 implemented to be equivalent to - d3 = { } d3.update(d1) d3.update(d2) It doesn't work for sets either, but it works in this fashion for lists. Is this because the operation is non-commutative for sets and dicts and may result in a loss of data when clashing keys are involved? Isn't that implicit when working with sets and dicts? Sam From ben+python at benfinney.id.au Thu Oct 17 02:54:29 2013 From: ben+python at benfinney.id.au (Ben Finney) Date: Thu, 17 Oct 2013 11:54:29 +1100 Subject: [melbourne-pug] Why can't two dicts be added together? References: Message-ID: <7wfvs0iunu.fsf@benfinney.id.au> Sam Lai writes: > It's almost Friday, so I have a question where I'm pretty sure I'm > missing something obvious. > > Given, > > d1 = { 'a' : 'b' } > d2 = { 'c' : 'd' } > > ... why isn't d3 = d1 + d2 implemented to be equivalent to - > > d3 = { } > d3.update(d1) > d3.update(d2) Given:: d1 = {'a': "spam", 'b': "eggs"} d2 = {'b': "beans", 'c': "ham"} What should this do:: d3 = d1 + d2 Since the correct behaviour is ambiguous, Python refuses the temptation to guess.
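For concreteness, this is what the two explicit update orders do to the example dicts above; the sketch assumes nothing beyond plain dict.update:

```python
d1 = {'a': "spam", 'b': "eggs"}
d2 = {'b': "beans", 'c': "ham"}

# Reading "d1 + d2" left to right: start from d1, let d2 win clashes.
d3 = dict(d1)
d3.update(d2)

# The opposite reading: start from d2, let d1 win clashes.
d4 = dict(d2)
d4.update(d1)

print(d3['b'])  # beans
print(d4['b'])  # eggs
```

Both results are reasonable, which is exactly the ambiguity: nothing about "+" says which operand's values should win.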
If you want to have a particular behaviour, you need to be explicit, as in your example. > It doesn't work for sets either, but it works in this fashion for > lists. Not "in this fashion"; it appends the sequences. There is no defined order for dicts or sets, and they have a uniqueness guarantee which lists do not have. So appending lists is an unambiguously correct behaviour for "+" for two lists, whereas the same is not true for dicts and sets. -- \ "Beware of bugs in the above code; I have only proved it | `\ correct, not tried it." --Donald Knuth, 1977-03-29 | _o__) | Ben Finney From javier at candeira.com Thu Oct 17 02:58:32 2013 From: javier at candeira.com (Javier Candeira) Date: Thu, 17 Oct 2013 11:58:32 +1100 Subject: [melbourne-pug] Why can't two dicts be added together? In-Reply-To: References: Message-ID: Python culture runs counter to monkeypatching standard library objects, but this looks easy to do by injecting __add__ (and __iadd__ for "d1 += d2") straight into the dict class. In Ruby it's done. In fact it looks so obvious that ... /me googles... https://www.google.com.au/search?q=dict+__add__ The first result is a bug report; it was rejected before it even got to PEP stage: http://bugs.python.org/issue6410. It contains a good rationale for the rejection. J On Thu, Oct 17, 2013 at 11:31 AM, Sam Lai wrote: > It's almost Friday, so I have a question where I'm pretty sure I'm > missing something obvious. > > Given, > > d1 = { 'a' : 'b' } > d2 = { 'c' : 'd' } > > ... why isn't d3 = d1 + d2 implemented to be equivalent to - > > d3 = { } > d3.update(d1) > d3.update(d2) > > It doesn't work for sets either, but it works in this fashion for > lists. Is this because the operation is non-commutative for sets and > dicts and may result in a loss of data when clashing keys are > involved? Isn't that implicit when working with sets and dicts?
> > Sam > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug From brett at brett.geek.nz Thu Oct 17 03:06:39 2013 From: brett at brett.geek.nz (Brett Wilkins) Date: Thu, 17 Oct 2013 12:06:39 +1100 Subject: [melbourne-pug] Why can't two dicts be added together? In-Reply-To: References: Message-ID: In Ruby there is the merge method (returns a new hash) and the merge! method (modifies the hash the method is called on). These methods are documented as preferring the values of the passed-in hash in the case of key collisions. ActiveSupport (part of the Ruby on Rails ecosystem) provides reverse_merge and reverse_merge!, which prefer the acting hash's values over the values of the hash that is passed in. -- Brett Wilkins On 17 October 2013 at 11:58:46 AM, Javier Candeira (javier at candeira.com) wrote: Python culture runs counter to monkeypatching standard library objects, but this looks easy to do by injecting __add__ (and __iadd__ for "d1 += d2") straight into the dict class. In Ruby it's done. In fact it looks so obvious that ... /me googles... https://www.google.com.au/search?q=dict+__add__ The first result is a bug report; it was rejected before it even got to PEP stage: http://bugs.python.org/issue6410. It contains a good rationale for the rejection. J On Thu, Oct 17, 2013 at 11:31 AM, Sam Lai wrote: > It's almost Friday, so I have a question where I'm pretty sure I'm > missing something obvious. > > Given, > > d1 = { 'a' : 'b' } > d2 = { 'c' : 'd' } > > ... why isn't d3 = d1 + d2 implemented to be equivalent to - > > d3 = { } > d3.update(d1) > d3.update(d2) > > It doesn't work for sets either, but it works in this fashion for > lists. Is this because the operation is non-commutative for sets and > dicts and may result in a loss of data when clashing keys are > involved? Isn't that implicit when working with sets and dicts?
> > Sam > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug _______________________________________________ melbourne-pug mailing list melbourne-pug at python.org https://mail.python.org/mailman/listinfo/melbourne-pug -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.sargeant at gmail.com Thu Oct 17 03:10:15 2013 From: tobias.sargeant at gmail.com (Tobias Sargeant) Date: Thu, 17 Oct 2013 12:10:15 +1100 Subject: [melbourne-pug] Why can't two dicts be added together? In-Reply-To: References: Message-ID: Set union (| operator) is commutative. >>> {2,3} | {1,2} == {1,2} | {2,3} True List concatenation isn't >>> [2,3] + [1,2] == [1,2] + [2,3] False so maybe the expectation of commutativity for + isn't a good argument (or is an argument that list concatenation should be called something else :) ). If you want a (verbose) one-liner for "concatenation" of dictionaries, there's always: dict(itertools.chain({1:1}.iteritems(), {2:2}.iteritems())) On 17/10/2013, at 11:31 AM, Sam Lai wrote: > It's almost Friday, so I have a question where I'm pretty sure I'm > missing something obvious. > > Given, > > d1 = { 'a' : 'b' } > d2 = { 'c' : 'd' } > > ... why isn't d3 = d1 + d2 implemented to be equivalent to - > > d3 = { } > d3.update(d1) > d3.update(d2) > > It doesn't work for sets either, but it works in this fashion for > lists. Is this because the operation is non-commutative for sets and > dicts and may result in a loss of data when clashing keys are > involved? Isn't that implicit when working with sets and dicts? 
> > Sam > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug From rasjidw at openminddev.net Thu Oct 17 03:17:02 2013 From: rasjidw at openminddev.net (Rasjid Wilcox) Date: Thu, 17 Oct 2013 12:17:02 +1100 Subject: [melbourne-pug] Why can't two dicts be added together? In-Reply-To: References: Message-ID: <525F3A8E.8060505@openminddev.net> On 17/10/2013 11:58 AM, Javier Candeira wrote: > The first result is a bug report, but it was rejected before it got to > PEP stage even: http://bugs.python.org/issue6410. Contains good > rationale for the rejection. > > I think the last post there is the clincher. What would be the result of {"a": 1, "b": 2} + {"a": 2, "b": 1}? It could be: a) {"a": 1, "b": 2} b) {"a": 2, "b": 1} c) {"a": [1, 2], "b": [1, 2]} d) {"a": [1, 2], "b": [2, 1]} All of the above make sense in some circumstances. Also, most of the time you don't want a new dictionary - you really do just want to update an existing one, which is what the update method does. Cheers, Rasjid. From samuel.lai at gmail.com Thu Oct 17 03:47:26 2013 From: samuel.lai at gmail.com (Sam Lai) Date: Thu, 17 Oct 2013 12:47:26 +1100 Subject: [melbourne-pug] Why can't two dicts be added together? In-Reply-To: References: Message-ID: @Javier, ah, those were the search terms I was looking for. I understand there's potential ambiguity, but it seems reasonable for there to be an accepted convention (personally, I'd make + equivalent to .update on a new dict, in the order presented). Those examples where a list is created for clashing dict keys feel like they're doing a lot more than what a dict should do. In any case, the performance cost of creating a new dict each time probably wouldn't be great anyway. Thanks everyone! On 17 October 2013 12:10, Tobias Sargeant wrote: > Set union (| operator) is commutative.
> >>>> {2,3} | {1,2} == {1,2} | {2,3} > True > > List concatenation isn't > >>>> [2,3] + [1,2] == [1,2] + [2,3] > False > > so maybe the expectation of commutativity for + isn't a good argument (or is an argument that list concatenation should be called something else :) ). > > If you want a (verbose) one-liner for "concatenation" of dictionaries, there's always: > > dict(itertools.chain({1:1}.iteritems(), {2:2}.iteritems())) > > On 17/10/2013, at 11:31 AM, Sam Lai wrote: > >> It's almost Friday, so I have a question where I'm pretty sure I'm >> missing something obvious. >> >> Given, >> >> d1 = { 'a' : 'b' } >> d2 = { 'c' : 'd' } >> >> ... why isn't d3 = d1 + d2 implemented to be equivalent to - >> >> d3 = { } >> d3.update(d1) >> d3.update(d2) >> >> It doesn't work for sets either, but it works in this fashion for >> lists. Is this because the operation is non-commutative for sets and >> dicts and may result in a loss of data when clashing keys are >> involved? Isn't that implicit when working with sets and dicts? >> >> Sam >> _______________________________________________ >> melbourne-pug mailing list >> melbourne-pug at python.org >> https://mail.python.org/mailman/listinfo/melbourne-pug > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug From gcross at fastmail.fm Thu Oct 17 04:21:45 2013 From: gcross at fastmail.fm (Graeme Cross) Date: Thu, 17 Oct 2013 13:21:45 +1100 Subject: [melbourne-pug] Why can't two dicts be added together? In-Reply-To: References: Message-ID: <1381976505.22536.34962597.1B1A2466@webmail.messagingengine.com> On Thu, Oct 17, 2013, at 11:31 AM, Sam Lai wrote: > It's almost Friday, so I have a question where I'm pretty sure I'm > missing something obvious. > > Given, > > d1 = { 'a' : 'b' } > d2 = { 'c' : 'd' } > > ... 
why isn't d3 = d1 + d2 implemented to be equivalent to - > > d3 = { } > d3.update(d1) > d3.update(d2) > > It doesn't work for sets either, but it works in this fashion for > lists. Is this because the operation is non-commutative for sets and > dicts and may result in a loss of data when clashing keys are > involved? Isn't that implicit when working with sets and dicts? > > Sam The funcy library is worth a look as it provides a number of functions to help with situations like this, as well as providing functions that programmers coming from other languages (such as Haskell or Clojure) are used to: http://hackflow.com/blog/2013/10/13/functional-python-made-easy/ e.g. In [1]: from funcy import merge In [2]: d1 = {1:1,2:4,3:9} In [3]: d2 = {4:16,5:25} In [4]: d3 = merge(d1,d2) In [5]: d3 Out[5]: {1: 1, 2: 4, 3: 9, 4: 16, 5: 25} Regards Graeme From andy.larrymite at gmail.com Thu Oct 17 07:35:39 2013 From: andy.larrymite at gmail.com (Andrew Jones) Date: Thu, 17 Oct 2013 16:35:39 +1100 Subject: [melbourne-pug] Medibank Health Solutions is looking for a mid-level Python QA Analyst Message-ID: Hi all, another job with my team at MHS. This time it's a mid-level automation tester. Location: Richmond/Melbourne. Fixed-term 12-month contract. This role would suit a developer looking to broaden their skill set into the automated testing space, or a career automation tester.
See: http://www.seek.com.au/Job/25406434 Here is the gist of the ad: - Roll out automated tests for health and well-being websites that actually make a difference to their users - Set up automated testing in new greenfield Django/Flask projects in hours - Continuously improve and maintain tests in said projects - Ace our Buildbot CI environment - Help us get continuous delivery up and running - Be a member of a highly collaborative cross-functional agile team We use all the good stuff: fabric, puppet, AngularJS, django, south, flask, git, postgresql, ubuntu, AWS, selenium, requests, py.test, buildbot and python of course. Cheers Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at microcomaustralia.com.au Fri Oct 18 03:05:20 2013 From: brian at microcomaustralia.com.au (Brian May) Date: Fri, 18 Oct 2013 12:05:20 +1100 Subject: [melbourne-pug] django db race conditions In-Reply-To: References: Message-ID: Perhaps I need to give some specific examples. Example 1: we need to display the user some data, however this is slow, so we try to cache it: try: cache = Cache.objects.get(start=?, stop=?) except Cache.DoesNotExist: data = get_data(start=?,stop=?) cache = Cache.objects.create(start=?, stop=?, xxx=data.xxx, yyy=data.yyy, ...) [ render response using cache ] So the first step I can do is make sure start and stop are uniquely indexed. That way, if it is run concurrently, the other processes will fail, rather than create multiple objects (which would result in every request failing). Still not very good from the user's perspective. Ideally, as get_data is a db-intensive operation, I only want to call it once for a given start/stop. Otherwise we use more resources than required. Also, I risk being vulnerable to DoS attacks if I get a lot of requests at the same time (you could argue this is a problem anyway, as the start and stop come from the user).
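To sketch that unique-index approach concretely (a framework-free illustration using sqlite3 so it runs standalone; the table layout and the get_data stand-in are invented for the example, not the real schema): the process that loses the insert race catches the integrity error and re-reads the winner's row, instead of failing the whole request:

```python
import sqlite3

def get_or_create_cache(conn, start, stop):
    """Fetch the cache row for (start, stop), creating it if absent.

    A UNIQUE index on (start, stop) makes the INSERT the atomic step:
    a concurrent loser gets an IntegrityError and simply re-reads the
    row the winner created, instead of erroring out the request.
    """
    row = conn.execute("SELECT data FROM cache WHERE start = ? AND stop = ?",
                       (start, stop)).fetchone()
    if row:
        return row[0]
    data = "expensive result for %s..%s" % (start, stop)  # stand-in for get_data()
    try:
        conn.execute("INSERT INTO cache (start, stop, data) VALUES (?, ?, ?)",
                     (start, stop, data))
        conn.commit()
    except sqlite3.IntegrityError:
        conn.rollback()  # another request won the race; use its row
    return conn.execute("SELECT data FROM cache WHERE start = ? AND stop = ?",
                        (start, stop)).fetchone()[0]
```

The Django-flavoured equivalent would be a unique constraint on (start, stop) plus catching IntegrityError around the create and re-running the get. The slow get_data() can still run twice in the window, but only one result is ever committed.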
I think I could change that to something like (if I understand celery correctly): from app.tasks import get_data try: cache = Cache.objects.get(start=?, stop=?) except Cache.DoesNotExist: cache = Cache.objects.create(start=?, stop=?) cache.task = get_data.delay() cache.save() # cache.xxx and cache.yyy to be filled in by celery task if cache.task is not None and not cache.task.ready(): [ render processing message ] else: [ render response using cache ] However, unfortunately, I still have the same race condition. Example 2: I have a photo database that accepts imports from JavaScript. The JavaScript will send a POST request for every file to be uploaded, with the randomly generated name of the album to upload the photo to. At the first step it does: Album.objects.get_or_create(name=?) There is an issue I haven't investigated yet with the JavaScript: for the first upload it will upload the first two files concurrently, despite the fact I configured it to only allow one at a time. Regardless, being able to support concurrent uploads is probably a desirable feature. I can't create a unique index here on name, as I don't consider it an error to have two albums with the same name. Regardless, I don't want uploads to randomly fail either. I'm thinking the solution here is that I need to make sure that the album is created before the first upload, and maybe even reference it in the POST request by id rather than name. Example 3: Creating a new user. User puts in a request for an account. Administrator has to approve the request. If two administrators approve the same request at the same time, we could end up with two accounts for the same user. Oops. Or an error, if some unique index caught, say, the duplicate username or email address. I guess I really need to think about minimising the risks, as opposed to the total extermination of all possible race conditions, and instead focus on ensuring database integrity and that possible damage (e.g. duplicate records) is minimised.
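For Example 3, one mitigation in that "minimise the damage" spirit is to make approval idempotent, so a second concurrent or repeated approval is a no-op rather than a duplicate account. A minimal in-memory sketch; the thread lock here merely stands in for whatever makes the check-and-create atomic in a real app (e.g. a unique constraint keyed on the request), and all names are illustrative:

```python
import threading

class AccountRequest:
    """A pending signup that two admins may try to approve at once."""

    def __init__(self, email):
        self.email = email
        self.account = None            # created at most once
        self._lock = threading.Lock()  # stand-in for a db unique constraint

    def approve(self):
        # Idempotent: the first call creates the account; every later
        # call (a second admin, a double click) returns the same one.
        with self._lock:
            if self.account is None:
                self.account = {"email": self.email, "active": True}
            return self.account
```

Approving twice then returns the same account object instead of creating a duplicate.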
-------------- next part -------------- An HTML attachment was scrubbed... URL: From tim at growthpath.com.au Thu Oct 17 03:00:04 2013 From: tim at growthpath.com.au (Tim Richardson) Date: Thu, 17 Oct 2013 12:00:04 +1100 Subject: [melbourne-pug] Why can't two dicts be added together? In-Reply-To: <7wfvs0iunu.fsf@benfinney.id.au> References: <7wfvs0iunu.fsf@benfinney.id.au> Message-ID: On Thu, Oct 17, 2013 at 11:54 AM, Ben Finney wrote: > Since the correct behaviour is ambiguous, Python refuses the temptation > to guess. Your definition of + would have to be non-commutative, and removing clashing values is a subtraction (sensitive to the order). So it's a bad "addition" in two ways. -- *Tim Richardson, Director* Mobile: +61 423 091 732 Office: +61 3 8678 1850 GrowthPath Pty Ltd ABN 76 133 733 963 -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier at candeira.com Fri Oct 18 04:01:26 2013 From: javier at candeira.com (Javier Candeira) Date: Fri, 18 Oct 2013 13:01:26 +1100 Subject: [melbourne-pug] django db race conditions In-Reply-To: References: Message-ID: Example 3 can be fixed by making administrator approval idempotent. User puts in a request for an account, and in doing so creates a unique "I want an account" token. The ID of this token is a hash of the associated email, perhaps. Then it doesn't matter if two administrators authorize it in sequence: the second authorization will leave the already-authorized token untouched. I'm not so sure about example 1, but perhaps it could be done in a similar way. Create a cached object that has some kind of unique key, and always create it empty and with a flag that says "busy being born". That way the race can be minimised, as this creation takes very little time (though not eliminated, as it's not an atomic operation).
Requests that find the cache object while it's still being populated can perform a wait(), or maybe register themselves to be notified via callback when the object is finished. J On Fri, Oct 18, 2013 at 12:05 PM, Brian May wrote: > Perhaps I need to give some some specific examples. > > > Example 1: we need to display the user some data, however this is slow, so > we try to cache it: > > try: > cache = Cache.objects.get(start=?, stop=?) > except Model.DoesNotExist: > data = get_data(start=?,stop=?) > cache = Cache.objects.create(start=?, stop=?, xxx=data.xxx, yyy=data.yyy, > ...) > [ render response using cache ] > > So the first step I can do is make sure start and stop are uniquely indexed. > That way if it is run concurrently, the other processes will fail rather > then create multiple objects resulting in every request failing. Still not > very good from the user's perspective. > > Ideally, as get_data is a db intensive operation I only want to call it once > for a given start/stop. Otherwise we use more resources then required. Also > I risk being vulnerable to DOS attacks if I get a lot of requests at the > same time (you could argue this is a problem anyway as the start and stop > come from the user). > > I think I could change that to something like (if I understand celery > correctly): > > from app.tasks import get_data > try: > cache = Cache.objects.get(start=?, stop=?) > except Model.DoesNotExist: > cache = Cache.objects.create(start=?, stop=?) > cache.task = get_data.delay() > cache.save() > # cache.xxx and cache.yyy to be filled in by celery task > > if cache.task is not None and not task.ready(): > [ render processing message ] > else: > [ render response using cache ] > > However, unfortunately, I still have the same race condition. > > > Example 2: I have a photo database that accepts imports from JavaScript.
The > JavaScript will send a POST request for every file to be uploaded, with the > randomly generated name of the album to upload the photo to. At the first > step it does: > > Album.objects.get_or_create(name=?) > > There is an issue I haven't investigated yet with the JavaScript that for > the first upload it will upload the first two files concurrently, despite > the fact I configured it to only allow one at a time. Regardless, being able > to support concurrent uploads is probably a desirable feature. > > I can't create a unique index here on name, I don't consider it an error to > have two album's with the same name. > > Regardless, I don't want uploads to randomly fail either. > > Am thinking the solution here is that I need to make sure that the album is > created before the first upload, and maybe even reference it in the POST > request by id rather then name. > > > Example 3: Creating new user. User puts in a request for an account. > Administrator has to approve the request. If two administrators approve the > same request at the same time, we could end up with two accounts for the > same user. Ooops. Or an error if some unique index caught, say, the > duplicate username or email address. > > > I guess I really to think about minimize the risks, as opposed to total > extermination of all possible race conditions. Instead focus on ensuring > that the database integrity and that possible damage (e.g. duplicate > records) is minimised. > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > From rasjidw at openminddev.net Fri Oct 18 04:13:21 2013 From: rasjidw at openminddev.net (Rasjid Wilcox) Date: Fri, 18 Oct 2013 13:13:21 +1100 Subject: [melbourne-pug] django db race conditions In-Reply-To: References: Message-ID: <52609941.1050602@openminddev.net> On 18/10/2013 12:05 PM, Brian May wrote: > Perhaps I need to give some some specific examples. 
> > > Example 1: we need to display the user some data, however this is > slow, so we try to cache it: > > try: > cache = Cache.objects.get(start=?, stop=?) > except Model.DoesNotExist: > data = get_data(start=?,stop=?) > cache = Cache.objects.create(start=?, stop=?, xxx=data.xxx, > yyy=data.yyy, ...) > [ render response using cache ] > > I would set up a table (guard_get_data, say), indexed on (start, stop), with a timestamp field. try: cache = Cache.objects.get(start=?, stop=?) except Cache.DoesNotExist: try: insert into guard_get_data (start, stop, now) # not real code try: data = get_data(start=?,stop=?) cache = Cache.objects.create(start=?, stop=?, xxx=data.xxx, yyy=data.yyy, ...) finally: delete (start, stop) from guard_get_data # not real code except insert error: # already being generated wait for guard_get_data record on (start, stop) to be deleted # not real code # the data should be in the cache now cache = Cache.objects.get(start=?, stop=?) Having the timestamp field means you can check that the guard_get_data record is not too far in the past. If it is, it probably means that python crashed without deleting the record, or the generation process is spinning in another thread/process. The insert into guard_get_data should be completely atomic - it will only ever succeed for one caller, and so it should eliminate the race condition. I've not used django's orm in ages, so I don't know what method it uses for atomic inserts, or whether it would be better to drop down to the sql level. Cheers, Rasjid. From javier at candeira.com Sat Oct 19 11:58:53 2013 From: javier at candeira.com (Javier Candeira) Date: Sat, 19 Oct 2013 20:58:53 +1100 Subject: [melbourne-pug] Next MPUG meeting: Machine Vision and LaTeX on 4 November, 6PM - Inspire 9, 41 Stewart St Message-ID: Dear Melbourne Pythonistas, This is the current lineup for the November MPUG meeting: # Bianca Gibson -- Latex and Python # Lars Yencken -- Machine Vision with SimpleCV.
We can still fit in a 5 minute short talk for this session, so please volunteer or dob in a friend! You can do it anonymously using our wiki: https://wiki.python.org/moin/MelbournePUG See you in 15 days, Javier & the MPUG organizers. From bianca.rachel.gibson at gmail.com Sun Oct 20 03:58:33 2013 From: bianca.rachel.gibson at gmail.com (Bianca Gibson) Date: Sun, 20 Oct 2013 12:58:33 +1100 Subject: [melbourne-pug] Next MPUG meeting: Machine Vision and LaTeX on 4 November, 6PM - Inspire 9, 41 Stewart St In-Reply-To: References: Message-ID: I thought I was in for December, not November. Unfortunately I'm not available for November, I have an exam that afternoon. Cheers, Bianca -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier at candeira.com Sun Oct 20 10:13:15 2013 From: javier at candeira.com (Javier Candeira) Date: Sun, 20 Oct 2013 19:13:15 +1100 Subject: [melbourne-pug] Next MPUG meeting: Machine Vision and Indie Gaming on 4 November, 6PM - Inspire 9, 41 Stewart St Message-ID: Oops, my mistake. I should have double-checked. Dear Melbourne Pythonistas, This is the correct lineup for the November MPUG meeting: # Luke Miller -- My big gay adventure. Making, releasing and selling an indie game made in Python. # Lars Yencken -- Machine Vision with SimpleCV. We can still fit in a 5 minute short talk for this session, so please volunteer or dob in a friend! You can do it anonymously using our wiki: https://wiki.python.org/moin/MelbournePUG See you in 15 days, Javier & the MPUG organizers. On Sun, Oct 20, 2013 at 12:58 PM, Bianca Gibson wrote: > I thought I was in for December, not November. > Unfortunately I'm not available for November, I have an exam that afternoon. 
> > Cheers, > Bianca > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > From javier at candeira.com Mon Oct 21 00:46:19 2013 From: javier at candeira.com (Javier Candeira) Date: Mon, 21 Oct 2013 09:46:19 +1100 Subject: [melbourne-pug] External services for in-the-cloud app Message-ID: I'm about to start evaluating external scm, logging, monitoring, analytics, issues, etc. services for an in-the-cloud application, and I'd like your advice/opinion on the ones you already use. Monitoring: I'm currently using Server Density for monitoring with another client, and dislike it (it initialises your / as a git repository, if you can believe it). It's not cheap either. Any of you uses New Relic? Logging: In the past I used the Splunk free tier for logging and analytics, and it was fine. but I also think we never used it to its full potential. I also wonder if we could have monitoring and logging rolled into one, thus saving cost and complexity. Scm: for private repos, I'm happier with bitbucket than I am with github. Also, I've not been bit by outages once, which is nice. Issue management/Customer feedback: I only know it as a regular punter, but I like UserVoice. Firsthand experience much appreciated! Javier From r1chardj0n3s at gmail.com Mon Oct 21 00:57:21 2013 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Mon, 21 Oct 2013 09:57:21 +1100 Subject: [melbourne-pug] External services for in-the-cloud app In-Reply-To: References: Message-ID: On the logging side I've used Sentry in a few projects. It's pretty good, and the client (raven) has gotten better at handling the server not being contactable lately. It supports multiple languages but it doesn't do metrics though. It's pretty easy to set up a private instance if you don't want to use their service. 
Richard On 21 October 2013 09:46, Javier Candeira wrote: > I'm about to start evaluating external scm, logging, monitoring, > analytics, issues, etc. services for an in-the-cloud application, and > I'd like your advice/opinion on the ones you already use. > > Monitoring: I'm currently using Server Density for monitoring with > another client, and dislike it (it initialises your / as a git > repository, if you can believe it). It's not cheap either. Any of you > uses New Relic? > > Logging: In the past I used the Splunk free tier for logging and > analytics, and it was fine. but I also think we never used it to its > full potential. I also wonder if we could have monitoring and logging > rolled into one, thus saving cost and complexity. > > Scm: for private repos, I'm happier with bitbucket than I am with > github. Also, I've not been bit by outages once, which is nice. > > Issue management/Customer feedback: I only know it as a regular > punter, but I like UserVoice. > > Firsthand experience much appreciated! > > Javier > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjlmac at gmail.com Mon Oct 21 04:32:29 2013 From: cjlmac at gmail.com (Chris Maclachlan) Date: Mon, 21 Oct 2013 13:32:29 +1100 Subject: [melbourne-pug] External services for in-the-cloud app In-Reply-To: References: Message-ID: I use NewRelic's free tier for an application of mine. It's an incredibly comprehensive solution. It also does a fair bit of log aggregating. Setup was extremely easy - in my case I have a Flask application on uWSGI, so getting it to work was as simple as throwing two lines into the app's __init__.py after the imports. 
I was really surprised - and impressed - when one of their reps called me from the US on my mobile after I signed up, to discuss some oddities they were seeing (basically, logged high latency from an external service I was using and they were calling to make sure the issue wasn't their agent!). I've been able to use some of the introspection features to improve response times (you can define key transactions and collect detailed info on database latency, slow points, etc). I spent a month in a free trial of the paid tier, of course. It's a bit pricey, so I couldn't get budget for it - but what you get for what you pay is really quite decent. Can't say enough nice things. Also, free T-shirt! For issues and feedback - several people have suggested Trac to me in the past. It of course has the benefit of integrating with Mercurial and Git really nicely. I've been told it can be customised extensively to turn it into a full product helpdesk system, but I've always ended up concluding that it doesn't meet my requirements (I need some features like detailed SLA tracking that it just doesn't have). Cheers, Chris On Mon, Oct 21, 2013 at 9:46 AM, Javier Candeira wrote: > I'm about to start evaluating external scm, logging, monitoring, > analytics, issues, etc. services for an in-the-cloud application, and > I'd like your advice/opinion on the ones you already use. > > Monitoring: I'm currently using Server Density for monitoring with > another client, and dislike it (it initialises your / as a git > repository, if you can believe it). It's not cheap either. Any of you > uses New Relic? > > Logging: In the past I used the Splunk free tier for logging and > analytics, and it was fine. but I also think we never used it to its > full potential. I also wonder if we could have monitoring and logging > rolled into one, thus saving cost and complexity. > > Scm: for private repos, I'm happier with bitbucket than I am with > github. 
Also, I've not been bit by outages once, which is nice. > > Issue management/Customer feedback: I only know it as a regular > punter, but I like UserVoice. > > Firsthand experience much appreciated! > > Javier > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier at candeira.com Mon Oct 21 05:28:16 2013 From: javier at candeira.com (Javier Candeira) Date: Mon, 21 Oct 2013 14:28:16 +1100 Subject: [melbourne-pug] External services for in-the-cloud app In-Reply-To: References: Message-ID: Thanks, guys. Please keep it coming! J On Mon, Oct 21, 2013 at 1:32 PM, Chris Maclachlan wrote: > I use NewRelic's free tier for an application of mine. It's an incredibly > comprehensive solution. It also does a fair bit of log aggregating. Setup > was extremely easy - in my case I have a Flask application on uWSGI, so > getting it to work was as simple as throwing two lines into the app's > __init__.py after the imports. > > I was really surprised - and impressed - when one of their reps called me > from the US on my mobile after I signed up, to discuss some oddities they > were seeing (basically, logged high latency from an external service I was > using and they were calling to make sure the issue wasn't their agent!). > I've been able to use some of the introspection features to improve response > times (you can define key transactions and collect detailed info on database > latency, slow points, etc). > > I spent a month in a free trial of the paid tier, of course. It's a bit > pricey, so I couldn't get budget for it - but what you get for what you pay > is really quite decent. Can't say enough nice things. Also, free T-shirt! > > For issues and feedback - several people have suggested Trac to me in the > past. 
It of course has the benefit of integrating with Mercurial and Git > really nicely. I've been told it can be customised extensively to turn it > into a full product helpdesk system, but I've always ended up concluding > that it doesn't meet my requirements (I need some features like detailed SLA > tracking that it just doesn't have). > > Cheers, > > Chris > > > On Mon, Oct 21, 2013 at 9:46 AM, Javier Candeira > wrote: >> >> I'm about to start evaluating external scm, logging, monitoring, >> analytics, issues, etc. services for an in-the-cloud application, and >> I'd like your advice/opinion on the ones you already use. >> >> Monitoring: I'm currently using Server Density for monitoring with >> another client, and dislike it (it initialises your / as a git >> repository, if you can believe it). It's not cheap either. Any of you >> uses New Relic? >> >> Logging: In the past I used the Splunk free tier for logging and >> analytics, and it was fine. but I also think we never used it to its >> full potential. I also wonder if we could have monitoring and logging >> rolled into one, thus saving cost and complexity. >> >> Scm: for private repos, I'm happier with bitbucket than I am with >> github. Also, I've not been bit by outages once, which is nice. >> >> Issue management/Customer feedback: I only know it as a regular >> punter, but I like UserVoice. >> >> Firsthand experience much appreciated! 
>> >> Javier >> _______________________________________________ >> melbourne-pug mailing list >> melbourne-pug at python.org >> https://mail.python.org/mailman/listinfo/melbourne-pug > > > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > From bruce at brucewang.net Mon Oct 21 05:31:49 2013 From: bruce at brucewang.net (Bruce Wang) Date: Mon, 21 Oct 2013 14:31:49 +1100 Subject: [melbourne-pug] External services for in-the-cloud app In-Reply-To: References: Message-ID: Cloudbees.com for Jenkins On Mon, Oct 21, 2013 at 2:28 PM, Javier Candeira wrote: > Thanks, guys. Please keep it coming! > > J > > On Mon, Oct 21, 2013 at 1:32 PM, Chris Maclachlan > wrote: > > I use NewRelic's free tier for an application of mine. It's an incredibly > > comprehensive solution. It also does a fair bit of log aggregating. Setup > > was extremely easy - in my case I have a Flask application on uWSGI, so > > getting it to work was as simple as throwing two lines into the app's > > __init__.py after the imports. > > > > I was really surprised - and impressed - when one of their reps called me > > from the US on my mobile after I signed up, to discuss some oddities they > > were seeing (basically, logged high latency from an external service I > was > > using and they were calling to make sure the issue wasn't their agent!). > > I've been able to use some of the introspection features to improve > response > > times (you can define key transactions and collect detailed info on > database > > latency, slow points, etc). > > > > I spent a month in a free trial of the paid tier, of course. It's a bit > > pricey, so I couldn't get budget for it - but what you get for what you > pay > > is really quite decent. Can't say enough nice things. Also, free T-shirt! > > > > For issues and feedback - several people have suggested Trac to me in the > > past. 
It of course has the benefit of integrating with Mercurial and Git > > really nicely. I've been told it can be customised extensively to turn it > > into a full product helpdesk system, but I've always ended up concluding > > that it doesn't meet my requirements (I need some features like detailed > SLA > > tracking that it just doesn't have). > > > > Cheers, > > > > Chris > > > > > > On Mon, Oct 21, 2013 at 9:46 AM, Javier Candeira > > wrote: > >> > >> I'm about to start evaluating external scm, logging, monitoring, > >> analytics, issues, etc. services for an in-the-cloud application, and > >> I'd like your advice/opinion on the ones you already use. > >> > >> Monitoring: I'm currently using Server Density for monitoring with > >> another client, and dislike it (it initialises your / as a git > >> repository, if you can believe it). It's not cheap either. Any of you > >> uses New Relic? > >> > >> Logging: In the past I used the Splunk free tier for logging and > >> analytics, and it was fine. but I also think we never used it to its > >> full potential. I also wonder if we could have monitoring and logging > >> rolled into one, thus saving cost and complexity. > >> > >> Scm: for private repos, I'm happier with bitbucket than I am with > >> github. Also, I've not been bit by outages once, which is nice. > >> > >> Issue management/Customer feedback: I only know it as a regular > >> punter, but I like UserVoice. > >> > >> Firsthand experience much appreciated! 
> >> > >> Javier > >> _______________________________________________ > >> melbourne-pug mailing list > >> melbourne-pug at python.org > >> https://mail.python.org/mailman/listinfo/melbourne-pug > > > > > > > > _______________________________________________ > > melbourne-pug mailing list > > melbourne-pug at python.org > > https://mail.python.org/mailman/listinfo/melbourne-pug > > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > -- simple is good http://brucewang.net http://twitter.com/number5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at yencken.org Mon Oct 21 11:58:35 2013 From: lars at yencken.org (Lars Yencken) Date: Mon, 21 Oct 2013 20:58:35 +1100 Subject: [melbourne-pug] External services for in-the-cloud app In-Reply-To: References: Message-ID: I can second NewRelic. We've used it for some time at work, and it's definitely the strongest performance monitoring tool that I've used. Recently I used their free tier for my language game in Flask, and it's again been useful. The 24h limit to history does prevent you from investigating performance over time though -- for that you have to pay. We're also moving to sentry at work. We use Github, but Bitbucket's more reasonably priced. It's what I use for my personal projects. The workflow around pull requests is probably not as strong for teams though. On 21 Oct 2013 14:36, "Javier Candeira" wrote: > Thanks, guys. Please keep it coming! > > J > > On Mon, Oct 21, 2013 at 1:32 PM, Chris Maclachlan > wrote: > > I use NewRelic's free tier for an application of mine. It's an incredibly > > comprehensive solution. It also does a fair bit of log aggregating. Setup > > was extremely easy - in my case I have a Flask application on uWSGI, so > > getting it to work was as simple as throwing two lines into the app's > > __init__.py after the imports. 
> > > > I was really surprised - and impressed - when one of their reps called me > > from the US on my mobile after I signed up, to discuss some oddities they > > were seeing (basically, logged high latency from an external service I > was > > using and they were calling to make sure the issue wasn't their agent!). > > I've been able to use some of the introspection features to improve > response > > times (you can define key transactions and collect detailed info on > database > > latency, slow points, etc). > > > > I spent a month in a free trial of the paid tier, of course. It's a bit > > pricey, so I couldn't get budget for it - but what you get for what you > pay > > is really quite decent. Can't say enough nice things. Also, free T-shirt! > > > > For issues and feedback - several people have suggested Trac to me in the > > past. It of course has the benefit of integrating with Mercurial and Git > > really nicely. I've been told it can be customised extensively to turn it > > into a full product helpdesk system, but I've always ended up concluding > > that it doesn't meet my requirements (I need some features like detailed > SLA > > tracking that it just doesn't have). > > > > Cheers, > > > > Chris > > > > > > On Mon, Oct 21, 2013 at 9:46 AM, Javier Candeira > > wrote: > >> > >> I'm about to start evaluating external scm, logging, monitoring, > >> analytics, issues, etc. services for an in-the-cloud application, and > >> I'd like your advice/opinion on the ones you already use. > >> > >> Monitoring: I'm currently using Server Density for monitoring with > >> another client, and dislike it (it initialises your / as a git > >> repository, if you can believe it). It's not cheap either. Any of you > >> uses New Relic? > >> > >> Logging: In the past I used the Splunk free tier for logging and > >> analytics, and it was fine. but I also think we never used it to its > >> full potential. 
I also wonder if we could have monitoring and logging > >> rolled into one, thus saving cost and complexity. > >> > >> Scm: for private repos, I'm happier with bitbucket than I am with > >> github. Also, I've not been bit by outages once, which is nice. > >> > >> Issue management/Customer feedback: I only know it as a regular > >> punter, but I like UserVoice. > >> > >> Firsthand experience much appreciated! > >> > >> Javier > >> _______________________________________________ > >> melbourne-pug mailing list > >> melbourne-pug at python.org > >> https://mail.python.org/mailman/listinfo/melbourne-pug > > > > > > > > _______________________________________________ > > melbourne-pug mailing list > > melbourne-pug at python.org > > https://mail.python.org/mailman/listinfo/melbourne-pug > > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier at candeira.com Tue Oct 22 01:27:04 2013 From: javier at candeira.com (Javier Candeira) Date: Tue, 22 Oct 2013 10:27:04 +1100 Subject: [melbourne-pug] External services for in-the-cloud app In-Reply-To: References: Message-ID: On Mon, Oct 21, 2013 at 8:58 PM, Lars Yencken wrote: > I can second NewRelic. We've used it for some time at work, and it's > definitely the strongest performance monitoring tool that I've used. > Recently I used their free tier for my language game in Flask, and it's > again been useful. The 24h limit to history does prevent you from > investigating performance over time though -- for that you have to pay. I've also been looking at open source solutions in this space, like collectd, ganglia, nagios, cacti, but it seems they overlap a bit, and at the same time they leave out a lot of the error monitoring that the likes of New Relic give you. I'll report back when I get a better idea. 
> We use Github, but Bitbucket's more reasonably priced. It's what I use for > my personal projects. The workflow around pull requests is probably not as > strong for teams though. Can you give an example? Besides the fact that pull requests don't create an issue (which may even be better for some people, I'm agnostic on the point), I can't see much difference. J From dan at acommoncreative.com Tue Oct 22 03:42:14 2013 From: dan at acommoncreative.com (dan) Date: Tue, 22 Oct 2013 12:42:14 +1100 Subject: [melbourne-pug] External services for in-the-cloud app In-Reply-To: References: Message-ID: This may seem a little too much like spammy advertising but just to disclaim, I have no affiliation or anything with the company....I'm just a fan of their software and this does seem particularly topical to this thread. New Relic just released some new (cheaper) pricing options that might help address the 24 hour issue in a more affordable way for smaller organisations: "Here's the package *(Available through October 31st, 2013)*: ************************************* * *Startup Package (only for companies with less than 10 employees)* - One flat fee for up to 8 servers and up to 5 users - Transaction Traces and all the other features found in our Pro offering - 2 weeks of data retention (vs 24 hours in lite) - *All for $199/month* *Small Business Package (only for companies with less than 20 employees)* - One flat fee for up to 12 servers and up to 10 users - Transaction Traces and all the other features found in our Pro offering - 2 weeks of data retention (vs 24 hours in lite) - *All for $499/month* As you may know, our current pricing is $199 for ONE server per month for more or less the same features - so this is the equivalent of us undercutting ourselves. Oh well. We just want you as a customer!" Cheers, Dan On 22 October 2013 10:27, Javier Candeira wrote: > On Mon, Oct 21, 2013 at 8:58 PM, Lars Yencken wrote: > > I can second NewRelic. 
We've used it for some time at work, and it's > > definitely the strongest performance monitoring tool that I've used. > > Recently I used their free tier for my language game in Flask, and it's > > again been useful. The 24h limit to history does prevent you from > > investigating performance over time though -- for that you have to pay. > > I've also been looking at open source solutions in this space, like > collectd, ganglia, nagios, cacti, but it seems they overlap a bit, and > at the same time they leave out a lot of the error monitoring that the > likes of New Relic give you. I'll report back when I get a better > idea. > > > We use Github, but Bitbucket's more reasonably priced. It's what I use > for > > my personal projects. The workflow around pull requests is probably not > as > > strong for teams though. > > Can you give an example? Besides the fact that pull requests don't > create an issue (which may even be better for some people, I'm > agnostic on the point), I can't see much difference. > > J > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > -- Common Code = { 'email': 'dan at commoncode.com.au', 'mobile': '0422 987 423', 'address': '114 Hoddle Street, Abbotsford 3067', 'zen': 'http://www.python.org/dev/peps/pep-0020/', } -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at pythoncharmers.com Tue Oct 22 08:44:06 2013 From: ed at pythoncharmers.com (Ed Schofield) Date: Tue, 22 Oct 2013 17:44:06 +1100 Subject: [melbourne-pug] Python 3 porting sprint: Monday 28 Oct Message-ID: <4136F4C56C104758BACBF65FAB920DF0@pythoncharmers.com> Hi everyone, Python Charmers is hosting a Python 3 porting sprint on Monday 28 October from 6pm to 9pm. Come and learn how to port code to Python 3 and get help with porting an open source project you care about! 
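As a small taste of what writing for both versions looks like, here is a minimal sketch using only the standard ``__future__`` imports (a generic illustration, not part of the sprint materials):

```python
from __future__ import absolute_import, division, print_function, unicode_literals

# With these imports, this module behaves the same under Python 2.7 and 3.x:
# '/' is true division, print is a function, and string literals are unicode.
print('half of 5 is', 5 / 2)   # 2.5 on both versions, not 2

text = 'caf\xe9'               # a unicode string on both versions
print(len(text))               # 4 characters, not a byte count
```

The ``future`` package goes further than this, backporting the Python 3 builtins themselves, but the ``__future__`` imports above are the usual first step.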
Python 2.7, released 39 months ago, is the final version of Python 2. All further language features and standard library enhancements will happen only in Python 3.x. Python 3 contains powerful new features like function annotations, better memory efficiency, saner Unicode handling, and (with 3.4 due in April) packaging improvements and a powerful ``asyncio`` module providing features from Tornado / gevent / Twisted in the standard library. The Python community needs our help in order to make choosing Python 3 a no-brainer. All this needs is more packages with Python 3 support. With Python's ``__future__`` imports and the ``future`` package, it is now easier than ever to provide compatibility with both Python 2 and 3 from a single clean codebase. Come and learn how to write future-proof Python code and make a difference. The event is free. Bring an open source package you care about and a desire to learn and contribute to the future of Python. We will keep track of how many packages we can port to both versions and publicise our results. We'll order in pizzas for dinner and have good music. It'll be fun! ;) Currently there are 11 people coming. Space is limited to about 25-30, so if you're keen, please add your RSVP to this page: http://www.meetup.com/Melbourne-Python-Meetup-Group/events/146632852/ Cheers, Ed -- Dr. Edward Schofield (M) +61 (0)405 676 229 Python Charmers http://pythoncharmers.com From ben+python at benfinney.id.au Tue Oct 22 10:41:10 2013 From: ben+python at benfinney.id.au (Ben Finney) Date: Tue, 22 Oct 2013 19:41:10 +1100 Subject: [melbourne-pug] Python 3 porting sprint: Monday 28 Oct References: <4136F4C56C104758BACBF65FAB920DF0@pythoncharmers.com> Message-ID: <7wli1lg0k9.fsf@benfinney.id.au> Ed Schofield writes: > Python Charmers is hosting a Python 3 porting sprint on Monday 28 > October from 6pm to 9pm. Come and learn how to port code to Python 3 > and get help with porting an open source project you care about! Great idea! 
Thanks to Python Charmers for organising this. > Currently there are 11 people coming. Space is limited to about 25-30, > so if you're keen, please add your RSVP to this page: > > http://www.meetup.com/Melbourne-Python-Meetup-Group/events/146632852/ Can I register an RSVP without being a member of Meetup.com? -- \ “The cost of a thing is the amount of what I call life which is | `\ required to be exchanged for it, immediately or in the long | _o__) run.” –Henry David Thoreau | Ben Finney From noonslists at gmail.com Tue Oct 22 11:16:06 2013 From: noonslists at gmail.com (Noon Silk) Date: Tue, 22 Oct 2013 20:16:06 +1100 Subject: [melbourne-pug] External services for in-the-cloud app In-Reply-To: References: Message-ID: > Scm: for private repos, I'm happier with bitbucket than I am with > github. Also, I've not been bit by outages once, which is nice. I use bitbucket for some personal things, and it is good; better than GitHub because it is free for private repos. They've had outages though, but depending on what you do, it doesn't really matter, as long as they're resolved in a few hours, which they have been. Issue management: I (used to) use a locally-run jira instance. It's super-cheap if you just buy a small license; something like $10. On Mon, Oct 21, 2013 at 9:46 AM, Javier Candeira wrote: > I'm about to start evaluating external scm, logging, monitoring, > analytics, issues, etc. services for an in-the-cloud application, and > I'd like your advice/opinion on the ones you already use. > > Monitoring: I'm currently using Server Density for monitoring with > another client, and dislike it (it initialises your / as a git > repository, if you can believe it). It's not cheap either. Any of you > uses New Relic? > > Logging: In the past I used the Splunk free tier for logging and > analytics, and it was fine. but I also think we never used it to its > full potential. 
I also wonder if we could have monitoring and logging > rolled into one, thus saving cost and complexity. > > Scm: for private repos, I'm happier with bitbucket than I am with > github. Also, I've not been bit by outages once, which is nice. > > Issue management/Customer feedback: I only know it as a regular > punter, but I like UserVoice. > > Firsthand experience much appreciated! > > Javier > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > -- Noon Silk Fancy a quantum lunch? https://sites.google.com/site/quantumlunch/ "Every morning when I wake up, I experience an exquisite joy – the joy of being this signature." -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at microcomaustralia.com.au Wed Oct 23 02:53:06 2013 From: brian at microcomaustralia.com.au (Brian May) Date: Wed, 23 Oct 2013 11:53:06 +1100 Subject: [melbourne-pug] External services for in-the-cloud app In-Reply-To: References: Message-ID: On 21 October 2013 20:58, Lars Yencken wrote: > We use Github, but Bitbucket's more reasonably priced. It's what I use for > my personal projects. The workflow around pull requests is probably not as > strong for teams though. > Github pull-requests have been criticised: http://julien.danjou.info/blog/2013/rant-about-github-pull-request-workflow-implementation Just because it is in the cloud doesn't mean that they have implemented it in the best possible way. -- Brian May -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ed at pythoncharmers.com Wed Oct 23 03:00:11 2013 From: ed at pythoncharmers.com (Ed Schofield) Date: Wed, 23 Oct 2013 12:00:11 +1100 Subject: [melbourne-pug] Python 3 porting sprint: Monday 28 Oct In-Reply-To: <7wli1lg0k9.fsf@benfinney.id.au> References: <4136F4C56C104758BACBF65FAB920DF0@pythoncharmers.com> <7wli1lg0k9.fsf@benfinney.id.au> Message-ID: <745B24E499414196A3C801A613F8AC4A@pythoncharmers.com> On Tuesday, 22 October 2013 at 7:41 PM, Ben Finney wrote: > Ed Schofield writes: > > > Python Charmers is hosting a Python 3 porting sprint on Monday 28 > > October from 6pm to 9pm. Come and learn how to port code to Python 3 > > and get help with porting an open source project you care about! > > ... > > Can I register an RSVP without being a member of Meetup.com (http://Meetup.com)? Hi Ben, Yes, I've got you down. Cheers, Ed From javier at candeira.com Mon Oct 28 11:38:52 2013 From: javier at candeira.com (Javier Candeira) Date: Mon, 28 Oct 2013 21:38:52 +1100 Subject: [melbourne-pug] Python 3 porting sprint: Monday 28 Oct In-Reply-To: <745B24E499414196A3C801A613F8AC4A@pythoncharmers.com> References: <4136F4C56C104758BACBF65FAB920DF0@pythoncharmers.com> <7wli1lg0k9.fsf@benfinney.id.au> <745B24E499414196A3C801A613F8AC4A@pythoncharmers.com> Message-ID: Hi all, There were about 8 or 9 of us tonight at the porting party, and it was great. Japanese curry was catered, and some progress was made. My contribution: https://github.com/candeira/githubpy/tree/python33 is Michel Liao's githubpy, working (or at least passing the tests, I haven't tried to actually use it*) on both 2.7 and 3.3. * To quote Don Knuth Future proved useful for a first rough refactoring. Most of the remaining bugs were of two kinds: - translation of library imports (urllib2, hashlib) which could be added to Future's refactoring functions (and I volunteer) - subtle unicode bugs, which I suspect would be difficult to refactor automatically, and even then fixing them would require running the program. 
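Both kinds of bug have familiar shapes. A generic sketch (not taken from the githubpy port itself): renamed modules like urllib2 are commonly handled with a try/except import until a tool such as the future package installs the aliases for you, while hashlib keeps its name but accepts only bytes in Python 3, a typical source of the subtle unicode bugs mentioned above.

```python
import hashlib

# Renamed module: urllib2 was split into urllib.request / urllib.error in Python 3.
try:
    from urllib.request import urlopen          # Python 3
    from urllib.error import HTTPError
except ImportError:
    from urllib2 import urlopen, HTTPError      # Python 2

# Subtle unicode bug: in Python 3, hashlib only accepts bytes,
# so text must be encoded explicitly before hashing.
digest = hashlib.sha1('melbourne-pug'.encode('utf-8')).hexdigest()
print(digest)  # a 40-character hex string on both versions
```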
Cheers, J On Wed, Oct 23, 2013 at 12:00 PM, Ed Schofield wrote: > On Tuesday, 22 October 2013 at 7:41 PM, Ben Finney wrote: >> Ed Schofield writes: >> >> > Python Charmers is hosting a Python 3 porting sprint on Monday 28 >> > October from 6pm to 9pm. Come and learn how to port code to Python 3 >> > and get help with porting an open source project you care about! >> >> ... >> >> Can I register an RSVP without being a member of Meetup.com (http://Meetup.com)? > > Hi Ben, Yes, I've got you down. > > Cheers, > Ed > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug From javier at candeira.com Tue Oct 29 04:34:39 2013 From: javier at candeira.com (Javier Candeira) Date: Tue, 29 Oct 2013 14:34:39 +1100 Subject: [melbourne-pug] November MPUG meeting: Machine Vision and Indie Gaming on 4 November, 6PM - Inspire 9, 41 Stewart St, Richmond Message-ID: Dear Melbourne Pythonistas, As previously announced, the next meeting of the Melbourne Python Users Group will be next Monday, 4 November at 6PM. Venue: Inspire 9, 41 Stewart St. Richmond. 50m from Richmond Train Station. And this is the current talk lineup: # Luke Miller -- My big gay adventure. Making, releasing and selling an indie game made in Python. # Lars Yencken -- Machine Vision with SimpleCV. We can still fit in a 5 minute short talk for this session, so please volunteer or dob in a friend! You can do it anonymously using our wiki: https://wiki.python.org/moin/MelbournePUG See you in a week, Javier & the MPUG organizers. 
From ed at pythoncharmers.com Tue Oct 29 05:22:38 2013 From: ed at pythoncharmers.com (Ed Schofield) Date: Tue, 29 Oct 2013 15:22:38 +1100 Subject: [melbourne-pug] November MPUG meeting: Machine Vision and Indie Gaming on 4 November, 6PM - Inspire 9, 41 Stewart St, Richmond In-Reply-To: References: Message-ID: <876483D521D64932BBB25F56DF821683@pythoncharmers.com> Hi all, Nicole Harris has volunteered for the 5-minute slot. She'll be talking about Mezzanine ("the best Django CMS"). Cheers :) Ed -- Dr. Edward Schofield (M) +61 (0)405 676 229 Python Charmers http://pythoncharmers.com On Tuesday, 29 October 2013 at 2:34 pm, Javier Candeira wrote: > Dear Melbourne Pythonistas, > > As previously announced, the next meeting of the Melbourne Python > Users Group will be next Monday, 4 November at 6PM. > > Venue: Inspire 9, 41 Stewart St. Richmond. 50m from Richmond Train Station. > > And this is the current talk lineup: > > # Luke Miller -- My big gay adventure. Making, releasing and selling > an indie game made in Python. > > # Lars Yencken -- Machine Vision with SimpleCV. > > We can still fit in a 5 minute short talk for this session, so please > volunteer or dob in a friend! You can do it anonymously using our > wiki: https://wiki.python.org/moin/MelbournePUG > > See you in a week, > > Javier & the MPUG organizers. > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org (mailto:melbourne-pug at python.org) > https://mail.python.org/mailman/listinfo/melbourne-pug From anath at student.unimelb.edu.au Tue Oct 29 05:38:42 2013 From: anath at student.unimelb.edu.au (Artika Nath) Date: Tue, 29 Oct 2013 15:38:42 +1100 Subject: [melbourne-pug] (no subject) Message-ID: Hello, I am new to Python programming. I want to learn Python for bioinformatics, and I will be grateful for any relevant resources. 
Artika From schweitzer.ubiquitous at gmail.com Tue Oct 29 06:01:50 2013 From: schweitzer.ubiquitous at gmail.com (martin schweitzer) Date: Tue, 29 Oct 2013 16:01:50 +1100 Subject: [melbourne-pug] (no subject) In-Reply-To: References: Message-ID: Hi Rosalind (http://rosalind.info/problems/locations/) is an excellent resource to learn Python for Bioinformatics. It is like Project Euler - but has a bioinformatics/Python slant. Regards, Martin On Tue, Oct 29, 2013 at 3:38 PM, Artika Nath wrote: > Hello > > I am new to python programming ..I want to learn python for bioinformatics > ..I will be grateful for any resources relevant . > > Artika > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > -- Martin Schweitzer Mobile: 0412 345 938 -------------- next part -------------- An HTML attachment was scrubbed... URL: From claresloggett at gmail.com Tue Oct 29 07:13:07 2013 From: claresloggett at gmail.com (Clare Sloggett) Date: Tue, 29 Oct 2013 17:13:07 +1100 Subject: [melbourne-pug] (no subject) In-Reply-To: References: Message-ID: There is also a Coursera course based on Rosalind which is only just starting: https://www.coursera.org/course/bioinformatics On 29 October 2013 16:01, martin schweitzer wrote: > Hi > > Rosalind (http://rosalind.info/problems/locations/) is an excellent > resource to learn Python for Bioinformatics. It is like Project Euler - > but has a bioinformatics/Python slant. > > Regards, > Martin > > > > > On Tue, Oct 29, 2013 at 3:38 PM, Artika Nath > wrote: > >> Hello >> >> I am new to python programming ..I want to learn python for >> bioinformatics ..I will be grateful for any resources relevant . 
>> >> Artika >> _______________________________________________ >> melbourne-pug mailing list >> melbourne-pug at python.org >> https://mail.python.org/mailman/listinfo/melbourne-pug >> > > > > -- > Martin Schweitzer > Mobile: 0412 345 938 > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at pythoncharmers.com Wed Oct 30 06:28:35 2013 From: ed at pythoncharmers.com (Ed Schofield) Date: Wed, 30 Oct 2013 16:28:35 +1100 Subject: [melbourne-pug] Python 3 porting sprint roundup In-Reply-To: References: <4136F4C56C104758BACBF65FAB920DF0@pythoncharmers.com> <7wli1lg0k9.fsf@benfinney.id.au> <745B24E499414196A3C801A613F8AC4A@pythoncharmers.com> Message-ID: <7B9E205CE87944E8B1933D05DC03635C@pythoncharmers.com> Hi everyone, Thanks to everyone who came to the sprint on Monday night!! Here's my roundup of it. About four of us were working on Reportlab. This turned out to be a tough choice of project because the code hasn't had a clean-up in a while. It's crufty, with comments in the code like this: # Build *and install* the basic Python 1.5 distribution. See the Python README for instructions. There is a test suite, but the master branch and especially the py33 branch we were working off had a number of tests failing. Javier had success with porting githubpy. He attributed his success to "aggressive scope management". I had a go at porting mezzanine afterwards (which Nicole is going to talk about on Monday). This is also a big project, but it was much more straightforward than Reportlab, mainly because the code is cleaner. (Only a couple of tests are still failing. I'll aim to push that this week.) For anyone interested in learning how to port code from Python 2 to Python 3, I have attached the cheat sheet from the sprint. 
This is now also available here: http://python-future.org/porting.html. We'll run another porting sprint in a couple of months -- perhaps mid January. Thanks again! Ed -- Dr. Edward Schofield (M) +61 (0)405 676 229 Python Charmers http://pythoncharmers.com -------------- next part -------------- A non-text attachment was scrubbed... Name: Python 3 porting sprint.pdf Type: application/pdf Size: 67800 bytes Desc: not available URL: From javier at candeira.com Wed Oct 30 12:08:06 2013 From: javier at candeira.com (Javier Candeira) Date: Wed, 30 Oct 2013 22:08:06 +1100 Subject: [melbourne-pug] Python 3 porting sprint roundup In-Reply-To: <7B9E205CE87944E8B1933D05DC03635C@pythoncharmers.com> References: <4136F4C56C104758BACBF65FAB920DF0@pythoncharmers.com> <7wli1lg0k9.fsf@benfinney.id.au> <745B24E499414196A3C801A613F8AC4A@pythoncharmers.com> <7B9E205CE87944E8B1933D05DC03635C@pythoncharmers.com> Message-ID: > Javier had success with porting githubpy. He attributed his success to "aggressive scope management". Translation: I picked a library that fits in 250 lines, has no non-standard dependencies, and which I had already studied and contributed to. Easy target. > For anyone interested in learning how to port code from Python 2 to Python 3, I have attached the cheat sheet from the sprint. This is now also available here: http://python-future.org/porting.html. > > We'll run another porting sprint in a couple of months -- perhaps mid January. I can recommend both the cheatsheet and the sprint. It was great. Come join us! J From lex.lists at gmail.com Thu Oct 31 23:45:08 2013 From: lex.lists at gmail.com (Lex H) Date: Fri, 1 Nov 2013 09:45:08 +1100 Subject: [melbourne-pug] R to Pandas Cookbook Message-ID: If you're not aware of the Pandas project it's Python's answer to R, and it's awesome. 
http://pandas.pydata.org/ http://blog.wesmckinney.com/ (Pandas author's blog) A while back I started making some notes on how to do the various recipes in O'Reilly's R Cookbook (http://shop.oreilly.com/product/9780596809164.do) with Numpy, Pandas, Scipy. I haven't had time to complete it so I'm sharing it in its current state, and trying to get some community help to fill in the gaps. I think this could be an extremely useful resource to encourage and help transition lots of people from R to Pandas. So here's the notes: http://notes.lexual.com/tech/r_numpy_pandas_cookbook.html And here's the github repo, patches more than welcome! https://github.com/lexual/sphinx-notes/blob/master/source/tech/r_numpy_pandas_cookbook.rst Cheers, Lex. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at yencken.org Thu Oct 31 23:51:19 2013 From: lars at yencken.org (Lars Yencken) Date: Fri, 1 Nov 2013 09:51:19 +1100 Subject: [melbourne-pug] R to Pandas Cookbook In-Reply-To: References: Message-ID: Nice, we need more rosetta stones like this :) On 1 November 2013 09:45, Lex H wrote: > If you're not aware of the Pandas project it's Python's answer to R, and > it's awesome. > > http://pandas.pydata.org/ > http://blog.wesmckinney.com/ (Pandas author's blog) > > A while back I started making some notes on how to do the various recipes > in O'Reilly's R Cookbook (http://shop.oreilly.com/product/9780596809164.do) > with Numpy, Pandas, Scipy. > > I haven't had time to complete it so I'm sharing it in it's current state, > and trying to get some community help to fill in the gaps. > > I think this could be an extremely useful resource to encourage and help > transition lots of people from R to Pandas. > > So here's the notes: > > http://notes.lexual.com/tech/r_numpy_pandas_cookbook.html > > And here's the github repo, patches more than welcome! 
> > > https://github.com/lexual/sphinx-notes/blob/master/source/tech/r_numpy_pandas_cookbook.rst > > Cheers, > > Lex. > > > > _______________________________________________ > melbourne-pug mailing list > melbourne-pug at python.org > https://mail.python.org/mailman/listinfo/melbourne-pug > > -------------- next part -------------- An HTML attachment was scrubbed... URL: