From mcbracket at gmail.com Tue Sep 15 09:08:10 2015
From: mcbracket at gmail.com (Stephen McCray)
Date: Tue, 15 Sep 2015 00:08:10 -0700
Subject: [portland] GitHub accounts
Message-ID:

Hey everybody,

Since we're going to be doing general issue tracking for PDX Python and
issue tracking for the web site through GitHub, it's probably best to get
you added as part of the Portland Python Users Group organization on GitHub
if you want to be pinged on issues relevant to you. To that end, could
anyone who has not done so already send either me or Joe Lewis your
GitHub account name, and we'll make sure to get you added.

-Bracket
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mikelane at gmail.com Tue Sep 15 14:07:56 2015
From: mikelane at gmail.com (Mike Lane)
Date: Tue, 15 Sep 2015 12:07:56 +0000
Subject: [portland] GitHub accounts
In-Reply-To:
References:
Message-ID:

Mine is mikelane. Thanks!

On Tue, Sep 15, 2015 at 00:08 Stephen McCray wrote:

> Hey everybody,
>
> Since we're going to be doing general issue tracking for PDX Python and
> issue tracking for the web site through GitHub, it's probably best to get
> you added as part of the Portland Python Users Group organization on GitHub
> if you want to be pinged on issues relevant to you. To that end, could
> anyone who has not done so already send either me or Joe Lewis your
> GitHub account name, and we'll make sure to get you added.
>
> -Bracket
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://mail.python.org/pipermail/portland/attachments/20150915/bdced79a/attachment.html
> >
> _______________________________________________
> Portland mailing list
> Portland at python.org
> https://mail.python.org/mailman/listinfo/portland
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From boneskull at boneskull.com Tue Sep 15 21:15:08 2015
From: boneskull at boneskull.com (Christopher Hiller)
Date: Tue, 15 Sep 2015 12:15:08 -0700
Subject: [portland] GitHub accounts
In-Reply-To:
References:
Message-ID:

boneskull on github

Christopher Hiller
http://boneskull.com

On September 15, 2015 at 00:08:14, Stephen McCray (mcbracket at gmail.com) wrote:

Hey everybody,

Since we're going to be doing general issue tracking for PDX Python and
issue tracking for the web site through GitHub, it's probably best to get
you added as part of the Portland Python Users Group organization on GitHub
if you want to be pinged on issues relevant to you. To that end, could
anyone who has not done so already send either me or Joe Lewis your
GitHub account name, and we'll make sure to get you added.

-Bracket
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
_______________________________________________
Portland mailing list
Portland at python.org
https://mail.python.org/mailman/listinfo/portland
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mcbracket at gmail.com Wed Sep 16 09:20:37 2015
From: mcbracket at gmail.com (Stephen McCray)
Date: Wed, 16 Sep 2015 00:20:37 -0700
Subject: [portland] GitHub accounts
In-Reply-To:
References:
Message-ID:

Whoops. Sorry everyone! I had meant to send this to the Portland Python
*Organizers* list, not the Portland, *Oregon* Python list :-D. Apologies to
those who have already sent their GitHub accounts to me. This was primarily
directed at the organizers so that I could make sure the proper people were
notified when issues relevant to them were created.

However, now is as good a time as any to announce that, in an effort to
increase transparency, we are moving to managing organizational issues for
PDX Python in our public GitHub repo at github.com/portlandpython. There's
not much to look at yet, but over time we hope to build it up to provide a
public view of what is going on behind the scenes at PDX Python. All full
group announcements such as Hack Night and Presentation Night will still be
done via Meetup and on this list.

-Bracket

On Tue, Sep 15, 2015 at 12:08 AM, Stephen McCray wrote:

> Hey everybody,
>
> Since we're going to be doing general issue tracking for PDX Python and
> issue tracking for the web site through GitHub, it's probably best to get
> you added as part of the Portland Python Users Group organization on GitHub
> if you want to be pinged on issues relevant to you. To that end, could
> anyone who has not done so already send either me or Joe Lewis your
> GitHub account name, and we'll make sure to get you added.
>
> -Bracket
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jchampion at zetacentauri.com Thu Sep 17 01:44:24 2015
From: jchampion at zetacentauri.com (Jason Champion)
Date: Wed, 16 Sep 2015 16:44:24 -0700
Subject: [portland] Creating an API with Metered Billing
Message-ID: <55F9FED8.6010307@zetacentauri.com>

Question:

Does anyone have any experience with creating APIs that have usage limits
and metered billing? Can you suggest any good articles/howtos/resources on
the subject?

I've created plenty of APIs (REST, XMLRPC, etc.), but they've always been
open to all with no auth or billing.

Thank you,
Jason

From freyley at gmail.com Thu Sep 17 01:56:40 2015
From: freyley at gmail.com (Jeff Schwaber)
Date: Wed, 16 Sep 2015 16:56:40 -0700
Subject: [portland] Creating an API with Metered Billing
In-Reply-To: <55F9FED8.6010307@zetacentauri.com>
References: <55F9FED8.6010307@zetacentauri.com>
Message-ID:

That's super interesting!

I haven't done usage limits, but I've played with the throttling stuff in
Django REST Framework:
http://www.django-rest-framework.org/api-guide/throttling/
and you could definitely put usage counting in there. Once you've got usage
counting, limits seem like a simple step inside that framework.

The big challenge is going to be that, naively, now all of your API
requests are database updates, and from a scalability point of view, that
sucks. Of course you could make them logging statements instead and then
have a background process reading the log to generate the current counts,
but then you'll be behind a bit, so users may bounce over the limits a bit.

Jeff

On Wed, Sep 16, 2015 at 4:44 PM, Jason Champion wrote:

> Question:
>
> Does anyone have any experience with creating APIs that have usage limits
> and metered billing? Can you suggest any good articles/howtos/resources on
> the subject?
>
> I've created plenty of APIs (REST, XMLRPC, etc.), but they've always been
> open to all with no auth or billing.
>
> Thank you,
> Jason
> _______________________________________________
> Portland mailing list
> Portland at python.org
> https://mail.python.org/mailman/listinfo/portland
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
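
To make the throttling suggestion above concrete, here is a minimal sketch of
a custom per-user throttle built on Django REST Framework's SimpleRateThrottle.
The "metered" scope name, the example rate, and the myapp.throttles module path
are illustrative assumptions rather than anything from the thread; DRF only
handles the cutoff here, so metered billing would still need its own counter.

    # Sketch only: a per-user throttle for Django REST Framework.
    # The scope name and rate below are made-up examples.
    from rest_framework.throttling import SimpleRateThrottle

    class MeteredUserThrottle(SimpleRateThrottle):
        scope = "metered"  # rate is looked up in DEFAULT_THROTTLE_RATES["metered"]

        def get_cache_key(self, request, view):
            if request.user and request.user.is_authenticated:
                ident = request.user.pk          # count per account
            else:
                ident = self.get_ident(request)  # fall back to the client address
            # SimpleRateThrottle keeps a timestamp history under this cache key
            # and rejects the request once the configured rate is exceeded.
            return self.cache_format % {"scope": self.scope, "ident": ident}

    # settings.py (illustrative values)
    REST_FRAMEWORK = {
        "DEFAULT_THROTTLE_CLASSES": ["myapp.throttles.MeteredUserThrottle"],
        "DEFAULT_THROTTLE_RATES": {"metered": "1000/day"},
    }
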
From rcoder at gmail.com Thu Sep 17 19:40:26 2015
From: rcoder at gmail.com (Lennon Day-Reynolds)
Date: Thu, 17 Sep 2015 10:40:26 -0700
Subject: [portland] Creating an API with Metered Billing
In-Reply-To:
References: <55F9FED8.6010307@zetacentauri.com>
Message-ID:

I've worked on these kinds of APIs before. Jeff's point about counts lagging
# of calls is valid, and it's an architectural and business decision you need
to think about before you sit down to build a rate-limiting system.

Think of accuracy vs. efficiency here as a slider you can move in either
direction. What's your biggest risk or downside? Would permitting over-usage
for a few of your users harm your service or the business in some way? (E.g.,
a voting system should allow zero over-counting, but a weather API can
probably permit a few "free" over-quota reads.) What's your expected nominal
usage rate? Do you expect big bursts of activity?

If you go the most strict route (e.g., update a counter in your main app DB
for every API request) you'll get very accurate gating, but depending on your
database access and application usage patterns you could easily end up doing
more work maintaining counts than you do actually providing your service to
customers. Worst-case, if the DB gets swamped then usage counting could
actually cause downtime for the rest of your system.

Log-based rollups are slower to converge, but potentially much cheaper to
maintain (esp. if you have any existing async job infrastructure in place).
Depending on your access rates and how "bursty" traffic can be they might be
a totally acceptable option. You also didn't mention your underlying web
stack, but assuming you're going to run on multiple hosts you might have to
work a bit to gather + order logs in one place to get accurate rollups.

There's also a middle ground where you use memcached or redis to store your
counts. In-place increments in those systems will be cheaper than a
transactional DB write, though you can't do a simple filter on event
timestamp as you can in SQL. I've commonly used a simple bucketed storage
model with TTLs on each bucket to store a sliding window of recent access
counts.

I haven't vetted any of these implementations in production, but several
people have written libraries accomplishing exactly this pattern:

http://flask.pocoo.org/snippets/70/
https://github.com/DomainTools/rate-limit
http://limits.readthedocs.org/en/stable/ <- (this one appears to have both
Flask and Django adapters available, too)
https://pypi.python.org/pypi/rratelimit/0.0.4

(& etc., etc.)

On Wed, Sep 16, 2015 at 4:56 PM, Jeff Schwaber wrote:

> That's super interesting!
>
> I haven't done usage limits, but I've played with the throttling stuff in
> Django REST Framework:
> http://www.django-rest-framework.org/api-guide/throttling/ and you could
> definitely put usage counting in there. Once you've got usage counting,
> limits seem like a simple step inside that framework.
>
> The big challenge is going to be that, naively, now all of your API
> requests are database updates, and from a scalability point of view, that
> sucks. Of course you could make them logging statements instead and then
> have a background process reading the log to generate the current counts,
> but then you'll be behind a bit, so users may bounce over the limits a bit.
>
> Jeff
>
> On Wed, Sep 16, 2015 at 4:44 PM, Jason Champion wrote:
>
>> Question:
>>
>> Does anyone have any experience with creating APIs that have usage limits
>> and metered billing? Can you suggest any good articles/howtos/resources on
>> the subject?
>>
>> I've created plenty of APIs (REST, XMLRPC, etc.), but they've always been
>> open to all with no auth or billing.
>>
>> Thank you,
>> Jason
>> _______________________________________________
>> Portland mailing list
>> Portland at python.org
>> https://mail.python.org/mailman/listinfo/portland
>>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> _______________________________________________
> Portland mailing list
> Portland at python.org
> https://mail.python.org/mailman/listinfo/portland

--
Lennon Day-Reynolds
http://rcoder.net/
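
For reference, a minimal sketch of the bucketed counters with per-bucket TTLs
that Lennon describes above, assuming the redis-py client. The key format,
window size, bucket size, and limit are illustrative assumptions, and a real
deployment would still have to decide what happens when the check fails
(reject the call, bill for overage, etc.).

    # Sketch only: sliding-window usage counts in Redis, one bucket per minute.
    # Key format, window, and limit are illustrative.
    import time

    import redis

    r = redis.Redis()

    WINDOW_SECONDS = 3600   # look at the last hour of traffic
    BUCKET_SECONDS = 60     # one counter bucket per minute
    LIMIT = 1000            # allowed calls per window

    def record_and_check(api_key):
        """Count one call and return True if the key is still under its limit."""
        now = int(time.time())
        bucket = now - (now % BUCKET_SECONDS)
        key = "usage:%s:%d" % (api_key, bucket)

        pipe = r.pipeline()
        pipe.incr(key)
        # Keep each bucket just long enough to cover the sliding window.
        pipe.expire(key, WINDOW_SECONDS + BUCKET_SECONDS)
        pipe.execute()

        # Sum the buckets that fall inside the window.
        starts = range(bucket - WINDOW_SECONDS + BUCKET_SECONDS,
                       bucket + 1, BUCKET_SECONDS)
        counts = r.mget(["usage:%s:%d" % (api_key, b) for b in starts])
        total = sum(int(c) for c in counts if c is not None)
        return total <= LIMIT
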
From freyley at gmail.com Thu Sep 17 20:11:52 2015
From: freyley at gmail.com (Jeff Schwaber)
Date: Thu, 17 Sep 2015 11:11:52 -0700
Subject: [portland] Creating an API with Metered Billing
In-Reply-To:
References: <55F9FED8.6010307@zetacentauri.com>
Message-ID:

On Thu, Sep 17, 2015 at 10:40 AM, Lennon Day-Reynolds wrote:

> Log-based rollups are slower to converge, but potentially much cheaper
> to maintain (esp. if you have any existing async job infrastructure in
> place). Depending on your access rates and how "bursty" traffic can be
> they might be a totally acceptable option. You also didn't mention
> your underlying web stack, but assuming you're going to run on
> multiple hosts you might have to work a bit to gather + order logs in
> one place to get accurate rollups.
>

Yeah, this is a fair point. There are services, for example Papertrail,
that aggregate logs these days, so it can certainly be worked around, but
you need to integrate with or build something to do this. Timeliness will
definitely vary.

> There's also a middle ground where you use memcached or redis to store
> your counts. In-place increments in those systems will be cheaper than
> a transactional DB write, though you can't do a simple filter on event
> timestamp as you can in SQL. I've commonly used a simple bucketed
> storage model with TTLs on each bucket to store a sliding window of
> recent access counts.
>

+1 - totally valid option, too.

Jeff
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
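
A rough sketch of the log-and-rollup split Jeff and Lennon are discussing:
the request path only appends a line to a local log, and a periodic job
(cron, an async worker, or similar) folds the log into per-key counts. The
log path and the billing-period handling are assumptions for illustration,
and on multiple hosts the per-host logs would still need to be gathered in
one place, as noted above.

    # Sketch only: append-only usage log plus a periodic rollup job.
    # The log path and period handling are illustrative.
    import time
    from collections import Counter

    LOG_PATH = "/var/log/myapi/usage.log"

    def log_request(api_key):
        # Called from the request handler: a cheap append, no database write.
        with open(LOG_PATH, "a") as f:
            f.write("%d %s\n" % (int(time.time()), api_key))

    def rollup(period_start):
        # Run every few minutes; returns the number of calls per key since the
        # start of the current billing period (a Unix timestamp).
        counts = Counter()
        with open(LOG_PATH) as f:
            for line in f:
                if not line.strip():
                    continue
                ts, key = line.split(None, 1)
                if int(ts) >= period_start:
                    counts[key.strip()] += 1
        return counts
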
From jchampion at zetacentauri.com Fri Sep 18 06:21:00 2015
From: jchampion at zetacentauri.com (Jason Champion)
Date: Thu, 17 Sep 2015 21:21:00 -0700
Subject: [portland] Creating an API with Metered Billing
In-Reply-To:
References: <55F9FED8.6010307@zetacentauri.com>
Message-ID: <55FB912C.9090402@zetacentauri.com>

Thank you Jeff and Lennon for the links + info.

As an example, the YouTube Analytics API takes 1-3 minutes to shut off once
you hit your maximum quota. In that time you can go over the limit by up to
about 1%, which seems like a pretty reasonable margin of error. Plus they
look like the "nice guy" for letting you have a few extra calls now and
then. :)

From bobby at tixie.com Wed Sep 16 20:11:20 2015
From: bobby at tixie.com (Bobby Robertson)
Date: Wed, 16 Sep 2015 11:11:20 -0700
Subject: [portland] Full Stack Developer Position
Message-ID:

Hi,

My company is looking to hire a new Python developer. You can see the job
description posted at www.tixie.com/careers

Please pass this along to anyone you know who may be interested.

Thanks,
Bobby
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dooleyliz at gmail.com Thu Sep 17 22:26:47 2015
From: dooleyliz at gmail.com (Liz Dooley)
Date: Thu, 17 Sep 2015 13:26:47 -0700
Subject: [portland] Full Stack Developer for early stage start up
Message-ID:

Hi Python Users,

My client is looking for someone who can crank out code and architecture
right now, but who can also present to VCs and C-level folks. Down the road
the role will grow into more leadership and whatever role the person wants.
Early stage, so substantial equity is available. $90-100K salary. $100
parking allowance, or TriMet. Benefits are actually quite good for how early
stage this is. 2 roles available.

Location: Downtown Portland

If interested, please send resumes to dooleyliz at gmail.com

http://www.bullhornreach.com/job/2130766_full-stack-developer-portland-or

Thank you,
Liz

--
Liz Dooley Recruiting, LLC
Cell (503) 680-9826 | E-mail dooleyliz at gmail.com
Website | LinkedIn | Twitter @lizdooley
Take the CVI for free HERE!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From james at thinkhuman.com Fri Sep 18 16:39:10 2015
From: james at thinkhuman.com (James)
Date: Fri, 18 Sep 2015 07:39:10 -0700
Subject: [portland] Portland Digest, Vol 100, Issue 4
In-Reply-To:
References:
Message-ID:

Jason's comment about the YouTube API is spot on, I think. Unless you're
doing some serious enterprise-scale implementation, you can do this simply,
and not worry much about fancy antics to mitigate database call impact.

Sounds like a fun project.

-james
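
In the spirit of keeping it simple, the strict option mentioned earlier in
the thread (a counter in the main application database, updated on every
request) fits in a few lines. The sketch below uses sqlite3 only so the
example is self-contained; the table layout, period key, and quota are
illustrative stand-ins for whatever database and billing period the real
service uses.

    # Sketch only: one counter row per API key per billing period, updated on
    # every request. A real service would use its main application database.
    import sqlite3

    conn = sqlite3.connect("usage.db")
    with conn:
        conn.execute("""CREATE TABLE IF NOT EXISTS usage (
                            api_key TEXT,
                            period  TEXT,
                            calls   INTEGER NOT NULL DEFAULT 0,
                            PRIMARY KEY (api_key, period))""")

    QUOTA = 10000  # illustrative: calls allowed per billing period

    def charge_request(api_key, period):
        """Record one call and return True if the key is still under quota."""
        with conn:  # one small transaction per request
            conn.execute("INSERT OR IGNORE INTO usage (api_key, period) VALUES (?, ?)",
                         (api_key, period))
            conn.execute("UPDATE usage SET calls = calls + 1 "
                         "WHERE api_key = ? AND period = ?", (api_key, period))
            (calls,) = conn.execute("SELECT calls FROM usage "
                                    "WHERE api_key = ? AND period = ?",
                                    (api_key, period)).fetchone()
        return calls <= QUOTA

Calling charge_request(key, "2015-09") from the request handler both records
the call and says whether to keep serving that key; at small scale this is
the accurate-but-chatty end of the accuracy vs. efficiency slider described
earlier in the thread.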