"non central" package management

Roy Smith roy at panix.com
Tue Nov 27 23:45:56 EST 2012


In article <a6b6f2c8-4f5d-4410-92ab-1e81de529e3e at googlegroups.com>,
 Miki Tebeka <miki.tebeka at gmail.com> wrote:

> >  When we deploy, we create a new virtualenv, then do  
> > "pip install -r requirements.txt".
> 1. Do you do that for every run?

Well, sort of.

We are currently using a single virtualenv per deployment host.  Each 
time we deploy new code, we check out all the code into a fresh 
directory, but all of those checkout directories share a common 
virtualenv.  As part of the deploy process, we do indeed execute 
"pip install -r requirements.txt", which picks up any new required 
packages.
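Concretely, a deploy looks roughly like the sketch below.  The paths, 
the venv location, and the checkout command are made up for 
illustration; the real layout differs:

    # hypothetical deploy step -- names are illustrative only
    DEPLOY_DIR=/srv/app/checkouts/deploy-$(date +%Y%m%d-%H%M%S)
    git clone git@example.com:ourapp.git "$DEPLOY_DIR"  # or svn co, etc.
    . /srv/app/shared-venv/bin/activate                 # one venv per host
    pip install -r "$DEPLOY_DIR/requirements.txt"       # picks up new deps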

Unfortunately, that process doesn't give us a good way to back out a bad 
update.  It's easy for us to revert to a previous version of our code, 
but we don't have a good way to revert the virtualenv to its previous 
state.  Fortunately, that's been mostly a theoretical issue for us so 
far.

In the future, the plan is to build a complete fresh virtualenv for 
every deployment.  But we're not there yet.
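One way to get there (a sketch only; the paths and the symlink scheme 
are hypothetical, not our actual tooling) is to version the virtualenvs 
the same way we version the checkouts and flip a symlink, which also 
buys back the rollback story:

    # build a fresh venv per deploy; re-point the link to roll back
    VENV=/srv/app/venvs/$(date +%Y%m%d-%H%M%S)
    virtualenv "$VENV"
    "$VENV/bin/pip" install -r requirements.txt
    ln -sfn "$VENV" /srv/app/current-venv   # swap in the new venv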

> 2. Our ops team doesn't like to depend on PyPI; they want an internal 
> repo - but I guess I can do that with "pip install -f".

Listen to your ops team.  Right now, we install straight out of PyPI, 
but that's slow and, on occasion, fails.  That unreliability is part of 
why we still use a shared virtualenv, and it's exactly what your ops 
guys are trying to avoid :-)
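For what it's worth, "pip install -f" (a.k.a. --find-links) can point 
at a local directory or an internal URL, and pip can also be pointed at 
an internal index outright.  The hostnames below are made up:

    # both forms are standard pip options; the internal host is hypothetical
    pip install -f http://pypi.internal.example.com/packages/ \
        -r requirements.txt
    # or replace the default index entirely:
    pip install --index-url http://pypi.internal.example.com/simple/ \
        -r requirements.txt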

Eventually, we'll mirror all the packages we need locally and pip 
install out of that mirror.  Once we've got that working, we'll move to 
a new virtualenv per deployment (and we sometimes deploy several times 
a day).
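Assuming a reasonably recent pip (one with the "download" subcommand), 
the mirror piece can be as simple as the sketch below; the wheelhouse 
path is arbitrary:

    # populate a local package cache from the requirements file
    pip download -r requirements.txt -d /srv/app/wheelhouse
    # deploy-time installs then never touch the network:
    pip install --no-index --find-links=/srv/app/wheelhouse \
        -r requirements.txt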


