Question about installing python and modules on Red Hat Linux 6

Chris Angelico rosuav at gmail.com
Sat Nov 15 21:23:12 EST 2014


On Sun, Nov 16, 2014 at 12:57 PM, Michael Torrie <torriem at gmail.com> wrote:
> In my last system administration job, we forbade installing from source,
> at least in the manner you are describing.  It's a maintenance
> nightmare.  Especially when it comes time to upgrade the system and get
> things up and running on a new OS version.  To make the system
> maintainable and re-creatable (it's easy to dump a list of installed
> packages to install on another machine), it had to be done using the
> package manager.  Preferably with trusted packages (trusted
> repositories) that were actively maintained.  At that time Red Hat
> software collections didn't exist, so I did have to spend some
> considerable time building and testing RPM packages.  Yes it's a
> headache for the developer in the short term, but in the long term it
> always turned out better than hacking things together from source.
>

Fundamentally, this comes down to a single question: Who do you most trust?

1) Upstream repositories? Then install everything from the provided
package manager. All will be easy: as long as nothing you use got
removed in the version upgrade, you should be able to just grab all
the same packages and expect everything to run. This is what I'd
recommend for most end users.
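
For illustration (a rough sketch; exact commands vary by distro),
dumping and replaying a package list looks something like this:

    # Debian/Ubuntu: dump the selection list, replay it on another box
    dpkg --get-selections > packages.list
    sudo dpkg --set-selections < packages.list
    sudo apt-get dselect-upgrade

    # RHEL/CentOS: dump installed package names, reinstall from the list
    rpm -qa --qf '%{NAME}\n' > packages.list
    sudo yum -y install $(cat packages.list)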

2) Yourself? Then install stuff from source. For a competent sysadmin
on his/her own computer, this is what I'd recommend. Use the repos
when you can, but install anything from source if you feel like it. I
do this for a number of packages where Debian provides an oldish
version, or where I want to tweak something, or anything like that.
When you upgrade to a new OS version, it's time to reconsider all
those decisions; maybe there's a newer version in repo than the one
you built from source a while ago.
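
For example (the version number is just an illustration), building a
newer Python alongside the system one, without clobbering it, goes
roughly like this:

    wget https://www.python.org/ftp/python/2.7.8/Python-2.7.8.tgz
    tar xzf Python-2.7.8.tgz && cd Python-2.7.8
    ./configure --prefix=/usr/local
    make
    sudo make altinstall  # installs python2.7, leaves /usr/bin/python alone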

3) A local repository? Then do what you describe above - build and
test the RPMs and don't install anything from source. If you need
something that isn't in upstream, you compile it, package it, and
deploy it locally. Great if you have hundreds or thousands of similar
machines to manage - it eliminates the maintenance nightmare - but
it's unnecessary work if you have only a handful of machines and
they're all unique.
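
In outline (package name hypothetical; rpmdevtools comes from EPEL on
RHEL), that workflow looks something like:

    sudo yum install rpm-build rpmdevtools createrepo
    rpmdev-setuptree                    # creates the ~/rpmbuild tree
    cp mypkg.spec ~/rpmbuild/SPECS/
    cp mypkg-1.0.tar.gz ~/rpmbuild/SOURCES/
    rpmbuild -ba ~/rpmbuild/SPECS/mypkg.spec
    createrepo /srv/localrepo/          # publish the RPMs as a yum repo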

I'm responsible for maybe a dozen actively-used Linux boxes, so I'm a
pretty small-time admin. Plus, they run a variety of different
hardware, OS releases (they're mostly Debian-family distros, but I
have some Ubuntu, some Debian, one AntiX, and various others around
the place - and a variety of versions as well), and application
software (different needs, different stuff installed). Some are
headless servers that exist solely for the network. Others are
people's clients that they actively use every day. They don't all need
Wine, VirtualBox, DOSBox, or other compatibility/emulation layers; but
some need versions newer than those provided in the upstream repos.
One box needs audio drivers compiled from source, else there's no
sound. Until a few months ago, several - but not all - needed a few
local patches to one program. (Then the patches got accepted upstream,
but that version isn't in the Debian repos yet.) And of course, they
all need their various configs - a web server is not created by "sudo
apt-get install apache2" so much as by editing /etc/apache2/*. Sure, I
*could* run everything through deb/rpm packages, but I'd be constantly
tweaking and packaging and managing upgrades, and it's a much better use
of my time to just deploy from source. With so few computers to
manage, the Big-Oh benefits of your recommendation just don't kick in.
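
To make that concrete (hostname and paths hypothetical), the real work
is in the config, not the install command - something like:

    # /etc/apache2/sites-available/mysite.conf
    # (Apache 2.4 layout; 2.2 omits the .conf suffix):
    #   <VirtualHost *:80>
    #       ServerName www.example.com
    #       DocumentRoot /var/www/mysite
    #   </VirtualHost>
    sudo a2ensite mysite
    sudo service apache2 reload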

But I doubt these considerations apply to the OP. Of course, we can't
know until our crystal balls get some more to work on.

ChrisA


