How to install Python package from source on Windows

Chris Angelico rosuav at gmail.com
Wed May 17 12:23:18 EDT 2017


On Thu, May 18, 2017 at 1:37 AM, bartc <bc at freeuk.com> wrote:
> On 17/05/2017 15:13, Chris Angelico wrote:
>>
>> On Wed, May 17, 2017 at 11:53 PM, bartc <bc at freeuk.com> wrote:
>
>
>>> That's all true. But the answer is not to make it a nightmare for
>>> everyone
>>> else as well as yourself. If the requirement is to get other people to
>>> build
>>> your product from source for the purpose of using it or testing it (and
>>> for
>>> various reasons using prebuilt binaries is not an option), then the
>>> process
>>> ought to be as painless as possible.
>>
>>
>> What, you mean like this?
>>
>> ./configure
>> make
>> sudo make install
>
>
> No, not like that. I mean genuinely simple. Your example:
>
> (1) Doesn't work on Windows
> (2) Usually seems to involve executing 20,000 to 30,000 lines of complete
> gobbledygook in that configuration script.
>
> That can't possibly be justified.

[1] Does work on Windows. Install bash for Windows, or (on a
recent-enough Windows) just use the subsystem that Microsoft provides.
[2] So? You don't have to read it, just run it. Or do you read every
line of code that any of your programs executes?

> What would happen if you were presented with a machine, of some unknown OS
> except that it's not Linux, and only had an editor, a bare compiler and
> linker to work with? Would you be completely helpless?

I didn't even have an editor the first time I had that situation. And
it worked, because I was using the same C compiler that I was
accustomed to - it existed for that platform. I was specifically NOT
helpless because I am used to using good tools.

How about if I put you on a different CPU than you're used to? Can you
use your tiny C compiler? I doubt it, because it emits Intel machine
code.

> Suppose then the task was to run some Python, but you only had the bare
> sources: .c and .h files, and whatever .py files come as standard; where
> would you start? Would that 18,000-line configure script come in handy, or
> would it be no use at all without your usual support systems?
>
> And you're accusing /me/ of being in a bubble!

You're asking me to bootstrap Python. I would start by looking for the
nearest similar platform and trying to build a hybrid. I haven't done
this with Python itself, but a while ago, I wanted to port a
similarly-sized language to OS/2, and the process went like this:

1) Attempt to run the configure script, using bash and gcc (which
already existed for OS/2)
2) Locate the handful of places in the code where an ifdef checked for
"Windows vs POSIX" regarding path names and make it "Windows or OS/2
vs POSIX" (since Windows and OS/2 both use the MS-DOS style of path
names)
3) Fix exactly *one* flaw in the makefiles, where they weren't
compatible with a .exe file name extension when running under a broadly
POSIX system.

And #3 resulted in an upstream patch (which wasn't accepted in its
original form, but a refined version did make it into trunk). Yes,
that configure script came in *VERY* handy, because it took care of
99.9999% of the potential differences between systems. (And by the
way, the real source code for configure is actually configure.in,
which is way shorter than the 18,000 lines you cite.) Once you've
supported Red Hat Linux, Debian Linux, Debian Hurd, Fedora PPC,
FreeBSD, OpenBSD, Windows, Solaris, OpenIndiana, HP-UX, AIX, Mac OS 9,
Mac OS X, SUSE on an s390, and whatever else I've forgotten, it's not
hard to add support for OS/2.
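And the shape of the fix in #3 was roughly this (a guess at the form, not the actual patch; the names are invented):

```make
# Sketch: parameterize the executable suffix so the same rule works
# on POSIX (empty suffix) and on OS/2 / Windows (.exe).
EXEEXT = .exe

mylang$(EXEEXT): main.o
	$(CC) -o $@ main.o
```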

>> In other words, it's only targeting one single CPU architecture and OS
>> API.
>
>
> And? My gcc installation only generates code for x86, x64. So does my Pelles
> C**. So does my lccwin**. So does my DMC (only x86). So does my Tiny C**. So
> does my MSVC2008 (x86 only). All also only generate code for Windows.

But that's simply because you didn't choose to install other GCC back
ends. GCC itself supports plenty of CPUs.

> If I wanted to build gcc for example from sources, then I need to download
> and grapple with a package containing 100,000 files, including tens of
> thousands of source files, even if I'm only interested in one target. /That/
> is supposed to be better?

Yes. Yes, it is. For starters, GCC actually can compile to machine
code, instead of depending on nasm. So you don't have a fair
comparison. For seconds, GCC actually implements the entire C standard
(several versions of the C standard, in fact), plus C++ and Fortran.
Once you support the entire C99 standard (I won't even ask you to
support C11) and actual executable output, you can start comparing.

> Anyway adding another target is no big deal. (I've targeted pdp10, z80,
> 8086, 80386[x86] on previous compilers, sometimes as asm, sometimes as
> binary. Also C source code. I've written assemblers for z80, 8051, 8086, 80186,
> 80386[partial] all generating binary.)

Then do it. Make your compiler able to target all of the above. See
how much ifdef mess you need.

> The Win64 ABI thing is a detail. Win64 API uses 4 registers to pass
> parameters, and requires a shadow stack space; Linux uses 6 registers and no
> shadow space, and handles XMM a bit differently. I think that's it...

Sure. That's the calling convention. What about having the full header
files so you can actually type-check your calls?

> It's funny though that when /I/ stipulate third party dependencies [small,
> self-contained ones like nasm or golink], then it's bad; when other people
> do it [massive great ones like vs2015 or git] then that's good!

Might that possibly be because your program is utterly useless without
them, but Python only needs them for compilation? A fully-built Python
doesn't depend on a C compiler or git or anything.

Plus, git shouldn't be considered an onerous requirement for a
developer. Go get it. Start using it. And I don't just mean uploading
stuff to github, I mean actually using it to track your changes during
development.

> And I can only conclude from your comments, that CPython is also incomplete
> because it won't work without a bunch of other tools, all chosen to be the
> biggest and most elaborate there are.

Exactly. The core devs reject small solutions, even if they're
perfect, in order to pick up a much larger and more elaborate one.
That's how they make technology decisions.

And you accuse me of belittling?

> For running my language projects, it might be 30% slower than gcc-O3, about
> on a par with compilers such as DMC, lccwin, and PellesC, and generally
> faster than Tiny C. [Here I'm mixing up results from my C compiler and my
> non-C compiler; I can't be bothered to do any exhaustive testing because
> you'll simply find another way of belittling my efforts.]

You'll have to compare like for like, and on a serious benchmark. I'm
not going to believe "might be 30% slower" without some actual
numbers.

> If you were paying someone by the hour to add sqlite to a project, would you
> rather they just used the amalgamated file, or would you prefer they spent
> all afternoon ******* about with TCL trying to get it to generate those
> sources?

I would expect them to use a precompiled binary, frankly. Why use the
amalgamated file when you can just use your system's package manager
to grab a ready-to-go version? Unless, of course, you're stuck on a
platform that doesn't even HAVE a package manager, in which case your
options are (a) get a package manager, and (b) grab a precompiled
binary off the web and use that.

Rhodri is right. Your naivety is so charming. You seem to genuinely
think that life can be that simple.

ChrisA


