From dnaeon at gmail.com Sat Jun 1 15:29:17 2013 From: dnaeon at gmail.com (Marin Atanasov Nikolov) Date: Sat, 1 Jun 2013 16:29:17 +0300 Subject: [Cython] C data type and a C function() sharing the same name Message-ID: Hello, Working on creating Cython wrappers for a C library I came across a strange problem. I have in the C library a struct like this: struct my_jobs; I also have this function, which returns the next job in the queue: int my_jobs(struct my_jobs *jobs); Translating this into Cython and putting this in the .pxd file it looks like this: cdef struct my_jobs int my_jobs(my_jobs *jobs) During build I'm having issues because it seems that the function my_jobs() is translated in a way that it should return a "int struct my_jobs". The real problem I see is that I cannot have a data type and a function sharing the same name. How can I overcome this issue? Suppose that I wrote the C API I could change that, but how would you really solve this if you cannot touch what's in upstream? Any ways to solve this? Thanks and regards, Marin -- Marin Atanasov Nikolov dnaeon AT gmail DOT com http://www.unix-heaven.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nikita at nemkin.ru Sat Jun 1 16:12:08 2013 From: nikita at nemkin.ru (Nikita Nemkin) Date: Sat, 01 Jun 2013 20:12:08 +0600 Subject: [Cython] C data type and a C function() sharing the same name In-Reply-To: References: Message-ID: On Sat, 01 Jun 2013 19:29:17 +0600, Marin Atanasov Nikolov wrote: > Hello, > > Working on creating Cython wrappers for a C library I came across a > strange > problem. 
> > I have in the C library a struct like this: > > struct my_jobs; > > I also have this function, which returns the next job in the queue: > > int my_jobs(struct my_jobs *jobs); > > Translating this into Cython and putting this in the .pxd file it looks > like this: > > cdef struct my_jobs > int my_jobs(my_jobs *jobs) > > During build I'm having issues because it seems that the function > my_jobs() > is translated in a way that it should return a "int struct my_jobs". > > The real problem I see is that I cannot have a data type and a function > sharing the same name. > > How can I overcome this issue? Suppose that I wrote the C API I could > change that, but how would you really solve this if you cannot touch > what's in upstream? > > Any ways to solve this? This question would be more appropriate on the cython-users mailing list. Use renaming: http://docs.cython.org/src/userguide/external_C_code.html#resolving-naming-conflicts-c-name-specifications For example, rename the function: int my_jobs_func "my_jobs" (my_jobs *jobs) or the struct: cdef struct my_jobs_t "my_jobs" or both. Best regards, Nikita Nemkin From dnaeon at gmail.com Sat Jun 1 16:35:52 2013 From: dnaeon at gmail.com (Marin Atanasov Nikolov) Date: Sat, 1 Jun 2013 17:35:52 +0300 Subject: [Cython] C data type and a C function() sharing the same name In-Reply-To: References: Message-ID: Thanks, Nikita! I've been looking at the Cython documentation, but was not able to find it previously, thanks! I'm still waiting for my previous posts to show up in the cython-users at mailing list (although I am subscribed there), but they don't seem to show up. Thanks for your help! Best regards, Marin //offtopic Why does it take so long for a post to be approved/published on cython-users@ ? 
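For illustration, Nikita's renaming fix expands to a complete `cdef extern` block along these lines; the header name "my_jobs.h" is an assumption, not from the original report:

```cython
# Sketch of a .pxd using C-name specifications to dodge the clash.
# "my_jobs.h" is a guessed header name.
cdef extern from "my_jobs.h":
    # Cython-level name my_jobs_t, C-level name "my_jobs" (opaque struct).
    cdef struct my_jobs_t "my_jobs"
    # Cython-level name my_jobs_func, C-level name "my_jobs".
    int my_jobs_func "my_jobs" (my_jobs_t *jobs)
```

Cython then emits `struct my_jobs` and calls to `my_jobs()` in the generated C, while the two identifiers stay distinct in the .pyx code.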
On Sat, Jun 1, 2013 at 5:12 PM, Nikita Nemkin wrote: > On Sat, 01 Jun 2013 19:29:17 +0600, Marin Atanasov Nikolov < > dnaeon at gmail.com> wrote: > > Hello, >> >> Working on creating Cython wrappers for a C library I came across a >> strange >> problem. >> >> I have in the C library a struct like this: >> >> struct my_jobs; >> >> I also have this function, which returns the next job in the queue: >> >> int my_jobs(struct my_jobs *jobs); >> >> Translating this into Cython and putting this in the .pxd file it looks >> like this: >> >> cdef struct my_jobs >> int my_jobs(my_jobs *jobs) >> >> During build I'm having issues because it seems that the function >> my_jobs() >> is translated in a way that it should return a "int struct my_jobs". >> >> The real problem I see is that I cannot have a data type and a function >> sharing the same name. >> >> How can I overcome this issue? Suppose that I wrote the C API I could >> change that, but how would you really solve this if you cannot touch >> what's in upstream? >> >> Any ways to solve this? >> > > This question would be more appropriate on the cython-users mailing list. > > Use renaming: > http://docs.cython.org/src/userguide/external_C_code.html#resolving-naming-conflicts-c-name-specifications > > For example, rename the function: > > int my_jobs_func "my_jobs" (my_jobs *jobs) > > or the struct: > > cdef struct my_jobs_t "my_jobs" > > or both. > > Best regards, > Nikita Nemkin > _______________________________________________ > cython-devel mailing list > cython-devel at python.org > http://mail.python.org/mailman/listinfo/cython-devel > -- Marin Atanasov Nikolov dnaeon AT gmail DOT com http://www.unix-heaven.org/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robertwb at gmail.com Sat Jun 1 23:19:04 2013 From: robertwb at gmail.com (Robert Bradshaw) Date: Sat, 1 Jun 2013 14:19:04 -0700 Subject: [Cython] C data type and a C function() sharing the same name In-Reply-To: References: Message-ID: On Sat, Jun 1, 2013 at 7:35 AM, Marin Atanasov Nikolov wrote: > Thanks, Nikita! > > I've been looking at the Cython documentation, but was not able to find it > previously, thanks! > > I'm still waiting for my previous posts to show up in the cython-users@ > mailing list (although I am subscribed there), but they don't seem to show > up. > > Thanks for your help! > > Best regards, > Marin > > //offtopic > Why does it take so long for a post to be approved/published on > cython-users@ ? Because it's manually moderated for first-time posters, which is the most effective way of preventing spam (and, personally, I was on vacation this week). > On Sat, Jun 1, 2013 at 5:12 PM, Nikita Nemkin wrote: >> >> On Sat, 01 Jun 2013 19:29:17 +0600, Marin Atanasov Nikolov >> wrote: >> >>> Hello, >>> >>> Working on creating Cython wrappers for a C library I came across a >>> strange >>> problem. >>> >>> I have in the C library a struct like this: >>> >>> struct my_jobs; >>> >>> I also have this function, which returns the next job in the queue: >>> >>> int my_jobs(struct my_jobs *jobs); >>> >>> Translating this into Cython and putting this in the .pxd file it looks >>> like this: >>> >>> cdef struct my_jobs >>> int my_jobs(my_jobs *jobs) >>> >>> During build I'm having issues because it seems that the function >>> my_jobs() >>> is translated in a way that it should return a "int struct my_jobs". >>> >>> The real problem I see is that I cannot have a data type and a function >>> sharing the same name. >>> >>> How can I overcome this issue? Suppose that I wrote the C API I could >>> change that, but how would you really solve this if you cannot touch >>> what's in upstream? >>> >>> Any ways to solve this? 
>> >> >> This question would be more appropriate on the cython-users mailing list. >> >> Use renaming: >> >> http://docs.cython.org/src/userguide/external_C_code.html#resolving-naming-conflicts-c-name-specifications >> >> For example, rename the function: >> >> int my_jobs_func "my_jobs" (my_jobs *jobs) >> >> or the struct: >> >> cdef struct my_jobs_t "my_jobs" >> >> or both. >> >> Best regards, >> Nikita Nemkin >> _______________________________________________ >> cython-devel mailing list >> cython-devel at python.org >> http://mail.python.org/mailman/listinfo/cython-devel > > > > > -- > Marin Atanasov Nikolov > > dnaeon AT gmail DOT com > http://www.unix-heaven.org/ > > _______________________________________________ > cython-devel mailing list > cython-devel at python.org > http://mail.python.org/mailman/listinfo/cython-devel > From stefan_ml at behnel.de Sun Jun 2 08:22:07 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 02 Jun 2013 08:22:07 +0200 Subject: [Cython] CF based type inference In-Reply-To: References: Message-ID: <51AAE48F.4010800@behnel.de> mark florisson, 21.05.2013 15:32: > On 21 May 2013 14:14, Vitja Makarov wrote: >> >> def foo(int N): >> x = 1 >> y = 0 >> for i in range(N): >> x = x * 0.1 + y * 0.2 >> y = x * 0.3 + y * 0.4 >> print typeof(x), typeof(y) >> >> Here both x and y will be inferred as double > > Ok, so I assume it promotes the incoming types (all reaching > definitions)? If N == 0, then when using objects you get an int, > otherwise a double. I'm not sure what you mean here. I certainly don't think the inferred type of x and y should depend on the value of N. It should always be a double, because that's the spanning type for all paths. In the very unlikely case that that's not what the user wants, explicit typing will easily fix it for them. 
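For reference, the N == 0 corner that Mark alludes to is real in plain Python: the untyped version of Vitja's example returns different result types depending on N, which is exactly what inferring `double` papers over. A quick CPython check:

```python
def foo(N):
    # Vitja's example, run as plain Python rather than with inferred C doubles.
    x = 1
    y = 0
    for i in range(N):
        x = x * 0.1 + y * 0.2
        y = x * 0.3 + y * 0.4
    return type(x).__name__, type(y).__name__

print(foo(0))  # ('int', 'int') -- the loop never runs, x and y stay ints
print(foo(3))  # ('float', 'float')
```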
Stefan From njs at pobox.com Sun Jun 2 15:51:47 2013 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 2 Jun 2013 14:51:47 +0100 Subject: [Cython] CF based type inference In-Reply-To: <51AAE48F.4010800@behnel.de> References: <51AAE48F.4010800@behnel.de> Message-ID: On Sun, Jun 2, 2013 at 7:22 AM, Stefan Behnel wrote: > mark florisson, 21.05.2013 15:32: >> On 21 May 2013 14:14, Vitja Makarov wrote: >>> >>> def foo(int N): >>> x = 1 >>> y = 0 >>> for i in range(N): >>> x = x * 0.1 + y * 0.2 >>> y = x * 0.3 + y * 0.4 >>> print typeof(x), typeof(y) >>> >>> Here both x and y will be inferred as double >> >> Ok, so I assume it promotes the incoming types (all reaching >> definitions)? If N == 0, then when using objects you get an int, >> otherwise a double. > > I'm not sure what you mean here. I certainly don't think the inferred type > of x and y should depend on the value of N. It should always be a double, > because that's the spanning type for all paths. In the very unlikely case > that that's not what the user wants, explicit typing will easily fix it for > them. But 'double' does not actually span 'int', floats and integers are different in all kinds of corner cases. Both have values that are unrepresentable in the other, etc. So this optimization as stated is... not an optimization, it's just wrong. I mean obviously in this example double would be fine, but how do you decide when it's okay to randomly reinterpret users' code as meaning something different than what they wrote, and when it isn't? It's not that I think the Python rules here are particularly awesome, or that I on purpose write code that sometimes returns ints and sometimes doubles. But at least I know what the Python rules are, which means I can always look at a chunk of code and figure out what the interpreter will do. 
This is why people writing serious C compilers are so anal about obscure problems like aliasing and guaranteeing that you get segfaults at the right time, and generally insisting that optimizations must *exactly* preserve semantics. I'm worried from this discussion that in Cython, the rule for how variables are typed will become "well, you get whatever types our type inference engine guessed; dropping ordinary Python code into Cython might change the outputs or might not; if you want to know 100% what your code will do then your only option is to either put explicit types on every single variable or else go read the source code for the inference engine in the specific version of Cython you're using". -n From markflorisson88 at gmail.com Sun Jun 2 20:04:15 2013 From: markflorisson88 at gmail.com (mark florisson) Date: Sun, 2 Jun 2013 19:04:15 +0100 Subject: [Cython] CF based type inference In-Reply-To: <51AAE48F.4010800@behnel.de> References: <51AAE48F.4010800@behnel.de> Message-ID: On 2 June 2013 07:22, Stefan Behnel wrote: > mark florisson, 21.05.2013 15:32: >> On 21 May 2013 14:14, Vitja Makarov wrote: >>> >>> def foo(int N): >>> x = 1 >>> y = 0 >>> for i in range(N): >>> x = x * 0.1 + y * 0.2 >>> y = x * 0.3 + y * 0.4 >>> print typeof(x), typeof(y) >>> >>> Here both x and y will be inferred as double >> >> Ok, so I assume it promotes the incoming types (all reaching >> definitions)? If N == 0, then when using objects you get an int, >> otherwise a double. > > I'm not sure what you mean here. I certainly don't think the inferred type > of x and y should depend on the value of N. It should always be a double, > because that's the spanning type for all paths. In the very unlikely case > that that's not what the user wants, explicit typing will easily fix it for > them. Right, my point is that taking the spanning type of all paths is different from python semantics. But if you can prove something about the value of N (e.g. 
N <= 0, or N > 0), you should certainly exploit this information. > Stefan > > _______________________________________________ > cython-devel mailing list > cython-devel at python.org > http://mail.python.org/mailman/listinfo/cython-devel From yury at shurup.com Mon Jun 3 14:24:37 2013 From: yury at shurup.com (Yury V. Zaytsev) Date: Mon, 03 Jun 2013 14:24:37 +0200 Subject: [Cython] Memory views not working on Python 2.6.6, NumPy 1.3.0, doing smth wrong? Message-ID: <1370262277.2722.249.camel@newpride> Hi, I'm a bit confused about the memory views feature and I hope that you can help me out... When I cythonize the following code in my bt.pyx file and run the test below with Python 2.7.1 and NumPy 1.7.1 everything works fine, but when I try Python 2.6.6 and NumPy 1.3.0 I get the following exception: def bar(int[:] z): print(z) >>> import bt >>> import numpy >>> x = numpy.array([1,2,3], dtype=numpy.int32) >>> bt.bar(x) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "bt.pyx", line 10, in bt.bar (bt.c:1585) File "stringsource", line 619, in View.MemoryView.memoryview_cwrapper (bt.c:7263) File "stringsource", line 327, in View.MemoryView.memoryview.__cinit__ (bt.c:4028) TypeError: 'numpy.ndarray' does not have the buffer interface Is there a minimum version of Python and/or NumPy that should be installed for this feature to work? If yes, would it be possible to include a compile-time check for that? Unfortunately, I couldn't find anything regarding the minimally required versions in the documentation... Thanks! -- Sincerely yours, Yury V. Zaytsev From sturla at molden.no Mon Jun 3 17:50:31 2013 From: sturla at molden.no (Sturla Molden) Date: Mon, 3 Jun 2013 17:50:31 +0200 Subject: [Cython] Memory views not working on Python 2.6.6, NumPy 1.3.0, doing smth wrong? In-Reply-To: <1370262277.2722.249.camel@newpride> References: <1370262277.2722.249.camel@newpride> Message-ID: <51F3FEF0-4909-46B2-99F2-37726D3B50C0@molden.no> On 3 June 2013, at 14:24, "Yury V. Zaytsev" wrote: > > When I cythonize the following code in my bt.pyx file and run the test > below with Python 2.7.1 and NumPy 1.7.1 everything works fine, but when > I try Python 2.6.6 and NumPy 1.3.0 I get the following exception: > > > Is there a minimum version of Python and/or NumPy that should be > installed for this feature to work? If yes, would it be possible to > include a compile-time check for that? > You need at least NumPy 1.5 for PEP 3118 buffers to work. Sturla From nikita at nemkin.ru Tue Jun 4 10:29:23 2013 From: nikita at nemkin.ru (Nikita Nemkin) Date: Tue, 04 Jun 2013 14:29:23 +0600 Subject: [Cython] array.array member renaming Message-ID: Hi, I just wanted to say that this https://github.com/cython/cython/commit/a3ace265e68ad97c24ce2b52d99d45b60b26eda2#L1L73 renaming seems totally unnecessary as it makes any array code verbose and ugly. I often have to create extra local variables just to avoid endless something.data.as_ints repetition. What was the reason for renaming? It would be really nice to reintroduce old names (_i, _d etc). Best regards, Nikita Nemkin From stefan_ml at behnel.de Tue Jun 4 10:47:47 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 04 Jun 2013 10:47:47 +0200 Subject: [Cython] array.array member renaming In-Reply-To: References: Message-ID: <51ADA9B3.2080300@behnel.de> Nikita Nemkin, 04.06.2013 10:29: > I just wanted to say that this > https://github.com/cython/cython/commit/a3ace265e68ad97c24ce2b52d99d45b60b26eda2#L1L73 > > renaming seems totally unnecessary as it makes any array code > verbose and ugly. I often have to create extra local variables > just to avoid endless something.data.as_ints repetition. Are one-shot operations on arrays really so common for you that the explicit "unpacking" step matters for your code? > What was the reason for renaming? It would be really nice to > reintroduce old names (_i, _d etc). IMHO, the explicit names read better and make it clear what happens.
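For readers following along, the spelling under discussion looks like this in use; a minimal sketch, with function and variable names of our choosing:

```cython
from cpython cimport array

def int_sum(array.array a):
    # The post-rename spelling: a.data.as_ints. Assigning it to a local
    # pointer once avoids repeating the long attribute chain in the loop.
    cdef int* p = a.data.as_ints
    cdef Py_ssize_t i
    cdef long s = 0
    for i in range(len(a)):
        s += p[i]
    return s
```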
Also, I think the original idea was that most people shouldn't access the field directly and use memory views and the buffer interface instead, at least for user provided data. It might be a little different for arrays that are only used internally. Stefan From nikita at nemkin.ru Tue Jun 4 12:17:01 2013 From: nikita at nemkin.ru (Nikita Nemkin) Date: Tue, 04 Jun 2013 16:17:01 +0600 Subject: [Cython] array.array member renaming In-Reply-To: <51ADA9B3.2080300@behnel.de> References: <51ADA9B3.2080300@behnel.de> Message-ID: On Tue, 04 Jun 2013 14:47:47 +0600, Stefan Behnel wrote: > Nikita Nemkin, 04.06.2013 10:29: >> I just wanted to say that this >> https://github.com/cython/cython/commit/a3ace265e68ad97c24ce2b52d99d45b60b26eda2#L1L73 >> >> renaming seems totally unnecessary as it makes any array code >> verbose and ugly. I often have to create extra local variables >> just to avoid endless something.data.as_ints repetition. > > Are one-shot operations on arrays really so common for you that the > explicit "unpacking" step matters for your code? I use array in most places where you would normally see bare pointer and malloc/PyMem_Malloc. Automatic memory management FTW. Many people would do the same if they knew about arrays and a special support for them that Cython provides. (Personally, I had discovered it by browsing standard include .pxd files) Array class members also have "self." prepended which does not help brevity. So, yeah, it matters. Sure I can live with overly verbose names, but there is certainly room for improvement. ATM I have 96 cases of ".data.as_XXX" in my codebase and that's after folding some of them using local variables (like "cdef int* segments = self.segments.data.as_ints"). >> What was the reason for renaming? It would be really nice to >> reintroduce old names (_i, _d etc). > > IMHO, the explicit names read better and make it clear what happens. Indexing makes it clear enough that, well, indexing happens.
Direct array access is sort of magic anyway. Here is an example of unnecessary verbosity: while width + piDx.data.as_ints[start] < maxWidth: width += piDx.data.as_ints[start] start += 1 > Also, I think the original idea was that most people shouldn't access the > field directly and use memory views and the buffer interface instead, at > least for user provided data. It might be a little different for arrays > that are only used internally. When using the buffer interface, it really doesn't matter if the user has passed an array or ndarray or whatever. Buffer interface covers everything, array-specific declarations are irrelevant. But when I know that the variable is an array, buffer declaration, acquisition and release code is dead weight (especially for class members which can't have buffer declaration attached to themselves, necessitating an extra local variable to declare a fast access view). Best regards, Nikita Nemkin From stefan_ml at behnel.de Tue Jun 4 14:27:15 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 04 Jun 2013 14:27:15 +0200 Subject: [Cython] array.array member renaming In-Reply-To: References: <51ADA9B3.2080300@behnel.de> Message-ID: <51ADDD23.20303@behnel.de> Nikita Nemkin, 04.06.2013 12:17: > On Tue, 04 Jun 2013 14:47:47 +0600, Stefan Behnel wrote: >> Nikita Nemkin, 04.06.2013 10:29: >>> I just wanted to say that this >>> https://github.com/cython/cython/commit/a3ace265e68ad97c24ce2b52d99d45b60b26eda2#L1L73 >>> >>> renaming seems totally unnecessary as it makes any array code >>> verbose and ugly. I often have to create extra local variables >>> just to avoid endless something.data.as_ints repetition. >> >> Are one-shot operations on arrays really so common for you that the >> explicit "unpacking" step matters for your code? > > I use array in most places where you would normally see bare pointer and > malloc/PyMem_Malloc. Automatic memory management FTW.
> > Many people would do the same if they knew about arrays > and a special support for them that Cython provides. > (Personally, I had discovered it by browsing standard include .pxd files) > > Array class members also have "self." prepended which does not help brevity. > So, yeah, it matters. Sure I can live with overly verbose names, > but there is certainly room for improvement. > > ATM I have 96 cases of ".data.as_XXX" in my codebase and that's after > folding some of them using local variables > (like "cdef int* segments = self.segments.data.as_ints"). And the local assignment also resolves the pointer indirection for "self" here, which the C compiler can't really reason about otherwise. >>> What was the reason for renaming? It would be really nice to >>> reintroduce old names (_i, _d etc). >> >> IMHO, the explicit names read better and make it clear what happens. > > Indexing makes it clear enough that, well, indexing happens. > Direct array access is sort of magic anyway. > Here is an example of unnecessary verbosity: > > while width + piDx.data.as_ints[start] < maxWidth: > width += piDx.data.as_ints[start] > start += 1 Agreed that it's more verbose than necessary, but my gut feeling is still: if it's worth shortening, it's worth assigning. If it's not worth assigning, it's likely not worth shortening either. IIRC, the reason why there's a redundant ".data." bit in there is a) because of C declaration issues and b) because we wanted to keep the namespace impact on the Python array object interface as low as possible. >> Also, I think the original idea was that most people shouldn't access the >> field directly and use memory views and the buffer interface instead, at >> least for user provided data. It might be a little different for arrays >> that are only used internally. > > When using the buffer interface, it really doesn't matter if the user has passed > an array or ndarray or whatever.
Buffer interface covers everything, > array-specific declarations are irrelevant. > > But when I know that the variable is an array, buffer declaration, > acquisition and release code is dead weight (especially for class > members which can't have buffer declaration attached to themselves, > necessitating an extra local variable to declare a fast access view). That's what I meant by "only used locally". So, I do see your problem, but it's not obvious to me that it's worth doing something about it. Especially not something as broad as duplicating the direct access interface. Stefan From nikita at nemkin.ru Tue Jun 4 17:23:47 2013 From: nikita at nemkin.ru (Nikita Nemkin) Date: Tue, 04 Jun 2013 21:23:47 +0600 Subject: [Cython] array.array member renaming In-Reply-To: <51ADDD23.20303@behnel.de> References: <51ADA9B3.2080300@behnel.de> <51ADDD23.20303@behnel.de> Message-ID: On Tue, 04 Jun 2013 18:27:15 +0600, Stefan Behnel wrote: > Nikita Nemkin, 04.06.2013 12:17: >> On Tue, 04 Jun 2013 14:47:47 +0600, Stefan Behnel wrote: >>> Nikita Nemkin, 04.06.2013 10:29: >>>> I just wanted to say that this >>>> https://github.com/cython/cython/commit/a3ace265e68ad97c24ce2b52d99d45b60b26eda2#L1L73 >>>> >>>> renaming seems totally unnecessary as it makes any array code >>>> verbose and ugly. I often have to create extra local variables >>>> just to avoid endless something.data.as_ints repetition. >>> >>> Are one-shot operations on arrays really so common for you that the >>> explicit "unpacking" step matters for your code? >> >> I use array in most places where you would normally see bare pointer and >> malloc/PyMem_Malloc. Automatic memory management FTW. >> >> Many people would do the same if they knew about arrays >> and a special support for them that Cython provides. >> (Personally, I had discovered it by browsing standard include .pxd >> files) >> >> Array class members also have "self." prepended which does not help >> brevity. >> So, yeah, it matters.
Sure I can live with overly verbose names, >> but there is certainly room for improvement. >> >> ATM I have 96 cases of ".data.as_XXX" in my codebase and that's after >> folding some of them using local variables >> (like "cdef int* segments = self.segments.data.as_ints"). > > And the local assignment also resolves the pointer indirection for "self" > here, which the C compiler can't really reason about otherwise. > > >>>> What was the reason for renaming? It would be really nice to >>>> reintroduce old names (_i, _d etc). >>> >>> IMHO, the explicit names read better and make it clear what happens. >> >> Indexing makes it clear enough that, well, indexing happens. >> Direct array access is sort of magic anyway. >> Here is an example of unnecessary verbosity: >> >> while width + piDx.data.as_ints[start] < maxWidth: >> width += piDx.data.as_ints[start] >> start += 1 > > Agreed that it's more verbose than necessary, but my gut feeling is > still: > if it's worth shortening, it's worth assigning. If it's not worth > assigning, > it's likely not worth shortening either. Shortening is about readability. Extra CPU time to dereference self is not my concern. (I'm pretty sure L1 cache hides the cost.) > So, I do see your problem, but it's not obvious to me that it's worth > doing > something about it. Especially not something as broad as duplicating the > direct access interface. I guess I'll just copy array.pxd and modify it to suit my needs. (Long member names are not my only grievance.) Modified include path should do the trick. Best regards, Nikita Nemkin From nikita at nemkin.ru Fri Jun 7 13:14:44 2013 From: nikita at nemkin.ru (Nikita Nemkin) Date: Fri, 07 Jun 2013 17:14:44 +0600 Subject: [Cython] Conditional cast and builtin types Message-ID: Hi, I have just discovered that conditional casts to builtin types, such as <list?>obj, <tuple?>obj and <dict?>obj, do not perform any type checking ("?" is ignored). Can someone please confirm this as a bug?
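As background, the check that a conditional cast is supposed to perform can be modelled in plain Python; the helper name here is ours, not Cython API:

```python
def conditional_cast(obj, tp):
    # Python model of Cython's <tp?>obj: return obj if it is an instance
    # of tp, otherwise raise TypeError. The report above is that for
    # builtin types this check was silently skipped.
    if not isinstance(obj, tp):
        raise TypeError("Cannot convert %s to %s"
                        % (type(obj).__name__, tp.__name__))
    return obj

print(conditional_cast([1, 2], list))  # [1, 2]
```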
Best regards, Nikita Nemkin From nikita at nemkin.ru Fri Jun 7 13:24:38 2013 From: nikita at nemkin.ru (Nikita Nemkin) Date: Fri, 07 Jun 2013 17:24:38 +0600 Subject: [Cython] Feature proposal: conditional cast with None Message-ID: Hi, Currently, conditional casts like <TypeName?>obj do not allow None values. I have found it useful to allow None sometimes. The syntax could be, for example: * <TypeName or None?>obj Use case (without this feature): tn_obj = self.any.non_trivial.nullable.expression() cdef TypeName tn if tn_obj is not None: tn = tn_obj ... Use case (with this feature): cdef TypeName tn = <TypeName or None?>self.any.non_trivial.nullable.expression() if tn is not None: ... As you can see, without this feature, two local variables pointing to the same object are required. This creates unnecessary confusion. Implementation is trivial, all that is necessary is to pass a "notnone=False" flag from the parser to TypecastNode to PyTypeTestNode. What do you think? Best regards, Nikita Nemkin From stefan_ml at behnel.de Fri Jun 7 14:58:10 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 07 Jun 2013 14:58:10 +0200 Subject: [Cython] Conditional cast and builtin types In-Reply-To: References: Message-ID: <51B1D8E2.4020704@behnel.de> Nikita Nemkin, 07.06.2013 13:14: > I have just discovered that conditional casts to builtin types, such as > <list?>obj, <tuple?>obj and <dict?>obj, do not perform any type checking > ("?" is ignored). > > Can someone please confirm this as a bug? I never tried it, but (or rather, therefore) I can well imagine that it doesn't work. In that case, it's definitely a bug. Stefan From stefan_ml at behnel.de Fri Jun 7 15:10:34 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 07 Jun 2013 15:10:34 +0200 Subject: [Cython] Feature proposal: conditional cast with None In-Reply-To: References: Message-ID: <51B1DBCA.4010406@behnel.de> Nikita Nemkin, 07.06.2013 13:24: > Currently, conditional casts like <TypeName?>obj do not allow None values.
This might seem like an inconsistency, because <TypeName>obj does allow None values, just like a normal assignment. However, it's a special case that asks explicitly for the given type, so I think it's ok to be stricter. > I have found it useful to allow None sometimes. > The syntax could be, for example: > * <TypeName or None?>obj > > Use case (without this feature): > > tn_obj = self.any.non_trivial.nullable.expression() > cdef TypeName tn > if tn_obj is not None: > tn = tn_obj > ... > > Use case (with this feature): > > cdef TypeName tn = <TypeName or None?>self.any.non_trivial.nullable.expression() > if tn is not None: > ... Why not just cdef TypeName tn = self.any.non_trivial.nullable.expression() if tn is not None: ... ? I.e. why do you need that cast in the first place? Stefan From nikita at nemkin.ru Fri Jun 7 15:16:20 2013 From: nikita at nemkin.ru (Nikita Nemkin) Date: Fri, 07 Jun 2013 19:16:20 +0600 Subject: [Cython] Feature proposal: conditional cast with None In-Reply-To: <51B1DBCA.4010406@behnel.de> References: <51B1DBCA.4010406@behnel.de> Message-ID: On Fri, 07 Jun 2013 19:10:34 +0600, Stefan Behnel wrote: > Nikita Nemkin, 07.06.2013 13:24: >> Currently, conditional casts like <TypeName?>obj do not allow None values. > > This might seem like an inconsistency, because <TypeName>obj does allow None > values, just like a normal assignment. However, it's a special case that > asks explicitly for the given type, so I think it's ok to be stricter. > > >> I have found it useful to allow None sometimes. >> The syntax could be, for example: >> * <TypeName or None?>obj >> >> Use case (without this feature): >> >> tn_obj = self.any.non_trivial.nullable.expression() >> cdef TypeName tn >> if tn_obj is not None: >> tn = tn_obj >> ... >> >> Use case (with this feature): >> >> cdef TypeName tn = <TypeName or None?>self.any.non_trivial.nullable.expression() >> if tn is not None: >> ... > > Why not just > > cdef TypeName tn = self.any.non_trivial.nullable.expression() > if tn is not None: > ... > > ? > > I.e.
why do you need that cast in the first place? You are right. The behavior I want is actually the default assignment behavior. Writing almost fully typed code, I started to perceive Cython as a statically typed language... Please disregard my feature request and thank you. Best regards, Nikita Nemkin From nikita at nemkin.ru Tue Jun 11 13:51:44 2013 From: nikita at nemkin.ru (Nikita Nemkin) Date: Tue, 11 Jun 2013 17:51:44 +0600 Subject: [Cython] Funny idea: interpreted def functions Message-ID: Hi, Pure Python functions rarely benefit from compilation. I thought it would be interesting to add an "interpreted" directive (global, module, class, function level + automatic heuristic) that will instruct Cython to compile def functions into _bytecode_ and store that bytecode in the binary. Together with module bundling and embed/freeze it could make a neat deployment solution. (I have no plans to implement this.) Best regards, Nikita Nemkin From stefan_ml at behnel.de Tue Jun 11 16:03:45 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 11 Jun 2013 16:03:45 +0200 Subject: [Cython] Funny idea: interpreted def functions In-Reply-To: References: Message-ID: <51B72E41.10801@behnel.de> Nikita Nemkin, 11.06.2013 13:51: > Pure Python functions rarely benefit from compilation. I thought it > would be interesting to add an "interpreted" directive (global, > module, class, function level + automatic heuristic) that will > instruct Cython to compile def functions into _bytecode_ and store > that bytecode in the binary. > > Together with module bundling and embed/freeze it could make a neat > deployment solution. Well, it shouldn't be all that hard to implement. Basically, we'd send a part of the source file through the Python parser after having parsed and processed it in the compiler. However, I fail to see the advantage of this feature that would make it worth providing to users.
There usually *is* a visible performance advantage of compiled code over pure Python code, and the advantages of interpreted Python code in terms of semantics or compatibility are quite limited (debugging, maybe, or introspection). Could you describe how/why you came up with this? > (I have no plans to implement this.) I can imagine. Stefan From nikita at nemkin.ru Tue Jun 11 19:18:08 2013 From: nikita at nemkin.ru (Nikita Nemkin) Date: Tue, 11 Jun 2013 23:18:08 +0600 Subject: [Cython] Funny idea: interpreted def functions In-Reply-To: <51B72E41.10801@behnel.de> References: <51B72E41.10801@behnel.de> Message-ID: On Tue, 11 Jun 2013 20:03:45 +0600, Stefan Behnel wrote: > Nikita Nemkin, 11.06.2013 13:51: >> Pure Python functions rarely benefit from compilation. I thought it >> would be interesting to add an "interpreted" directive (global, >> module, class, function level + automatic heuristic) that will >> instruct Cython to compile def functions into _bytecode_ and store >> that bytecode in the binary. >> >> Together with module bundling and embed/freeze it could make a neat >> deployment solution. > > Well, it shouldn't be all that hard to implement. Basically, we'd send a > part of the source file through the Python parser after having parsed and > processed it in the compiler. > > However, I fail to see the advantage of this feature that would make it > worth providing to users. There usually *is* a visible performance > advantage of compiled code over pure Python code, and the advantages of > interpreted Python code in terms of semantics or compatibility are quite > limited (debugging, maybe, or introspection). Let's just say there are legitimate reasons to stay interpreted, binary size and compatibility among them. > Could you describe how/why you came up with this? 
Well, I was wondering why isn't CPython written in Cython (actually I know why) and how awesome it would be to have a system with CPython runtime and unified Cython/Python compiler front-end targeting both bytecode and native code. In such a system, a per-function compiled/interpreted switch would feel natural to me... That's how, if it answers your question. And from a different angle: many people praise Go(lang) for its "single fat binary" deployment approach. First class bytecode support in Cython could provide the same for Python. (Maybe not quite the same, but a step in this direction.) Best regards, Nikita Nemkin
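[Editor's note: the "compile def functions into bytecode and embed it in the binary" idea discussed above can be illustrated with plain CPython machinery. This is only a rough sketch of the mechanism (the compile/marshal path that freeze-style tools use), not anything Cython actually implements.]

```python
import marshal

# Source of a plain def function, as Cython would see it in a .pyx file.
SRC = "def add(a, b):\n    return a + b\n"

# Compile to a code object and serialize it; this byte string is what
# could be embedded as static data in the extension module's binary.
blob = marshal.dumps(compile(SRC, "<embedded>", "exec"))

# At module initialization time, deserialize and execute the bytecode
# to (re)create the interpreted function.
namespace = {}
exec(marshal.loads(blob), namespace)
print(namespace["add"](2, 3))  # prints 5
```

A real implementation would embed the blob as static data in the generated C module and unmarshal it at module init, which is essentially what CPython's freeze tooling does.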
From robertwb at gmail.com Sun Jun 16 06:53:10 2013 From: robertwb at gmail.com (Robert Bradshaw) Date: Sat, 15 Jun 2013 21:53:10 -0700 Subject: [Cython] Conditional cast and builtin types In-Reply-To: <51B1D8E2.4020704@behnel.de> References: <51B1D8E2.4020704@behnel.de> Message-ID: On Fri, Jun 7, 2013 at 5:58 AM, Stefan Behnel wrote:
> Nikita Nemkin, 07.06.2013 13:14:
>> I have just discovered that the following operations do not perform
>> any type checking ("?" is ignored):
>>
>> <list?>obj, <tuple?>obj, <dict?>obj, <set?>obj, <unicode?>obj.
>>
>> Can someone please confirm this as a bug?
>
> I never tried it, but (or rather, therefore) I can well imagine that it
> doesn't work. In that case, it's definitely a bug.

A slain bug. https://github.com/cython/cython/commit/d4daf97711f96438291af32d4c422c70bdd8b667

- Robert

From wking at tremily.us Sat Jun 15 16:31:14 2013 From: wking at tremily.us (W. Trevor King) Date: Sat, 15 Jun 2013 10:31:14 -0400 Subject: [Cython] Time to close relative import ticket on Trac? Message-ID: <20130615143114.GF24970@odin.tremily.us> Browsing around today I noticed that the relative import ticket is still open [1], even though support for relative imports landed in 0.15 [2]. There's also CEP 307 (Generators and PEP 342 Coroutines) [1], which is still listed as "planning, undecided" but also seems to be fixed in 0.15. The RTD page is also frozen at 0.15pre [3]. Cheers, Trevor [1]: cython-devel at python.org [2]: http://wiki.cython.org/ReleaseNotes-0.15 [3]: https://cython.readthedocs.org/en/latest/ -- This email may be signed or encrypted with GnuPG (http://www.gnupg.org). For more information, see http://en.wikipedia.org/wiki/Pretty_Good_Privacy -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From felix at salfelder.org Wed Jun 19 09:16:38 2013 From: felix at salfelder.org (Felix Salfelder) Date: Wed, 19 Jun 2013 09:16:38 +0200 Subject: [Cython] patch for #655 Message-ID: <20130619071638.GF15034@bin.d-labs.de> Hi there. i am preparing a patch for #655 (write dependency makefiles) at [1]. I think it's best to use the -M, -F and -D short options for this functionality. please comment, improve and/or merge. regards felix [1] http://trac.sagemath.org/sage_trac/ticket/14728 From scopatz at gmail.com Mon Jun 24 09:15:16 2013 From: scopatz at gmail.com (Anthony Scopatz) Date: Mon, 24 Jun 2013 02:15:16 -0500 Subject: [Cython] ANN: XDress v0.2 Message-ID: Hello All, I am pleased to announce the latest version of xdress, in preparation for SciPy 2013. For more information please visit the website: http://xdress.org Be Well Anthony ======================== XDress 0.2 Release Notes ======================== XDress is an automatic wrapper generator for C/C++ written in pure Python. Currently, xdress may generate Python bindings (via Cython) for C++ classes, functions, and certain variable types. It also contains idiomatic wrappers for C++ standard library containers (sets, vectors, maps). In the future, other tools and bindings will be supported. The main enabling feature of xdress is a dynamic type system that was designed with the purpose of API generation in mind. Release highlights: - First class support for C via pycparser. - Python 3 support, by popular demand! - A plethora of awesome type system updates, including: - type matching - lambda-valued converters - function pointer support - Easy to use and implement plugin architecture. 
Please visit the website for more information: http://xdress.org/ Ask questions on the mailing list: xdress at googlegroups.com Download the code from GitHub: http://github.com/scopatz/xdress XDress is free & open source (BSD 2-clause license) and requires Python 2.7, NumPy 1.5+, Cython 0.19+, and optionally GCC-XML, pycparser, and lxml.

New Features
============

First Class C Support
---------------------

Wrapping C code is now fully handled through the optional pycparser. This means that you don't have to worry about whether or not the GCC-XML parser will work on your particular bit of code. Furthermore, C structs and their members are now handled idiomatically. (C++ structs are actually treated as C++ classes, which means that they are allowed to have constructors and other C++ concepts not present in C.)

Python 3 Support
----------------

The entire code base is built and tested under Python 3.3. This uses the single code base model, as most development takes place in Python 2.7. The Cython code that is generated may be used by both Python 2 & 3.

Type System
-----------

The type system has been expanded and hardened to handle additional use cases, largely motivated by the desire for realistic C support. A new function pointer (``function_pointer``) refinement type has been added. When converting from C to Python, a new function object is created that wraps the underlying call. For converting from Python to C, a virtual table of callback functions is constructed that have the same signature as the pointer but hold a reference to a Python function. The size of the table, and thus how many callbacks you can have before overwriting previous ones, is set by the ``max_callbacks`` key in the extra dictionary in class descriptions. This defaults to 8.

A new enum refinement type now also comes stock. These may be exposed to Python in the ``rc.variables`` list.

The type system now also comes with basic type matching tools.
There is a new ``TypeMatcher`` class, a ``matches()`` function, and a singleton ``MatchAny`` that may be used for determining whether a type adheres to a pattern. The TypeMatcher class itself is immutable and hashable and therefore may be used anywhere other type elements (tuples, str, int) may be used, including as dict keys! This is helpful for specifying conversions for large groups of types.

Finally, the type system conversion dictionaries now accept callable objects as values. This was put in to handle templated types where the number of arguments is not in general known beforehand, e.g. enums and function pointers. The values must be callable with only a single argument -- the type itself. For example, ``lambda t: rtnvalue`` is valid.

Plugins!
--------

XDress is a suite of tools written on top of a type system. Thus the entire core has been refactored to implement a very nimble and easy plugin system. The plugin mechanism enables external projects with their own code generators to easily hook into the xdress tools. Additionally, this allows users to decide at run time which plugins they want to use.

Mini-FAQ
========

* Why not use an existing solution (eg, SWIG)? Their type systems don't support run-time, user provided refinement types, and thus are unsuited for verification & validation use cases that often arise in computational science. Furthermore, they tend to not handle C++ dependent types well (i.e. vector<T> does not come back as a np.view(..., dtype=T)).

* Why GCC-XML and not Clang's AST? I tried using Clang's AST (and the remnants of a broken visitor class remain in the code base). However, the official Clang AST Python bindings lack support for template argument types. This is a really big deal. Other C++ ASTs may be supported in the future -- including Clang's.

* I run xdress and it creates these files, now what?! It is your job to integrate the files created by xdress into your build system.

Join in the Fun!
================

If you are interested in using xdress on your project (and need help), contributing back to xdress, starting up a development team, or writing your own code generation plugin tool on top of the type system and autodescriber, please let us know. Participation is very welcome!

Authors
=======

- Anthony Scopatz
- Spencer Lyon *
- Gerald Dalley *
- Alexander Eisenhuth *

An * indicates a first time contributor.

Links
=====

1. Homepage - http://xdress.org/
2. Mailing List - xdress at googlegroups.com
3. GitHub Organization - https://github.com/xdress
4. Pycparser - https://pypi.python.org/pypi/pycparser

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From stefan_ml at behnel.de Tue Jun 25 06:17:31 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 25 Jun 2013 06:17:31 +0200 Subject: [Cython] Python type assignment errors (was: [cython-users] How to prevent garbage collecting an object?) In-Reply-To: References: <50405441-6c87-428c-bb87-f6ddd1d7f815@googlegroups.com> Message-ID: <51C919DB.7040102@behnel.de> Robert Bradshaw, 25.06.2013 05:15:
> On Mon, Jun 24, 2013 at 3:01 PM, Zak wrote:
>> I have tried a few things, and I always get
>> the error at run-time. Cython is certainly doing type checking, but if the
>> expected type (dict) and the actual type (Python integer) are both Python
>> types and not C types, it seems the error always appears at run-time, never
>> at compile-time. For instance, this compiles fine:
>>
>> cdef dict first_func(int x):
>>     return x
>>
>> cdef int second_func(int x):
>>     cdef dict y
>>     y = first_func(x)
>>     return 5
>>
>> The code above causes a run-time error, but I feel that in an ideal world it
>> should be a compile-time error.
>
> Yes, it should be.

Agreed. Cython has inherited this behaviour from Pyrex which originally only knew "object", and we didn't do much about it since. There are rather fuzzy limits to this, though. For example, this would be stupid but legal wrt.
language semantics:

cdef dict func():
    return None

cdef list x = func()

So, the only case that we can really handle is when we know the typed value originated from a C type that cannot coerce to None nor to the expected type, i.e. essentially this case:

cdef int something = 5
cdef dict x = something

whereas only slightly more involved code would end up passing through the analysis, at least for now:

cdef int something = 5
cdef dict x = something + 999  # unknown integer size => object

I.e., this can only be handled during coercion of C types to known (builtin) Python object types, not during assignment - although we do have the may_be_none() predicate for nodes, and although there's still Vitja's pending inference rewrite which I haven't had time to look into recently. Both can be used to push the fuzzy border a bit further in the right direction.

Stefan

From robertwb at gmail.com Tue Jun 25 10:06:35 2013 From: robertwb at gmail.com (Robert Bradshaw) Date: Tue, 25 Jun 2013 01:06:35 -0700 Subject: [Cython] patch for #655 In-Reply-To: <20130619071638.GF15034@bin.d-labs.de> References: <20130619071638.GF15034@bin.d-labs.de> Message-ID: We prefer changes as pull requests to https://github.com/cython/cython. One first comment: I think it's a lot to reserve three flags for this. Perhaps "-M [filename]" would be sufficient, using the convention that - is stdout. If necessary, we could have a special syntax for the -D option. I'm still, however, trying to figure out exactly what the use case for this is. Generally extensions are created with distutils, and cythonize handles the dependencies in that framework for you, so I'm not sure how you'd use the resulting makefiles (one per .pyx file?) anyways. An example/test would be useful as well (see tests/build).

On Wed, Jun 19, 2013 at 12:16 AM, Felix Salfelder wrote:
> Hi there.
>
> i am preparing a patch for #655 (write dependency makefiles) at [1].
I > think it's best to use the -M, -F and -D short options for this > functionality. please comment, improve and/or merge. > > regards > felix > > [1] http://trac.sagemath.org/sage_trac/ticket/14728 > _______________________________________________ > cython-devel mailing list > cython-devel at python.org > http://mail.python.org/mailman/listinfo/cython-devel

From felix at salfelder.org Tue Jun 25 10:34:06 2013 From: felix at salfelder.org (Felix Salfelder) Date: Tue, 25 Jun 2013 10:34:06 +0200 Subject: [Cython] patch for #655 In-Reply-To: References: <20130619071638.GF15034@bin.d-labs.de> Message-ID: <20130625083406.GM11552@bin.d-labs.de> On Tue, Jun 25, 2013 at 01:06:35AM -0700, Robert Bradshaw wrote:
> One first comment, I think it's a lot to reserve three flags for this.
> Perhaps "-M [filename] would be sufficient, using the convention that
> - is stdout. If necessary, we could have a special syntax for the -D
> option.

it's a tradeoff between simplicity and force of habit. anybody using gcc and make knows what -M and -M[A-Z] mean. anybody who doesn't use any of these will never need this functionality. but yes, if you *do* need -P and -D otherwise, we might find other characters...

> I'm still, however, trying to figure out exactly what the usecase for
> this is.

it's about keeping track of build dependencies.

> Generally extensions are created with distutils, and
> cythonize handles the dependencies in that framework for you, so I'm
> not sure how you'd use the resulting makefiles (one per .pyx file?)
> anyways.

cythonize doesnt know, which headers gcc will use when compiling the cython output. now what any other compiler will do. i have no idea how to fix that (design flaw?), and its currently easier to just use makefiles from the beginning. with makefiles, dependencies are easy and fast, if all involved compilers support it.
regards felix From stefan_ml at behnel.de Thu Jun 27 08:58:28 2013 From: stefan_ml at behnel.de (Stefan Behnel) Date: Thu, 27 Jun 2013 08:58:28 +0200 Subject: [Cython] patch for #655 In-Reply-To: <20130625083406.GM11552@bin.d-labs.de> References: <20130619071638.GF15034@bin.d-labs.de> <20130625083406.GM11552@bin.d-labs.de> Message-ID: <51CBE294.50803@behnel.de> Felix Salfelder, 25.06.2013 10:34: > On Tue, Jun 25, 2013 at 01:06:35AM -0700, Robert Bradshaw wrote: >> I'm still, however, trying to figure out exactly what the usecase for >> this is. > > it's about keeping track of build dependencies. > >> Generally extensions are created with distutils, and >> cythonize handles the dependencies in that framework for you, so I'm >> not sure how you'd use the resulting makefiles (one per .pyx file?) >> anyways. I fail to see the use case, too. It's fairly limited in any case. > cythonize doesnt know, which headers gcc will use when compiling the > cython output. Make doesn't know that either. Cython at least knows which ones are used directly. Handling transitive dependencies would require parsing header files. If you need to keep track of changes in transitively included header files, why not cimport from them in the Cython source to make them an explicit dependency? > now what any other compiler will do. This sentence barely passes through my English parser and then fails to link with the rest. > i have no idea how > to fix that (design flaw?), and its currently easier to just use > makefiles from the beginning. with makefiles, dependencies are easy and > fast, if all involved compilers support it. Maybe you should start by describing more clearly what exactly is missing in the current setup. Building with make instead of distutils seems like a major complication to me all by itself. 
Stefan

From felix at salfelder.org Thu Jun 27 10:26:10 2013 From: felix at salfelder.org (Felix Salfelder) Date: Thu, 27 Jun 2013 10:26:10 +0200 Subject: [Cython] patch for #655 In-Reply-To: <51CBE294.50803@behnel.de> References: <20130619071638.GF15034@bin.d-labs.de> <20130625083406.GM11552@bin.d-labs.de> <51CBE294.50803@behnel.de> Message-ID: <20130627082610.GL3756@bin.d-labs.de> Hi Stefan.

On Thu, Jun 27, 2013 at 08:58:28AM +0200, Stefan Behnel wrote:
> Make doesn't know that either. Cython at least knows which ones are used
> directly. Handling transitive dependencies would require parsing header
> files. If you need to keep track of changes in transitively included header
> files, why not cimport from them in the Cython source to make them an
> explicit dependency?

explicit dependency tracking would imply "manual", which is painful and error-prone. without running gcc -M (with all flags) you cannot even guess the headers used transitively. I haven't found a gcc -M call within the cython source code.

> > now what any other compiler will do.

> This sentence barely passes through my English parser and then fails to
> link with the rest.

I'm sorry -- read that as noR [cythonize will know] what any other compiler will do.

> > i have no idea how
> > to fix that (design flaw?), and its currently easier to just use
> > makefiles from the beginning. with makefiles, dependencies are easy and
> > fast, if all involved compilers support it.
>
> Maybe you should start by describing more clearly what exactly is missing
> in the current setup. Building with make instead of distutils seems like a
> major complication to me all by itself.

It's still just that "cython does not track (all) build dependencies". but let's make a short story long:

look at /src/module_list.py within the sage project. it contains lots of references to headers at hardwired paths.
these paths are wrong in most cases, and they require manual messing with build system internals *because* cythonize does not (can not?) keep track of them. building with make (read: autotools) just works the way it always did (+ some obvious quirks that are not currently included within upstream autotools) -- after patching cython. (i know that many people hate autotools, and i don't want to start a rant about it, but it would be better for everybody if a) make/autotools was taken seriously b) the missing functionality will be implemented into cython(ize) some day, start with dependencies, then port/reimplement the AC_* macros ) regards felix

From robertwb at gmail.com Thu Jun 27 18:23:21 2013 From: robertwb at gmail.com (Robert Bradshaw) Date: Thu, 27 Jun 2013 09:23:21 -0700 Subject: [Cython] patch for #655 In-Reply-To: <20130627082610.GL3756@bin.d-labs.de> References: <20130619071638.GF15034@bin.d-labs.de> <20130625083406.GM11552@bin.d-labs.de> <51CBE294.50803@behnel.de> <20130627082610.GL3756@bin.d-labs.de> Message-ID: On Thu, Jun 27, 2013 at 1:26 AM, Felix Salfelder wrote:
> Hi Stefan.
>
> On Thu, Jun 27, 2013 at 08:58:28AM +0200, Stefan Behnel wrote:
>> Make doesn't know that either. Cython at least knows which ones are used
>> directly. Handling transitive dependencies would require parsing header
>> files. If you need to keep track of changes in transitively included header
>> files, why not cimport from them in the Cython source to make them an
>> explicit dependency?
>
> explicit dependency tracking would imply "manual". which is painful and
> error-prone. without running gcc -M (with all flags) you cannot even
> guess the headers used transitively. I haven't found a gcc -M call
> within the cython souce code.

Why would it be needed?

>> > now what any other compiler will do.
>
>> This sentence barely passes through my English parser and then fails to
> > I'm sorry -- read that as noR [cythonize will know] what any other
> compiler will do.
>
>> > i have no idea how
>> > to fix that (design flaw?), and its currently easier to just use
>> > makefiles from the beginning. with makefiles, dependencies are easy and
>> > fast, if all involved compilers support it.
>>
>> Maybe you should start by describing more clearly what exactly is missing
>> in the current setup. Building with make instead of distutils seems like a
>> major complication to me all by itself.
>
> Its still just that "cython does not track (all) build dependencies".
> but lets make a short story long:
>
> look at /src/module_list.py within the sage project. it contains lots of
> references to headers at hardwired paths. these paths are wrong in most
> cases, and they require manual messing with build system internals
> *because* cythonize does not (can not?) keep track of them.

Ah, I know a bit more here. module_list.py is structured so because it grew up organically by people with a wide range of programming backgrounds, and one of the explicit goals of cythonize was (among other things) to remove the need for such explicit and error-prone declarations. module_list.py has not been "simplified" yet because it was a moving target (I think it was rebased something like a dozen times over a period of about a year before we decided to just get cythonize() in and do module_list cleanup later). It should be entirely sufficient, even for sage.

> building with make (read: autotools) just works the way it always did
> (+ some obvious quirks that are not currently included within upstram
> autotools) -- after patching cython.

Can you explain?
Are you saying you can type cython -M *.pyx make > (i know, that many people hate autotools, and i don't want to start a rant > about it, but it would be better for everybody if > a) make/autotools was taken seriously > b) the missing functionality will be implemented into cython(ize) some > day, start with dependencies, then port/reimplement the AC_* macros ) One of the goals of Cythonize is to *only* handle the pyx -> c[pp] step, and let other existing tools handle the rest. - Robert From felix at salfelder.org Thu Jun 27 19:25:56 2013 From: felix at salfelder.org (Felix Salfelder) Date: Thu, 27 Jun 2013 19:25:56 +0200 Subject: [Cython] patch for #655 In-Reply-To: References: <20130619071638.GF15034@bin.d-labs.de> <20130625083406.GM11552@bin.d-labs.de> <51CBE294.50803@behnel.de> <20130627082610.GL3756@bin.d-labs.de> Message-ID: <20130627172556.GD3356@bin.d-labs.de> On Thu, Jun 27, 2013 at 09:23:21AM -0700, Robert Bradshaw wrote: > > explicit dependency tracking would imply "manual". which is painful and > > error-prone. without running gcc -M (with all flags) you cannot even > > guess the headers used transitively. I haven't found a gcc -M call > > within the cython souce code. > > Why would it be needed? well it is not. if I can use something else to track dependendencies (like autotools), something else takes care of gcc -M. but this now also needs to call cython with -M, to know when cython needs to be called again. > > Its still just that "cython does not track (all) build dependencies". > > but lets make a short story long: I'm probably wrong here, and it's that other tool, "distutils", that would be responsible. anyhow, it's cython I want to write out dependencies. > > look at /src/module_list.py within the sage project. it contains lots of > > references to headers at hardwired paths. these paths are wrong in most > > cases, and they require manual messing with build system internals > > *because* cythonize does not (can not?) keep track of them. 
> > Ah, I know a bit more here. module_list.py is structured so because it > grew up organically by people with a wide range programming > backgrounds and one of the explicit goals of cythonize was (among > other things) to remove the needs for such explicit and error-prone > declarations. module_list.py has not been "simplified" yet because it > was a moving target (I think it was rebased something like a dozen > times over a period of about a year before we decided to just get > cythonize() in and do module_list cleanup later). > > It should be entirely sufficient, even for sage. sage currently uses hardwired paths for all and everything. in particular for header locations. it works right now, but the plan is to support packages installed to the host system. and: sage is just *my* example, it wasnt the original reason for opening #655. > > building with make (read: autotools) just works the way it always did > > (+ some obvious quirks that are not currently included within upstram > > autotools) -- after patching cython. > > Can you explain? Are you saying you can type > > cython -M *.pyx > make no. the input for autotools contains a list of things, that you want. for example foo.so. now it creates makefiles that implement the rules that achieve this. for example foo.so will be built from foo.c, from foo.pyx (if foo.pyx exists, of course). deep down in the rules, the cython -M (and gcc -M) call just does the right thing without you even noticing. > > (i know, that many people hate autotools, and i don't want to start a rant > > about it, but it would be better for everybody if > > a) make/autotools was taken seriously > > b) the missing functionality will be implemented into cython(ize) some > > day, start with dependencies, then port/reimplement the AC_* macros ) > > One of the goals of Cythonize is to *only* handle the pyx -> c[pp] > step, and let other existing tools handle the rest. That's exactly what i want to do. 
use cython to translate .pyx->.c[pp] and nothing else. the existing tool (make) needs to know when cython needs to be called, so it has to know the dependency chain. regards felix From robertwb at gmail.com Thu Jun 27 20:05:48 2013 From: robertwb at gmail.com (Robert Bradshaw) Date: Thu, 27 Jun 2013 11:05:48 -0700 Subject: [Cython] patch for #655 In-Reply-To: <20130627172556.GD3356@bin.d-labs.de> References: <20130619071638.GF15034@bin.d-labs.de> <20130625083406.GM11552@bin.d-labs.de> <51CBE294.50803@behnel.de> <20130627082610.GL3756@bin.d-labs.de> <20130627172556.GD3356@bin.d-labs.de> Message-ID: On Thu, Jun 27, 2013 at 10:25 AM, Felix Salfelder wrote: > On Thu, Jun 27, 2013 at 09:23:21AM -0700, Robert Bradshaw wrote: >> > explicit dependency tracking would imply "manual". which is painful and >> > error-prone. without running gcc -M (with all flags) you cannot even >> > guess the headers used transitively. I haven't found a gcc -M call >> > within the cython souce code. >> >> Why would it be needed? > > well it is not. if I can use something else to track dependendencies > (like autotools), something else takes care of gcc -M. but this now also > needs to call cython with -M, to know when cython needs to be called > again. And you're planning on calling cython manually, cutting distutils out of the loop completely? >> > Its still just that "cython does not track (all) build dependencies". >> > but lets make a short story long: > > I'm probably wrong here, and it's that other tool, "distutils", that > would be responsible. anyhow, it's cython I want to write out > dependencies. > >> > look at /src/module_list.py within the sage project. it contains lots of >> > references to headers at hardwired paths. these paths are wrong in most >> > cases, and they require manual messing with build system internals >> > *because* cythonize does not (can not?) keep track of them. >> >> Ah, I know a bit more here. 
module_list.py is structured so because it >> grew up organically by people with a wide range programming >> backgrounds and one of the explicit goals of cythonize was (among >> other things) to remove the needs for such explicit and error-prone >> declarations. module_list.py has not been "simplified" yet because it >> was a moving target (I think it was rebased something like a dozen >> times over a period of about a year before we decided to just get >> cythonize() in and do module_list cleanup later). >> >> It should be entirely sufficient, even for sage. > > sage currently uses hardwired paths for all and everything. in > particular for header locations. it works right now, but the plan is to > support packages installed to the host system. I don't see how that would change anything. > and: sage is just *my* example, it wasnt the original reason for opening > #655. > >> > building with make (read: autotools) just works the way it always did >> > (+ some obvious quirks that are not currently included within upstram >> > autotools) -- after patching cython. >> >> Can you explain? Are you saying you can type >> >> cython -M *.pyx >> make > > no. the input for autotools contains a list of things, that you want. > for example foo.so. now it creates makefiles that implement the rules > that achieve this. for example foo.so will be built from foo.c, from > foo.pyx (if foo.pyx exists, of course). > > deep down in the rules, the cython -M (and gcc -M) call just does the > right thing without you even noticing. > >> > (i know, that many people hate autotools, and i don't want to start a rant >> > about it, but it would be better for everybody if >> > a) make/autotools was taken seriously >> > b) the missing functionality will be implemented into cython(ize) some >> > day, start with dependencies, then port/reimplement the AC_* macros ) >> >> One of the goals of Cythonize is to *only* handle the pyx -> c[pp] >> step, and let other existing tools handle the rest. 
> > That's exactly what i want to do. use cython to translate .pyx->.c[pp] > and nothing else. the existing tool (make) needs to know when cython > needs to be called, so it has to know the dependency chain. It also needs to know how cython needs to be called, and then how gcc needs to be called (or, would you invoke setup.py when any .pyx file changes, in which case you don't need a more granular rules). In general, I'm +1 on providing a mechanism for exporting dependencies for tools to do with whatever they like. I have a couple of issues with the current approach: (1) Doing this on a file-by-file basis is quadratic time (which for something like Sage takes unbearably long as you have to actually read and parse the entire file to understand its dependencies, and then recursively merge them up to the leaves). This could be mitigated (the parsing at least) by writing dep files and re-using them, but it's still going to be sub-optimal. The exact dependencies may also depend on the options passed into cythonize (e.g. the specific include directories, some dynamically computed like numpy_get_includes()). (2) I don't think we need to co-opt gcc's flags for this. A single flag that writes its output to a named file should be sufficient. No one expects to be able to pass gcc options to Cython, and Cython can be used with more C compilers than just gcc. (3) The implementation is a bit hackish, with global dictionaries and random printing. - Robert From felix at salfelder.org Thu Jun 27 21:18:32 2013 From: felix at salfelder.org (Felix Salfelder) Date: Thu, 27 Jun 2013 21:18:32 +0200 Subject: [Cython] patch for #655 In-Reply-To: References: <20130619071638.GF15034@bin.d-labs.de> <20130625083406.GM11552@bin.d-labs.de> <51CBE294.50803@behnel.de> <20130627082610.GL3756@bin.d-labs.de> <20130627172556.GD3356@bin.d-labs.de> Message-ID: <20130627191832.GE3356@bin.d-labs.de> Hi Robert. 
On Thu, Jun 27, 2013 at 11:05:48AM -0700, Robert Bradshaw wrote:
> And you're planning on calling cython manually, cutting distutils out
> of the loop completely?

If someone tells me how to fix distutils (better: does it), I might
change my mind. Also, I need VPATH... just something that works.

> > Sage currently uses hardwired paths for anything and everything, in
> > particular for header locations. It works right now, but the plan is to
> > support packages installed to the host system.
>
> I don't see how that would change anything.

Well, what would $SAGE_LOCAL/include/something.h be then? And how would
you tell, without reimplementing gcc -M functionality?

> > That's exactly what I want to do: use cython to translate .pyx -> .c[pp]
> > and nothing else. The existing tool (make) needs to know when cython
> > needs to be called, so it has to know the dependency chain.
>
> It also needs to know how cython needs to be called, and then how gcc
> needs to be called (or would you invoke setup.py whenever any .pyx file
> changes, in which case you don't need more granular rules?).

Autotools takes care of that. For C/C++ this has been working for ages,
automatically. Cython rules need to be added manually (currently, until
somebody tweaks autotools a bit). setup.py is not needed.

> In general, I'm +1 on providing a mechanism for exporting dependencies
> for tools to do with whatever they like. I have a couple of issues
> with the current approach:
>
> (1) Doing this on a file-by-file basis is quadratic time (which for
> something like Sage takes unbearably long, as you have to actually read
> and parse the entire file to understand its dependencies, and then
> recursively merge them up to the leaves). This could be mitigated (the
> parsing at least) by writing dep files and re-using them, but it's
> still going to be sub-optimal.

I do not understand. The -MF approach writes out dependencies *during*
compilation and does no extra parsing. It can't be more efficient (can
it? how?).
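To make the cost trade-off in this exchange concrete: the upfront scan that cythonize() performs has to read each source, pick out its cimport/include lines, and merge the results transitively. A rough illustration of that kind of scan in Python (the regex and helper names are hypothetical, not Cython's actual implementation):

```python
import re

# Hypothetical sketch of an upfront dependency scan: parse each source
# for cimport/include statements and merge the results transitively.
DEP_RE = re.compile(
    r'^\s*from\s+([\w.]+)\s+cimport\b'    # from pkg.mod cimport name
    r'|^\s*cimport\s+([\w.]+)'            # cimport mod
    r'|^\s*include\s+"([^"]+)"',          # include "file.pxi"
    re.M)

def direct_deps(source_text):
    """Names mentioned directly by cimport/include lines."""
    deps = set()
    for m in DEP_RE.finditer(source_text):
        # exactly one alternative (hence one group) matches per hit
        deps.add(next(g for g in m.groups() if g))
    return deps

def all_deps(module, sources, _cache=None):
    """Transitive dependencies; `sources` maps name -> source text."""
    if _cache is None:
        _cache = {}
    if module in _cache:
        return _cache[module]
    _cache[module] = set()            # placeholder breaks cycles
    result = set()
    for dep in direct_deps(sources.get(module, "")):
        result.add(dep)
        result |= all_deps(dep, sources, _cache)
    _cache[module] = result
    return result
```

The cache keeps each file from being parsed twice, but every leaf module still triggers a walk over its whole dependency subtree, which is the quadratic behaviour Robert describes; the -MF approach sidesteps it by recording dependencies as a by-product of compilation.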
The makefiles are maybe the dep files you are referring to. A second
make run will just read the dependency output and compare timestamps.

> The exact dependencies may also depend
> on the options passed into cythonize (e.g. the specific include
> directories, some dynamically computed like numpy_get_includes()).

Do you want to change include paths (reconfigure the whole thing) between
two runs? In general this won't work without "make clean"... If it's
options within a config.h file, it may trigger recompilation, of course.

(Also, you can add any sort of dependency, if you want that.)

> (2) I don't think we need to co-opt gcc's flags for this.

See e.g. /usr/share/automake-1.11/depcomp to get an idea of how many
compilers support the -M family. There is at least "hp", "aix", "icc",
"tru64", "gcc". The "sgi" case looks similar. I don't know who started
it.

> A single
> flag that writes its output to a named file should be sufficient. No
> one expects to be able to pass gcc options to Cython, and Cython can
> be used with more C compilers than just gcc.

Okay, if you feel like it, let's translate -M, -MF, -MD, -MP to something
more pythonic. It would be great to use single-letter options, as
otherwise the commands get unnecessarily lengthy. My current rules just
set -M -MD -MP (== -MDP).

> (3) The implementation is a bit hackish, with global dictionaries and
> random printing.

I need a global dictionary, as some files are accessed multiple times.
How can I avoid this? And what is "random printing"?

I'm not a cython expert, but with some hints I might be able to improve
the patch.
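For readers unfamiliar with the -M family being debated here: the files these flags produce are plain make fragments, one rule listing the target's prerequisites, plus (with -MP) an empty rule per prerequisite so make does not fail with "No rule to make target" when a header is deleted. A minimal sketch of the writer side (the function name is illustrative, not from the patch):

```python
def write_depfile(target, prerequisites, path):
    """Emit a gcc -MD/-MP style dependency makefile for `target`.

    The trailing empty rules are what -MP adds: they keep a later make
    run from failing if a prerequisite file has been removed.
    """
    lines = ["%s: %s" % (target, " ".join(prerequisites))]
    lines += ["%s:" % p for p in prerequisites]   # the -MP phony rules
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

For example, `write_depfile("file.c", ["file.pyx", "file.pxi"], "file.pyx.d")` produces exactly the kind of `file.pyx.d` that appears later in this thread.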
thanks felix From robertwb at gmail.com Thu Jun 27 21:39:48 2013 From: robertwb at gmail.com (Robert Bradshaw) Date: Thu, 27 Jun 2013 12:39:48 -0700 Subject: [Cython] patch for #655 In-Reply-To: <20130627191832.GE3356@bin.d-labs.de> References: <20130619071638.GF15034@bin.d-labs.de> <20130625083406.GM11552@bin.d-labs.de> <51CBE294.50803@behnel.de> <20130627082610.GL3756@bin.d-labs.de> <20130627172556.GD3356@bin.d-labs.de> <20130627191832.GE3356@bin.d-labs.de> Message-ID: On Thu, Jun 27, 2013 at 12:18 PM, Felix Salfelder wrote: > Hi Robert. > > On Thu, Jun 27, 2013 at 11:05:48AM -0700, Robert Bradshaw wrote: >> And you're planning on calling cython manually, cutting distutils out >> of the loop completely? > > If someone tells me, how to fix distutils, (better: does it), i might > change my mind. also, I need VPATH... just something that works. > >> > sage currently uses hardwired paths for all and everything. in >> > particular for header locations. it works right now, but the plan is to >> > support packages installed to the host system. >> >> I don't see how that would change anything. > > well, what would $SAGE_LOCAL/include/something.h be then? and how to > tell without reimplementing gcc -M functionality? OK, currently it resolves this to an absolute path (only because it's ephemeral information that's due to be tossed anyways). >> > That's exactly what i want to do. use cython to translate .pyx->.c[pp] >> > and nothing else. the existing tool (make) needs to know when cython >> > needs to be called, so it has to know the dependency chain. >> >> It also needs to know how cython needs to be called, and then how gcc >> needs to be called (or, would you invoke setup.py when any .pyx file >> changes, in which case you don't need a more granular rules). > > autotools takes care of that. for C/C++ this has been working for ages. > automatically. cython rules need to be added manually (currently, until > somebody tweaks autotools a bit). setup.py is not needed. 
Building Python extensions with makefiles/autotools rather than distutils is less supported, but I suppose you could do that manually. >> In general, I'm +1 on providing a mechanism for exporting dependencies >> for tools to do with whatever they like. I have a couple of issues >> with the current approach: >> >> (1) Doing this on a file-by-file basis is quadratic time (which for >> something like Sage takes unbearably long as you have to actually read >> and parse the entire file to understand its dependencies, and then >> recursively merge them up to the leaves). This could be mitigated (the >> parsing at least) by writing dep files and re-using them, but it's >> still going to be sub-optimal. > > i do not understand. the -MF approach writes out dependencies *during* > compilation and does no extra parsing. it can't be more efficient (can > it, how?). Currently cythonize() allows you to determine quickly upfront what needs to be compiled *without* actually compiling anything. I suppose the idea is that by default you compile everything, and the next time around you have some kind of artifact that lets you understand the dependencies better? But you still need a rule for the initial run, right? It would still help if you posted exactly how you're using it. E.g. here's a set of .pyx files, I run this to generate some make files, which invokes "cython -M ..." > the makefiles maybe are the dep files you are referring to. a second > make run will just read the dependency output and compare timestamps. > >> The exact dependencies may also depend >> on the options passed into cythonize (e.g. the specific include >> directories, some dynamically computed like numpy_get_includes()). > > Do you want to change include paths (reconfigure the whole thing) beween > two runs? in general this wont work without "make clean"... 
if it's > options within a config.h file, it may trigger recompilation of course > > (also you can add any sort of dependency, if you want that) > >> (2) I don't think we need to co-opt gcc's flags for this. > > See e.g. /usr/share/automake-1.11/depcomp, to get an idea on how many > compilers support the -M family. There is at least "hp, "aix", "icc", > "tru64", "gcc". the "sgi" case looks similar. I don't know who started > it. > >> A single >> flag that writes its output to a named file should be sufficient. No >> one expects to be able to pass gcc options to Cython, and Cython can >> be used with more C compilers than just gcc. > > okay, if you feel like it, lets translate -M -MF, -MD, -MP to something > more pythonic. it would be great to use single letter options, as > otherwise the commands are unnecessarily lengthy. My current rules just > set -M -MD -MP (==-MDP). OK, fair point. I suppose we can go with -M[x]. >> (3) The implementation is a bit hackish, with global dictionaries and >> random printing. > > i need a global dictionary, as some files are accessed multiple times. > how can i avoid this? what is "random printing?". > > i'm not a cython expert, but with some hints I might be able to improve > the patch. File a pull request and I'll take another look. - Robert From felix at salfelder.org Thu Jun 27 23:06:02 2013 From: felix at salfelder.org (Felix Salfelder) Date: Thu, 27 Jun 2013 23:06:02 +0200 Subject: [Cython] patch for #655 In-Reply-To: References: <20130619071638.GF15034@bin.d-labs.de> <20130625083406.GM11552@bin.d-labs.de> <51CBE294.50803@behnel.de> <20130627082610.GL3756@bin.d-labs.de> <20130627172556.GD3356@bin.d-labs.de> <20130627191832.GE3356@bin.d-labs.de> Message-ID: <20130627210602.GH3356@bin.d-labs.de> On Thu, Jun 27, 2013 at 12:39:48PM -0700, Robert Bradshaw wrote: > Building Python extensions with makefiles/autotools rather than > distutils is less supported, but I suppose you could do that manually. i've done that. 
I ran into a few peculiarities with "-I", "-w", and __init__.py, but
nothing serious.

The only thing that will affect the user (and which I should mention)
is: make uses file extensions to determine file types. In particular, I
have not found a portable hack that allows the use of .pyx for both .c
and .cpp within the scope of one makefile yet...

> Currently cythonize() allows you to determine quickly upfront what
> needs to be compiled *without* actually compiling anything. I suppose
> the idea is that by default you compile everything, and the next time
> around you have some kind of artifact that lets you understand the
> dependencies better?

Yes, the dependencies for the first run are empty and are
filled/refreshed during the cython run. No upfront determination
required.

> But you still need a rule for the initial run, right?

The rules are the same each time; if the target doesn't exist, it will
be made regardless of how empty the dependency file is.

> It would still help if you posted exactly how you're using it. E.g.
> here's a set of .pyx files, I run this to generate some make files,
> which invokes "cython -M ..."

The basic principle to build file.so out of file.pyx is this:

$ cat Makefile   # handwritten example demo
all: file.so
%.so: %.c
	gcc -shared -fpic $(CFLAGS) $(CPPFLAGS) $< -o $@ -MD -MP
%.c: %.pyx
	cython $< -o $@ -MD -MP
include file.pyx.d file.cP
$ : > file.pyx.d
$ : > file.cP    # these files are initially empty
$ vim file.pyx   # write down some code using "file.pxi" somehow
[..]
$ vim file.pxi   # write down some code using a header somehow
[..]
$ make file.so   # will find the .pyx -> .c -> .so chain
cython file.pyx -o file.c -MD -MP   # creates file.c and file.pyx.d
[..]
gcc -shared -fpic file.c -o file.so -MD -MP   # creates file.{so,cP}
[..]
$ cat file.pyx.d   # the dependency makefile cython has created
file.c: file.pxi
file.pxi:
$ cat file.cP   # the dependency makefile gcc has written
file.so: /path/to/file.h
/path/to/file.h:
$ make   # doesn't do anything after checking timestamps
$ touch file.pyx; make   # calls cython, gcc
[..]
$ touch /path/to/file.h; make   # just calls gcc
[..]
$ touch file.pxi; make   # calls both
[..]

(Let's hope that this is syntactically correct and halfway
comprehensible.)

Eventually, it (i.e. what I've implemented for sage) could look more
like this:

$ ./configure   # creates Makefiles from templates
$ make
$  CYTH file.c
$  CC file.lo
$  LD file.so
$ touch file.h; make
$  CC file.lo
$  LD file.so
...

(automake will call the linker separately, to increase portability or
something)

> File a pull request and I'll take another look.

Within the next few days...

thanks
felix

From stefan_ml at behnel.de  Fri Jun 28 06:39:45 2013
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Fri, 28 Jun 2013 06:39:45 +0200
Subject: [Cython] patch for #655
In-Reply-To: <20130627210602.GH3356@bin.d-labs.de>
References: <20130619071638.GF15034@bin.d-labs.de>
	<20130625083406.GM11552@bin.d-labs.de> <51CBE294.50803@behnel.de>
	<20130627082610.GL3756@bin.d-labs.de>
	<20130627172556.GD3356@bin.d-labs.de>
	<20130627191832.GE3356@bin.d-labs.de>
	<20130627210602.GH3356@bin.d-labs.de>
Message-ID: <51CD1391.3070606@behnel.de>

Felix Salfelder, 27.06.2013 23:06:
> On Thu, Jun 27, 2013 at 12:39:48PM -0700, Robert Bradshaw wrote:
>> Building Python extensions with makefiles/autotools rather than
>> distutils is less supported, but I suppose you could do that manually.
>
> I've done that. I ran into a few peculiarities with "-I", "-w", and
> __init__.py, but nothing serious.
>
> The only thing that will affect the user (and which I should mention)
> is: make uses file extensions to determine file types.
particularly, i > have not found a portable hack that allows the use of .pyx for both .c > and .cpp within the scope of one makefile yet... That's unfortunate, but not too serious either. Larger projects may end up using both, but since it should often work to compile everything in C++ mode, and the right way to do it is distutils anyway, people who really want to go through the hassle of using make will just have to live with it. In the worst case, you could spell out the build targets explicitly (i.e. not as patterns) in the makefile, as part of the dependencies. >> Currently cythonize() allows you to determine quickly upfront what >> needs to be compiled *without* actually compiling anything. I suppose >> the idea is that by default you compile everything, and the next time >> around you have some kind of artifact that lets you understand the >> dependencies better? > > yes, the dependencies for the first run are empty and filled/refreshed > during the cython run. no upfront determination required. > >> But you still need a rule for the initial run, right? > > the rules are the same each time, if the target doesn't exist, it will be > made regardless of how empty the dependency file is. > >> It would still help if you posted exactly how you're using it. E.g. >> here's a set of .pyx files, I run this to generate some make files, >> which invokes "cython -M ..." > > The basic principle to build file.so out of file.pyx is this: > $ cat Makefile # handwritten example demo > all: file.so > %.so: %.cc > gcc -shared -fpic $(CFLAGS) $(CPPFLAGS) $< -o $@ -M -MD -MP > %.c: %.pyx > cython $< -o $@ -MD -MP > include file.pyx.d file.cP > $ : > file.pyx.d > $ : > file.cP # these files are initially empty Hmm, does that mean you either have to create them manually before the first run, and/or you have to manually collect all dependency files for the "include" list? And then keep track of them yourself when you add new files? 
I suppose you could also use a wildcard file search to build the list of include files? > $ vim file.pyx # write down some code using "file.pxi" somehow > [..] > $ vim file.pxi # write down some code using somehow > [..] > $ make file.so # will find the .pyx -> .c -> .so chain > cython file.pyx -o file.c -MD -MP # creates file.c and file.pyx.d > [..] > gcc -shared -fpic file.so -o file.so -M -MD -MP # creates file.{so,cP} > [..] > $ cat file.pyx.d # the dependency makefile cython has created > file.c: file.pxi > file.pxi: > $ cat file.cP # the dependency makefile gcc has written > file.so: /path/to/file.h > /path/to/file.h: > $ make # doesnt do anything after checking timestamps. > $ touch file.pyx; make # calls cython, gcc > [..] > $ touch /path/to/file.h; make # just calls gcc > [..] > $ touch file.pxi; make # calls both > [..] > > (lets hope that this is syntactically correct and half way comprehensible) > > eventually, it (i.e. what i've implemented for sage) could look more like this: > $ ./configure # creates Makefiles from templates > $ make > $ CYTH file.c > $ CC file.lo > $ LD file.so > $ touch file.h; make > $ CC file.lo > $ LD file.so > ... > (automake will call the linker seperately, to increase portability or > something) If you want this feature to go in, I think you should write up documentation for it, so that other people can actually use it as well. Even writing a correct makefile requires a huge amount of digging into distutils. Here are examples for portable makefiles: https://github.com/cython/cython/tree/master/Demos/embed It would be nice to have a similar demo setup for a complete make build. The textual documentation should go here: http://docs.cython.org/src/reference/compilation.html (see the .rst files in docs/src/) Stefan