From jaime.frio at gmail.com Thu Aug 1 14:54:19 2013
From: jaime.frio at gmail.com (Jaime Fernández del Río)
Date: Thu, 1 Aug 2013 11:54:19 -0700
Subject: Blog updated
In-Reply-To: References: Message-ID:

Hi Chintak,

You have processed the StackOverflow answers impressively well: great blog post!

Just a quick note on performance of np.einsum. I have found that it performs much better when handed only two parameters. So you may want to benchmark whether applying the mask to the template before the call to np.einsum makes your code run faster. I don't think there is a way out of this 3-parameter call:

ssd += np.einsum('ijkl, ijkl, kl->ij', y, y, valid_mask)

But there is a good chance that:

ssd = np.einsum('ijkl, kl, kl->ij', y, template, valid_mask, dtype=np.float)

runs noticeably faster when written as:

ssd = np.einsum('ijkl, kl->ij', y, template*valid_mask, dtype=np.float)

A quick test on my system, with a 1000x1000 image and a 9x9 template and mask, all of floats, shows it's 25% faster. And this is where about half of your processing time is being spent, so that little change would give you a 10% performance boost for free in this particular case. You may want to test a wider variety of parameter sizes, to see if the improvement holds.

The third call to np.einsum has a negligible impact on overall performance, but if you store the value of template*valid_mask, it also runs faster with a two-parameter call, i.e. as:

ssd += np.einsum('ij, ij', template, cached_template_times_valid_mask)

Jaime

--
(\__/)
( O.o)
( > <)
This is Conejo. Copy Conejo into your signature and help him in his plans for world domination.
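Jaime's two- vs. three-operand suggestion can be checked with a small sketch. The shapes below are hypothetical stand-ins (not taken from the blog code): `y` plays the role of the 4-D windowed view, `template` and `valid_mask` the 2-D arrays. Folding the mask into the template before the call leaves the result unchanged, so any timing difference is purely einsum overhead:

```python
import numpy as np

# Hypothetical stand-in shapes: y is the 4-D (i, j, k, l) windowed view,
# template and valid_mask are the 2-D (k, l) arrays from the thread.
rng = np.random.default_rng(0)
y = rng.random((100, 100, 9, 9))
template = rng.random((9, 9))
valid_mask = (rng.random((9, 9)) > 0.2).astype(float)

# Three-operand form, as in the original code:
ssd3 = np.einsum('ijkl,kl,kl->ij', y, template, valid_mask)

# Two-operand form, with the mask applied to the template up front:
ssd2 = np.einsum('ijkl,kl->ij', y, template * valid_mask)

# Both contractions compute the same sums, so timing them (e.g. with
# timeit) isolates the cost of the extra operand.
assert np.allclose(ssd3, ssd2)
```

Note that the `dtype=np.float` keyword in the quoted snippets refers to the builtin float; that alias was later removed from NumPy, so plain `float` or `np.float64` is the modern spelling.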
From jschoenberger at demuc.de Thu Aug 1 11:35:29 2013
From: jschoenberger at demuc.de (Johannes Schönberger)
Date: Thu, 1 Aug 2013 17:35:29 +0200
Subject: PEP8 Update
Message-ID: <97332FDE-25DB-4362-B18C-B0EAB878295D@demuc.de>

For anyone interested: http://hg.python.org/peps/rev/fb24c80e9afb

Johannes Schönberger

From chintaksheth at gmail.com Thu Aug 1 08:54:07 2013
From: chintaksheth at gmail.com (Chintak Sheth)
Date: Thu, 1 Aug 2013 18:24:07 +0530
Subject: Blog updated
In-Reply-To: References: Message-ID:

Hi Stefan

On Wed, Jul 31, 2013 at 1:53 AM, Stéfan van der Walt wrote:
> You can often write einsum as some operation + a reduction over one or
> more axes. Can this operation be expressed in both ways? If so,
> which do you find to be more readable and understandable?

Yes, we can: multiply the view and the template element-wise, then reduce over the 2nd and 3rd axes. However, a plain multiplication would produce an actual 4D array, since the result could no longer be stored as a view. So we would really lose the memory efficiency of the as_strided view. In fact, this was the initial answer I got on StackOverflow, which was later corrected to the current one. I'll however add a two-line comment briefly explaining what einsum does here.

Chintak

From chintaksheth at gmail.com Sun Aug 4 01:55:16 2013
From: chintaksheth at gmail.com (Chintak Sheth)
Date: Sun, 4 Aug 2013 11:25:16 +0530
Subject: Blog updated
In-Reply-To: References: Message-ID:

Aah, great to find you on the scikit-image mailing list! I can certainly learn a lot from you. =)

Coming to the aspect you pointed out, I was thinking we could perhaps get rid of `valid_template` altogether from this evaluation and introduce another einsum product of `ssd` with `valid_mask`? For example, what we are doing is `c*a**2 + c*b**2 - 2*c*a*b`, which really is equivalent to `c*(a-b)**2`.
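The algebraic identity Chintak refers to is easy to verify numerically; in the sketch below, `a`, `b`, and `c` are arbitrary arrays standing in for the window values, the template, and the validity mask:

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.random((5, 5))  # stands in for the window values
b = rng.random((5, 5))  # stands in for the template
c = rng.random((5, 5))  # stands in for the validity mask

# Expanded form, as computed by the three separate einsum terms:
lhs = c * a**2 + c * b**2 - 2 * c * a * b

# Factored form, a single masked squared difference:
rhs = c * (a - b) ** 2

assert np.allclose(lhs, rhs)
```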
However, this does add another call to einsum. I had performed tests with about 500 pixels, which amount to 1500 einsum calls, and the bottleneck turns out to be the einsum calls. (PR) This is why I refrained from adding another einsum call, since I'd then have 2000 calls. However, your tests indicate that calls with 2 parameters do run considerably faster, so maybe I'll go ahead and make this change? What are your thoughts on this?

Chintak

From jschoenberger at demuc.de Mon Aug 5 01:49:15 2013
From: jschoenberger at demuc.de (Johannes Schönberger)
Date: Mon, 5 Aug 2013 07:49:15 +0200
Subject: cython -a
In-Reply-To: References: Message-ID:

Hi Juan,

> 1. about optimising Cython code in skimage: how do I compile with -a?
> Because of local imports and so on, calling it on the source file directly
> is not an option, and I haven't figured out an obvious place to make it
> spit out the html otherwise...

cython -a does not depend on any imports. At least on my system it works for all files, regardless of imports...?

> 2. about running tests: how do I easily figure out which tests were
> skipped when running "make test" and why they were skipped?

Since make test just invokes nosetests, you could simply use nosetests --no-skip.

From emmanuelle.gouillart at nsup.org Mon Aug 5 02:36:42 2013
From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart)
Date: Mon, 5 Aug 2013 08:36:42 +0200
Subject: random walker segmentation
In-Reply-To: References: <20130730205016.GA27240@phare.normalesup.org>
Message-ID: <20130805063642.GA2930@phare.normalesup.org>

Hi Josh,

thanks for your answer! I'll probably work on the new implementation during the Euroscipy sprint; we'll see whether Cython can decrease the small performance hit :-).

Cheers,
Emmanuelle

On Wed, Jul 31, 2013 at 03:31:19PM -0700, Josh Warner wrote:
> Hi Emmanuelle,
> I think this would be a good step forward, and would be happy to help!
> My guess is that this approach could be sped up further in Cython with
> code we control, making it easier to maintain and exposing similar
> performance to all users.
> The memory issue is a real one, particularly for larger (multichannel)
> datasets. That alone probably justifies the addition, even at the expense
> of a small performance hit. But we'll see how small we can make the hit ;)
> Also, the iterative solution framework might be useful for other algorithms.
> Josh
>
> On Tuesday, July 30, 2013 3:50:16 PM UTC-5, Emmanuelle Gouillart wrote:
> Hello,
> a while ago, I contributed to skimage an implementation of the random
> walker segmentation algorithm (which has been improved and extended by
> many others since then). This algorithm computes a multilabel
> segmentation using seeds (already labeled pixels), by determining for an
> unlabeled pixel the probability that a seed diffuses to the pixel (with
> an anisotropic diffusion coefficient depending on the gradient between
> neighboring pixels).
> In the current implementation in skimage, the computation of the
> probability map is done by inverting a large sparse linear system
> (involving the Laplacian of the graph of pixels). Different methods can
> be chosen to solve the linear system: a brute-force inversion only works
> for tiny images; a conjugate gradient method works well but is quite
> slow. If the package pyamg is available, a multigrid method is used to
> compute a preconditioner, which speeds up the algorithm -- but it
> requires pyamg. Also, the memory cost of the algorithm is significant
> (linear, I think, though I haven't yet taken the time to use a memory
> profiler; I should).
> Recently, a colleague brought to my attention that the linear system is
> just a set of fixed-point equations that can be solved iteratively.
> Indeed, the solution verifies that the probability of a pixel is the
> weighted sum (with weights on edges that are a decreasing function of
> gradients) of the probabilities of its neighbors. I have written a quick
> and dirty implementation (only for 2-D grayscale images and for 2
> labels) of this "local" version, available at
> https://github.com/emmanuelle/scikits.image/blob/local_random_walker/skimage/segmentation/local_random_walker.py
> It turns out that this implementation is slower than the conjugate
> gradient with multigrid acceleration (typically two to three times
> slower), but it has several advantages. First, it can be as fast as the
> "simple" conjugate gradient (without pyamg's multigrid acceleration),
> which is the mode that most users will use (we don't expect users to
> install pyamg when they are just trying out algorithms). Second, its
> memory cost is lower (for example, the weight of an edge is stored only
> once, while it appears twice in the Laplacian matrix). Finally, because
> the operations only involve neighboring pixels, further speed
> optimization may be possible (using Cython... or maybe a GPU
> implementation, even if we're not that far yet with skimage).
> So, should we replace the linear algebra implementation with this
> simpler local and iterative implementation? I'd be interested in knowing
> your opinion.
> Cheers,
> Emmanuelle

From guillaume at mitotic-machine.org Mon Aug 5 05:51:46 2013
From: guillaume at mitotic-machine.org (Guillaume Gay)
Date: Mon, 05 Aug 2013 11:51:46 +0200
Subject: Fail to import peak_local_max
Message-ID: <51FF75B2.6080404@mitotic-machine.org>

Hi everyone,

I encountered a bug when importing `peak_local_max` (with the latest GitHub version). The traceback goes up to feature.template.
Running `test_template.py` gives this:

python /usr/local/src/scikit-image/skimage/feature/tests/test_template.py
Traceback (most recent call last):
  File "/usr/local/src/scikit-image/skimage/feature/tests/test_template.py", line 5, in <module>
    from skimage.feature import match_template, peak_local_max
  File "/usr/local/lib/python2.7/dist-packages/scikit_image-0.9dev-py2.7-linux-i686.egg/skimage/feature/__init__.py", line 9, in <module>
    from .template import match_template
  File "/usr/local/lib/python2.7/dist-packages/scikit_image-0.9dev-py2.7-linux-i686.egg/skimage/feature/template.py", line 4, in <module>
    from . import _template
  File "_template.pyx", line 1, in init skimage.feature._template (skimage/feature/_template.c:4042)
TypeError: C function skimage._shared.transform.integrate has wrong signature (expected float (PyArrayObject *, Py_ssize_t, Py_ssize_t, Py_ssize_t, Py_ssize_t), got float (__Pyx_memviewslice, Py_ssize_t, Py_ssize_t, Py_ssize_t, Py_ssize_t))

Any hint how to fix this?

Cheers,

Guillaume

From guillaume at mitotic-machine.org Mon Aug 5 06:22:25 2013
From: guillaume at mitotic-machine.org (Guillaume Gay)
Date: Mon, 05 Aug 2013 12:22:25 +0200
Subject: Fail to import peak_local_max
In-Reply-To: References: <51FF75B2.6080404@mitotic-machine.org>
Message-ID: <51FF7CE1.6050409@mitotic-machine.org>

That did the trick, thanks Juan!

G.

On 05/08/2013 12:14, Juan Nunez-Iglesias wrote:
> This sort of stuff keeps popping up (see latest issues) because a lot
> of functions have moved to use memory views. If you do a `make clean`
> and then try again, it should work.
>
> On Mon, Aug 5, 2013 at 7:51 PM, Guillaume Gay wrote:
>
> Hi everyone,
>
> I encountered a bug when importing `peak_local_max` (with the latest
> GitHub version). The traceback goes up to feature.template.
> Running `test_template.py` gives this:
>
> python /usr/local/src/scikit-image/skimage/feature/tests/test_template.py
> Traceback (most recent call last):
>   File "/usr/local/src/scikit-image/skimage/feature/tests/test_template.py", line 5, in <module>
>     from skimage.feature import match_template, peak_local_max
>   File "/usr/local/lib/python2.7/dist-packages/scikit_image-0.9dev-py2.7-linux-i686.egg/skimage/feature/__init__.py", line 9, in <module>
>     from .template import match_template
>   File "/usr/local/lib/python2.7/dist-packages/scikit_image-0.9dev-py2.7-linux-i686.egg/skimage/feature/template.py", line 4, in <module>
>     from . import _template
>   File "_template.pyx", line 1, in init skimage.feature._template (skimage/feature/_template.c:4042)
> TypeError: C function skimage._shared.transform.integrate has wrong signature (expected float (PyArrayObject *, Py_ssize_t, Py_ssize_t, Py_ssize_t, Py_ssize_t), got float (__Pyx_memviewslice, Py_ssize_t, Py_ssize_t, Py_ssize_t, Py_ssize_t))
>
> Any hint how to fix this?
>
> Cheers,
>
> Guillaume
>
> --
> You received this message because you are subscribed to the Google
> Groups "scikit-image" group.
> To unsubscribe from this group and stop receiving emails from it,
> send an email to scikit-image+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out.

From jni.soma at gmail.com Mon Aug 5 00:40:34 2013
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Mon, 5 Aug 2013 14:40:34 +1000
Subject: cython -a
Message-ID:

Hey skimagers, 2 quick questions:

1. about optimising Cython code in skimage: how do I compile with -a?
Because of local imports and so on, calling it on the source file directly is not an option, and I haven't figured out an obvious place to make it spit out the html otherwise...

2. about running tests: how do I easily figure out which tests were skipped when running "make test" and why they were skipped?

I'll also take this chance to thank Stefan for pushing me to modify SLIC to use memory views. I dragged my feet, but in the end the code is way better and it's making my life much easier with the spacing modifications I'm working on now...! Don't you hate the unreasonable effectiveness of peer review? ;)

Thanks,

Juan.

From jni.soma at gmail.com Mon Aug 5 02:17:22 2013
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Mon, 5 Aug 2013 16:17:22 +1000
Subject: cython -a
In-Reply-To: References: Message-ID:

On Mon, Aug 5, 2013 at 3:49 PM, Johannes Schönberger wrote:
> cython -a does not depend on any imports. At least on my system it works
> for all files, regardless of imports...?

Huh! I could have sworn that failed on my _slic.pyx before! Maybe an older Cython version? Anyway, worked fine, thanks! ;)

> Since make test just invokes nosetests, you could simply use nosetests
> --no-skip.

Also worked! I wish all my problems were so easy! ;) Thanks for the help!

From stefan at sun.ac.za Mon Aug 5 12:20:13 2013
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Mon, 5 Aug 2013 18:20:13 +0200
Subject: random walker segmentation
In-Reply-To: <20130805063642.GA2930@phare.normalesup.org>
References: <20130730205016.GA27240@phare.normalesup.org> <20130805063642.GA2930@phare.normalesup.org>
Message-ID:

On Mon, Aug 5, 2013 at 8:36 AM, Emmanuelle Gouillart wrote:
> thanks for your answer!
> I'll probably work on the new implementation during the Euroscipy
> sprint; we'll see whether Cython can decrease the small performance
> hit :-).

Should we file an issue for this, so that we can track the technical aspects of the discussion there until a PR can be made?

Stéfan

From jni.soma at gmail.com Mon Aug 5 06:14:56 2013
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Mon, 5 Aug 2013 20:14:56 +1000
Subject: Fail to import peak_local_max
In-Reply-To: <51FF75B2.6080404@mitotic-machine.org>
References: <51FF75B2.6080404@mitotic-machine.org>
Message-ID:

This sort of stuff keeps popping up (see latest issues) because a lot of functions have moved to use memory views. If you do a `make clean` and then try again, it should work.

On Mon, Aug 5, 2013 at 7:51 PM, Guillaume Gay wrote:
> Hi everyone,
>
> I encountered a bug when importing peak_local_max (with the latest github
> version). The traceback goes up to feature.template. Running
> test_template.py gives this:
>
> python /usr/local/src/scikit-image/skimage/feature/tests/test_template.py
> Traceback (most recent call last):
>   File "/usr/local/src/scikit-image/skimage/feature/tests/test_template.py", line 5, in <module>
>     from skimage.feature import match_template, peak_local_max
>   File "/usr/local/lib/python2.7/dist-packages/scikit_image-0.9dev-py2.7-linux-i686.egg/skimage/feature/__init__.py", line 9, in <module>
>     from .template import match_template
>   File "/usr/local/lib/python2.7/dist-packages/scikit_image-0.9dev-py2.7-linux-i686.egg/skimage/feature/template.py", line 4, in <module>
>     from . import _template
>   File "_template.pyx", line 1, in init skimage.feature._template (skimage/feature/_template.c:4042)
> TypeError: C function skimage._shared.transform.integrate has wrong signature (expected float (PyArrayObject *, Py_ssize_t, Py_ssize_t, Py_ssize_t, Py_ssize_t), got float (__Pyx_memviewslice, Py_ssize_t, Py_ssize_t, Py_ssize_t, Py_ssize_t))
>
> Any hint how to fix this?
>
> Cheers,
>
> Guillaume

From masahi129 at gmail.com Tue Aug 6 05:36:32 2013
From: masahi129 at gmail.com (masa)
Date: Tue, 6 Aug 2013 02:36:32 -0700 (PDT)
Subject: integral image for each depth in a three dimensional array
Message-ID: <8c9c3abe-2b82-4dde-9e53-e5f273367973@googlegroups.com>

Hi all,

I want to calculate integral images for each depth in a three-dimensional array. My code is something like this:

ret = np.empty((height, width, depth))
int_imgs = [integral_image(image[:,:,i]) for i in range(depth)]
for i in range(depth):
    ret[:,:,i] = int_imgs[i]

Is there a better way to do this?

Thanks,
masa

From masahi129 at gmail.com Tue Aug 6 06:51:42 2013
From: masahi129 at gmail.com (masa)
Date: Tue, 6 Aug 2013 03:51:42 -0700 (PDT)
Subject: integral image for each depth in a three dimensional array
In-Reply-To: <20130806094305.GA6016@phare.normalesup.org>
References: <8c9c3abe-2b82-4dde-9e53-e5f273367973@googlegroups.com> <20130806094305.GA6016@phare.normalesup.org>
Message-ID: <7de61a14-23c5-4c2b-91e8-b8221c3d389f@googlegroups.com>

Oh, I didn't notice that. Thanks!

masa
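The per-depth loop from the question can be sketched end-to-end. The stand-in below assumes integral_image is just a cumulative sum over the first two axes (Emmanuelle quotes exactly that line from the source further down the thread), which is why the 3-D call discussed in the replies gives the same result plane by plane:

```python
import numpy as np

def integral_image_2d(x):
    # Stand-in for skimage.transform.integral_image, which (as quoted
    # later in the thread) is just x.cumsum(1).cumsum(0).
    return x.cumsum(1).cumsum(0)

image = np.arange(27).reshape((3, 3, 3))

# Per-depth loop, as in the question:
ret = np.empty_like(image)
for i in range(image.shape[2]):
    ret[:, :, i] = integral_image_2d(image[:, :, i])

# Since only axes 0 and 1 are cumulated, passing the 3-D array
# directly gives the same result, plane by plane:
assert np.array_equal(ret, integral_image_2d(image))
```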
From emmanuelle.gouillart at nsup.org Tue Aug 6 05:43:05 2013
From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart)
Date: Tue, 6 Aug 2013 11:43:05 +0200
Subject: integral image for each depth in a three dimensional array
In-Reply-To: <8c9c3abe-2b82-4dde-9e53-e5f273367973@googlegroups.com>
References: <8c9c3abe-2b82-4dde-9e53-e5f273367973@googlegroups.com>
Message-ID: <20130806094305.GA6016@phare.normalesup.org>

Hi Masa,

actually you can pass a 3-D array to integral_image:

>>> from skimage import transform
>>> a = np.arange(27).reshape((3, 3, 3))
>>> a
array([[[ 0,  1,  2],
        [ 3,  4,  5],
        [ 6,  7,  8]],

       [[ 9, 10, 11],
        [12, 13, 14],
        [15, 16, 17]],

       [[18, 19, 20],
        [21, 22, 23],
        [24, 25, 26]]])
>>> transform.integral_image(a)
array([[[  0,   1,   2],
        [  3,   5,   7],
        [  9,  12,  15]],

       [[  9,  11,  13],
        [ 24,  28,  32],
        [ 45,  51,  57]],

       [[ 27,  30,  33],
        [ 63,  69,  75],
        [108, 117, 126]]])

Would this do the trick?

Cheers,
Emmanuelle

On Tue, Aug 06, 2013 at 02:36:32AM -0700, masa wrote:
> Hi all,
> I want to calculate integral images for each depth in a three dimensional
> array. My code is something like this:
>
> ret = np.empty(height, width, depth)
> int_imgs = [integral_image(image[:,:,i]) for i in range(depth)]
> for i in range(depth):
>     ret[:,:,i] = int_imgs[i]
>
> Is there better way to do this?
> Thanks,
> masa

From jschoenberger at demuc.de Tue Aug 6 05:44:41 2013
From: jschoenberger at demuc.de (Johannes Schönberger)
Date: Tue, 6 Aug 2013 11:44:41 +0200
Subject: integral image for each depth in a three dimensional array
In-Reply-To: <8c9c3abe-2b82-4dde-9e53-e5f273367973@googlegroups.com>
References: <8c9c3abe-2b82-4dde-9e53-e5f273367973@googlegroups.com>
Message-ID: <3D5A1163-A50D-4B8A-B9CB-C86AC70DFCAE@demuc.de>

Hi,

That's the way to do it, but I'd directly save the result in the output array:

> ret = np.empty_like(image)
> for i in range(depth):
>     ret[:,:,i] = integral_image(image[:,:,i])

Johannes Schönberger

On 06.08.2013 at 11:36, masa wrote:
> Hi all,
>
> I want to calculate integral images for each depth in a three dimensional array.
> My code is something like this:
>
> ret = np.empty(height, width, depth)
> int_imgs = [integral_image(image[:,:,i]) for i in range(depth)]
> for i in range(depth):
>     ret[:,:,i] = int_imgs[i]
>
> Is there better way to do this?
>
> Thanks,
> masa

From jschoenberger at demuc.de Tue Aug 6 05:45:31 2013
From: jschoenberger at demuc.de (Johannes Schönberger)
Date: Tue, 6 Aug 2013 11:45:31 +0200
Subject: integral image for each depth in a three dimensional array
In-Reply-To: <20130806094305.GA6016@phare.normalesup.org>
References: <8c9c3abe-2b82-4dde-9e53-e5f273367973@googlegroups.com> <20130806094305.GA6016@phare.normalesup.org>
Message-ID: <4BD8FD04-250D-4FB7-9544-2C6DAF8E0A87@demuc.de>

@Emmanuelle: didn't know about that, thanks for pointing that out!
Johannes Schönberger

On 06.08.2013 at 11:43, Emmanuelle Gouillart wrote:
> Hi Masa,
>
> actually you can pass a 3-D array to integral_image:
>
>>>> from skimage import transform
>>>> a = np.arange(27).reshape((3, 3, 3))
>>>> a
> array([[[ 0,  1,  2],
>         [ 3,  4,  5],
>         [ 6,  7,  8]],
>
>        [[ 9, 10, 11],
>         [12, 13, 14],
>         [15, 16, 17]],
>
>        [[18, 19, 20],
>         [21, 22, 23],
>         [24, 25, 26]]])
>>>> transform.integral_image(a)
> array([[[  0,   1,   2],
>         [  3,   5,   7],
>         [  9,  12,  15]],
>
>        [[  9,  11,  13],
>         [ 24,  28,  32],
>         [ 45,  51,  57]],
>
>        [[ 27,  30,  33],
>         [ 63,  69,  75],
>         [108, 117, 126]]])
>
> Would this do the trick?
> Cheers,
> Emmanuelle
>
> On Tue, Aug 06, 2013 at 02:36:32AM -0700, masa wrote:
>> Hi all,
>> I want to calculate integral images for each depth in a three dimensional
>> array. My code is something like this:
>>
>> ret = np.empty(height, width, depth)
>> int_imgs = [integral_image(image[:,:,i]) for i in range(depth)]
>> for i in range(depth):
>>     ret[:,:,i] = int_imgs[i]
>>
>> Is there better way to do this?
> > From emmanuelle.gouillart at nsup.org Tue Aug 6 05:48:53 2013 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Tue, 6 Aug 2013 11:48:53 +0200 Subject: integral image for each depth in a three dimensional array In-Reply-To: <4BD8FD04-250D-4FB7-9544-2C6DAF8E0A87@demuc.de> References: <8c9c3abe-2b82-4dde-9e53-e5f273367973@googlegroups.com> <20130806094305.GA6016@phare.normalesup.org> <4BD8FD04-250D-4FB7-9544-2C6DAF8E0A87@demuc.de> Message-ID: <20130806094853.GA22199@phare.normalesup.org> On Tue, Aug 06, 2013 at 11:45:31AM +0200, Johannes Sch??????nberger wrote: > @Emmanuelle: didn't know about that, thanks for pointing that out! > Johannes Sch??????nberger Didn't know either, but I had a look at the code: return x.cumsum(1).cumsum(0) so additional dimensions are "transparent" Emma > Am 06.08.2013 um 11:43 schrieb Emmanuelle Gouillart : > > Hi Masa, > > actually you can pass a 3-D array to integral_image : > >>>> from skimage import transform > >>>> a = np.arange(27).reshape((3, 3, 3)) > >>>> a > > array([[[ 0, 1, 2], > > [ 3, 4, 5], > > [ 6, 7, 8]], > > [[ 9, 10, 11], > > [12, 13, 14], > > [15, 16, 17]], > > [[18, 19, 20], > > [21, 22, 23], > > [24, 25, 26]]]) > >>>> transform.integral_image(a) > > array([[[ 0, 1, 2], > > [ 3, 5, 7], > > [ 9, 12, 15]], > > [[ 9, 11, 13], > > [ 24, 28, 32], > > [ 45, 51, 57]], > > [[ 27, 30, 33], > > [ 63, 69, 75], > > [108, 117, 126]]]) > > Would this do the trick? > > Cheers, > > Emmanuelle > > On Tue, Aug 06, 2013 at 02:36:32AM -0700, masa wrote: > >> Hi all, > >> I want to calculate integral images for each depth in a three dimensional > >> array. > >> My code is something like this: > >> ret = np.empty(height, width, depth) > >> int_imgs = [integral_image(image[:,:,i]) for i in range(depth)] > >> for i in range(depth): > >> ret[:,:,i] = int_imgs[i] > >> Is there better way to do this? 
>> Thanks,
>> masa

From jaime.frio at gmail.com Tue Aug 13 09:02:09 2013
From: jaime.frio at gmail.com (Jaime Fernández del Río)
Date: Tue, 13 Aug 2013 06:02:09 -0700
Subject: nd image neighbor-finding
In-Reply-To: References: Message-ID:

Answers from a heavy numpy user with little experience with scikits-image, so they may not be consistent with general API rules:

1. Either accept both, with a keyword argument, e.g. flat_idx=True or ravel_idx=True, or have two functions doing the exact same thing, one for each type of index. I prefer the former to the latter.

2. Whatever the input, accept anything array-like. If you are dealing with multi-indices, either require that the actual dimensions are along the last axis, or have an axis keyword argument that specifies it. It may not be necessary, but it may make users' lives easier if you do not force a specific shape like (r, d) for r indices with d dimensions, but something like (r, s, d) is also accepted, producing a return of, e.g., shape (n, r, s, d), where n is the number of neighbors.

3. Anything returned that is not an array is going to be a big performance hit... You could adopt a modified version of what scipy.ndimage does for similar situations: have a mode keyword argument, e.g.
convolve accepts {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}. A 'constant' mode could be used to mark (by default) out-of-bounds indices with -1, which is equivalent to returning a validity mask, especially if you roll an example using np.any into the docstring; 'mirror' or 'reflect' are good options if you need a list of neighbors but don't mind repetitions; and I very often work with images that are tiled, so 'wrap' would be a good thing also.

Jaime

On Mon, Aug 12, 2013 at 10:04 PM, Juan Nunez-Iglesias wrote:
> Hi everyone,
>
> I'm trying to design an interface for an nD "get_neighbors(idxs, ar,
> conn)" function: given one or more indices into an ndarray `ar`, and a
> connectivity (integer in {1, ..., ar.ndim}), return the indices of its
> neighbors. There are multiple design decisions to be made:
>
> 1. Work with linearized or regular indices? In the past I've used
> linearized indices because they are much easier to treat generally in an
> nD setting. However, most users probably don't want to deal with
> converting to and from linear indices (though np has a function to
> convert to and from linear indices). Perhaps more relevant though, linear
> indices get very, very tricky once you stop dealing with contiguous
> arrays. In fact, I'm not quite sure I'm up to the task. ;)
>
> 2. If we want to work with regular indices, what should be the input
> format? There are lots of options: a list of tuples? a tuple of np.arrays
> of ints, similar to what numpy expects for __getitem__? a 2D array of
> ints, similar to the "coords" format used throughout ndimage?
>
> 3. What should the return format be? The problem is that edge pixels have
> fewer neighbors than non-edge ones. So, anything like a 2D ndarray is
> pretty much out of the question, *unless* we also return a matching array
> called something like `valid` that shows directions where the
> neighbor-finding has hit an edge.
> This is of course all sidestepped if we only allow one index to be
> passed, but that forgoes a lot of possible saved efficiency by not making
> lots of python function calls.
>
> In the past, I've sidestepped the whole issue of the boundaries by
> padding my array and never asking for the neighbors of something in the
> "pad" area, but I don't think that's viable for a public-facing
> function. =)
>
> I'll be very interested to hear what everyone has to say about this...!
> Thanks!
>
> Juan.

--
(\__/)
( O.o)
( > <)
This is Conejo. Copy Conejo into your signature and help him in his plans for world domination.

From jni.soma at gmail.com Tue Aug 13 01:04:51 2013
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Tue, 13 Aug 2013 15:04:51 +1000
Subject: nd image neighbor-finding
Message-ID:

Hi everyone,

I'm trying to design an interface for an nD "get_neighbors(idxs, ar, conn)" function: given one or more indices into an ndarray `ar`, and a connectivity (integer in {1, ..., ar.ndim}), return the indices of its neighbors. There are multiple design decisions to be made:

1. Work with linearized or regular indices? In the past I've used linearized indices because they are much easier to treat generally in an nD setting. However, most users probably don't want to deal with converting to and from linear indices (though np has a function to convert to and from linear indices). Perhaps more relevant though, linear indices get very, very tricky once you stop dealing with contiguous arrays. In fact, I'm not quite sure I'm up to the task. ;)

2.
If we want to work with regular indices, what should be the input format? There are lots of options: a list of tuples? a tuple of np.arrays of ints, similar to what numpy expects for __getitem__? a 2D array of ints, similar to the "coords" format used throughout ndimage?

3. What should the return format be? The problem is that edge pixels have fewer neighbors than non-edge ones. So, anything like a 2D ndarray is pretty much out of the question, *unless* we also return a matching array called something like `valid` that shows directions where the neighbor-finding has hit an edge. This is of course all sidestepped if we only allow one index to be passed, but that forgoes a lot of possible saved efficiency by not making lots of python function calls.

In the past, I've sidestepped the whole issue of the boundaries by padding my array and never asking for the neighbors of something in the "pad" area, but I don't think that's viable for a public-facing function. =)

I'll be very interested to hear what everyone has to say about this...! Thanks!

Juan.

From silvertrumpet999 at gmail.com Tue Aug 13 18:59:48 2013
From: silvertrumpet999 at gmail.com (Josh Warner)
Date: Tue, 13 Aug 2013 15:59:48 -0700 (PDT)
Subject: nd image neighbor-finding
In-Reply-To: References: Message-ID: <76d3701a-7177-4ea8-82aa-56a1b7a1e742@googlegroups.com>

I'm +1 on all of Jaime's suggestions.

Also, for #3 there is no need to reinvent the wheel - I'm pretty sure every one of those padding options, and more, are supported by skimage.util.pad. Depending on how you implement the function, you may need to run np.ascontiguousarray on the output of skimage.util.pad.
Josh On Tuesday, August 13, 2013 12:04:51 AM UTC-5, Juan Nunez-Iglesias wrote: Hi everyone, > > I'm trying to design an interface for an nD "get_neighbors(idxs, ar, > conn)" function: given one or more indices into an ndarray `ar`, and a > connectivity (integer in {1, ..., ar.ndim}), return the indices of its > neighbors. There's multiple design decisions to be made: > > 1. Work with linearized or regular indices? In the past I've used > linearized indices because they are much easier to treat generally in an nd > setting. However, most users probably don't want to deal with converting to > and from linear indices (though np has a function to convert to and from > linear indices). Perhaps more relevant though, linear indices get very very > tricky once you stop dealing with contiguous arrays. In fact, I'm not quite > sure I'm up to the task. ;) > > 2. If we want to work with regular indices, what should be the input > format? There's lots of options: a list of tuples? a tuple of np.arrays of > ints, similar to what numpy expects for __getitem__? a 2D array of ints, > similar to the "coords" format used throughout ndimage? > > 3. What should the return format be? The problem is that edge pixels have > fewer neighbors than non-edge ones. So, anything like a 2D ndarray is > pretty much out of the question, *unless* we also return a matching array > called something like `valid` that shows directions where the > neighbor-finding has hit an edge. This is of course all sidestepped if we > only allow one index to be passed, but that forgoes a lot of possible saved > efficiency by not making lots of python function calls. > > In the past, I've sidestepped the whole issue of the boundaries by padding > my array and never asking for the neighbors of something in the "pad" area, > but I don't think that's viable for a public-facing function. =) > > I'll be very interested to hear what everyone has to say about this...! > Thanks! > > Juan. 
> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Tue Aug 13 21:44:32 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Wed, 14 Aug 2013 11:44:32 +1000 Subject: nd image neighbor-finding In-Reply-To: <76d3701a-7177-4ea8-82aa-56a1b7a1e742@googlegroups.com> References: <76d3701a-7177-4ea8-82aa-56a1b7a1e742@googlegroups.com> Message-ID: Thanks! I never would have thought of 2 without asking here. ;) btw, my concern with using -1 as flag is that it's a valid indexing input. Probably not something most people would expect the function to return, but if they are careless with the output, they would get weird results. @Josh: I don't think Jaime's suggestions involve actually padding the image, but rather returning indices corresponding to either the reflected elements, or flag indices, or the wrapped elements. e.g. >>> get_neighbor_idxs((5, 5), (0, 0), connectivity=1, mode='wrap') np.array([[4, 0], [0, 4], [1, 0], [0, 1]], dtype=int) Unless I'm missing a clever use of skimage.util.pad that would help me get this? Juan. On Wed, Aug 14, 2013 at 8:59 AM, Josh Warner wrote: > I?m +1 on all of Jaime?s suggestions. > > Also, for #3 there is no need to reinvent the wheel - I'm pretty sure > every one of those padding options, and more, are supported by > skimage.util.pad. Depending on how you implement the function, you may > need to run np.ascontiguousarray on the output of skimage.util.pad. > > Josh > > On Tuesday, August 13, 2013 12:04:51 AM UTC-5, Juan Nunez-Iglesias wrote: > > Hi everyone, >> >> I'm trying to design an interface for an nD "get_neighbors(idxs, ar, >> conn)" function: given one or more indices into an ndarray `ar`, and a >> connectivity (integer in {1, ..., ar.ndim}), return the indices of its >> neighbors. There's multiple design decisions to be made: >> >> 1. Work with linearized or regular indices? 
In the past I've used >> linearized indices because they are much easier to treat generally in an nd >> setting. However, most users probably don't want to deal with converting to >> and from linear indices (though np has a function to convert to and from >> linear indices). Perhaps more relevant though, linear indices get very very >> tricky once you stop dealing with contiguous arrays. In fact, I'm not quite >> sure I'm up to the task. ;) >> >> 2. If we want to work with regular indices, what should be the input >> format? There's lots of options: a list of tuples? a tuple of np.arrays of >> ints, similar to what numpy expects for __getitem__? a 2D array of ints, >> similar to the "coords" format used throughout ndimage? >> >> 3. What should the return format be? The problem is that edge pixels have >> fewer neighbors than non-edge ones. So, anything like a 2D ndarray is >> pretty much out of the question, *unless* we also return a matching array >> called something like `valid` that shows directions where the >> neighbor-finding has hit an edge. This is of course all sidestepped if we >> only allow one index to be passed, but that forgoes a lot of possible saved >> efficiency by not making lots of python function calls. >> >> In the past, I've sidestepped the whole issue of the boundaries by >> padding my array and never asking for the neighbors of something in the >> "pad" area, but I don't think that's viable for a public-facing function. =) >> >> I'll be very interested to hear what everyone has to say about this...! >> Thanks! >> >> Juan. >> >> -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > > -------------- next part -------------- An HTML attachment was scrubbed... 
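[Editor's note] To make the design space of this thread concrete, here is a minimal sketch of the 'valid'/'wrap' boundary behaviour Juan describes. All names, the signature, and the return format are hypothetical illustrations, not the actual scikit-image API, and the neighbor ordering may differ from Juan's example.

```python
import itertools

import numpy as np


def get_neighbor_idxs(shape, idx, connectivity=1, mode='valid'):
    """Hypothetical sketch of the interface discussed in this thread.

    `connectivity` is the maximum number of coordinates allowed to change
    at once (1 = face neighbors, len(shape) = full hypercube).  `mode` is
    'valid' (drop out-of-bounds neighbors) or 'wrap' (periodic
    boundaries).
    """
    ndim = len(shape)
    neighbors = []
    # Enumerate all offsets in {-1, 0, 1}^ndim except the zero offset.
    for offset in itertools.product((-1, 0, 1), repeat=ndim):
        n_changed = sum(o != 0 for o in offset)
        if n_changed == 0 or n_changed > connectivity:
            continue
        candidate = tuple(i + o for i, o in zip(idx, offset))
        if mode == 'wrap':
            neighbors.append(tuple(c % s for c, s in zip(candidate, shape)))
        elif all(0 <= c < s for c, s in zip(candidate, shape)):
            neighbors.append(candidate)
    return np.array(neighbors, dtype=int)


# Corner pixel of a 5x5 image with wrap-around boundaries, as in Juan's
# example above:
print(get_neighbor_idxs((5, 5), (0, 0), connectivity=1, mode='wrap'))
```

The fixed-shape alternative from point 3 would instead always return 2 * ndim face-neighbor rows together with a boolean `valid` array marking which ones fell outside the image.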
URL: From jean.kossaifi at gmail.com Mon Aug 19 14:03:50 2013 From: jean.kossaifi at gmail.com (Jean K) Date: Mon, 19 Aug 2013 11:03:50 -0700 (PDT) Subject: HOG Message-ID: Hello, I am new to scikit-image, and interested in using HOG. However, the implemented version doesn't seem to give results as good as expected. As a possible explanation I can think mainly of 2 reasons: 1) the way of computing the gradients (if I'm not mistaken, you use a [-1, 1] filter where they use a centered one, [-1, 0, 1]); 2) they use tri-linear interpolation, whereas here you seem to use hard binning. Does this make sense, or am I missing something? Also, I tried to write another version, trying to stick as closely as possible to the Dalal & Triggs version, although I don't really know how to assess the results it produces. Would that be of interest? Cheers, Jean -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue Aug 20 19:03:21 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 21 Aug 2013 01:03:21 +0200 Subject: HOG In-Reply-To: References: Message-ID: On Mon, Aug 19, 2013 at 8:03 PM, Jean K wrote: > As a possible explanation I can think mainly of 2 reasons: > 1) the way of computing the gradients (if I'm not mistaken, you use a [-1, > 1] filter where they use a centered one, [-1, 0, 1]); > 2) they use tri-linear interpolation, whereas here you seem to use hard > binning. Would someone more intimately familiar with HoG answer Jean? Thanks Stéfan From jsch at demuc.de Wed Aug 21 03:06:56 2013 From: jsch at demuc.de (=?windows-1252?Q?Johannes_Sch=F6nberger?=) Date: Wed, 21 Aug 2013 09:06:56 +0200 Subject: HOG In-Reply-To: References: Message-ID: <404D749F-0986-47FB-8116-1E8AA007B062@demuc.de> Hi Jean, First of all, I am not an expert regarding HoG… :-) > 1) the way of computing the gradients (if I'm not mistaken, you use a [-1, 1] filter where they use a centered one, [-1, 0, 1]).
Not sure why the original author of the implementation used np.diff rather than central differences or even Sobel / Scharr and the like (apart from performance). A centered filter should return much better approximations of the gradient. > 2) they use tri-linear interpolation, whereas here you seem to use hard binning. The tri-linear interpolation seems to be the original approach, but I do not know of a simple way to implement it in pure Python in a fast way… I guess scipy.ndimage.map_coordinates might be very useful here. I think both of these fixes would be much appreciated! > Also, I tried to write another version, trying to stick as much as possible to the Dalal & Triggs version, although I don't really know how to assess the results it produces. Would that be of interest? Yes, definitely. Johannes From jean.kossaifi at gmail.com Wed Aug 21 14:04:03 2013 From: jean.kossaifi at gmail.com (Jean K) Date: Wed, 21 Aug 2013 11:04:03 -0700 (PDT) Subject: HOG In-Reply-To: <404D749F-0986-47FB-8116-1E8AA007B062@demuc.de> References: <404D749F-0986-47FB-8116-1E8AA007B062@demuc.de> Message-ID: <6734d215-9eac-4961-887c-b2e11cd95827@googlegroups.com> Hi, Thank you for your answers :) @Johannes: For the tri-linear interpolation, you're absolutely right, and I spent a lot of time thinking about it. Eventually I thought of something: Let sx, sy be the size of the image and nbins the number of desired bins. First, we interpolate between the bins, from the original (sx, sy) image to a (sx, sy, nbins) array. Then we can notice that, inside each cell, we have pixels_per_cell_x * pixels_per_cell_y histograms, whose position in the cell doesn't matter (because we are going to sum them up to have only one histogram per cell). We can thus virtually divide each cell into 4, each part being interpolated into the 4 diagonally adjacent sub-cells. As a result, each of the 4 sub-cells will be interpolated once in the same cell, and once in the 3 adjacent cells (which is exactly what interpolation is).
The only thing to do is to multiply by the right coefficient. Here's an image to illustrate: We sum 4 times in the 4 diagonal directions. The coefficient for the sum can be represented by a single matrix which is rotated. Finally you just sum the histograms in each cell to obtain the (n_cells_x, n_cells_y, nbins) desired orientation_histogram (which you can further normalise block-wise). So I implemented a version using this trick, based on the original code, and the result seems to be fast for a 160*160 image. However, as I said, I'm not perfectly sure of the result. Also, I separated the gradient computation from the binning so that the function can also be used for HOF. Maybe I could do a pull request so you can have a look at the code? Cheers, Jean On Wednesday, 21 August 2013 08:06:56 UTC+1, Johannes Schönberger wrote: > > Hi Jean, > > First of all, I am not an expert regarding HoG… :-) > > > 1) the way of computing the gradients ( if I'm not mistaking, you use a > [-1, 1] filter when they use a centered one [-1, 0, 1]. > > Not sure why the original author of the implementation did use np.diff > rather than central differences or even Sobel / Scharr and the like (apart > from performance). It should return much better approximations of the > gradient. > > > 2) They use tri-linear interpolation when here the you seem to use hard > binning. > > The tri-linear interpolation seems to be the original approach, but I do > not know of a simple way to implement it in pure Python in a fast way… I > guess scipy.ndimage.map_coordinates might be very useful here. > > I think, these fixes would be both much appreciated! > > > Also, I tried to write another version, trying to stick as much as > possible to Dalal&Triggs version, although I don't really know how to > assess the results it produces. Would that be of interest? > > Yes, definitely. > > Johannes -------------- next part -------------- An HTML attachment was scrubbed...
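[Editor's note] The two steps Jean describes — soft-assigning each pixel's magnitude between the two nearest orientation bins to get an (sx, sy, nbins) array, then summing the histograms inside each cell — might be sketched as follows. Function names are illustrative, not taken from the PR, and unsigned orientations in [0, 180) are assumed.

```python
import numpy as np


def soft_bin_orientations(orientation, magnitude, nbins=9):
    """Distribute each pixel's gradient magnitude linearly between the two
    nearest orientation bins (bin centers at (k + 0.5) * 180/nbins).
    Returns an (sx, sy, nbins) array."""
    bin_width = 180.0 / nbins
    pos = orientation / bin_width - 0.5   # fractional bin position
    lo = np.floor(pos).astype(int)        # lower neighboring bin
    w_hi = pos - lo                       # weight going to the upper bin
    out = np.zeros(orientation.shape + (nbins,))
    grid = tuple(np.indices(orientation.shape))
    # The two weights sum to 1, so total magnitude is conserved.
    out[grid + (lo % nbins,)] += magnitude * (1.0 - w_hi)
    out[grid + ((lo + 1) % nbins,)] += magnitude * w_hi
    return out


def sum_cells(hist, pixels_per_cell):
    """Sum an (sx, sy, nbins) per-pixel histogram into per-cell histograms
    of shape (sx // p, sy // p, nbins) by reshaping."""
    p = pixels_per_cell
    sx, sy, nbins = hist.shape
    return hist.reshape(sx // p, p, sy // p, p, nbins).sum(axis=(1, 3))
```

Because the per-pixel weights sum to one, `soft_bin_orientations(...).sum(axis=-1)` equals the magnitude image, which makes a quick sanity check on the interpolation.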
URL: From jean.kossaifi at gmail.com Wed Aug 21 17:59:12 2013 From: jean.kossaifi at gmail.com (Jean K) Date: Wed, 21 Aug 2013 14:59:12 -0700 (PDT) Subject: HOG In-Reply-To: References: <404D749F-0986-47FB-8116-1E8AA007B062@demuc.de> <6734d215-9eac-4961-887c-b2e11cd95827@googlegroups.com> Message-ID: My code is accessible here: https://github.com/JeanKossaifi/scikit-image/tree/improve_hog I don't have my own computer so I couldn't run the tests yet, and there must be some issues: should I still do the pull request so we can discuss there? Also, I think the interpolation has to be done on the bins before, otherwise, when we sum the histograms in each cell, the orientation would get mixed... Regards, Jean On Wednesday, 21 August 2013 19:44:41 UTC+1, Johannes Schönberger wrote: > > Your ideas seem totally valid to me (if I understand correctly), but how > about turning around the order of interpolation: > > 1. 2-D interpolation (x, y direction) > 2. Interpolation in the 3rd dimension, which could then easily be > implemented with array slicing ``for i, j in pixel_per_cell: > magnitude[i::pixels_per_cell, j::pixels_per_cell] and > orientation[i::pixels_per_cell, j::pixels_per_cell]``. > > This should be basically the same, but you save some memory as you do not > need the (sx, sy, nbins) intermediate array. > > It would be great if you could open a PR with your code, then we can > discuss in there :-) > > Regards, Johannes > > Am 21.08.2013 um 20:04 schrieb Jean K >: > > > > Hi, > > > > Thank you for your answers :) > > > > @Johannes: For the tri-linear interpolation, you're absolutely right, > and I spent a lot of time thinking about it. > > > > Eventually I thought of something: > > Let sx, sy be the size of the image, nbins the number of desired bins. > > First, we interpolate between the bins, from the original (sx, sy) > image to a (sx, sy, nbins) array.
> > Then we can notice that, inside each cell, we have pixels_per_cell_x * > pixels_per_cell_y histograms, which position in the cell doesn't matter > (because we are going to sum them up to have only one histogram per cell). > > We can thus virtually divide each cell in 4, each part being > interpolated in the 4 diagonally adjacent sub-cells. > > As a result, each of the 4 sub-cell will be interpolated once in the > same cell, and once in the 3 adjacent cells (which is exactly what > interpolation is). > > The only thing to do is to multiply by the right coefficient. > > Here's an image to illustrate: We sum 4 times in the 4 diagonal > directions. The coefficient for the sum can be represented by a single > matrix which is turned. > > > > > > Finally you just sum the histograms in each cell to obtain the > (n_cells_x, n_cells_y, nbins) desired orientation_histogram (which you can > further normalise block-wise). > > > > > > So I implemented a version using this trick, based on the original code, > and the result seems to be fast for & 160*160 image. > > However, as I said, I'm not perfectly sure of the result. > > > > Also, I separated the gradient computation from the binning so that the > function can also be used for HOF. > > > > Maybe I could do a pull request so you can have a look on the code? > > > > Cheers, > > > > Jean > > > > > > On Wednesday, 21 August 2013 08:06:56 UTC+1, Johannes Sch?nberger wrote: > > Hi Jean, > > > > First of all, I am not an expert regarding HoG? :-) > > > > > 1) the way of computing the gradients ( if I'm not mistaking, you use > a [-1, 1] filter when they use a centered one [-1, 0, 1]. > > > > Not sure why the original author of the implementation did use np.diff > rather than central differences or even Sobel / Scharr and the like (apart > from performance). It should return much better approximations of the > gradient. > > > > > 2) They use tri-linear interpolation when here the you seem to use > hard binning. 
> > > > The tri-linear interpolation seems to be the original approach, but I do > not know of a simple way to implement it in pure Python in a fast way… I > guess scipy.ndimage.map_coordinates might be very useful here. > > > > I think, these fixes would be both much appreciated! > > > > > Also, I tried to write another version, trying to stick as much as > possible to Dalal&Triggs version, although I don't really know how to > assess the results it produces. Would that be of interest? > > > > Yes, definitely. > > > > Johannes > > > > -- > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image... at googlegroups.com . > > For more options, visit https://groups.google.com/groups/opt_out. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Wed Aug 21 14:44:41 2013 From: jsch at demuc.de (=?windows-1252?Q?Johannes_Sch=F6nberger?=) Date: Wed, 21 Aug 2013 20:44:41 +0200 Subject: HOG In-Reply-To: <6734d215-9eac-4961-887c-b2e11cd95827@googlegroups.com> References: <404D749F-0986-47FB-8116-1E8AA007B062@demuc.de> <6734d215-9eac-4961-887c-b2e11cd95827@googlegroups.com> Message-ID: Your ideas seem totally valid to me (if I understand correctly), but how about turning around the order of interpolation: 1. 2-D interpolation (x, y direction) 2. Interpolation in the 3rd dimension, which could then easily be implemented with array slicing ``for i, j in pixel_per_cell: magnitude[i::pixels_per_cell, j::pixels_per_cell] and orientation[i::pixels_per_cell, j::pixels_per_cell]``. This should be basically the same, but you save some memory as you do not need the (sx, sy, nbins) intermediate array.
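[Editor's note] The strided-slice accumulation Johannes suggests could look roughly like this (hard binning for brevity; all names are illustrative rather than actual scikit-image code). The two Python loops run only pixels_per_cell**2 times, independent of image size.

```python
import numpy as np


def cell_histograms(magnitude, orientation, pixels_per_cell, nbins=9):
    """Accumulate per-cell orientation histograms via strided slices:
    for each offset (i, j) inside a cell, magnitude[i::p, j::p] picks
    exactly one pixel from every cell at once."""
    p = pixels_per_cell
    n_cells_x = magnitude.shape[0] // p
    n_cells_y = magnitude.shape[1] // p
    hist = np.zeros((n_cells_x, n_cells_y, nbins))
    # Hard binning of unsigned orientations in [0, 180).
    bin_idx = np.minimum((orientation / (180.0 / nbins)).astype(int),
                         nbins - 1)
    cx = np.arange(n_cells_x)[:, None]
    cy = np.arange(n_cells_y)[None, :]
    for i in range(p):
        for j in range(p):
            m = magnitude[i::p, j::p][:n_cells_x, :n_cells_y]
            b = bin_idx[i::p, j::p][:n_cells_x, :n_cells_y]
            # Each cell contributes one pixel per (i, j) offset.
            np.add.at(hist, (cx, cy, b), m)
    return hist
```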
It would be great if you could open a PR with your code, then we can discuss in there :-) Regards, Johannes Am 21.08.2013 um 20:04 schrieb Jean K : > Hi, > > Thank you for your answers :) > > @Johannes: For the tri-linear interpolation, you're absolutely right, and I spent a lot of time thinking about it. > > Eventually I thought of something: > Let sx, sy be the size of the image, nbins the number of desired bins. > First, we interpolate between the bins, from the original (sx, sy) image to a (sx, sy, nbins) array. > Then we can notice that, inside each cell, we have pixels_per_cell_x * pixels_per_cell_y histograms, which position in the cell doesn't matter (because we are going to sum them up to have only one histogram per cell). > We can thus virtually divide each cell in 4, each part being interpolated in the 4 diagonally adjacent sub-cells. > As a result, each of the 4 sub-cell will be interpolated once in the same cell, and once in the 3 adjacent cells (which is exactly what interpolation is). > The only thing to do is to multiply by the right coefficient. > Here's an image to illustrate: We sum 4 times in the 4 diagonal directions. The coefficient for the sum can be represented by a single matrix which is turned. > > > Finally you just sum the histograms in each cell to obtain the (n_cells_x, n_cells_y, nbins) desired orientation_histogram (which you can further normalise block-wise). > > > So I implemented a version using this trick, based on the original code, and the result seems to be fast for & 160*160 image. > However, as I said, I'm not perfectly sure of the result. > > Also, I separated the gradient computation from the binning so that the function can also be used for HOF. > > Maybe I could do a pull request so you can have a look on the code? > > Cheers, > > Jean > > > On Wednesday, 21 August 2013 08:06:56 UTC+1, Johannes Sch?nberger wrote: > Hi Jean, > > First of all, I am not an expert regarding HoG? 
:-) > > > 1) the way of computing the gradients ( if I'm not mistaking, you use a [-1, 1] filter when they use a centered one [-1, 0, 1]. > > Not sure why the original author of the implementation did use np.diff rather than central differences or even Sobel / Scharr and the like (apart from performance). It should return much better approximations of the gradient. > > > 2) They use tri-linear interpolation when here the you seem to use hard binning. > > The tri-linear interpolation seems to be the original approach, but I do not know of a simple way to implement it in pure Python in a fast way? I guess scipy.ndimage.map_coordinates might be very useful here. > > I think, these fixes would be both much appreciated! > > > Also, I tried to write another version, trying to stick as much as possible to Dalal&Triggs version, although I don't really know how to assess the results it produces. Would that be of interest? > > Yes, definitely. > > Johannes > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. From jsch at demuc.de Thu Aug 22 02:43:06 2013 From: jsch at demuc.de (=?windows-1252?Q?Johannes_Sch=F6nberger?=) Date: Thu, 22 Aug 2013 08:43:06 +0200 Subject: HOG In-Reply-To: References: <404D749F-0986-47FB-8116-1E8AA007B062@demuc.de> <6734d215-9eac-4961-887c-b2e11cd95827@googlegroups.com> Message-ID: <48C0EA61-B975-42AB-AB49-7D12195232B8@demuc.de> It would be great if you could open a PR against your branch. Am 21.08.2013 um 23:59 schrieb Jean K : > My is accessible here : https://github.com/JeanKossaifi/scikit-image/tree/improve_hog > I don't have my own computer so I couldn't run the tests yet, and there must be some issues: should I still do the pull request so we can discuss there? 
> > Also, I think the interpolation has to be done on the bins before, otherwise, when we sum the histograms in each cells, the orientation would get mixed... > > Regards, > > Jean > > On Wednesday, 21 August 2013 19:44:41 UTC+1, Johannes Sch?nberger wrote: > Your ideas seem totally valid to me (if I understand correctly), but how about turning around the order of interpolation: > > 1. 2-D interpolation (x, y direction) > 2. Interpolation in the 3rd dimension, which could then easily be implemented with array slicing ``for i, j in pixel_per_cell: magnitude[i::pixels_per_cell, j::pixels_per_cell] and orientation[i::pixels_per_cell, j::pixels_per_cell]``. > > This should be basically the same, but you save some memory as you do not the (sx, sy, nbins) intermediate array. > > It would be great if you could open a PR with your code, then we can discuss in there :-) > > Regards, Johannes > > Am 21.08.2013 um 20:04 schrieb Jean K : > > > Hi, > > > > Thank you for your answers :) > > > > @Johannes: For the tri-linear interpolation, you're absolutely right, and I spent a lot of time thinking about it. > > > > Eventually I thought of something: > > Let sx, sy be the size of the image, nbins the number of desired bins. > > First, we interpolate between the bins, from the original (sx, sy) image to a (sx, sy, nbins) array. > > Then we can notice that, inside each cell, we have pixels_per_cell_x * pixels_per_cell_y histograms, which position in the cell doesn't matter (because we are going to sum them up to have only one histogram per cell). > > We can thus virtually divide each cell in 4, each part being interpolated in the 4 diagonally adjacent sub-cells. > > As a result, each of the 4 sub-cell will be interpolated once in the same cell, and once in the 3 adjacent cells (which is exactly what interpolation is). > > The only thing to do is to multiply by the right coefficient. > > Here's an image to illustrate: We sum 4 times in the 4 diagonal directions. 
The coefficient for the sum can be represented by a single matrix which is turned. > > > > > > Finally you just sum the histograms in each cell to obtain the (n_cells_x, n_cells_y, nbins) desired orientation_histogram (which you can further normalise block-wise). > > > > > > So I implemented a version using this trick, based on the original code, and the result seems to be fast for & 160*160 image. > > However, as I said, I'm not perfectly sure of the result. > > > > Also, I separated the gradient computation from the binning so that the function can also be used for HOF. > > > > Maybe I could do a pull request so you can have a look on the code? > > > > Cheers, > > > > Jean > > > > > > On Wednesday, 21 August 2013 08:06:56 UTC+1, Johannes Sch?nberger wrote: > > Hi Jean, > > > > First of all, I am not an expert regarding HoG? :-) > > > > > 1) the way of computing the gradients ( if I'm not mistaking, you use a [-1, 1] filter when they use a centered one [-1, 0, 1]. > > > > Not sure why the original author of the implementation did use np.diff rather than central differences or even Sobel / Scharr and the like (apart from performance). It should return much better approximations of the gradient. > > > > > 2) They use tri-linear interpolation when here the you seem to use hard binning. > > > > The tri-linear interpolation seems to be the original approach, but I do not know of a simple way to implement it in pure Python in a fast way? I guess scipy.ndimage.map_coordinates might be very useful here. > > > > I think, these fixes would be both much appreciated! > > > > > Also, I tried to write another version, trying to stick as much as possible to Dalal&Triggs version, although I don't really know how to assess the results it produces. Would that be of interest? > > > > Yes, definitely. > > > > Johannes > > > > -- > > You received this message because you are subscribed to the Google Groups "scikit-image" group. 
> > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image... at googlegroups.com. > > For more options, visit https://groups.google.com/groups/opt_out. > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. From jean.kossaifi at gmail.com Thu Aug 22 06:10:24 2013 From: jean.kossaifi at gmail.com (Jean K) Date: Thu, 22 Aug 2013 11:10:24 +0100 Subject: HOG In-Reply-To: <48C0EA61-B975-42AB-AB49-7D12195232B8@demuc.de> References: <404D749F-0986-47FB-8116-1E8AA007B062@demuc.de> <6734d215-9eac-4961-887c-b2e11cd95827@googlegroups.com> <48C0EA61-B975-42AB-AB49-7D12195232B8@demuc.de> Message-ID: Done: https://github.com/scikit-image/scikit-image/pull/703 Regards, Jean 2013/8/22 Johannes Sch?nberger > It would be great if you could open a PR against your branch. > > Am 21.08.2013 um 23:59 schrieb Jean K : > > > My is accessible here : > https://github.com/JeanKossaifi/scikit-image/tree/improve_hog > > I don't have my own computer so I couldn't run the tests yet, and there > must be some issues: should I still do the pull request so we can discuss > there? > > > > Also, I think the interpolation has to be done on the bins before, > otherwise, when we sum the histograms in each cells, the orientation would > get mixed... > > > > Regards, > > > > Jean > > > > On Wednesday, 21 August 2013 19:44:41 UTC+1, Johannes Sch?nberger wrote: > > Your ideas seem totally valid to me (if I understand correctly), but how > about turning around the order of interpolation: > > > > 1. 2-D interpolation (x, y direction) > > 2. 
Interpolation in the 3rd dimension, which could then easily be > implemented with array slicing ``for i, j in pixel_per_cell: > magnitude[i::pixels_per_cell, j::pixels_per_cell] and > orientation[i::pixels_per_cell, j::pixels_per_cell]``. > > > > This should be basically the same, but you save some memory as you do > not the (sx, sy, nbins) intermediate array. > > > > It would be great if you could open a PR with your code, then we can > discuss in there :-) > > > > Regards, Johannes > > > > Am 21.08.2013 um 20:04 schrieb Jean K : > > > > > Hi, > > > > > > Thank you for your answers :) > > > > > > @Johannes: For the tri-linear interpolation, you're absolutely right, > and I spent a lot of time thinking about it. > > > > > > Eventually I thought of something: > > > Let sx, sy be the size of the image, nbins the number of desired bins. > > > First, we interpolate between the bins, from the original (sx, sy) > image to a (sx, sy, nbins) array. > > > Then we can notice that, inside each cell, we have pixels_per_cell_x * > pixels_per_cell_y histograms, which position in the cell doesn't matter > (because we are going to sum them up to have only one histogram per cell). > > > We can thus virtually divide each cell in 4, each part being > interpolated in the 4 diagonally adjacent sub-cells. > > > As a result, each of the 4 sub-cell will be interpolated once in the > same cell, and once in the 3 adjacent cells (which is exactly what > interpolation is). > > > The only thing to do is to multiply by the right coefficient. > > > Here's an image to illustrate: We sum 4 times in the 4 diagonal > directions. The coefficient for the sum can be represented by a single > matrix which is turned. > > > > > > > > > Finally you just sum the histograms in each cell to obtain the > (n_cells_x, n_cells_y, nbins) desired orientation_histogram (which you can > further normalise block-wise). 
> > > > > > > > > So I implemented a version using this trick, based on the original > code, and the result seems to be fast for & 160*160 image. > > > However, as I said, I'm not perfectly sure of the result. > > > > > > Also, I separated the gradient computation from the binning so that > the function can also be used for HOF. > > > > > > Maybe I could do a pull request so you can have a look on the code? > > > > > > Cheers, > > > > > > Jean > > > > > > > > > On Wednesday, 21 August 2013 08:06:56 UTC+1, Johannes Sch?nberger > wrote: > > > Hi Jean, > > > > > > First of all, I am not an expert regarding HoG? :-) > > > > > > > 1) the way of computing the gradients ( if I'm not mistaking, you > use a [-1, 1] filter when they use a centered one [-1, 0, 1]. > > > > > > Not sure why the original author of the implementation did use np.diff > rather than central differences or even Sobel / Scharr and the like (apart > from performance). It should return much better approximations of the > gradient. > > > > > > > 2) They use tri-linear interpolation when here the you seem to use > hard binning. > > > > > > The tri-linear interpolation seems to be the original approach, but I > do not know of a simple way to implement it in pure Python in a fast way? I > guess scipy.ndimage.map_coordinates might be very useful here. > > > > > > I think, these fixes would be both much appreciated! > > > > > > > Also, I tried to write another version, trying to stick as much as > possible to Dalal&Triggs version, although I don't really know how to > assess the results it produces. Would that be of interest? > > > > > > Yes, definitely. > > > > > > Johannes > > > > > > -- > > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image... at googlegroups.com. > > > For more options, visit https://groups.google.com/groups/opt_out. 
> > > > > > -- > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image+unsubscribe at googlegroups.com. > > For more options, visit https://groups.google.com/groups/opt_out. > > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/NsM7xrWSzfI/unsubscribe. > To unsubscribe from this group and all of its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirill.shklovsky at gmail.com Mon Aug 26 13:01:38 2013 From: kirill.shklovsky at gmail.com (angelatlarge) Date: Mon, 26 Aug 2013 10:01:38 -0700 (PDT) Subject: HoG orientation Message-ID: Just added a PR (#715) to allow computation of signed orientations in HoG. Minor feature, really, and perhaps not all that useful given that there is a substantial revision proposed to the HoG code in PR #703. All the PR proposes to do is to pull out the 180 degree constant into a separate variable and initialize it based on whether orientations should be signed or not. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Mon Aug 26 15:43:29 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 26 Aug 2013 21:43:29 +0200 Subject: New PEP8 recommendations Message-ID: http://www.python.org/dev/peps/pep-0008/#other-recommendations I knew we were onto something with our spacing of those exponents :) From jsch at demuc.de Mon Aug 26 15:45:38 2013 From: jsch at demuc.de (=?windows-1252?Q?Johannes_Sch=F6nberger?=) Date: Mon, 26 Aug 2013 21:45:38 +0200 Subject: New PEP8 recommendations In-Reply-To: References: Message-ID: <1929603570.2635590.1377546338464.open-xchange@localhost> Hi Stefan, I think I sent this link a couple of weeks ago already to the list :-) At least we can be a bit less pedantic about the coding style now ;-) Johannes Schönberger On 26.08.2013 at 21:43, "Stéfan van der Walt" wrote: > http://www.python.org/dev/peps/pep-0008/#other-recommendations > > I knew we were onto something with our spacing of those exponents :) From stefan at sun.ac.za Mon Aug 26 15:48:34 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 26 Aug 2013 21:48:34 +0200 Subject: New PEP8 recommendations In-Reply-To: <1929603570.2635590.1377546338464.open-xchange@localhost> References: <1929603570.2635590.1377546338464.open-xchange@localhost> Message-ID: On Mon, Aug 26, 2013 at 9:45 PM, Johannes Schönberger wrote: > I think I sent this link a couple of weeks ago already to the list :-) Reinforcement learning! Now, if we can get the PEP checking tools to be quieter...
Stéfan From jsch at demuc.de Mon Aug 26 15:51:49 2013 From: jsch at demuc.de (=?windows-1252?Q?Johannes_Sch=F6nberger?=) Date: Mon, 26 Aug 2013 21:51:49 +0200 Subject: New PEP8 recommendations In-Reply-To: References: <1929603570.2635590.1377546338464.open-xchange@localhost> Message-ID: <989369511.2636042.1377546710474.open-xchange@localhost> I guess someone will update the pep8 package soon... Johannes Schönberger On 26.08.2013 at 21:48, "Stéfan van der Walt" wrote: > On Mon, Aug 26, 2013 at 9:45 PM, Johannes Schönberger wrote: >> I think I sent this link a couple of weeks ago already to the list :-) > > Reinforcement learning! > > Now, if we can get the PEP checking tools to be quieter... > > Stéfan From erpayal2010 at gmail.com Tue Aug 27 06:22:30 2013 From: erpayal2010 at gmail.com (Payal Gupta) Date: Tue, 27 Aug 2013 03:22:30 -0700 (PDT) Subject: Error importing feature: Issue with match_template function? In-Reply-To: <96aeaee8-9e00-4ebb-9127-9529157fa176@googlegroups.com> References: <96aeaee8-9e00-4ebb-9127-9529157fa176@googlegroups.com> Message-ID: <30368232-7129-4afe-af7c-da744784f1ae@googlegroups.com> hello.... I have the same problem. Please, can someone help me solve it? How do I import the scikit-image library?
On Thursday, November 29, 2012 10:31:19 PM UTC+5:30, Marianne Corvellec wrote: > > Dear skimage people, > > I am having an Import Error when I try to import feature, namely: > > In [28]: from skimage import feature > --------------------------------------------------------------------------- > ImportError Traceback (most recent call last) > > /home/[...]/ in () > > /home/[...]/skimage/feature/__init__.py in () > 1 from ._hog import hog > 2 from .texture import greycomatrix, greycoprops, > local_binary_pattern > 3 from .peak import peak_local_max > 4 from ._harris import harris > ----> 5 from .template import match_template > > /home/[...]/skimage/feature/template.py in () > 2 """ > 3 import numpy as np > ----> 4 from . import _template > 5 > 6 > > ImportError: cannot import name _template > > I saw that the function `match_template` is defined in both template.py > and _template.pyx: > isn't that wrong--and/or related to the import error? > > I am running this version: > In [27]: skimage.__version__ > Out[27]: '0.7dev' > > How do you guys import feature? > > Thanks, > Marianne > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at mitotic-machine.org Tue Aug 27 07:06:33 2013 From: guillaume at mitotic-machine.org (Guillaume Gay) Date: Tue, 27 Aug 2013 13:06:33 +0200 Subject: Error importing feature: Issue with match_template function? In-Reply-To: <30368232-7129-4afe-af7c-da744784f1ae@googlegroups.com> References: <96aeaee8-9e00-4ebb-9127-9529157fa176@googlegroups.com> <30368232-7129-4afe-af7c-da744784f1ae@googlegroups.com> Message-ID: <521C8839.4030605@mitotic-machine.org> Hi, Quoting Juan Nunez-Iglesias reply to a similar issue I encountered last month: This sort of stuff keeps popping up (see latest issues) because a lot of functions have moved to use memory views. If you do a `make clean` and then try again, it should work. 
So you need to install skimage again after having issued a `make clean` in your source directory... Guillaume On 27/08/2013 at 12:22, Payal Gupta wrote: > hello.... > I have the same problem. > Please, can someone help me solve it? How do I import the scikit-image library? > > On Thursday, November 29, 2012 10:31:19 PM UTC+5:30, Marianne > Corvellec wrote: > > Dear skimage people, > > I am having an Import Error when I try to import feature, namely: > > In [28]: from skimage import feature > --------------------------------------------------------------------------- > ImportError Traceback (most recent > call last) > > /home/[...]/ in () > > /home/[...]/skimage/feature/__init__.py in () > 1 from ._hog import hog > 2 from .texture import greycomatrix, greycoprops, > local_binary_pattern > 3 from .peak import peak_local_max > 4 from ._harris import harris > ----> 5 from .template import match_template > > /home/[...]/skimage/feature/template.py in () > 2 """ > 3 import numpy as np > ----> 4 from . import _template > 5 > 6 > > ImportError: cannot import name _template > > I saw that the function `match_template` is defined in > both template.py and _template.pyx: > isn't that wrong--and/or related to the import error? > > I am running this version: > In [27]: skimage.__version__ > Out[27]: '0.7dev' > > How do you guys import feature? > > Thanks, > Marianne > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fboulogne at sciunto.org Tue Aug 27 13:01:56 2013 From: fboulogne at sciunto.org (=?ISO-8859-1?Q?Fran=E7ois_Boulogne?=) Date: Tue, 27 Aug 2013 19:01:56 +0200 Subject: Difference between tutorials and longer examples Message-ID: <521CDB84.4000906@sciunto.org> Hi, What's the difference between tutorials in the user guide http://scikit-image.org/docs/dev/user_guide/tutorials.html and longer examples in the gallery? http://scikit-image.org/docs/dev/auto_examples/#longer-examples-and-demonstrations Is there any reason not to gather them? Cheers, -- François Boulogne. http://www.sciunto.org GPG fingerprint: 25F6 C971 4875 A6C1 EDD1 75C8 1AA7 216E 32D5 F22F From emmanuelle.gouillart at nsup.org Tue Aug 27 16:22:34 2013 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Tue, 27 Aug 2013 22:22:34 +0200 Subject: Difference between tutorials and longer examples In-Reply-To: <521CDB84.4000906@sciunto.org> References: <521CDB84.4000906@sciunto.org> Message-ID: <20130827202234.GA16762@phare.normalesup.org> Hi François, > What's the difference between tutorials in the user guide > http://scikit-image.org/docs/dev/user_guide/tutorials.html > and longer examples in the gallery? > http://scikit-image.org/docs/dev/auto_examples/#longer-examples-and-demonstrations > Is there any reason not to gather them? The idea of the longer tutorials is to include them in the user guide, in a structured and progressive way (with a table of contents), while the examples are exposed in a "flatter" and less structured way. In my opinion, the user guide should be as comprehensive as possible, somewhat like a course in image processing with scikit-image. Also, the examples of the gallery are generated from Python scripts (with docstrings converted to rst, which Sphinx converts to HTML), while the user guide is written directly in rst.
But it's true that it leads to some duplication (when I'm writing a tutorial, I typically also write an example for the gallery because I need this to generate the figures used in the tutorial). I think it's good to keep the two ways of accessing the documentation (the linear table of contents of the user guide, as well as the gallery) but having less source duplication would be nice. Cheers, Emmanuelle From stefan at sun.ac.za Tue Aug 27 19:09:09 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 28 Aug 2013 01:09:09 +0200 Subject: EuroSciPy sprint Message-ID: Hi everyone, I want to send a big shout-out to everyone who sprinted on skimage at EuroSciPy2013! You guys did a great job, and it was a lot of fun watching all the pull requests and comments fly by. Emmanuelle, how about we do a short blog post on the event? I can combine it with the results of the US sprint that I still have on the shelf. Stéfan -------------- next part -------------- An HTML attachment was scrubbed... URL: From emmanuelle.gouillart at nsup.org Wed Aug 28 02:46:24 2013 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Wed, 28 Aug 2013 08:46:24 +0200 Subject: EuroSciPy sprint In-Reply-To: References: Message-ID: <20130828064624.GC8879@phare.normalesup.org> Hi Stéfan, yes, this sprint was a lot of fun! Although it's always too short! Would we publish such a blog post on your blog? If yes, how about I send you a few elements about the Euroscipy sprint by e-mail? Cheers, Emmanuelle On Wed, Aug 28, 2013 at 01:09:09AM +0200, Stéfan van der Walt wrote: > Hi everyone, > I want to send a big shout-out to everyone who sprinted on skimage at > EuroSciPy2013! You guys did a great job, and it was a lot of fun watching all > the pull requests and comments fly by. > Emmanuelle, how about we do a short blog post on the event? I can combine it > with the results of the US sprint that I still have on the shelf.
> Stéfan From stefan at sun.ac.za Wed Aug 28 02:56:55 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 28 Aug 2013 08:56:55 +0200 Subject: EuroSciPy sprint In-Reply-To: <20130828064624.GC8879@phare.normalesup.org> References: <20130828064624.GC8879@phare.normalesup.org> Message-ID: On Wed, Aug 28, 2013 at 8:46 AM, Emmanuelle Gouillart wrote: > Would we publish such a blog post on your blog? If yes, how about I send > you a few elements about the Euroscipy sprint by e-mail? That'd be great, thanks! Stéfan From jni.soma at gmail.com Wed Aug 28 03:43:58 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Wed, 28 Aug 2013 09:43:58 +0200 Subject: EuroSciPy sprint In-Reply-To: References: <20130828064624.GC8879@phare.normalesup.org> Message-ID: Loved the sprint! I agree with Emmanuelle that it was way too short. I was just hitting my stride when we left! If you know who's organising next year's conference, I'd push them to have two days of sprinting. =) On Wed, Aug 28, 2013 at 8:56 AM, Stéfan van der Walt wrote: > On Wed, Aug 28, 2013 at 8:46 AM, Emmanuelle Gouillart > wrote: > > Would we publish such a blog post on your blog? If yes, how about I send > > you a few elements about the Euroscipy sprint by e-mail? > > That'd be great, thanks! > > Stéfan -------------- next part -------------- An HTML attachment was scrubbed... URL: From marianne.corvellec at ens-lyon.org Thu Aug 29 12:27:59 2013 From: marianne.corvellec at ens-lyon.org (Marianne Corvellec) Date: Thu, 29 Aug 2013 09:27:59 -0700 (PDT) Subject: EuroSciPy sprint In-Reply-To: References: <20130828064624.GC8879@phare.normalesup.org> Message-ID: Congrats, guys!
Sprinting is the best. :D On Wednesday, August 28, 2013 3:43:58 AM UTC-4, Juan Nunez-Iglesias wrote: > > Loved the sprint! I agree with Emmanuelle that it was way too short. I was > just hitting my stride when we left! If you know who's organising next > year's conference, I'd push them to have two days of sprinting. =) > > > On Wed, Aug 28, 2013 at 8:56 AM, Stéfan van der Walt > > wrote: > >> On Wed, Aug 28, 2013 at 8:46 AM, Emmanuelle Gouillart >> > wrote: >> > Would we publish such a blog post on your blog? If yes, how about I send >> > you a few elements about the Euroscipy sprint by e-mail? >> >> That'd be great, thanks! >> >> Stéfan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fboulogne at sciunto.org Thu Aug 29 14:55:04 2013 From: fboulogne at sciunto.org (=?ISO-8859-1?Q?Fran=E7ois_Boulogne?=) Date: Thu, 29 Aug 2013 20:55:04 +0200 Subject: Difference between tutorials and longer examples In-Reply-To: <20130827202234.GA16762@phare.normalesup.org> References: <521CDB84.4000906@sciunto.org> <20130827202234.GA16762@phare.normalesup.org> Message-ID: <521F9908.2010402@sciunto.org> Thank you Emmanuelle. On 27/08/2013 at 22:22, Emmanuelle Gouillart wrote: > The idea of the longer tutorials is to include them in the user guide, in > a structured and progressive way (with a table of contents), while the > examples are exposed in a "flatter" and less structured way. In my > opinion, the user guide should be as comprehensive as possible, somewhat > like a course in image processing with scikit-image. So, in your mind, can the guide also be the place to have explanations of the general principles behind the algorithms?
(That would be really interesting but it represents a huge amount of work.) Or is it the place to treat real examples, like this one: http://pythonvision.org/basic-tutorial In my mind, I can distinguish two interesting documents: a/ Illustrations of techniques, like everything on segmentation or everything on denoising... b/ Real image processing with an example picked up from the real world They are two orthogonal points of view. a/ is a nice place to explain and compare algorithms, whereas b/ aims to train the reader to chain techniques. > > I think it's good to keep the two ways of accessing the documentation > (the linear table of contents of the user guide, as well as the gallery) > but having less source duplication would be nice. Actually, I was wondering why longer examples are not included in the user guide. I use the gallery to find which example looks like the problem I want to treat, and longer examples do not really match this usage. Cheers, -- François Boulogne. http://www.sciunto.org GPG fingerprint: 25F6 C971 4875 A6C1 EDD1 75C8 1AA7 216E 32D5 F22F From riaanvddool at gmail.com Sat Aug 31 10:04:53 2013 From: riaanvddool at gmail.com (Riaan van den Dool) Date: Sat, 31 Aug 2013 07:04:53 -0700 (PDT) Subject: Dividing a large image into smaller overlapping blocks for parallel processing Message-ID: Hi guys I would like to use scikit-image to process large images, for example (5696, 13500). In the interest of speed I need to divide the image into smaller sub-images with the possibility of processing these in parallel. If I define the sub-images so that neighbouring sub-images overlap then edge effects should not be a problem for the algorithm operating on each sub-image. This is probably a specific case of the more general border/edge-effect handling issue as addressed by the mode parameter here: http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.filters.convolve.html My questions: 1.
Is there already an image-division function/strategy implemented in scikit-image? 2. Is this something that might be included in the future if an implementation is available? 3. Please share any references to articles or code that deal with this. Riaan -------------- next part -------------- An HTML attachment was scrubbed... URL: From riaanvddool at gmail.com Sat Aug 31 14:05:04 2013 From: riaanvddool at gmail.com (Riaan van den Dool) Date: Sat, 31 Aug 2013 11:05:04 -0700 (PDT) Subject: Dividing a large image into smaller overlapping blocks for parallel processing In-Reply-To: References: Message-ID: <7de7cf83-88c8-4cae-9d7d-f6353ae19e7c@googlegroups.com> The blockproc function's signature provides a useful starting point, thanks. http://www.mathworks.com/help/images/ref/blockproc.html I will have to think about how to do the parallel execution from the function. Blockproc provides two 'padding' methods: replicate and symmetric. I guess what I need could be called margin, or overlap perhaps. For the margin case it might make sense that such a function merely returns an array of block definitions, rather than blocks of pixel data. But this would not be so applicable for the replicate and symmetric cases, I think. R On Saturday, August 31, 2013 6:49:31 PM UTC+2, Johannes Schönberger wrote: > > Hi Riaan, > > Unfortunately we do not have (at least I do not know of) a function > similar to Matlab's `blockproc`. Such a feature would be a great addition to > skimage! > > Regards, Johannes > > On 31.08.2013 at 16:04, Riaan van den Dool wrote: > > > > Hi guys > > > > I would like to use scikit-image to process large images, for example > (5696, 13500). > > > > In the interest of speed I need to divide the image into smaller > sub-images with the possibility of processing these in parallel. > > > > If I define the sub-images so that neighbouring sub-images overlap then > edge effects should not be a problem for the algorithm operating on each > sub-image.
> > > > This is probably a specific case of the more general border/edge-effect > handling issue as addressed by the mode parameter here: > > > http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.filters.convolve.html > > > > My questions: > > - Is there already an image-division function/strategy > implemented in scikit-image? > > - Is this something that might be included in the future if an > implementation is available? > > - Please share any references to articles or code that deal > with this. > > Riaan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From riaanvddool at gmail.com Sat Aug 31 15:26:51 2013 From: riaanvddool at gmail.com (Riaan van den Dool) Date: Sat, 31 Aug 2013 12:26:51 -0700 (PDT) Subject: Dividing a large image into smaller overlapping blocks for parallel processing In-Reply-To: <42B009F4-ECA8-4A51-B5BF-9301816FE55B@demuc.de> References: <7de7cf83-88c8-4cae-9d7d-f6353ae19e7c@googlegroups.com> <42B009F4-ECA8-4A51-B5BF-9301816FE55B@demuc.de> Message-ID: <6144d33e-430f-4120-9289-386d93227109@googlegroups.com> Thanks On Saturday, August 31, 2013 8:17:25 PM UTC+2, Johannes Schönberger wrote: > > Some hints: > > - pad the image with skimage.util.pad, which allows a large number of padding > methods > - spawn a pool of processes using Python's multiprocessing package in the > standard library > - use shared memory to provide read access to the complete image > - define slices of image blocks and add them to a processing queue > > On 31.08.2013 at 20:05, Riaan van den Dool wrote: > > > > The blockproc function's signature provides a useful starting point, > thanks.
> > http://www.mathworks.com/help/images/ref/blockproc.html > > > > I will have to think about how to do the parallel execution from the > function. > > > > Blockproc provides two 'padding' methods: replicate and symmetric. I > guess what I need could be called margin, or overlap perhaps. > > > > For the margin case it might make sense that such a function merely > returns an array of block definitions, rather than blocks of pixel data. > But this would not be so applicable for the replicate and symmetric cases, I > think. > > > > R > > > > > > > > On Saturday, August 31, 2013 6:49:31 PM UTC+2, Johannes Schönberger > wrote: > > Hi Riaan, > > > > Unfortunately we do not have (at least I do not know of) a function > similar to Matlab's `blockproc`. Such a feature would be a great addition to > skimage! > > > > Regards, Johannes > > > > On 31.08.2013 at 16:04, Riaan van den Dool wrote: > > > > > Hi guys > > > > > > I would like to use scikit-image to process large images, for example > (5696, 13500). > > > > > > In the interest of speed I need to divide the image into smaller > sub-images with the possibility of processing these in parallel. > > > > > > If I define the sub-images so that neighbouring sub-images overlap > then edge effects should not be a problem for the algorithm operating on > each sub-image.
> > > Riaan -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirill.shklovsky at gmail.com Sat Aug 31 15:52:59 2013 From: kirill.shklovsky at gmail.com (angelatlarge) Date: Sat, 31 Aug 2013 12:52:59 -0700 (PDT) Subject: HoG orientation In-Reply-To: References: Message-ID: <66482fd6-cd62-496f-972c-5b1a336b9373@googlegroups.com> GitHub is reporting that "The Travis CI build could not complete due to an error." - the problem seems to be a timeout. Python 2.7 was fine (took 26 minutes), but Python 3.2 timed out at 50 minutes. My sense is that the timeout has nothing to do with my changes; any suggestions? On Monday, August 26, 2013 1:01:38 PM UTC-4, angelatlarge wrote: > > Just added a PR (#715) > to allow computation of signed orientations in HoG. Minor feature, really, > and perhaps not all that useful given that there is a substantial revision > proposed to the HoG code in PR #703. All the PR proposes to do is to pull > out the 180 degree constant into a separate variable and initialize it > based on whether orientations should be signed or not. > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From colincsl at gmail.com Sat Aug 31 19:01:55 2013 From: colincsl at gmail.com (Colin Lea) Date: Sat, 31 Aug 2013 16:01:55 -0700 (PDT) Subject: Dividing a large image into smaller overlapping blocks for parallel processing In-Reply-To: <6144d33e-430f-4120-9289-386d93227109@googlegroups.com> References: <7de7cf83-88c8-4cae-9d7d-f6353ae19e7c@googlegroups.com> <42B009F4-ECA8-4A51-B5BF-9301816FE55B@demuc.de> <6144d33e-430f-4120-9289-386d93227109@googlegroups.com> Message-ID: You also might want to look into joblib, which makes it very easy to do parallel computations. This is used frequently in sklearn to speed up code. http://pythonhosted.org/joblib/ On Saturday, August 31, 2013 3:26:51 PM UTC-4, Riaan van den Dool wrote: > > Thanks > > On Saturday, August 31, 2013 8:17:25 PM UTC+2, Johannes Schönberger wrote: >> >> Some hints: >> >> - pad the image with skimage.util.pad, which allows a large number of >> padding methods >> - spawn a pool of processes using Python's multiprocessing package in >> the standard library >> - use shared memory to provide read access to the complete image >> - define slices of image blocks and add them to a processing queue >> >> On 31.08.2013 at 20:05, Riaan van den Dool wrote: >> >> > The blockproc function's signature provides a useful starting point, >> thanks.
>> > >> > R >> > >> > >> > >> > On Saturday, August 31, 2013 6:49:31 PM UTC+2, Johannes Schönberger >> wrote: >> > Hi Riaan, >> > >> > Unfortunately we do not have (at least I do not know of) a function >> similar to Matlab's `blockproc`. Such a feature would be a great addition to >> skimage! >> > >> > Regards, Johannes >> > >> > On 31.08.2013 at 16:04, Riaan van den Dool wrote: >> >> > >> > > Hi guys >> > > >> > > I would like to use scikit-image to process large images, for example >> (5696, 13500). >> > > >> > > In the interest of speed I need to divide the image into smaller >> sub-images with the possibility of processing these in parallel. >> > > >> > > If I define the sub-images so that neighbouring sub-images overlap >> then edge effects should not be a problem for the algorithm operating on >> each sub-image. >> > > >> > > This is probably a specific case of the more general >> border/edge-effect handling issue as addressed by the mode parameter here: >> > > >> http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.filters.convolve.html >> > > >> > > My questions: >> > > - Is there already an image-division function/strategy >> implemented in scikit-image? >> > > - Is this something that might be included in the future if an >> implementation is available? >> > > - Please share any references to articles or code that deal >> with this. >> > > Riaan
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Sat Aug 31 12:49:31 2013 From: jsch at demuc.de (=?windows-1252?Q?Johannes_Sch=F6nberger?=) Date: Sat, 31 Aug 2013 18:49:31 +0200 Subject: Dividing a large image into smaller overlapping blocks for parallel processing In-Reply-To: References: Message-ID: Hi Riaan, Unfortunately we do not have (at least I do not know of) a function similar to Matlab's `blockproc`. Such a feature would be a great addition to skimage! Regards, Johannes On 31.08.2013 at 16:04, Riaan van den Dool wrote: > Hi guys > > I would like to use scikit-image to process large images, for example (5696, 13500). > > In the interest of speed I need to divide the image into smaller sub-images with the possibility of processing these in parallel. > > If I define the sub-images so that neighbouring sub-images overlap then edge effects should not be a problem for the algorithm operating on each sub-image. > > This is probably a specific case of the more general border/edge-effect handling issue as addressed by the mode parameter here: > http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.filters.convolve.html > > My questions: > - Is there already an image-division function/strategy implemented in scikit-image? > - Is this something that might be included in the future if an implementation is available? > - Please share any references to articles or code that deal with this. > Riaan
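Riaan's idea of a function that "merely returns an array of block definitions" rather than blocks of pixel data could be sketched as below. This is purely illustrative: no such function exists in skimage at the time of writing, and the name and signature are made up.

```python
import numpy as np

def block_slices(shape, block_shape, margin):
    """Yield (outer, inner) slice pairs tiling a 2D shape.

    `inner` is the block itself; `outer` additionally includes up to
    `margin` pixels of overlap on each side (clipped at the image
    border), so a local filter's edge effects stay outside `inner`.
    """
    rows, cols = shape
    br, bc = block_shape
    for r0 in range(0, rows, br):
        for c0 in range(0, cols, bc):
            r1, c1 = min(r0 + br, rows), min(c0 + bc, cols)
            outer = (slice(max(r0 - margin, 0), min(r1 + margin, rows)),
                     slice(max(c0 - margin, 0), min(c1 + margin, cols)))
            inner = (slice(r0, r1), slice(c0, c1))
            yield outer, inner

# Cover a 10x7 image with 4x4 blocks and a 1-pixel overlap margin.
blocks = list(block_slices((10, 7), (4, 4), margin=1))
```

Each worker would then read `image[outer]`, process it, and write back only the part corresponding to `inner`; since the inner regions tile the image without overlap, the blocks can be processed independently.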
From jsch at demuc.de Sat Aug 31 14:17:25 2013 From: jsch at demuc.de (=?windows-1252?Q?Johannes_Sch=F6nberger?=) Date: Sat, 31 Aug 2013 20:17:25 +0200 Subject: Dividing a large image into smaller overlapping blocks for parallel processing In-Reply-To: <7de7cf83-88c8-4cae-9d7d-f6353ae19e7c@googlegroups.com> References: <7de7cf83-88c8-4cae-9d7d-f6353ae19e7c@googlegroups.com> Message-ID: <42B009F4-ECA8-4A51-B5BF-9301816FE55B@demuc.de> Some hints: - pad the image with skimage.util.pad, which allows a large number of padding methods - spawn a pool of processes using Python's multiprocessing package in the standard library - use shared memory to provide read access to the complete image - define slices of image blocks and add them to a processing queue On 31.08.2013 at 20:05, Riaan van den Dool wrote: > The blockproc function's signature provides a useful starting point, thanks. > http://www.mathworks.com/help/images/ref/blockproc.html > > I will have to think about how to do the parallel execution from the function. > > Blockproc provides two 'padding' methods: replicate and symmetric. I guess what I need could be called margin, or overlap perhaps. > > For the margin case it might make sense that such a function merely returns an array of block definitions, rather than blocks of pixel data. But this would not be so applicable for the replicate and symmetric cases, I think. > > R > > > > On Saturday, August 31, 2013 6:49:31 PM UTC+2, Johannes Schönberger wrote: > Hi Riaan, > > Unfortunately we do not have (at least I do not know of) a function similar to Matlab's `blockproc`. Such a feature would be a great addition to skimage! > > Regards, Johannes > > On 31.08.2013 at 16:04, Riaan van den Dool wrote: > > > Hi guys > > > > I would like to use scikit-image to process large images, for example (5696, 13500). > > > > In the interest of speed I need to divide the image into smaller sub-images with the possibility of processing these in parallel.
> > > > If I define the sub-images so that neighbouring sub-images overlap then edge effects should not be a problem for the algorithm operating on each sub-image. > > > > This is probably a specific case of the more general border/edge-effect handling issue as addressed by the mode parameter here: > > http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.filters.convolve.html > > > > My questions: > > - Is there already an image-division function/strategy implemented in scikit-image? > > - Is this something that might be included in the future if an implementation is available? > > - Please share any references to articles or code that deal with this. > > Riaan
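The hints above could be wired together roughly as follows. This is only a sketch: `np.pad` stands in for `skimage.util.pad`, a thread pool stands in for the suggested multiprocessing-plus-shared-memory setup, and `process_blocks` and its signature are invented for illustration. `func` must be shape-preserving, and `margin` must cover the footprint of whatever local filter `func` applies.

```python
import numpy as np
from multiprocessing.pool import ThreadPool

def process_blocks(image, block_size, margin, func, workers=4):
    # Pad once so every block can carry a `margin`-wide apron of
    # neighbouring pixels, absorbing edge effects.
    padded = np.pad(image, margin, mode='reflect')
    rows, cols = image.shape
    jobs = [(r0, min(r0 + block_size, rows), c0, min(c0 + block_size, cols))
            for r0 in range(0, rows, block_size)
            for c0 in range(0, cols, block_size)]

    def work(job):
        r0, r1, c0, c1 = job
        # Block plus apron, in padded coordinates.
        block = padded[r0:r1 + 2 * margin, c0:c1 + 2 * margin]
        out = func(block)
        # Crop the apron back off before stitching.
        return job, out[margin:margin + (r1 - r0), margin:margin + (c1 - c0)]

    result = np.empty_like(image)
    with ThreadPool(workers) as pool:
        for (r0, r1, c0, c1), cropped in pool.map(work, jobs):
            result[r0:r1, c0:c1] = cropped
    return result
```

For a genuinely local, shift-invariant `func`, the apron guarantees that the stitched result matches applying `func` to the whole padded image in one go.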