From dfarmernv at gmail.com Sat Apr 2 00:38:21 2011 From: dfarmernv at gmail.com (Dan Farmer) Date: Fri, 1 Apr 2011 21:38:21 -0700 Subject: Review: Canny In-Reply-To: References: Message-ID: I've pushed the suggested changes : https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny Thanks, Dan 2011/3/31 Dan Farmer : > I will try to address your comments tonight and also see about > removing smooth.py and just incorporating the one function we're using > into canny.py for now as Thouis suggested. Thanks to both of you for > looking at it. I'll message back when I've pushed the changes. > > -Dan > > 2011/3/31 St?fan van der Walt : >> Hi Dan >> >> On Thu, Mar 31, 2011 at 1:02 AM, Dan Farmer wrote: >>> https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny >> >> I read through your patch and made some preliminary comments. >> >>> Mostly just trying to follow procedure. I already mentioned my >>> concerns in the previous thread. I made one stab at introduced a >>> "None" default for the mask, but I got hung up and reverted it. The >>> default I was going to propose was np.ones(img.shape,bool) (and after >>> the fact I even noticed that's how it is used in one of the unit >>> tests). But I started thinking that that could be quite wasteful of >>> memory if you were working with large images (on my test use case with >>> ~512x512 images it's about 300 KB for the "fake" mask). >> >> It seems as though this specific implementation of the algorithms >> relies on creating the mask, so I don't think you can get away from >> it. ?The typical way to do it would be: >> >> def canny(..., mask=None, ...): >> ? ?if mask is None: >> ? ? ? ?mask = np.ones(x.shape, dtype=bool) >> >>> The problem I had was that if I don't allocate the emask array I get >>> run-time errors starting at line 129 (in the diff of canny.py) because >>> the arrays all have different lengths if they aren't logical_and'd >>> with emask above. >> >> Yes, I think the only way to avoid allocating the mask explicitly is >> to rewrite the algorithm in Cython, where you can modify behaviour >> inside the for-loop. >> >> Regards >> St?fan >> > From thouis.jones at curie.fr Sun Apr 3 16:09:03 2011 From: thouis.jones at curie.fr (Thouis (Ray) Jones) Date: Sun, 3 Apr 2011 22:09:03 +0200 Subject: Review: Canny In-Reply-To: References: Message-ID: Looks good to me. 2011/4/2 Dan Farmer : > I've pushed the suggested changes : > https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny > > Thanks, > Dan > > 2011/3/31 Dan Farmer : >> I will try to address your comments tonight and also see about >> removing smooth.py and just incorporating the one function we're using >> into canny.py for now as Thouis suggested. Thanks to both of you for >> looking at it. I'll message back when I've pushed the changes. >> >> -Dan >> >> 2011/3/31 St?fan van der Walt : >>> Hi Dan >>> >>> On Thu, Mar 31, 2011 at 1:02 AM, Dan Farmer wrote: >>>> https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny >>> >>> I read through your patch and made some preliminary comments. >>> >>>> Mostly just trying to follow procedure. I already mentioned my >>>> concerns in the previous thread. I made one stab at introduced a >>>> "None" default for the mask, but I got hung up and reverted it. The >>>> default I was going to propose was np.ones(img.shape,bool) (and after >>>> the fact I even noticed that's how it is used in one of the unit >>>> tests). 
But I started thinking that that could be quite wasteful of >>>> memory if you were working with large images (on my test use case with >>>> ~512x512 images it's about 300 KB for the "fake" mask). >>> >>> It seems as though this specific implementation of the algorithms >>> relies on creating the mask, so I don't think you can get away from >>> it. ?The typical way to do it would be: >>> >>> def canny(..., mask=None, ...): >>> ? ?if mask is None: >>> ? ? ? ?mask = np.ones(x.shape, dtype=bool) >>> >>>> The problem I had was that if I don't allocate the emask array I get >>>> run-time errors starting at line 129 (in the diff of canny.py) because >>>> the arrays all have different lengths if they aren't logical_and'd >>>> with emask above. >>> >>> Yes, I think the only way to avoid allocating the mask explicitly is >>> to rewrite the algorithm in Cython, where you can modify behaviour >>> inside the for-loop. >>> >>> Regards >>> St?fan >>> >> > From dfarmernv at gmail.com Tue Apr 5 01:51:52 2011 From: dfarmernv at gmail.com (Dan Farmer) Date: Mon, 4 Apr 2011 22:51:52 -0700 Subject: Review: Canny In-Reply-To: References: Message-ID: Thanks for the detailed feedback. I've pushed some more changes that I think cover everything. I left the mask and smoothing function for the moment based on Thouis feedback. https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny -Dan On Mon, Apr 4, 2011 at 7:56 PM, Chris Colbert wrote: > Hey Dan, > I just had a look at this code, and there are some things I would do before > pulling in the changes: > > Document which dtype is expected of the input image. It appears to work for > floats, int, uint8 etc, but each gives a different output result. Since the > tests are using floats, I assume a floating point image is expected. This > should be documented in the docstring along with the expected range of the > image. i.e. is it a 0.0 - 1.0 float image or a 0.0 - 255.0 float image. > Also, the docs should state it only works on 2D grayscale images, so the > user needs to convert their color image beforehand. > Document the valid ranges of values for the low threshold, and high > threshold. > There is a lot of use of np.logical_* functions. These are around 20% slower > than numpy's overloaded bitwise & and | operators. It seems these logical_* > functions are being applied to boolean images so the two operations (logical > and bitwise) are equivalent. I would use the faster of the two. It's faster > and easier to read. > ndimage has a sobel function, was there a particular reason you chose to do > a 2d convolution instead? 2 separable 1d convolutions as done by > ndimage.sobel should be faster than a 2d convolution (especially since > ndimage does optimizing cache manipulations under the covers). > Does the mask operate over any generic area, or is the mask expected to be > rectangular? If it's a rectangular mask, is there really a need for it when > the user could just pass in a slice to their image instead. > A minor pedantic gripe; I like to have spaces after commas in array indices > and tuples. i.e. this: (1, 2), arr[:, 5:6], instead of: (1,2), arr[:,5:6]. A > space immediately after a comma is recommended in PEP8. > Variable naming. Lots of single and two letter variables like 'c', 'c1, > 'c2', 'w', 'm', 'cc', etc... Let's give these descriptive names. > For computing magntitude, use np.hypot(isobel, jsobel), instead of the > manual computation you use. np.hypot is 2x faster on a 1000x1000 image since > it doesn't create any temporaries. 
> inline the smoothing the function, unless you're using it elsewhere. > > Lastly, thanks for working and spending time on this code! > Cheers! > Chris > > 2011/4/2 Dan Farmer >> >> I've pushed the suggested changes : >> >> https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny >> >> Thanks, >> Dan >> >> 2011/3/31 Dan Farmer : >> > I will try to address your comments tonight and also see about >> > removing smooth.py and just incorporating the one function we're using >> > into canny.py for now as Thouis suggested. Thanks to both of you for >> > looking at it. I'll message back when I've pushed the changes. >> > >> > -Dan >> > >> > 2011/3/31 St?fan van der Walt : >> >> Hi Dan >> >> >> >> On Thu, Mar 31, 2011 at 1:02 AM, Dan Farmer >> >> wrote: >> >>> >> >>> https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny >> >> >> >> I read through your patch and made some preliminary comments. >> >> >> >>> Mostly just trying to follow procedure. I already mentioned my >> >>> concerns in the previous thread. I made one stab at introduced a >> >>> "None" default for the mask, but I got hung up and reverted it. The >> >>> default I was going to propose was np.ones(img.shape,bool) (and after >> >>> the fact I even noticed that's how it is used in one of the unit >> >>> tests). But I started thinking that that could be quite wasteful of >> >>> memory if you were working with large images (on my test use case with >> >>> ~512x512 images it's about 300 KB for the "fake" mask). >> >> >> >> It seems as though this specific implementation of the algorithms >> >> relies on creating the mask, so I don't think you can get away from >> >> it. ?The typical way to do it would be: >> >> >> >> def canny(..., mask=None, ...): >> >> ? ?if mask is None: >> >> ? ? ? ?mask = np.ones(x.shape, dtype=bool) >> >> >> >>> The problem I had was that if I don't allocate the emask array I get >> >>> run-time errors starting at line 129 (in the diff of canny.py) because >> >>> the arrays all have different lengths if they aren't logical_and'd >> >>> with emask above. >> >> >> >> Yes, I think the only way to avoid allocating the mask explicitly is >> >> to rewrite the algorithm in Cython, where you can modify behaviour >> >> inside the for-loop. >> >> >> >> Regards >> >> St?fan >> >> >> > > > From sccolbert at gmail.com Mon Apr 4 22:56:48 2011 From: sccolbert at gmail.com (Chris Colbert) Date: Mon, 4 Apr 2011 22:56:48 -0400 Subject: Review: Canny In-Reply-To: References: Message-ID: Hey Dan, I just had a look at this code, and there are some things I would do before pulling in the changes: - Document which dtype is expected of the input image. It appears to work for floats, int, uint8 etc, but each gives a different output result. Since the tests are using floats, I assume a floating point image is expected. This should be documented in the docstring along with the expected range of the image. i.e. is it a 0.0 - 1.0 float image or a 0.0 - 255.0 float image. Also, the docs should state it only works on 2D grayscale images, so the user needs to convert their color image beforehand. - Document the valid ranges of values for the low threshold, and high threshold. - There is a lot of use of np.logical_* functions. These are around 20% slower than numpy's overloaded bitwise & and | operators. It seems these logical_* functions are being applied to boolean images so the two operations (logical and bitwise) are equivalent. I would use the faster of the two. It's faster and easier to read. 
- ndimage has a sobel function, was there a particular reason you chose to do a 2d convolution instead? 2 separable 1d convolutions as done by ndimage.sobel should be faster than a 2d convolution (especially since ndimage does optimizing cache manipulations under the covers). - Does the mask operate over any generic area, or is the mask expected to be rectangular? If it's a rectangular mask, is there really a need for it when the user could just pass in a slice to their image instead. - A minor pedantic gripe; I like to have spaces after commas in array indices and tuples. i.e. this: (1, 2), arr[:, 5:6], instead of: (1,2), arr[:,5:6]. A space immediately after a comma is recommended in PEP8. - Variable naming. Lots of single and two letter variables like 'c', 'c1, 'c2', 'w', 'm', 'cc', etc... Let's give these descriptive names. - For computing magntitude, use np.hypot(isobel, jsobel), instead of the manual computation you use. np.hypot is 2x faster on a 1000x1000 image since it doesn't create any temporaries. - inline the smoothing the function, unless you're using it elsewhere. Lastly, thanks for working and spending time on this code! Cheers! Chris 2011/4/2 Dan Farmer > I've pushed the suggested changes : > > https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny > > Thanks, > Dan > > 2011/3/31 Dan Farmer : > > I will try to address your comments tonight and also see about > > removing smooth.py and just incorporating the one function we're using > > into canny.py for now as Thouis suggested. Thanks to both of you for > > looking at it. I'll message back when I've pushed the changes. > > > > -Dan > > > > 2011/3/31 St?fan van der Walt : > >> Hi Dan > >> > >> On Thu, Mar 31, 2011 at 1:02 AM, Dan Farmer > wrote: > >>> > https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny > >> > >> I read through your patch and made some preliminary comments. > >> > >>> Mostly just trying to follow procedure. I already mentioned my > >>> concerns in the previous thread. I made one stab at introduced a > >>> "None" default for the mask, but I got hung up and reverted it. The > >>> default I was going to propose was np.ones(img.shape,bool) (and after > >>> the fact I even noticed that's how it is used in one of the unit > >>> tests). But I started thinking that that could be quite wasteful of > >>> memory if you were working with large images (on my test use case with > >>> ~512x512 images it's about 300 KB for the "fake" mask). > >> > >> It seems as though this specific implementation of the algorithms > >> relies on creating the mask, so I don't think you can get away from > >> it. The typical way to do it would be: > >> > >> def canny(..., mask=None, ...): > >> if mask is None: > >> mask = np.ones(x.shape, dtype=bool) > >> > >>> The problem I had was that if I don't allocate the emask array I get > >>> run-time errors starting at line 129 (in the diff of canny.py) because > >>> the arrays all have different lengths if they aren't logical_and'd > >>> with emask above. > >> > >> Yes, I think the only way to avoid allocating the mask explicitly is > >> to rewrite the algorithm in Cython, where you can modify behaviour > >> inside the for-loop. > >> > >> Regards > >> St?fan > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thouis.jones at curie.fr Tue Apr 5 01:16:21 2011 From: thouis.jones at curie.fr (Thouis (Ray) Jones) Date: Tue, 5 Apr 2011 07:16:21 +0200 Subject: Review: Canny In-Reply-To: References: Message-ID: I can answer two of these, quickly. The mask can be arbitrary. The smooth function should probably be kept separate, since in the future it will probably be used by other CellProfiler-based functions. Ray Jones On Tue, Apr 5, 2011 at 04:56, Chris Colbert wrote: > Hey Dan, > I just had a look at this code, and there are some things I would do before > pulling in the changes: > > Document which dtype is expected of the input image. It appears to work for > floats, int, uint8 etc, but each gives a different output result. Since the > tests are using floats, I assume a floating point image is expected. This > should be documented in the docstring along with the expected range of the > image. i.e. is it a 0.0 - 1.0 float image or a 0.0 - 255.0 float image. > Also, the docs should state it only works on 2D grayscale images, so the > user needs to convert their color image beforehand. > Document the valid ranges of values for the low threshold, and high > threshold. > There is a lot of use of np.logical_* functions. These are around 20% slower > than numpy's overloaded bitwise & and | operators. It seems these logical_* > functions are being applied to boolean images so the two operations (logical > and bitwise) are equivalent. I would use the faster of the two. It's faster > and easier to read. > ndimage has a sobel function, was there a particular reason you chose to do > a 2d convolution instead? 2 separable 1d convolutions as done by > ndimage.sobel should be faster than a 2d convolution (especially since > ndimage does optimizing cache manipulations under the covers). > Does the mask operate over any generic area, or is the mask expected to be > rectangular? If it's a rectangular mask, is there really a need for it when > the user could just pass in a slice to their image instead. > A minor pedantic gripe; I like to have spaces after commas in array indices > and tuples. i.e. this: (1, 2), arr[:, 5:6], instead of: (1,2), arr[:,5:6]. A > space immediately after a comma is recommended in PEP8. > Variable naming. Lots of single and two letter variables like 'c', 'c1, > 'c2', 'w', 'm', 'cc', etc... Let's give these descriptive names. > For computing magntitude, use np.hypot(isobel, jsobel), instead of the > manual computation you use. np.hypot is 2x faster on a 1000x1000 image since > it doesn't create any temporaries. > inline the smoothing the function, unless you're using it elsewhere. > > Lastly, thanks for working and spending time on this code! > Cheers! > Chris > > 2011/4/2 Dan Farmer >> >> I've pushed the suggested changes : >> >> https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny >> >> Thanks, >> Dan >> >> 2011/3/31 Dan Farmer : >> > I will try to address your comments tonight and also see about >> > removing smooth.py and just incorporating the one function we're using >> > into canny.py for now as Thouis suggested. Thanks to both of you for >> > looking at it. I'll message back when I've pushed the changes. >> > >> > -Dan >> > >> > 2011/3/31 St?fan van der Walt : >> >> Hi Dan >> >> >> >> On Thu, Mar 31, 2011 at 1:02 AM, Dan Farmer >> >> wrote: >> >>> >> >>> https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny >> >> >> >> I read through your patch and made some preliminary comments. 
>> >> >> >>> Mostly just trying to follow procedure. I already mentioned my >> >>> concerns in the previous thread. I made one stab at introduced a >> >>> "None" default for the mask, but I got hung up and reverted it. The >> >>> default I was going to propose was np.ones(img.shape,bool) (and after >> >>> the fact I even noticed that's how it is used in one of the unit >> >>> tests). But I started thinking that that could be quite wasteful of >> >>> memory if you were working with large images (on my test use case with >> >>> ~512x512 images it's about 300 KB for the "fake" mask). >> >> >> >> It seems as though this specific implementation of the algorithms >> >> relies on creating the mask, so I don't think you can get away from >> >> it. ?The typical way to do it would be: >> >> >> >> def canny(..., mask=None, ...): >> >> ? ?if mask is None: >> >> ? ? ? ?mask = np.ones(x.shape, dtype=bool) >> >> >> >>> The problem I had was that if I don't allocate the emask array I get >> >>> run-time errors starting at line 129 (in the diff of canny.py) because >> >>> the arrays all have different lengths if they aren't logical_and'd >> >>> with emask above. >> >> >> >> Yes, I think the only way to avoid allocating the mask explicitly is >> >> to rewrite the algorithm in Cython, where you can modify behaviour >> >> inside the for-loop. >> >> >> >> Regards >> >> St?fan >> >> >> > > > From stefan at sun.ac.za Tue Apr 5 06:18:17 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 5 Apr 2011 12:18:17 +0200 Subject: Review: Canny In-Reply-To: References: Message-ID: Hi Dan On Tue, Apr 5, 2011 at 7:51 AM, Dan Farmer wrote: > Thanks for the detailed feedback. I've pushed some more changes that I > think cover everything. I left the mask and smoothing function for the > moment based on Thouis feedback. > > https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny I think we're almost ready to pull! Some last nitpicks: - The docstring format requires indentation of items, e.g. image : array The image to smooth - PEP8 suggests not indenting equal signs to be aligned: not_mask = np.logical_not(mask) bleed_over = function(mask.astype(float)) ... With these and Chris's changes, I think we're good to go! Thanks a lot for your effort. Cheers St?fan From eads at soe.ucsc.edu Tue Apr 5 17:27:42 2011 From: eads at soe.ucsc.edu (Damian Eads) Date: Tue, 5 Apr 2011 14:27:42 -0700 Subject: Connected components labelling In-Reply-To: References: Message-ID: Hi, Nice work. Connected components is a very useful algorithm. Can you clarify your definition of neighbouring? Does it apply to pixels diagonal to one another (8 connectivity) or not (4 connectivity)? Damian 2011/4/5 St?fan van der Walt : > Hi all, > > I've added connected components labelling to the morphology module > under the 'ccomp' branch. ?I'd be glad if you could take a look and > suggest improvements. > > https://github.com/stefanv/scikits.image/compare/master...ccomp > > Regards > St?fan From stefan at sun.ac.za Tue Apr 5 17:13:12 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 5 Apr 2011 23:13:12 +0200 Subject: Connected components labelling Message-ID: Hi all, I've added connected components labelling to the morphology module under the 'ccomp' branch. I'd be glad if you could take a look and suggest improvements. 
https://github.com/stefanv/scikits.image/compare/master...ccomp Regards St?fan From dfarmernv at gmail.com Thu Apr 7 01:13:18 2011 From: dfarmernv at gmail.com (Dan Farmer) Date: Wed, 6 Apr 2011 22:13:18 -0700 Subject: Review: Canny In-Reply-To: References: Message-ID: Ok, I've pushed those changes. Thanks, Dan 2011/4/5 St?fan van der Walt : > Hi Dan > > On Tue, Apr 5, 2011 at 7:51 AM, Dan Farmer wrote: >> Thanks for the detailed feedback. I've pushed some more changes that I >> think cover everything. I left the mask and smoothing function for the >> moment based on Thouis feedback. >> >> https://github.com/dfarmer/scikits.image/compare/master...dfarmer-filters-canny > > I think we're almost ready to pull! > > Some last nitpicks: > > - The docstring format requires indentation of items, e.g. > > ? ?image : array > ? ? ? ?The image to smooth > > - PEP8 suggests not indenting equal signs to be aligned: > > ? ?not_mask ? ? ? ? ? ? ? = np.logical_not(mask) > ? ?bleed_over ? ? ? ? ? ? = function(mask.astype(float)) > ? ?... > > With these and Chris's changes, I think we're good to go! > > Thanks a lot for your effort. > > Cheers > St?fan > From stefan at sun.ac.za Thu Apr 7 09:05:14 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 7 Apr 2011 15:05:14 +0200 Subject: Review: Canny In-Reply-To: References: Message-ID: Hi Dan 2011/4/7 Dan Farmer : > Ok, I've pushed those changes. Thanks so much for all your effort! I've merged your branch. I had to make one small modification, shown here: https://github.com/stefanv/scikits.image/commit/97806c76f4c0b4fa6db00cf39d01741fba9bd55d Cheers St?fan From alex.liberzon at gmail.com Mon Apr 11 13:44:18 2011 From: alex.liberzon at gmail.com (Alex) Date: Mon, 11 Apr 2011 10:44:18 -0700 (PDT) Subject: Help request Message-ID: <1d89628f-2840-4214-9941-77f625f73a89@e9g2000vbk.googlegroups.com> Dear skikits-image members, I have some difficulty to identify specific objects from the grayscale image. For example, in this image https://picasaweb.google.com/lh/photo/FYZ1_F4OAoe920iYj8bk7A?feat=directlink one can see about 15 glass beads on the floor illuminated from the left side. Human eye identifies them easily, but I cannot find the efficient way to identify them in such an image. Using Matlab the approach was 1) crop out one of the beads as is from the given image, and then 2)using normalized cross-correlation (normxcorr2) of the cropped bead with the image to identify the 50 - 60% of beads. The problems are due to uneven background illumination, different size of the beads and different light pattern due to their angle in respect to the light source. I could define the the problem of looking a feature that has two not- connected bright regions, separated by some distance that could be known within some reasonable limits. Any suggestion is gratefully appreciated. If possible, the pseudo-code would be helpful as I'm not yet familiar with all the possibilities of this great skikit. Thanks, Alex From stefan at sun.ac.za Mon Apr 11 04:46:46 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 11 Apr 2011 10:46:46 +0200 Subject: Connected components labelling In-Reply-To: References: Message-ID: > > 2011/4/11 St?fan van der Walt : >> Hi Damian >> >> 2011/4/5 Damian Eads : >>> Nice work. Connected components is a very useful algorithm. Can you >>> clarify your definition of neighbouring? Does it apply to pixels >>> diagonal to one another (8 connectivity) or not (4 connectivity)? 
>> >> Thanks for the comments! Currently, I use 8-connectivity. Should I >> make 4-connectivity an option, or simply document that? 2011/4/11 Damian Eads : > No problem! :) Whether you should implement 4-connectivity as an > option is up to you. I just wasn't sure which one the implementation > was using so documenting it would help. :) Great, I updated the docstring and merged. I personally prefer 8-connectivity because it is slightly more rotationally invariant. Regards Stéfan From alex.liberzon at gmail.com Tue Apr 12 02:47:00 2011 From: alex.liberzon at gmail.com (Alex) Date: Mon, 11 Apr 2011 23:47:00 -0700 (PDT) Subject: Help request In-Reply-To: References: <1d89628f-2840-4214-9941-77f625f73a89@e9g2000vbk.googlegroups.com> Message-ID: <956ac5cd-92a4-4aec-adf7-ae8080671db7@z31g2000vbs.googlegroups.com> All is right. We think in the same direction. BUT, this will solve the problem for the next experiment, hopefully. The question is what to do with the given image? I am looking into things like: a) multiscale cross correlation or other template matching b) hit-and-miss morphology, again multiscale. There is also a step of adaptive/smart background elimination, as far as I can understand. Simple median, Gaussian, and morphological top-hat filters help, but do not solve it completely. Please bring up your brilliant ideas. Thanks Alex On Apr 12, 8:57 am, Maël Primet wrote: > Hi Alex, > > why not using colored beads? and why using a focus that blurs some of the > beads? > you could indeed either do cross correlation using numpy, or try to detect > highlights that appear in pairs (but then when several beads are close from > each other, you need to find a way to connect the two highlights from the > same bead, and not from two different beads) > if your image comes from a movie, perhaps you could also track the beads > from frame to frame and thus have a more precise idea of their locations in > the next frame From jeanpatrick.pommier at gmail.com Tue Apr 12 07:16:27 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Tue, 12 Apr 2011 04:16:27 -0700 (PDT) Subject: "sudo easy_install scikits.image" fails Message-ID: Dear all, The installation of scikits.image fails on my Ubuntu 10.10 32-bit laptop as the following output shows. Thanks for your advice on solving that problem. 
regards Jean-Patrick Pommier sudo easy_install scikits.image Searching for scikits.image Reading http://pypi.python.org/simple/scikits.image/ Reading http://stefanv.github.com/scikits.image Reading http://github.com/stefanv/scikits.image Best match: scikits.image 0.2.2 Downloading http://pypi.python.org/packages/source/s/scikits.image/scikits.image-0.2.2.tar.gz#md5=53ff771ccbef1661c6f8e35da86ecb2e Processing scikits.image-0.2.2.tar.gz Running scikits.image-0.2.2/setup.py -q bdist_egg --dist-dir /tmp/ easy_install-fktrwG/scikits.image-0.2.2/egg-dist-tmp-sKcnP4 Warning: Assuming default configuration (scikits/ {setup_scikits,setup}.py was not found) Appending scikits configuration to Ignoring attempt to set 'name' (from '' to 'scikits') Traceback (most recent call last): File "/usr/local/bin/easy_install", line 9, in load_entry_point('distribute==0.6.15', 'console_scripts', 'easy_install')() File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/command/easy_install.py", line 1858, in main with_ei_usage(lambda: File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/command/easy_install.py", line 1839, in with_ei_usage return f() File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/command/easy_install.py", line 1862, in distclass=DistributionWithoutHelpCommands, **kw File "/usr/lib/python2.6/distutils/core.py", line 152, in setup dist.run_commands() File "/usr/lib/python2.6/distutils/dist.py", line 975, in run_commands self.run_command(cmd) File "/usr/lib/python2.6/distutils/dist.py", line 995, in run_command cmd_obj.run() File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/command/easy_install.py", line 344, in run self.easy_install(spec, not self.no_deps) File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/command/easy_install.py", line 584, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/command/easy_install.py", line 614, in install_item dists = self.install_eggs(spec, download, tmpdir) File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/command/easy_install.py", line 804, in install_eggs return self.build_and_install(setup_script, setup_base) File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/command/easy_install.py", line 1081, in build_and_install self.run_setup(setup_script, setup_base, args) File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/command/easy_install.py", line 1070, in run_setup run_setup(setup_script, args) File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/sandbox.py", line 29, in run_setup lambda: execfile( File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/sandbox.py", line 70, in run return func() File "/usr/local/lib/python2.6/dist-packages/distribute-0.6.15- py2.6.egg/setuptools/sandbox.py", line 31, in {'__file__':setup_script, '__name__':'__main__'} File "setup.py", line 80, in File "/usr/lib/python2.6/dist-packages/numpy/distutils/core.py", line 150, in setup config = configuration() File "setup.py", line 34, in configuration File "/usr/lib/python2.6/dist-packages/numpy/distutils/ misc_util.py", line 852, in add_subpackage caller_level = 2) File "/usr/lib/python2.6/dist-packages/numpy/distutils/ misc_util.py", line 835, in 
get_subpackage caller_level = caller_level + 1) File "/usr/lib/python2.6/dist-packages/numpy/distutils/ misc_util.py", line 782, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scikits/image/setup.py", line 8, in configuration File "/usr/lib/python2.6/dist-packages/numpy/distutils/ misc_util.py", line 852, in add_subpackage caller_level = 2) File "/usr/lib/python2.6/dist-packages/numpy/distutils/ misc_util.py", line 835, in get_subpackage caller_level = caller_level + 1) File "/usr/lib/python2.6/dist-packages/numpy/distutils/ misc_util.py", line 767, in _get_configuration_from_setup_py ('.py', 'U', 1)) File "scikits/image/opencv/setup.py", line 3, in ImportError: No module named image._build From mael.primet at gmail.com Tue Apr 12 01:57:09 2011 From: mael.primet at gmail.com (=?UTF-8?B?TWHDq2wgUHJpbWV0?=) Date: Tue, 12 Apr 2011 07:57:09 +0200 Subject: Help request In-Reply-To: <1d89628f-2840-4214-9941-77f625f73a89@e9g2000vbk.googlegroups.com> References: <1d89628f-2840-4214-9941-77f625f73a89@e9g2000vbk.googlegroups.com> Message-ID: Hi Alex, why not using colored beads? and why using a focus that blurs some of the beads? you could indeed either do cross correlation using numpy, or try to detect highlights that appear in pairs (but then when several beads are close from each other, you need to find a way to connect the two highlights from the same bead, and not from two different beads) if your image comes from a movie, perhaps you could also track the beads from frame to frame and thus have a more precise idea of their locations in the next frame -------------- next part -------------- An HTML attachment was scrubbed... URL: From mael.primet at gmail.com Tue Apr 12 02:59:39 2011 From: mael.primet at gmail.com (=?UTF-8?B?TWHDq2wgUHJpbWV0?=) Date: Tue, 12 Apr 2011 08:59:39 +0200 Subject: Help request In-Reply-To: <956ac5cd-92a4-4aec-adf7-ae8080671db7@z31g2000vbs.googlegroups.com> References: <1d89628f-2840-4214-9941-77f625f73a89@e9g2000vbk.googlegroups.com> <956ac5cd-92a4-4aec-adf7-ae8080671db7@z31g2000vbs.googlegroups.com> Message-ID: For denoising, you can use high performance tvdenoise on my scikits fork at github.com (user maelp) but for some reason (mainly because my code is C) people in scikits don't want to merge my code to the main branch On Tue, Apr 12, 2011 at 08:47, Alex wrote: > All is right. We think in the same direction. BUT, this will solve the > problem for the next experiment, hopefully. The question is what to do > with the given image? I am looking into things like: > a) multiscale cross correlation or other template matching > b) hit-and-miss morphology, again multiscale > > there is also a step of adaptive/smart background elimination as far > as I can understand. Simple median, gaussian, morphological top-hat > filters help, but not solve it completely. > > > Please, bring up your brilliant ideas. > > Thanks > Alex > > On Apr 12, 8:57 am, Ma?l Primet wrote: > > Hi Alex, > > > > why not using colored beads? and why using a focus that blurs some of the > > beads? 
> > you could indeed either do cross correlation using numpy, or try to > detect > > highlights that appear in pairs (but then when several beads are close > from > > each other, you need to find a way to connect the two highlights from the > > same bead, and not from two different beads) > > if your image comes from a movie, perhaps you could also track the beads > > from frame to frame and thus have a more precise idea of their locations > in > > the next frame > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue Apr 12 06:04:39 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 12 Apr 2011 12:04:39 +0200 Subject: Help request In-Reply-To: References: <1d89628f-2840-4214-9941-77f625f73a89@e9g2000vbk.googlegroups.com> <956ac5cd-92a4-4aec-adf7-ae8080671db7@z31g2000vbs.googlegroups.com> Message-ID: Hi Ma?l On Tue, Apr 12, 2011 at 8:59 AM, Ma?l Primet wrote: > For denoising, you can use high performance tvdenoise on my scikits fork at > github.com (user maelp) but for some reason (mainly because my code is C) > people in scikits don't want to merge my code to the main branch While we can't commit the time to maintain C code in the scikit, we'd still love to have your contributions. Amongst other things, your scivi patches look very interesting. Don't give up on us yet :) Cheers St?fan From mael.primet at gmail.com Tue Apr 12 06:11:17 2011 From: mael.primet at gmail.com (=?UTF-8?B?TWHDq2wgUHJpbWV0?=) Date: Tue, 12 Apr 2011 12:11:17 +0200 Subject: Help request In-Reply-To: References: <1d89628f-2840-4214-9941-77f625f73a89@e9g2000vbk.googlegroups.com> <956ac5cd-92a4-4aec-adf7-ae8080671db7@z31g2000vbs.googlegroups.com> Message-ID: No problem, it was not a personal criticism, it's just that I understand you don't want C code in scikits and I won't have time to rewrite all code. On the other hand the C code is mainly numerical code (eg you can almost translate it to python code by pasting) so it should not be a burden to maintain, or at least to integrate temporarily until someone wants to convert it to efficient cython Also many researchers in computer vision love to write C code for their numerical code because it is close to mathematics (most of the algorithms are just additions and multiplications in vision) and it's efficient, so they'd like to use an environment where they can access both the power of C and the GUI and dynamism features of Python as far as scivi is concerned, yes the viewer that I wrote in the fork is quite usable and I'd love if you could include it in the main fork as I think it might be of use for a large number of people keep up your good work see you soon Ma?l 2011/4/12 St?fan van der Walt > Hi Ma?l > > On Tue, Apr 12, 2011 at 8:59 AM, Ma?l Primet > wrote: > > For denoising, you can use high performance tvdenoise on my scikits fork > at > > github.com (user maelp) but for some reason (mainly because my code is > C) > > people in scikits don't want to merge my code to the main branch > > While we can't commit the time to maintain C code in the scikit, we'd > still love to have your contributions. Amongst other things, your > scivi patches look very interesting. > > Don't give up on us yet :) > > Cheers > St?fan > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thouis at gmail.com Tue Apr 12 06:15:37 2011 From: thouis at gmail.com (Thouis (Ray) Jones) Date: Tue, 12 Apr 2011 12:15:37 +0200 Subject: Help needed using pymorph In-Reply-To: <4685b5c9-3331-4631-8c33-3eecd6377ee1@p16g2000vbi.googlegroups.com> References: <4685b5c9-3331-4631-8c33-3eecd6377ee1@p16g2000vbi.googlegroups.com> Message-ID: (Note, I'm responding to both pythonvision and scikits-image. Sorry to those people only subscribed to one list.) I think more information might be helpful. - How many images do you have that you need to analze? - Are they movies, or completely separate frames? - What is common in the images (background, lighting, camera pose?) I wouldn't worry about efficient multi-template matching until you know if multi-template matching works at all. If template matching can get 60% of the beads, though, I expect multiple templates should be able to get almost all of them, possibly using a voting scheme where you require a bead to match multiple different templates before reporting it as a "hit". You might look at ilastik (http://www.ilastik.org/) as another approach. Thouis Jones On Tue, Apr 12, 2011 at 06:24, Alex Liberzon wrote: > Thanks for the feedback. > > There are both effects, the beads are different sizes and also as you > noticed at different depths. but the variation is not very large, i.e. > within a difference of 10 - 15 pixels I'd say. I have no idea how to > use multiple templates efficiently. Of course I can repeat normalized > cross correlation attempt few times. > > I can modify the experiment, so the next time will be hopefully > better. Meanwhile, this data is important to extract from the images > as is. > > Regards, > Alex > > On Apr 11, 10:56?pm, "Thouis (Ray) Jones" wrote: >> Are the beads actually different sizes, or just at different depths in >> the image. ?And how fixed is your camera system relative to the scene? >> ?I ask, because it seems like you could use multiple templates, >> parameterized by image position, to adjust for the size and blur >> variation. >> >> Also, do you have the option of modifying the scene illumination? ?Can >> you use color, or multiple exposures, or are the images "as-is" and >> there's no option for making the beads more visible? >> >> Best, >> Ray Jones >> >> >> >> On Mon, Apr 11, 2011 at 19:49, Alex Liberzon wrote: >> > Dear pymorph members, >> >> > I have some difficulty to identify specific objects from the grayscale >> > image. For example, in this image >> >https://picasaweb.google.com/lh/photo/FYZ1_F4OAoe920iYj8bk7A?feat=dir... >> > one can see about 15 glass beads on the floor illuminated from the >> > left side. Human eye identifies them easily, but I cannot find the >> > efficient way to identify them in such an image. Using Matlab the >> > approach was 1) crop out one of the beads as is from the given image, >> > and then 2)using normalized cross-correlation (normxcorr2) of the >> > cropped bead with the image to identify the 50 - 60% of beads. The >> > problems are due to uneven background illumination, different size of >> > the beads and different light pattern due to their angle in respect to >> > the light source. >> > I could define the the problem of looking a feature that has two not- >> > connected bright regions, separated by some distance that could be >> > known within some reasonable limits. >> >> > Any suggestion is gratefully appreciated. If possible, the pseudo-code >> > would be helpful. 
Note that I also posted this question on skikits- >> > image mailing list so if you receive it double, I apologize for that. >> >> > Thanks, >> > Alex From stefan at sun.ac.za Tue Apr 12 06:18:52 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 12 Apr 2011 12:18:52 +0200 Subject: Help request In-Reply-To: References: <1d89628f-2840-4214-9941-77f625f73a89@e9g2000vbk.googlegroups.com> Message-ID: On Tue, Apr 12, 2011 at 7:57 AM, Ma?l Primet wrote: > why not using colored beads? and why using a focus that blurs some of the > beads? I think this is probably the worst issue. You are using a big aperture (I assume in order to have a short exposure time), but that leads to very small depth of field. You may want to try different settings: higher ISO, smaller aperture maybe, in order to get a sharp image. The problem is that you don't have a defined shape or strong edges; by improving the input data, the problem will be much simplified. One approach that might work with this data is to do edge detection, and then to find pairs of bright blobs (the two glints on the beads). Those seem to stand out fairly clearly: http://mentat.za.net/refer/cam_sobel.jpg Regards St?fan From stefan at sun.ac.za Tue Apr 12 10:11:32 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 12 Apr 2011 16:11:32 +0200 Subject: "sudo easy_install scikits.image" fails In-Reply-To: References: Message-ID: Hi Jean-Patrick On Tue, Apr 12, 2011 at 1:16 PM, jip wrote: > Dear all, > The installation of scikits.image fails on my ubuntu10.10 32bits > laptop as the following output shows. > thanks for yours advices to solve that problem. Thanks for the feedback. Guess it's time for release 0.3! Could you do me a favour and try to build the latest version from github? http://github.com/stefanv/scikits.image Regards St?fan From jeanpatrick.pommier at gmail.com Wed Apr 13 02:56:30 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Tue, 12 Apr 2011 23:56:30 -0700 (PDT) Subject: "sudo easy_install scikits.image" fails In-Reply-To: References: Message-ID: <98ffbf24-77b0-4778-8946-7c7b6997792e@a26g2000vbo.googlegroups.com> Hi St?phan, That doesn't build. I have both opencv2.1 from ubuntu repo and opencv2.2 build from source: creating build/temp.linux-i686-2.6 compile options: '-I/usr/lib/python2.6/dist-packages/numpy/core/ include -I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/ include/python2.6 -c' gcc: opencv_backend.c gcc: opencv_backend.c: Aucun fichier ou dossier de ce type gcc: no input files gcc: opencv_backend.c: Aucun fichier ou dossier de ce type gcc: no input files error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv - O2 -Wall -Wstrict-prototypes -fPIC -I/usr/lib/python2.6/dist-packages/ numpy/core/include -I/usr/lib/python2.6/dist-packages/numpy/core/ include -I/usr/include/python2.6 -c opencv_backend.c -o build/ temp.linux-i686-2.6/opencv_backend.o" failed with exit status 1 locate opencv_backend.o doesn't yield anything regards Jean-Patrick On 12 avr, 16:11, St?fan van der Walt wrote: > Hi Jean-Patrick > > On Tue, Apr 12, 2011 at 1:16 PM, jip wrote: > > Dear all, > > The installation of scikits.image fails on my ubuntu10.10 32bits > > laptop as the following output shows. > > thanks for yours advices to solve that problem. > > Thanks for the feedback. ?Guess it's time for release 0.3! > > Could you do me a favour and try to build the latest version from github? 
> > http://github.com/stefanv/scikits.image > > Regards > St?fan From jeanpatrick.pommier at gmail.com Wed Apr 13 04:46:28 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Wed, 13 Apr 2011 01:46:28 -0700 (PDT) Subject: "sudo easy_install scikits.image" fails In-Reply-To: References: <98ffbf24-77b0-4778-8946-7c7b6997792e@a26g2000vbo.googlegroups.com> Message-ID: <48f844d4-ee73-4309-a145-a08d2a8651e4@z31g2000vbs.googlegroups.com> Hi, cython -V Cython version 0.12.1 On 13 avr, 10:34, St?fan van der Walt wrote: > On Wed, Apr 13, 2011 at 8:56 AM, jip wrote: > > creating build/temp.linux-i686-2.6 > > compile options: '-I/usr/lib/python2.6/dist-packages/numpy/core/ > > include -I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/ > > include/python2.6 -c' > > gcc: opencv_backend.c > > gcc: opencv_backend.c: Aucun fichier ou dossier de ce type > > Looks like it can't find the .c files. ?Can you provide the output of > the "cython -v" command, please? > > I can chat to you on irc or gtalk, if that would help to resolve the > issue more quickly. > On thuesday ? > Regards > St?fan From jeanpatrick.pommier at gmail.com Wed Apr 13 05:39:26 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Wed, 13 Apr 2011 02:39:26 -0700 (PDT) Subject: "sudo easy_install scikits.image" fails In-Reply-To: References: <98ffbf24-77b0-4778-8946-7c7b6997792e@a26g2000vbo.googlegroups.com> <48f844d4-ee73-4309-a145-a08d2a8651e4@z31g2000vbs.googlegroups.com> Message-ID: Hi, I upgrade cython to 0.14.1 and everything went fine: Adding scikits.image 0.3dev to easy-install.pth file Installing scivi script to /usr/local/bin Installed /usr/local/lib/python2.6/dist-packages/scikits.image-0.3dev- py2.6-linux-i686.egg Processing dependencies for scikits.image==0.3dev Finished processing dependencies for scikits.image==0.3dev Thanks Jean-Patrick http://dip4fish.blogspot.com/ On 13 avr, 10:59, St?fan van der Walt wrote: > On Wed, Apr 13, 2011 at 10:46 AM, jip wrote: > > cython -V > > Cython version 0.12.1 > > This would be the problem; you need version 0.13 or later to compile > the latest Git version of the scikit. > > Cheers > St?fan From stefan at sun.ac.za Wed Apr 13 04:34:57 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 13 Apr 2011 10:34:57 +0200 Subject: "sudo easy_install scikits.image" fails In-Reply-To: <98ffbf24-77b0-4778-8946-7c7b6997792e@a26g2000vbo.googlegroups.com> References: <98ffbf24-77b0-4778-8946-7c7b6997792e@a26g2000vbo.googlegroups.com> Message-ID: On Wed, Apr 13, 2011 at 8:56 AM, jip wrote: > creating build/temp.linux-i686-2.6 > compile options: '-I/usr/lib/python2.6/dist-packages/numpy/core/ > include -I/usr/lib/python2.6/dist-packages/numpy/core/include -I/usr/ > include/python2.6 -c' > gcc: opencv_backend.c > gcc: opencv_backend.c: Aucun fichier ou dossier de ce type Looks like it can't find the .c files. Can you provide the output of the "cython -v" command, please? I can chat to you on irc or gtalk, if that would help to resolve the issue more quickly. Regards St?fan From alex.liberzon at gmail.com Wed Apr 13 13:47:44 2011 From: alex.liberzon at gmail.com (Alex) Date: Wed, 13 Apr 2011 10:47:44 -0700 (PDT) Subject: Help needed using pymorph In-Reply-To: References: <4685b5c9-3331-4631-8c33-3eecd6377ee1@p16g2000vbi.googlegroups.com> Message-ID: Hi, Thanks to all for the help. 
- there are hundreds to thousands of images to analyze - these are separate frames but taken at short time intervals such that it's possible to consider them as a sequence of movie frames - all these: background, lighting and camera pose are common. But, in addition to the specific balls moving in the frames, there are also other particles moving that need to be tracked separately. Hope it helps, Alex On Apr 12, 1:15?pm, "Thouis (Ray) Jones" wrote: > (Note, I'm responding to both pythonvision and scikits-image. ?Sorry > to those people only subscribed to one list.) > > I think more information might be helpful. > > - How many images do you have that you need to analze? > - Are they movies, or completely separate frames? > - What is common in the images (background, lighting, camera pose?) > > I wouldn't worry about efficient multi-template matching until you > know if multi-template matching works at all. ?If template matching > can get 60% of the beads, though, I expect multiple templates should > be able to get almost all of them, possibly using a voting scheme > where you require a bead to match multiple different templates before > reporting it as a "hit". > > You might look at ilastik (http://www.ilastik.org/) as another approach. > > Thouis Jones > > > > On Tue, Apr 12, 2011 at 06:24, Alex Liberzon wrote: > > Thanks for the feedback. > > > There are both effects, the beads are different sizes and also as you > > noticed at different depths. but the variation is not very large, i.e. > > within a difference of 10 - 15 pixels I'd say. I have no idea how to > > use multiple templates efficiently. Of course I can repeat normalized > > cross correlation attempt few times. > > > I can modify the experiment, so the next time will be hopefully > > better. Meanwhile, this data is important to extract from the images > > as is. > > > Regards, > > Alex > > > On Apr 11, 10:56?pm, "Thouis (Ray) Jones" wrote: > >> Are the beads actually different sizes, or just at different depths in > >> the image. ?And how fixed is your camera system relative to the scene? > >> ?I ask, because it seems like you could use multiple templates, > >> parameterized by image position, to adjust for the size and blur > >> variation. > > >> Also, do you have the option of modifying the scene illumination? ?Can > >> you use color, or multiple exposures, or are the images "as-is" and > >> there's no option for making the beads more visible? > > >> Best, > >> Ray Jones > > >> On Mon, Apr 11, 2011 at 19:49, Alex Liberzon wrote: > >> > Dear pymorph members, > > >> > I have some difficulty to identify specific objects from the grayscale > >> > image. For example, in this image > >> >https://picasaweb.google.com/lh/photo/FYZ1_F4OAoe920iYj8bk7A?feat=dir... > >> > one can see about 15 glass beads on the floor illuminated from the > >> > left side. Human eye identifies them easily, but I cannot find the > >> > efficient way to identify them in such an image. Using Matlab the > >> > approach was 1) crop out one of the beads as is from the given image, > >> > and then 2)using normalized cross-correlation (normxcorr2) of the > >> > cropped bead with the image to identify the 50 - 60% of beads. The > >> > problems are due to uneven background illumination, different size of > >> > the beads and different light pattern due to their angle in respect to > >> > the light source. 
> >> > I could define the the problem of looking a feature that has two not- > >> > connected bright regions, separated by some distance that could be > >> > known within some reasonable limits. > > >> > Any suggestion is gratefully appreciated. If possible, the pseudo-code > >> > would be helpful. Note that I also posted this question on skikits- > >> > image mailing list so if you receive it double, I apologize for that. > > >> > Thanks, > >> > Alex From stefan at sun.ac.za Wed Apr 13 04:59:32 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 13 Apr 2011 10:59:32 +0200 Subject: "sudo easy_install scikits.image" fails In-Reply-To: <48f844d4-ee73-4309-a145-a08d2a8651e4@z31g2000vbs.googlegroups.com> References: <98ffbf24-77b0-4778-8946-7c7b6997792e@a26g2000vbo.googlegroups.com> <48f844d4-ee73-4309-a145-a08d2a8651e4@z31g2000vbs.googlegroups.com> Message-ID: On Wed, Apr 13, 2011 at 10:46 AM, jip wrote: > cython -V > Cython version 0.12.1 This would be the problem; you need version 0.13 or later to compile the latest Git version of the scikit. Cheers St?fan From dfarmernv at gmail.com Tue Apr 19 10:05:58 2011 From: dfarmernv at gmail.com (Dan Farmer) Date: Tue, 19 Apr 2011 07:05:58 -0700 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: My first thought was the same as Chris's. -Dan On Tue, Apr 19, 2011 at 6:54 AM, Chris Colbert wrote: > Hmmm, I'm not too sure of the need for this, given that ndimage already has > a Sobel and Prewitt filter which are implemented via separable 1D > convolutions (and will thus be faster than a 2D convolution). I can see this > being useful if we want to remove the dependency on ndimage at some point, > once we have our own fast convolution routine (or just have a convolution > that is faster that the one in ndimage). However in that case, I think these > two filters should still be implemented as separable 1D filters. > > 2011/4/19 St?fan van der Walt >> >> Hi all, >> >> Pieter Holtzhausen made a pull request to bring over the Sobel / >> Prewitt edge detection filters from CellProfiler. ?I reviewed and >> merged his code, alongside with the connected components changes >> suggested last week. ?You can find the changesets here: >> >> >> https://github.com/stefanv/scikits.image/commit/cef53c172dd3e15dc98ddbf97a908c0f67a7283b >> >> Please review and comment--you input is highly valued! >> >> Regards >> St?fan > > From sccolbert at gmail.com Tue Apr 19 09:54:40 2011 From: sccolbert at gmail.com (Chris Colbert) Date: Tue, 19 Apr 2011 09:54:40 -0400 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: Hmmm, I'm not too sure of the need for this, given that ndimage already has a Sobel and Prewitt filter which are implemented via separable 1D convolutions (and will thus be faster than a 2D convolution). I can see this being useful if we want to remove the dependency on ndimage at some point, once we have our own fast convolution routine (or just have a convolution that is faster that the one in ndimage). However in that case, I think these two filters should still be implemented as separable 1D filters. 2011/4/19 St?fan van der Walt > Hi all, > > Pieter Holtzhausen made a pull request to bring over the Sobel / > Prewitt edge detection filters from CellProfiler. I reviewed and > merged his code, alongside with the connected components changes > suggested last week. 
You can find the changesets here: > > > https://github.com/stefanv/scikits.image/commit/cef53c172dd3e15dc98ddbf97a908c0f67a7283b > > Please review and comment--you input is highly valued! > > Regards > St?fan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Tue Apr 19 11:12:20 2011 From: sccolbert at gmail.com (Chris Colbert) Date: Tue, 19 Apr 2011 11:12:20 -0400 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: what type of post-processing are you doing? It looks to me like you are getting wrap-around error. The sobel output is signed and will have negative values. 2011/4/19 St?fan van der Walt > 2011/4/19 St?fan van der Walt : > > I think there was a concern about the directionality of the > > scipy.ndimage implementation. Maybe we can modify the version in the > > scikit to make use of the scipy.ndimage one, but to compute the > > horizontal, vertical and averaged sobels? > > Looking at this further, I am not happy with the result of either > scikits.image or scipy.ndimage: > > http://mentat.za.net/refer/sobel.jpg > > Compare this, e.g., to the wikipedia page here: > > http://en.wikipedia.org/wiki/Sobel_operator > > St?fan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Tue Apr 19 11:33:16 2011 From: sccolbert at gmail.com (Chris Colbert) Date: Tue, 19 Apr 2011 11:33:16 -0400 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: supporting integer images is a must IMO. 2011/4/19 St?fan van der Walt > 2011/4/19 St?fan van der Walt : > > Looking at this further, I am not happy with the result of either > > scikits.image or scipy.ndimage: > > > > http://mentat.za.net/refer/sobel.jpg > > > > Compare this, e.g., to the wikipedia page here: > > > > http://en.wikipedia.org/wiki/Sobel_operator > > Apparently, neither of these routines like integer images as inputs > (should be mentioned in the docs). Here's the output for float > images: > > http://mentat.za.net/refer/sobel2.jpg > > Looking at that wikipedia page, it might be easy to add some of the > other methods that are less sensitive to direction--we only need to > update the masks. > > Regards > St?fan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Tue Apr 19 11:45:23 2011 From: sccolbert at gmail.com (Chris Colbert) Date: Tue, 19 Apr 2011 11:45:23 -0400 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: Goodluck making something faster than ndimage without considerable effort (I've already tried in Cython and was 10x slower). I read the ndimage source and it goes to great lengths to make optimum use of the cpu cache by allocating line buffers etc... What we need is to find a fast open source routine, wrap it with Cython, and package it with the scikit. I wouldn't suggest using the one from OpenCV unless we are desperate because it's implemented as a c++ filtering engine. However, they are using sse2 intrinsics and it's fast! Like 10x+ faster than ndimage. 2011/4/19 St?fan van der Walt > 2011/4/19 St?fan van der Walt : > > Apparently, neither of these routines like integer images as inputs > > (should be mentioned in the docs). Here's the output for float > > images: > > To be more specific, scipy.ndimage.convolve does not upcast > appropriately when convolving integer arrays with floating point > arrays. 
This alone seems like a good enough reason to have our own > version, although I agree that we should use the 1D separated filters. > > It doesn't look as if there is an easy way to coerce numpy.convolve to > do the job, so I guess I should write something in Cython? > > Regards > St?fan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Tue Apr 19 13:25:16 2011 From: sccolbert at gmail.com (Chris Colbert) Date: Tue, 19 Apr 2011 13:25:16 -0400 Subject: Sobel / Prewitt Edge Detection In-Reply-To: <4DADC35F.3010803@gemini.edu> References: <4DADC35F.3010803@gemini.edu> Message-ID: I would like to see the scikit become an ndimage replacement, however that would require us evolving to support nd-images rather than 2D images. Not something I'd be opposed to necessary. In truth, there isn't a whole lot of functionality in ndimage, and ignoring the nd part, we could implement all of it ourselves quite easily; a fast convolution being the barrier to entry. On Tue, Apr 19, 2011 at 1:16 PM, James Turner wrote: > Hi Stefan & Chris, > > I hope this isn't too OT, but it's interesting that people would > rather re-implement bits of ndimage than try to improve it. Maybe > it's just too much work for a small part of the functionality. > Can/should the scikit be viewed as a possible ndimage replacement in > the longer run? Ndimage is a bit of a worry for me, since we need > the type of functionality it provides but I'm not optimistic about > having resources to help maintain its code in the near future. > > Just curious. > > Thanks! > > James. > > > > On 19/04/11 12:45, Chris Colbert wrote: > >> Goodluck making something faster than ndimage without considerable effort >> (I've >> already tried in Cython and was 10x slower). I read the ndimage source and >> it >> goes to great lengths to make optimum use of the cpu cache by allocating >> line >> buffers etc... >> >> What we need is to find a fast open source routine, wrap it with Cython, >> and >> package it with the scikit. I wouldn't suggest using the one from OpenCV >> unless >> we are desperate because it's implemented as a c++ filtering engine. >> However, >> they are using sse2 intrinsics and it's fast! Like 10x+ faster than >> ndimage. >> >> >> 2011/4/19 St?fan van der Walt > >> >> >> 2011/4/19 St?fan van der Walt > stefan at sun.ac.za>>: >> >> > Apparently, neither of these routines like integer images as inputs >> > (should be mentioned in the docs). Here's the output for float >> > images: >> >> To be more specific, scipy.ndimage.convolve does not upcast >> appropriately when convolving integer arrays with floating point >> arrays. This alone seems like a good enough reason to have our own >> version, although I agree that we should use the 1D separated filters. >> >> It doesn't look as if there is an easy way to coerce numpy.convolve to >> do the job, so I guess I should write something in Cython? >> >> Regards >> St?fan >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jturner at gemini.edu Tue Apr 19 13:16:15 2011 From: jturner at gemini.edu (James Turner) Date: Tue, 19 Apr 2011 14:16:15 -0300 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: <4DADC35F.3010803@gemini.edu> Hi Stefan & Chris, I hope this isn't too OT, but it's interesting that people would rather re-implement bits of ndimage than try to improve it. Maybe it's just too much work for a small part of the functionality. 
Can/should the scikit be viewed as a possible ndimage replacement in the longer run? Ndimage is a bit of a worry for me, since we need the type of functionality it provides but I'm not optimistic about having resources to help maintain its code in the near future. Just curious. Thanks! James. On 19/04/11 12:45, Chris Colbert wrote: > Goodluck making something faster than ndimage without considerable effort (I've > already tried in Cython and was 10x slower). I read the ndimage source and it > goes to great lengths to make optimum use of the cpu cache by allocating line > buffers etc... > > What we need is to find a fast open source routine, wrap it with Cython, and > package it with the scikit. I wouldn't suggest using the one from OpenCV unless > we are desperate because it's implemented as a c++ filtering engine. However, > they are using sse2 intrinsics and it's fast! Like 10x+ faster than ndimage. > > > 2011/4/19 Stéfan van der Walt > > > 2011/4/19 Stéfan van der Walt >: > > Apparently, neither of these routines like integer images as inputs > > (should be mentioned in the docs). Here's the output for float > > images: > > To be more specific, scipy.ndimage.convolve does not upcast > appropriately when convolving integer arrays with floating point > arrays. This alone seems like a good enough reason to have our own > version, although I agree that we should use the 1D separated filters. > > It doesn't look as if there is an easy way to coerce numpy.convolve to > do the job, so I guess I should write something in Cython? > > Regards > Stéfan From jturner at gemini.edu Tue Apr 19 14:03:17 2011 From: jturner at gemini.edu (James Turner) Date: Tue, 19 Apr 2011 15:03:17 -0300 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: <4DADC35F.3010803@gemini.edu> Message-ID: <4DADCE65.9060400@gemini.edu> > I would like to see the scikit become an ndimage replacement, however that would > require us evolving to support nd-images rather than 2D images. Ah, yes, I was forgetting that. > Not something I'd be opposed to necessary. In truth, there isn't a whole lot of > functionality in ndimage, and ignoring the nd part, we could implement all of > it ourselves quite easily; a fast convolution being the barrier to entry. I do rely on the ND part myself, but only for the "interpolation" routines in 3D (I also use "filters" in 2D). Generalizing understandable 2D interpolation code to work for 3D/ND might be a more realistic thing for me to help with in future, though, than contributing to ndimage (it's always tough to find time, but it seems more of a bite-sized problem). I suppose the question is just whether you and Stefan want to support >2D at all... Cheers, James. From stefan at sun.ac.za Tue Apr 19 09:03:31 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 19 Apr 2011 15:03:31 +0200 Subject: Sobel / Prewitt Edge Detection Message-ID: Hi all, Pieter Holtzhausen made a pull request to bring over the Sobel / Prewitt edge detection filters from CellProfiler. I reviewed and merged his code, alongside with the connected components changes suggested last week. You can find the changesets here: https://github.com/stefanv/scikits.image/commit/cef53c172dd3e15dc98ddbf97a908c0f67a7283b Please review and comment--your input is highly valued!
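As a reference point while reviewing, the corresponding gradient-magnitude computation with the existing ndimage filters looks roughly like the sketch below (untested; it assumes a 2D greyscale image, and gradient_magnitude is just an illustrative name):

import numpy as np
from scipy import ndimage

def gradient_magnitude(image):
    # Cast first: on integer input the signed Sobel response wraps around.
    image = np.asarray(image, dtype=float)
    dx = ndimage.sobel(image, axis=1)   # derivative along columns (x)
    dy = ndimage.sobel(image, axis=0)   # derivative along rows (y)
    return np.hypot(dx, dy)             # combined edge magnitude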
Regards St?fan From stefan at sun.ac.za Tue Apr 19 10:22:20 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 19 Apr 2011 16:22:20 +0200 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: On Tue, Apr 19, 2011 at 3:54 PM, Chris Colbert wrote: > Hmmm, I'm not too sure of the need for this, given that ndimage already has > a Sobel and Prewitt filter which are implemented via separable 1D > convolutions (and will thus be faster than a 2D convolution). I think there was a concern about the directionality of the scipy.ndimage implementation. Maybe we can modify the version in the scikit to make use of the scipy.ndimage one, but to compute the horizontal, vertical and averaged sobels? > I can see this > being useful if we want to remove the dependency on ndimage at some point, > once we have our own fast convolution routine (or just have a convolution > that is faster that the one in ndimage). However in that case, I think these > two filters should still be implemented as separable 1D filters. I have someone looking at GPU-based convolution; let's hope that pans out. Regards St?fan From stefan at sun.ac.za Tue Apr 19 11:04:56 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 19 Apr 2011 17:04:56 +0200 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: 2011/4/19 St?fan van der Walt : > I think there was a concern about the directionality of the > scipy.ndimage implementation. ?Maybe we can modify the version in the > scikit to make use of the scipy.ndimage one, but to compute the > horizontal, vertical and averaged sobels? Looking at this further, I am not happy with the result of either scikits.image or scipy.ndimage: http://mentat.za.net/refer/sobel.jpg Compare this, e.g., to the wikipedia page here: http://en.wikipedia.org/wiki/Sobel_operator St?fan From stefan at sun.ac.za Tue Apr 19 11:19:29 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 19 Apr 2011 17:19:29 +0200 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: 2011/4/19 St?fan van der Walt : > Looking at this further, I am not happy with the result of either > scikits.image or scipy.ndimage: > > http://mentat.za.net/refer/sobel.jpg > > Compare this, e.g., to the wikipedia page here: > > http://en.wikipedia.org/wiki/Sobel_operator Apparently, neither of these routines like integer images as inputs (should be mentioned in the docs). Here's the output for float images: http://mentat.za.net/refer/sobel2.jpg Looking at that wikipedia page, it might be easy to add some of the other methods that are less sensitive to direction--we only need to update the masks. Regards St?fan From stefan at sun.ac.za Tue Apr 19 11:40:00 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 19 Apr 2011 17:40:00 +0200 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: 2011/4/19 St?fan van der Walt : > Apparently, neither of these routines like integer images as inputs > (should be mentioned in the docs). ?Here's the output for float > images: To be more specific, scipy.ndimage.convolve does not upcast appropriately when convolving integer arrays with floating point arrays. This alone seems like a good enough reason to have our own version, although I agree that we should use the 1D separated filters. It doesn't look as if there is an easy way to coerce numpy.convolve to do the job, so I guess I should write something in Cython? 
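In the meantime, the separable case is at least easy to prototype with numpy.convolve alone. A rough, untested sketch (function names are only illustrative) showing the upcast and the two 1D passes:

import numpy as np

def separable_convolve(image, row_kernel, col_kernel):
    # Filter every row, then every column, with plain numpy.convolve.
    # Upcasting to float first avoids the integer wrap-around; boundaries
    # use numpy's zero-padded 'same' mode, cruder than ndimage's 'reflect'.
    out = np.asarray(image, dtype=float)
    out = np.array([np.convolve(r, row_kernel, mode='same') for r in out])
    out = np.array([np.convolve(c, col_kernel, mode='same') for c in out.T]).T
    return out

# Sobel as two 1D passes (np.convolve flips the kernel, so the sign
# convention is that of convolution rather than correlation):
smooth = np.array([1., 2., 1.])
diff = np.array([1., 0., -1.])

def sobel_x(image):
    return separable_convolve(image, diff, smooth)

def sobel_y(image):
    return separable_convolve(image, smooth, diff)

Boundary handling and speed would still need the Cython (or ndimage) treatment, but this sidesteps the upcasting problem.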
Regards Stéfan From stefan at sun.ac.za Tue Apr 19 11:51:56 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 19 Apr 2011 17:51:56 +0200 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: On Tue, Apr 19, 2011 at 5:12 PM, Chris Colbert wrote: > what type of post-processing are you doing? It looks to me like you are > getting wrap-around error. The sobel output is signed and will have negative > values. Yeah, there's wrap-around inside of scipy.ndimage.sobel and scipy.ndimage.convolve. Stéfan From stefan at sun.ac.za Tue Apr 19 12:24:52 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 19 Apr 2011 18:24:52 +0200 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: Message-ID: On Tue, Apr 19, 2011 at 5:33 PM, Chris Colbert wrote: > supporting integer images is a must IMO. For now, I just cast the inputs to float--but that's a bad solution. Sobel can be done using integer arithmetic, which should make it even faster. From jeanpatrick.pommier at gmail.com Wed Apr 20 11:44:31 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Wed, 20 Apr 2011 08:44:31 -0700 (PDT) Subject: convex hull of a 2D binary image Message-ID: <25561179.969.1303314271908.JavaMail.geo-discussion-forums@yqjl1> Hi, Does scikits.image provide a way to compute the convex hull of a binary image? Something like: cHull = convexhull(binary_image) Thank you, Jean-Patrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeanpatrick.pommier at gmail.com Wed Apr 20 12:00:16 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Wed, 20 Apr 2011 09:00:16 -0700 (PDT) Subject: Re : Re: convex hull of a 2D binary image In-Reply-To: Message-ID: <18636054.1038.1303315216990.JavaMail.geo-discussion-forums@yqkk6> No, just the convex hull of a 2D image. I found the definition in "Gonzalez & Woods, p545", but I am not sure how to implement it. jean-patrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeanpatrick.pommier at gmail.com Wed Apr 20 12:12:22 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Wed, 20 Apr 2011 09:12:22 -0700 (PDT) Subject: How to get an ordered list of coordinates of a curve? Message-ID: <4739989.35.1303315942420.JavaMail.geo-discussion-forums@yqhc1> Hi, Given a 2D image of a closed or open curve, such as the outline of a binary particle or a skeleton, I am wondering how to get an ordered list of the pixel coordinates (I found something in the opencv documentation, but I am unable to call findcontours from python. For some reason, I write import opencv then if I try opencv.findcontour, findcontour is not available in the list of my python IDE (spyder); I hope I am clear ) Thanks for your advice. Jean-Patrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeanpatrick.pommier at gmail.com Wed Apr 20 12:33:52 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Wed, 20 Apr 2011 09:33:52 -0700 (PDT) Subject: Re : Re: Re : Re: convex hull of a 2D binary image In-Reply-To: Message-ID: <18177999.1105.1303317232638.JavaMail.geo-discussion-forums@yqkk6> Thank you anyway, I'll try to understand the algorithm. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jeanpatrick.pommier at gmail.com Wed Apr 20 13:21:20 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Wed, 20 Apr 2011 10:21:20 -0700 (PDT) Subject: =?UTF-8?Q?Re=C2=A0:_Re:_How_to_get_an_ordered_?= =?UTF-8?Q?list_of_coordinates_of_a_curve=3F?= In-Reply-To: Message-ID: <29890810.1197.1303320080433.JavaMail.geo-discussion-forums@yqkk6> unfortunately, FindContours is not in scikits.image. with opencv, there is somothing that I don't understand, I get: In [21]: import opencv In [22]: opencv.FindContours(...) ------------------------------------------------------------ Traceback (most recent call last): File "", line 1, in AttributeError: 'module' object has no attribute 'FindContours' I understand this is not an opencv group, but if you know how to solve that issue ... Thank you Jean-Patrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Wed Apr 20 11:47:55 2011 From: sccolbert at gmail.com (Chris Colbert) Date: Wed, 20 Apr 2011 11:47:55 -0400 Subject: convex hull of a 2D binary image In-Reply-To: <25561179.969.1303314271908.JavaMail.geo-discussion-forums@yqjl1> References: <25561179.969.1303314271908.JavaMail.geo-discussion-forums@yqjl1> Message-ID: Do you mean the 3D convex hull? If so, you need more than one binary image... On Wed, Apr 20, 2011 at 11:44 AM, jip wrote: > Hi, > Does scikits.image provide a way to compute the convex hull of a binary > image? > Something like : > cHull=convexhull(binary_image) > > > thank you > > Jean-Patrick > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeanpatrick.pommier at gmail.com Wed Apr 20 15:18:09 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Wed, 20 Apr 2011 12:18:09 -0700 (PDT) Subject: =?UTF-8?Q?Re=C2=A0:_Re:_Re_:_Re:_How_to_get_an_ord?= =?UTF-8?Q?ered_list_of_coordinates_of_a_curve=3F?= In-Reply-To: Message-ID: <13290131.1335.1303327089579.JavaMail.geo-discussion-forums@yqkk6> Hi again On my ubuntu box, I have: In [23]: from opencv.imgproc import FindContours ------------------------------------------------------------ Traceback (most recent call last): File "", line 1, in ImportError: No module named imgproc Anyway, thank you for help Best regards Jean-Patrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From jturner at gemini.edu Wed Apr 20 11:23:08 2011 From: jturner at gemini.edu (James Turner) Date: Wed, 20 Apr 2011 12:23:08 -0300 Subject: Sobel / Prewitt Edge Detection In-Reply-To: References: <4DADC35F.3010803@gemini.edu> <4DADCE65.9060400@gemini.edu> Message-ID: <4DAEFA5C.3050209@gemini.edu> > I wouldn't be opposed to adding 3D support for some routines; I just > don't want to have it as a requirement, otherwise we may never reach > 1.0 :) Thanks for the feedback, that sounds great. So as and when we're able to work on interpolation routines, I hope we'll be able to contribute them to the scikit. I suppose with some preparation it could make a good sprint, actually, but I'm likely to miss the second sprint day this year as I think my wife will be with me on the way back from the UK and that day is my birthday... Cheers, James. From sccolbert at gmail.com Wed Apr 20 12:24:47 2011 From: sccolbert at gmail.com (Chris Colbert) Date: Wed, 20 Apr 2011 12:24:47 -0400 Subject: convex hull of a 2D binary image Message-ID: So, there are a couple of ways you could do that. One is by using structuring elements as indicated in G & W. 
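For instance, if a list of hull vertices (rather than a filled mask) is enough, a standard computational-geometry routine can also be run directly on the foreground pixel coordinates. A rough, untested sketch using Andrew's monotone chain (the function name is only illustrative):

import numpy as np

def convex_hull_vertices(binary_image):
    # Andrew's monotone chain over the (row, col) coordinates of the
    # foreground pixels.  Returns the hull vertices in a consistent
    # winding order, not a filled mask.
    pts = sorted(map(tuple, np.transpose(np.nonzero(binary_image))))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return ((a[0] - o[0]) * (b[1] - o[1]) -
                (a[1] - o[1]) * (b[0] - o[0]))

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

Rasterising the polygon back into a mask would be a separate step.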
Another way would be to take a 3D method like described here in the surface approximation section (shameless plug): http://confactsdatas.inrialpes.fr/IEEE_RSJ_-_IROS_2010_International_Conference_on_Intelligent_Robots_and_Systems___Conference/data/papers/0389.pdf and adapt it for 2D. There is nothing in scikits.image that will do this automatically, but it would be a useful feature so I'll add it to my TODO list. Chris On Wed, Apr 20, 2011 at 12:00 PM, jip wrote: > no, just the convex hull of a a 2D image. > I found the definition in "Gonzalez & Woods, p545", but I am not sure to > implement the stuff. > > jean-patrick > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Wed Apr 20 12:30:40 2011 From: sccolbert at gmail.com (Chris Colbert) Date: Wed, 20 Apr 2011 12:30:40 -0400 Subject: How to get an ordered list of coordinates of a curve? In-Reply-To: <4739989.35.1303315942420.JavaMail.geo-discussion-forums@yqhc1> References: <4739989.35.1303315942420.JavaMail.geo-discussion-forums@yqhc1> Message-ID: Read the opencv docs for the scikits.image opencv bindings to see what is available: http://stefanv.github.com/scikits.image/api/scikits.image.opencv.html They bindings for opencv in the scikit do not cover the entirety of the opencv library. For that, you would need to build and use the Python bindings developed by the OpenCV group (which are not necessarily compatible with scikits.image). To answer your question, what you need to do is write a simple chain coding algorithm, the details of which can be found in G & W or Jain & Kasturi Machine Vision. On Wed, Apr 20, 2011 at 12:12 PM, jip wrote: > Hi, > Given a 2D image of a closed or open curve, as the outline of a binary > particle or a skeleton,I am wondering how to get an order list of the pixels > coordinates (I found something in the opencv documentation > , > but I am unable to call fincontours from python. For som reasons, I write > import opencv then if I try opencv.findcontour, findcontour is not available > in the list of my python ide( > spyder ); I hope I am clear ) > > Thanks for your advice. > > Jean-Patrick > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sccolbert at gmail.com Wed Apr 20 13:47:41 2011 From: sccolbert at gmail.com (Chris Colbert) Date: Wed, 20 Apr 2011 13:47:41 -0400 Subject: How to get an ordered list of coordinates of a curve? Message-ID: I don't use the official opencv bindings, but looking at their documentation: http://opencv.willowgarage.com/documentation/python/index.html It looks like it may live at opencv.imgproc.FindContours try: from opencv.imgproc import FindContours Otherwise, take the question to the opencv mailing list. On Wed, Apr 20, 2011 at 1:21 PM, jip wrote: > unfortunately, FindContours is not in scikits.image. > > with opencv, there is somothing that I don't understand, I get: > > In [21]: import opencv > > In [22]: opencv.FindContours(...) > ------------------------------------------------------------ > Traceback (most recent call last): > File "", line 1, in > AttributeError: 'module' object has no attribute 'FindContours' > > > I understand this is not an opencv group, but if you know how to solve that > issue ... > > Thank you > Jean-Patrick > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Wed Apr 20 10:47:10 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 20 Apr 2011 16:47:10 +0200 Subject: Sobel / Prewitt Edge Detection In-Reply-To: <4DADCE65.9060400@gemini.edu> References: <4DADC35F.3010803@gemini.edu> <4DADCE65.9060400@gemini.edu> Message-ID: On Tue, Apr 19, 2011 at 8:03 PM, James Turner wrote: > I do rely on the ND part myself, but only for the "interpolation" > routines in 3D (I also use "filters" in 2D). Generalizing understandable > 2D interpolation code to work for 3D/ND might be a more realistic thing > for me to help with in future, though, than contributing to ndimage > (it's always tough to find time, but it seems more of a bite-sized > problem). I suppose the question is just whether you and Stefan want to > support >2D at all... I wouldn't be opposed to adding 3D support for some routines; I just don't want to have it as a requirement, otherwise we may never reach 1.0 :) Cheers Stéfan From holtzhau at gmail.com Wed Apr 20 12:30:54 2011 From: holtzhau at gmail.com (Pieter Holtzhausen) Date: Wed, 20 Apr 2011 18:30:54 +0200 Subject: How to get an ordered list of coordinates of a curve? In-Reply-To: <4739989.35.1303315942420.JavaMail.geo-discussion-forums@yqhc1> References: <4739989.35.1303315942420.JavaMail.geo-discussion-forums@yqhc1> Message-ID: Findcontours works for me with opencv2. Check cv20squares.py in the opencv samples folder. On Wed, Apr 20, 2011 at 6:12 PM, jip wrote: > Hi, > Given a 2D image of a closed or open curve, such as the outline of a binary > particle or a skeleton, I am wondering how to get an ordered list of the pixel > coordinates (I found something in the opencv documentation, but I am unable > to call findcontours from python. For some reason, I write import opencv then > if I try opencv.findcontour, findcontour is not available in the list of my > python IDE (spyder); I hope I am clear ) > > Thanks for your advice. > > Jean-Patrick > From stefan at sun.ac.za Wed Apr 20 16:27:29 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 20 Apr 2011 22:27:29 +0200 Subject: Registration algorithms Message-ID: Hi all, I noticed a recent publication in IEEE Transactions on Image Processing today: HAIRIS: A Method for Automatic Image Registration Through Histogram-Based Image Segmentation Hernâni Gonçalves, José Alberto Gonçalves, and Luís Corte-Real, Member, IEEE This could be a fun project! It seems to be a fairly robust method for doing registration without any tweaking parameters. I've also considered some other registration algorithms based on projections of the log-polar transform, in addition to existing code I wrote to do feature-based registration. Registration is one of the few "solved problems" that always causes problems! If you have bored students over the holidays, send them my way... Regards Stéfan From jeanpatrick.pommier at gmail.com Thu Apr 21 01:31:58 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Wed, 20 Apr 2011 22:31:58 -0700 (PDT) Subject: How to get an ordered list of coordinates of a curve? In-Reply-To: References: Message-ID: thank you very much On 21 Apr, 00:01, Pieter Holtzhausen wrote: > https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/python/... > Check the examples...they serve as a good guide.
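For the original question in this thread, the ordered outer boundary can also be pulled out in plain Python/NumPy with Moore-neighbour tracing. A rough, untested sketch, assuming a single 8-connected object and using a simplified stopping rule (Jacob's criterion is the robust version):

import numpy as np

# 8-neighbour offsets (row, col) in clockwise order, starting due east.
OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1),
           (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def trace_boundary(binary_image):
    # Moore-neighbour tracing of the outer boundary of a single
    # 8-connected object; returns an ordered list of (row, col) tuples.
    src = np.asarray(binary_image, dtype=bool)
    img = np.zeros((src.shape[0] + 2, src.shape[1] + 2), dtype=bool)
    img[1:-1, 1:-1] = src                      # pad so we never run off the edge

    coords = np.transpose(np.nonzero(img))
    if len(coords) == 0:
        return []
    start = tuple(coords[0])                   # first foreground pixel, raster order
    current = start
    backtrack = (start[0], start[1] - 1)       # its west neighbour is background
    boundary = [start]
    first_move = None

    while True:
        ring = [(current[0] + dr, current[1] + dc) for dr, dc in OFFSETS]
        i = ring.index(backtrack)
        nxt = None
        for k in range(1, 9):                  # clockwise sweep from the backtrack cell
            cand = ring[(i + k) % 8]
            if img[cand]:
                nxt = cand
                break
            backtrack = cand                   # last background cell examined
        if nxt is None:                        # isolated single pixel
            break
        if (current, nxt) == first_move:       # about to repeat the very first step
            break
        if first_move is None:
            first_move = (current, nxt)
        boundary.append(nxt)
        current = nxt

    return [(r - 1, c - 1) for r, c in boundary]   # undo the padding offset

The list starts at the first foreground pixel in raster order and repeats it at the end, so it closes the contour; for an open curve such as a skeleton branch, the trace walks down one side and back up the other.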
> > > > > > > > On Wed, Apr 20, 2011 at 9:18 PM, jip wrote: > > Hi again > > On my ubuntu box, I have: > > > In [23]: from opencv.imgproc import FindContours > > ------------------------------------------------------------ > > Traceback (most recent call last): > > File "", line 1, in > > ImportError: No module named imgproc > > > Anyway, thank you for help > > Best regards > > > Jean-Patrick From jeanpatrick.pommier at gmail.com Thu Apr 21 01:38:28 2011 From: jeanpatrick.pommier at gmail.com (jip) Date: Wed, 20 Apr 2011 22:38:28 -0700 (PDT) Subject: Re : Re: Re : Re: convex hull of a 2D binary image In-Reply-To: <18177999.1105.1303317232638.JavaMail.geo-discussion-forums@yqkk6> Message-ID: <11591873.780.1303364308313.JavaMail.geo-discussion-forums@yqgy8> I have written a buggy implementation. The idea is: 1. find the corners of the contour, 2. add them to the particle, 3. do it again, 4. up to idempotence; but it fails ... An issue for me is how to handle the "don't care" points (pixels) with the ndimage hit-or-miss operator. Regards Jean-Patrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From holtzhau at gmail.com Wed Apr 20 18:01:18 2011 From: holtzhau at gmail.com (Pieter Holtzhausen) Date: Thu, 21 Apr 2011 00:01:18 +0200 Subject: How to get an ordered list of coordinates of a curve? Message-ID: https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/python/cv20squares.py Check the examples...they serve as a good guide. On Wed, Apr 20, 2011 at 9:18 PM, jip wrote: > Hi again > On my ubuntu box, I have: > > In [23]: from opencv.imgproc import FindContours > > ------------------------------------------------------------ > Traceback (most recent call last): > File "", line 1, in > ImportError: No module named imgproc > > Anyway, thank you for help > Best regards > > Jean-Patrick >
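On the earlier question about the "don't care" points: with scipy.ndimage.binary_hit_or_miss they are simply the cells that are zero in both structure1 (pixels that must be foreground) and structure2 (pixels that must be background). A rough, untested sketch of a Gonzalez & Woods style grow-to-idempotence convex hull built on that (the names are only illustrative; as G&W discuss, the result is an approximation that can overgrow the exact hull, so clipping it afterwards, e.g. to the object's bounding box, is usually needed):

import numpy as np
from scipy import ndimage

# One structuring-element pair in the style of Gonzalez & Woods' convex
# hull construction.  Cells set in `hit` must be foreground, cells set in
# `miss` must be background, and cells that are zero in BOTH arrays are
# the "don't care" positions.
hit = np.array([[1, 0, 0],
                [1, 0, 0],
                [1, 0, 0]])
miss = np.array([[0, 0, 0],
                 [0, 1, 0],      # only the centre is required to be background
                 [0, 0, 0]])

def grow_once(x, h, m):
    # Add every pixel matched by the hit-or-miss transform to the set.
    return x | ndimage.binary_hit_or_miss(x, structure1=h, structure2=m)

def convex_hull_gw(binary_image):
    # Iterate each of the four rotated elements to idempotence, then OR
    # the four partial results together.
    partial = []
    for k in range(4):
        h, m = np.rot90(hit, k), np.rot90(miss, k)
        x = np.asarray(binary_image, dtype=bool)
        while True:
            grown = grow_once(x, h, m)
            if np.array_equal(grown, x):
                break
            x = grown
        partial.append(x)
    out = partial[0]
    for p in partial[1:]:
        out = out | p
    return out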