From bricklemacho at gmail.com Sun Nov 1 23:02:21 2015 From: bricklemacho at gmail.com (bricklemacho at gmail.com) Date: Mon, 2 Nov 2015 12:02:21 +0800 Subject: Strange behaviour - Normalised Cut (on Macbook) Message-ID: <5636E04D.9030800@gmail.com> Hi All, Running this example: http://scikit-image.org/docs/dev/auto_examples/plot_ncut.html#example-plot-ncut-py On one machine, the example works as expected. On my work machine I am getting the following result from the normalised cuts example: http://imgur.com/uWmvW2p Details on the machines and what I have tried are below. Any help appreciated. Regards, Michael. -- Macbook 1 (works as expected): -------------------------- OS X Yosemite 10.10.4 Python 2.7.9 installed via MacPorts skimage.__version__ 0.11.3 installed via MacPorts Macbook 2 (strange behaviour) -------------------------- OS X Yosemite 10.10.5 Python 2.7.10 installed via MacPorts skimage.__version__ 0.11.3 installed via MacPorts Things I have tried: 1. Reinstall from source via MacPorts sudo rm -rf /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/skimage* sudo rm -rf /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikit_image* sudo port upgrade -s -n --force py27-scikit-image skimage.__version__: 0.11.3, identical result (http://imgur.com/uWmvW2p) 2.
Install latest development version sudo rm -rf /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/skimage* sudo rm -rf /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikit_image* git clone https://github.com/scikit-image/scikit-image.git cd scikit-image python setup.py build sudo python setup.py install skimage.__version__: 0.12dev, identical result: http://imgur.com/uWmvW2p From bricklemacho at gmail.com Sun Nov 1 23:35:05 2015 From: bricklemacho at gmail.com (bricklemacho at gmail.com) Date: Mon, 2 Nov 2015 12:35:05 +0800 Subject: Strange behaviour - Normalised Cut (on Macbook) In-Reply-To: <5636E04D.9030800@gmail.com> References: <5636E04D.9030800@gmail.com> Message-ID: <5636E7F9.8060209@gmail.com> Here is some additional info: the following gallery examples work as expected: RAG Merging: http://scikit-image.org/docs/dev/auto_examples/plot_rag_merge.html RAG Thresholding: http://scikit-image.org/docs/dev/auto_examples/plot_rag_mean_color.html#example-plot-rag-mean-color-py Drawing Region Adjacency Graphs (RAGs): http://scikit-image.org/docs/dev/auto_examples/plot_rag_draw.html#example-plot-rag-draw-py For the last example, viridis wasn't available on 0.11.3, so I just reused the cmap from the example. So it appears that only the Normalized Cut example is exhibiting strange behaviour on my main machine. Michael. -- On 2/11/2015 12:02 pm, bricklemacho at gmail.com wrote: > Hi All, > > Running this example: > http://scikit-image.org/docs/dev/auto_examples/plot_ncut.html#example-plot-ncut-py > > On one machine, example works as expected. On my work machine I am > getting the following results of normalised cuts example: > http://imgur.com/uWmvW2p > > Details on machines and what I have tried is below. > > Any help appreciated. > > Regards, > > Michael.
> -- > > > Macbook 1 (work as expected): > -------------------------- > OS X Yosmite 10.10.4 > Python 2.7.9 install via mac ports > skimage.__version__ 0.11.3 install via mac ports > > > > Macbook 2 (strange behaviour) > -------------------------- > OS X Yosmite 10.10.5 > Python 2.7.10 installed via mac ports > skimage.__verison__ 0.11.3 installed via mac ports > > Things I have tried: > 1. Reinstall from source via macports > sudo rm -rf > /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/skimage* > sudo rm -rf > /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikit_image* > sudo port upgrade -s -n --force py27-scikit-image > skimage.__version__: 0.11.3, identical result (http://imgur.com/uWmvW2p) > > 2. Install latest development verison > sudo rm -rf > /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/skimage* > sudo rm -rf > /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikit_image* > git clone https://github.com/scikit-image/scikit-image.git > cd skikit-image > python setup.py build > sudo python setup.py install > skimage.__version__: 0.12dev, identical result: http://imgur.com/uWmvW2p > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Mon Nov 2 17:26:46 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Mon, 02 Nov 2015 14:26:46 -0800 Subject: Help wanted: implementation of 3D medial axis skeletonization Message-ID: <87vb9khwbd.fsf@berkeley.edu> Hi all, I have been approached by a group that is interested in sponsoring the development of 3D skeletonization in scikit-image. One potential starting place would be: http://www.insight-journal.org/browse/publication/181 Is anyone interested in working on this? Please get in touch. Thanks! 
Stéfan From stefanv at berkeley.edu Mon Nov 2 17:50:18 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Mon, 02 Nov 2015 14:50:18 -0800 Subject: Feature freeze and release Message-ID: <87oafchv85.fsf@berkeley.edu> Hi everyone, I would like to propose a feature freeze so that we can release a new version of scikit-image on 11/11/2015. Please review your favorite issues and pull requests in preparation: https://github.com/scikit-image/scikit-image/issues https://github.com/scikit-image/scikit-image/pulls The release milestone is here: https://github.com/scikit-image/scikit-image/pulls?q=is%3Aopen+is%3Apr+milestone%3A0.12 Stéfan From vighneshbirodkar at gmail.com Mon Nov 2 18:55:21 2015 From: vighneshbirodkar at gmail.com (Vighnesh Birodkar) Date: Mon, 2 Nov 2015 15:55:21 -0800 (PST) Subject: Strange behaviour - Normalised Cut (on Macbook) In-Reply-To: <5636E7F9.8060209@gmail.com> References: <5636E04D.9030800@gmail.com> <5636E7F9.8060209@gmail.com> Message-ID: <534dc7b9-582f-49cf-84b7-f08ebb613d36@googlegroups.com> Hello Can you tell us what numpy, scipy and arpack versions you are using? Thanks Vighnesh On Sunday, November 1, 2015 at 11:35:45 PM UTC-5, bricklemacho wrote: > > Here is some additional info, the following gallery examples work as > expected: > > RAG Merging: > http://scikit-image.org/docs/dev/auto_examples/plot_rag_merge.html > RAG Thresholding: > http://scikit-image.org/docs/dev/auto_examples/plot_rag_mean_color.html#example-plot-rag-mean-color-py > Drawing Region Adjacency Graphs (RAGs): > http://scikit-image.org/docs/dev/auto_examples/plot_rag_draw.html#example-plot-rag-draw-py > > The last example the viridis wasn't available on 0.11.3 so just reused the > cmap in the example > > So it appears that only the Normalized Cut example that is exhibiting > strange behaviour, on my main machine. > > > Michael. > -- > > > > On 2/11/2015 12:02 pm, brickl...
at gmail.com wrote: > > Hi All, > > Running this example: > http://scikit-image.org/docs/dev/auto_examples/plot_ncut.html#example-plot-ncut-py > > On one machine, example works as expected. On my work machine I am > getting the following results of normalised cuts example: > http://imgur.com/uWmvW2p > > Details on machines and what I have tried is below. > > Any help appreciated. > > Regards, > > Michael. > -- > > > Macbook 1 (work as expected): > -------------------------- > OS X Yosmite 10.10.4 > Python 2.7.9 install via mac ports > skimage.__version__ 0.11.3 install via mac ports > > > > Macbook 2 (strange behaviour) > -------------------------- > OS X Yosmite 10.10.5 > Python 2.7.10 installed via mac ports > skimage.__verison__ 0.11.3 installed via mac ports > > Things I have tried: > 1. Reinstall from source via macports > sudo rm -rf > /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/skimage* > sudo rm -rf > /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikit_image* > sudo port upgrade -s -n --force py27-scikit-image > skimage.__version__: 0.11.3, identical result (http://imgur.com/uWmvW2p) > > 2. Install latest development verison > sudo rm -rf > /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/skimage* > sudo rm -rf > /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikit_image* > git clone https://github.com/scikit-image/scikit-image.git > cd skikit-image > python setup.py build > sudo python setup.py install > skimage.__version__: 0.12dev, identical result: http://imgur.com/uWmvW2p > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefanv at berkeley.edu Mon Nov 2 20:26:35 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Mon, 02 Nov 2015 17:26:35 -0800 Subject: Help wanted: implementation of 3D medial axis skeletonization In-Reply-To: <20151102223926.GE3685396@phare.normalesup.org> References: <87vb9khwbd.fsf@berkeley.edu> <20151102223926.GE3685396@phare.normalesup.org> Message-ID: <87fv0nj2k4.fsf@berkeley.edu> Hi Emma On 2015-11-02 14:39:26, Emmanuelle Gouillart wrote: > Can you explain which kind of sponsoring it would be? Is it only > available for people living in the US, or in other countries? For > students only? I think there is a small amount of money available for anyone who is interested. And I agree, starting with a quick assessment of existing algorithms would be good! Stéfan From silvertrumpet999 at gmail.com Mon Nov 2 20:55:41 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Mon, 2 Nov 2015 17:55:41 -0800 (PST) Subject: Help wanted: implementation of 3D medial axis skeletonization In-Reply-To: <87fv0nj2k4.fsf@berkeley.edu> References: <87vb9khwbd.fsf@berkeley.edu> <20151102223926.GE3685396@phare.normalesup.org> <87fv0nj2k4.fsf@berkeley.edu> Message-ID: <6aba179a-caa8-43b2-b6c1-7820598a2c75@googlegroups.com> Should we use/apply this to a particular volumetric dataset while prototyping different methods, to ensure accurate comparisons? Should anisotropic, regularly sampled voxels be supported? From jni.soma at gmail.com Mon Nov 2 21:20:48 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Mon, 02 Nov 2015 18:20:48 -0800 (PST) Subject: Help wanted: implementation of 3D medial axis skeletonization In-Reply-To: <6aba179a-caa8-43b2-b6c1-7820598a2c75@googlegroups.com> References: <6aba179a-caa8-43b2-b6c1-7820598a2c75@googlegroups.com> Message-ID: <1446517247801.ca5b9f11@Nodemailer> I can't work on this right now but I am very excited to see it happen... And it's the first example of a sponsored scikit-image project, right???
(Not counting GSoC.) Support for anisotropic voxels would be a definite plus, too. Don't forget that Fiji's code is mostly GPL, so don't try to copy it, at least not without first discussing dual licensing with the author(s). Juan. -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Mon Nov 2 23:16:41 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Mon, 2 Nov 2015 20:16:41 -0800 (PST) Subject: Help wanted: implementation of 3D medial axis skeletonization In-Reply-To: <1446517247801.ca5b9f11@Nodemailer> References: <6aba179a-caa8-43b2-b6c1-7820598a2c75@googlegroups.com> <1446517247801.ca5b9f11@Nodemailer> Message-ID: <56a3895e-5196-4629-8994-2360bee3759d@googlegroups.com> I suggest the lobster, one of the bonsai, or the XMas tree datasets located here as excellent 'torture tests' for 3d skeletonization. http://www9.informatik.uni-erlangen.de/External/vollib/ From emmanuelle.gouillart at nsup.org Mon Nov 2 17:39:26 2015 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Mon, 2 Nov 2015 23:39:26 +0100 Subject: Help wanted: implementation of 3D medial axis skeletonization In-Reply-To: <87vb9khwbd.fsf@berkeley.edu> References: <87vb9khwbd.fsf@berkeley.edu> Message-ID: <20151102223926.GE3685396@phare.normalesup.org> Hi Stéfan 3D skeletonization would be a great idea. I wonder if the algorithm you mention is the same as the one used in http://fiji.sc/Skeletonize3D (which is quite popular in the X-ray tomography community). There are so many 3D skeletonization algorithms that understanding which specs are required might be an important first step. Can you explain which kind of sponsoring it would be? Is it only available for people living in the US, or in other countries? For students only?
Cheers Emma On Mon, Nov 02, 2015 at 02:26:46PM -0800, Stefan van der Walt wrote: > Hi all, > I have been approached by a group that is interested in sponsoring the > development of 3D skeletonization in scikit-image. One potential > starting place would be: > http://www.insight-journal.org/browse/publication/181 > Is anyone interested in working on this? Please get in touch. > Thanks! > Stéfan From njs at vorpus.org Tue Nov 3 04:38:31 2015 From: njs at vorpus.org (Nathaniel Smith) Date: Tue, 3 Nov 2015 01:38:31 -0800 Subject: [ANN] colorspacious 1.0.0 released Message-ID: Hi all, I just released version 1.0.0 of colorspacious, a library for converting between colorspaces in Python, and thought that scikit-image might be interested. This is the library that we used to design viridis and several other new colormaps, and it includes common well-known colorspaces (sRGB, LAB, XYZ, xyY) along with several more sophisticated models for estimating perceptual correlates (CIECAM02), estimating perceptual similarity (CAM02-UCS), and simulating colorblindness, all wrapped up in a very easy-to-use interface. (There's basically just one function.) Notable features of this release include a fancy new tutorial and reference manual: https://colorspacious.readthedocs.org/ and 100% test coverage. Downloads: https://pypi.python.org/pypi/colorspacious/ Source code: https://github.com/njsmith/colorspacious Share and enjoy! -n -- Nathaniel J. Smith -- http://vorpus.org From verstraetem93 at gmail.com Tue Nov 3 08:41:10 2015 From: verstraetem93 at gmail.com (Matthias Verstraete) Date: Tue, 3 Nov 2015 05:41:10 -0800 (PST) Subject: hough ellipse detection Message-ID: Hi, I'm trying to detect ellipses using the hough_ellipse method provided by scikit-image. However, one of the axes of the resulting ellipse is always zero. I've tried adjusting all the parameters but nothing helped. Does anyone know why only one axis can be larger than 0 and how I can fix it?
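A minimal reproducible sketch on a synthetic image may help here; the parameter values below are illustrative assumptions, not a verified fix, but constraining `min_size`/`max_size` and raising `accuracy`/`threshold` is the usual way to steer `hough_ellipse` away from degenerate candidates whose minor axis collects almost no votes:

```python
import numpy as np
from skimage.draw import ellipse_perimeter
from skimage.transform import hough_ellipse

# Synthetic binary edge image with a single ellipse perimeter,
# centered at (40, 40) with semi-axes 15 and 25.
edges = np.zeros((80, 80), dtype=np.uint8)
rr, cc = ellipse_perimeter(40, 40, 15, 25)
edges[rr, cc] = 1

# min_size/max_size bound the axis lengths, accuracy widens the
# accumulator bins for the minor axis, and threshold discards
# candidates supported by too few votes.
result = hough_ellipse(edges, accuracy=10, threshold=50,
                       min_size=10, max_size=60)

if len(result) > 0:
    result.sort(order='accumulator')      # strongest candidate last
    yc, xc, a, b = (int(round(x)) for x in list(result[-1])[1:5])
    print(yc, xc, a, b)
```

If `result` comes back empty, lower `threshold`; if it is full of zero-axis entries, raise it and tighten the size bounds.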
Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Tue Nov 3 11:19:03 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Tue, 3 Nov 2015 08:19:03 -0800 Subject: hough ellipse detection In-Reply-To: References: Message-ID: Hi Matthias On Nov 3, 2015 7:40 AM, "Matthias Verstraete" wrote: > > I'm trying to detect ellipses using the hough_ellipse method provided by scikit-image. However, one of the axis of the result ellipse is always zero. I've tries adjusting all the parameters but nothing helped. Does anyone know why only one axis can be larger than 0 and how I can fix it? Can you please provide us with an example image and code snippet? Thanks Stéfan -------------- next part -------------- An HTML attachment was scrubbed... URL: From bricklemacho at gmail.com Mon Nov 2 19:57:41 2015 From: bricklemacho at gmail.com (bricklemacho at gmail.com) Date: Tue, 3 Nov 2015 08:57:41 +0800 Subject: Strange behaviour - Normalised Cut (on Macbook) In-Reply-To: <534dc7b9-582f-49cf-84b7-f08ebb613d36@googlegroups.com> References: <5636E04D.9030800@gmail.com> <5636E7F9.8060209@gmail.com> <534dc7b9-582f-49cf-84b7-f08ebb613d36@googlegroups.com> Message-ID: <56380685.6060206@gmail.com> The versions are the same on both machines. numpy 1.9.2 scipy 0.15.1 arpack - not sure how to find the version, scipy.linalg.__version__ is 0.4.9 On my work machine, I had been mucking around earlier on a local branch of skimage, working on future.graph. As mentioned below I removed everything from site-packages, so I don't think there is anything else lying around. Regards, Michael. -- On 3/11/2015 7:55 am, Vighnesh Birodkar wrote: > Hello > > Can you tell us what numpy, scipy and arpack versions you are using?
> > Thanks > Vighnesh > > On Sunday, November 1, 2015 at 11:35:45 PM UTC-5, bricklemacho wrote: > > Here is some additional info, the following gallery examples work > as expected: > > RAG Merging: > http://scikit-image.org/docs/dev/auto_examples/plot_rag_merge.html > > RAG Thresholding: > http://scikit-image.org/docs/dev/auto_examples/plot_rag_mean_color.html#example-plot-rag-mean-color-py > > Drawing Region Adjacency Graphs (RAGs): > http://scikit-image.org/docs/dev/auto_examples/plot_rag_draw.html#example-plot-rag-draw-py > > > The last example the viridis wasn't available on 0.11.3 so just > reused the cmap in the example > > So it appears that only the Normalized Cut example that is > exhibiting strange behaviour, on my main machine. > > > Michael. > -- > > > > On 2/11/2015 12:02 pm, brickl... at gmail.com wrote: >> Hi All, >> >> Running this example: >> http://scikit-image.org/docs/dev/auto_examples/plot_ncut.html#example-plot-ncut-py >> >> >> On one machine, example works as expected. On my work machine I >> am getting the following results of normalised cuts example: >> http://imgur.com/uWmvW2p >> >> Details on machines and what I have tried is below. >> >> Any help appreciated. >> >> Regards, >> >> Michael. >> -- >> >> >> Macbook 1 (work as expected): >> -------------------------- >> OS X Yosmite 10.10.4 >> Python 2.7.9 install via mac ports >> skimage.__version__ 0.11.3 install via mac ports >> >> >> >> Macbook 2 (strange behaviour) >> -------------------------- >> OS X Yosmite 10.10.5 >> Python 2.7.10 installed via mac ports >> skimage.__verison__ 0.11.3 installed via mac ports >> >> Things I have tried: >> 1. 
Reinstall from source via macports >> sudo rm -rf >> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/skimage* >> sudo rm -rf >> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikit_image* >> sudo port upgrade -s -n --force py27-scikit-image >> skimage.__version__: 0.11.3, identical result >> (http://imgur.com/uWmvW2p) >> >> 2. Install latest development verison >> sudo rm -rf >> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/skimage* >> sudo rm -rf >> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scikit_image* >> git clone https://github.com/scikit-image/scikit-image.git >> >> cd skikit-image >> python setup.py build >> sudo python setup.py install >> skimage.__version__: 0.12dev, identical result: >> http://imgur.com/uWmvW2p >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Tue Nov 3 15:44:40 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Tue, 03 Nov 2015 12:44:40 -0800 Subject: [ANN] colorspacious 1.0.0 released In-Reply-To: References: Message-ID: <87lhaehkxz.fsf@berkeley.edu> Hi Nathaniel On 2015-11-03 01:38:31, Nathaniel Smith wrote: > I just released version 1.0.0 of colorspacious, a library for > converting between colorspaces in Python, and thought that > scikit-image might be interested. This is the library that we used to > design viridis and several other new colormaps, and it includes common > well-known colorspaces (sRGB, LAB, XYZ, xyY) along with several more > sophisticated models for estimating perceptual correlates (CIECAM02), > estimating perceptual similarity (CAM02-UCS), and simulating > colorblindness, all wrapped up in a very easy-to-use interface. > (There's basically just one function.) 
Do you think it would make sense to turn colorspacious into a dependency and rely on it for our existing conversions? Are there any speed implications? Stéfan From emmanuelle.gouillart at nsup.org Tue Nov 3 16:18:09 2015 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Tue, 3 Nov 2015 22:18:09 +0100 Subject: Help wanted: implementation of 3D medial axis skeletonization In-Reply-To: <1446517247801.ca5b9f11@Nodemailer> References: <6aba179a-caa8-43b2-b6c1-7820598a2c75@googlegroups.com> <1446517247801.ca5b9f11@Nodemailer> Message-ID: <20151103211809.GB229197@phare.normalesup.org> > Don't forget that Fiji's code is mostly GPL, so don't try to copy it, at least > not without first discussing dual licensing with the author(s). Excellent point. Actually I'm only using the Fiji page as a way to find the paper by Lee et al. again :-). > On Tue, Nov 3, 2015 at 12:55 PM, Josh Warner > wrote: > Should we use/apply this to a particular volumetric dataset while > prototyping different methods, to ensure accurate comparisons? > Should anisotropic, regularly sampled voxels be supported? From r.t.wilson.bak at googlemail.com Thu Nov 5 16:58:17 2015 From: r.t.wilson.bak at googlemail.com (Robin Wilson) Date: Thu, 5 Nov 2015 13:58:17 -0800 (PST) Subject: Edge completion/linking algorithm in Python? Message-ID: Hi, Does anyone on this list know of an implementation of some sort of edge linking or edge completion algorithm in Python? I've got some edges, produced with the skimage implementation of canny, with gaps in them which I'm finding hard to fill. I've implemented various 'hacky' methods myself, but I wondered if anyone knew of any code already available to do this in Python? Cheers, Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Fri Nov 6 01:02:28 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Fri, 06 Nov 2015 06:02:28 +0000 Subject: Edge completion/linking algorithm in Python?
In-Reply-To: References: Message-ID: Hey Robin! Coincidentally, I was just yesterday going through this paper, "Projection onto the Manifold of Elongated Structures for Accurate Extraction": http://infoscience.epfl.ch/record/211536/files/top.pdf Their source code is available here: https://documents.epfl.ch/groups/c/cv/cvlab-unit/www/src/NN_projections_12.09.15.zip I actually don't even know what language they used because I'm travelling and on a crappy connection, so the 65MB package is just chugging along at 33KBps. =P I presume it's C++, based on previous software from that group, but it might be easily wrappable in Cython. I'm not aware of other options at the moment, sorry! Juan. On Fri, Nov 6, 2015 at 7:28 AM Robin Wilson wrote: > Hi, > > Does anyone on this list know of an implementation of some sort of edge > linking or edge completion algorithm in Python? I've got some edges, > produced with the skimage implementation of canny, with gaps in them which > I'm finding hard to fill. I've implemented various 'hacky' methods myself, > but I wondered if anyone knew of any code already available to do this in > Python? > > Cheers, > > Robin > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Fri Nov 6 04:23:20 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Fri, 06 Nov 2015 09:23:20 +0000 Subject: Edge completion/linking algorithm in Python? In-Reply-To: References: Message-ID: Update: it's a mix of C++ and Matlab, licensed as GPL. Good times. =P On Fri, Nov 6, 2015 at 3:32 PM Juan Nunez-Iglesias wrote: > Hey Robin! 
> > Coincidentally, I was just yesterday going through this paper, "Projection > onto the Manifold of Elongated Structures for Accurate Extraction": > http://infoscience.epfl.ch/record/211536/files/top.pdf > > Their source code is available here: > > https://documents.epfl.ch/groups/c/cv/cvlab-unit/www/src/NN_projections_12.09.15.zip > > I actually don't even know what language they used because I'm travelling > and on a crappy connection, so the 65MB package is just chugging along at > 33KBps. =P I presume it's C++, based on previous software from that group, > but it might be easily wrappable in Cython. > > I'm not aware of other options at the moment, sorry! > > Juan. > > On Fri, Nov 6, 2015 at 7:28 AM Robin Wilson > wrote: > >> Hi, >> >> Does anyone on this list know of an implementation of some sort of edge >> linking or edge completion algorithm in Python? I've got some edges, >> produced with the skimage implementation of canny, with gaps in them which >> I'm finding hard to fill. I've implemented various 'hacky' methods myself, >> but I wondered if anyone knew of any code already available to do this in >> Python? >> >> Cheers, >> >> Robin >> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kevin.keraudren at googlemail.com Sat Nov 7 12:46:17 2015 From: kevin.keraudren at googlemail.com (Kevin Keraudren) Date: Sat, 7 Nov 2015 17:46:17 +0000 Subject: Help wanted: implementation of 3D medial axis skeletonization In-Reply-To: <20151103211809.GB229197@phare.normalesup.org> References: <6aba179a-caa8-43b2-b6c1-7820598a2c75@googlegroups.com> <1446517247801.ca5b9f11@Nodemailer> <20151103211809.GB229197@phare.normalesup.org> Message-ID: Hi, I don't want to volunteer for this project, but I just wanted to mention that the 3D skeletonization from ITK is easily accessible to Python through SimpleITK, see example below for the lobster dataset. SimpleITK could be used for comparison or validation of the proposed scikit-image algorithm. Kind Regards, Kevin PS: is there another way to load those *.pvm datasets in Python without converting them to raw and hardcoding the image dimension and pixel type? An skimage.io.imread() plugin? -------------- next part -------------- A non-text attachment was scrubbed... Name: lobster_mask.png Type: image/png Size: 552788 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: lobster_skeleton.png Type: image/png Size: 486046 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: SimpleITK_skeletonize_lobster.py Type: text/x-python-script Size: 1443 bytes Desc: not available URL: -------------- next part -------------- > On 3 Nov 2015, at 21:18, Emmanuelle Gouillart wrote: > > >> Don't forget that Fiji's code is mostly GPL, so don't try to copy it, at least >> not without first discussing dual licensing with the author(s). > > Excellent point. Actually I'm only using the Fiji page as a way to find > the paper by Lee et al. again :-).
> > > > >> On Tue, Nov 3, 2015 at 12:55 PM, Josh Warner >> wrote: > >> Should we use/apply this to a particular volumetric dataset while >> prototyping different methods, to ensure accurate comparisons? > >> Should anisotropic, regularly sampled voxels be supported? > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From stefanv at berkeley.edu Mon Nov 9 20:42:23 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Mon, 09 Nov 2015 17:42:23 -0800 Subject: Help wanted: implementation of 3D medial axis skeletonization In-Reply-To: References: <6aba179a-caa8-43b2-b6c1-7820598a2c75@googlegroups.com> <1446517247801.ca5b9f11@Nodemailer> <20151103211809.GB229197@phare.normalesup.org> Message-ID: <87si4eeikg.fsf@berkeley.edu> Hi Kevin On 2015-11-07 09:46:17, 'Kevin Keraudren' via scikit-image wrote: > I don?t want to volunteer for this project, but I just wanted to > mention that the 3D skeletonization from ITK is easily accessible to > Python through SimpleITK, see example below for the lobster > dataset. SimpleITK could be used for comparison or validation of the > proposed scikit-image algorithm. Thanks for the pointer. In this case, one of the purposes of the exercise is to stay away from a heavy dependency such as ITK. > PS: is there another way to load those *.pvm datasets in Python > without converting them to raw and hardcoding the image dimension and > pixel type? An skimage.io.imread() plugin? I have no idea about .pvm files, but perhaps we should start a set of plugin gists on the wiki somewhere? 
Stéfan From patrick_lfa at yahoo.com Tue Nov 10 04:11:17 2015 From: patrick_lfa at yahoo.com (kwc) Date: Tue, 10 Nov 2015 01:11:17 -0800 (PST) Subject: Template matching with transparent regions Message-ID: <18a064ca-32e6-4916-bb12-dd8647415575@googlegroups.com> Hi, I am new to scikit-image. I would like to use the Template Matching function in scikit-image. In my template image, there are some regions that I would like to ignore or exclude from the template matching process. I plan to change those regions to become transparent (alpha channel). Does the alpha channel have any effect on the template matching result? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arokem at gmail.com Tue Nov 10 12:49:12 2015 From: arokem at gmail.com (Ariel Rokem) Date: Tue, 10 Nov 2015 09:49:12 -0800 Subject: Postdoc training at the University of Washington in neuroengineering/data science Message-ID: With apologies for cross-posting, I am posting the following on behalf of my colleague, Ione Fine: Two excellent postdoctoral fellowship opportunities with a deadline of January 15th are available at the University of Washington, Seattle, USA: http://uwin.washington.edu/post-docs/apply-post-docs/ http://escience.washington.edu/postdoctoral-fellowships Candidates with a strong computational background (e.g., machine learning, computer vision, neuroengineering, etc.) are sought to work on the following project: Prof. Fine has over several years worked in collaboration with Second Sight (developers of a retinal prosthetic, analogous to a cochlear implant, on the market). She has developed a model that, for any given pulse train, is pretty good at predicting what a patient implanted with a retinal prosthetic will see (essentially a linear-nonlinear model with some weird tweaks because the retina is responding to current instead of light). But what the field really needs is the *reverse* of this model:
we need to be able to predict what electrical pulses (across the set of electrodes) will produce a percept that most closely matches the percept that would normally be elicited by whatever it is the patient is looking at. It's actually a really tricky problem for a variety of reasons. Building such a model would be of very high impact on the field, because it wouldn't just help Second Sight patients; it would likely be generalized by all the other groups trying to build prosthetic devices (e.g., with optogenetics). Please contact Prof. Fine (ionefine at uw.edu) if you are interested in this particular project or just want information about UWIN ( http://uwin.washington.edu/). Feel free to also contact me (arokem at gmail.com) for questions about the Data Science Environment at the University of Washington (http://escience.washington.edu/) and the eScience fellowships. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.keraudren at googlemail.com Tue Nov 10 15:26:55 2015 From: kevin.keraudren at googlemail.com (Kevin Keraudren) Date: Tue, 10 Nov 2015 20:26:55 +0000 Subject: Template matching with transparent regions In-Reply-To: <18a064ca-32e6-4916-bb12-dd8647415575@googlegroups.com> References: <18a064ca-32e6-4916-bb12-dd8647415575@googlegroups.com> Message-ID: Hi, What about simply masking out in the matching results the region you want to exclude:

import numpy as np
from skimage.feature import match_template

...
matching_result = match_template( image, template, pad_input=False ) matching_result = np.array(matching_result) # mask out the region you want to exclude matching_result[mask==0] = 0 # coordinates of the best match best_match = np.unravel_index( np.argmax(matching_result), matching_result.shape ) See this example to better understand what matching_result should look like: http://scikit-image.org/docs/dev/auto_examples/plot_template.html Kind Regards, Kevin > On 10 Nov 2015, at 09:11, 'kwc' via scikit-image wrote: > > Hi, I am new to scikit-image. I would like to use the Template Matching function in scikit-image. > > In my template image, there are some regions that I would like to ignore or exclude from the template matching process. I plan to change those regions to become transparent (alpha channel). Does the alpha channel have any effect on the template matching result? > > Thanks. > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com . > For more options, visit https://groups.google.com/d/optout . -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Wed Nov 11 03:08:25 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Wed, 11 Nov 2015 00:08:25 -0800 Subject: Template matching with transparent regions In-Reply-To: References: <18a064ca-32e6-4916-bb12-dd8647415575@googlegroups.com> Message-ID: I think kwc meant that he wanted parts of the template to be ignored. 
I have the same problem at the moment, so will have to investigate soon if no other dev informs us how :) Stéfan On Nov 10, 2015 12:26 PM, "'Kevin Keraudren' via scikit-image" < scikit-image at googlegroups.com> wrote: > Hi, > > What about simply masking out in the matching results the region you want > to exclude: > > import numpy as np > from skimage.feature import match_template > > ... > > matching_result = match_template( image, template, pad_input=False ) > > matching_result = np.array(matching_result) > > # mask out the region you want to exclude > matching_result[mask==0] = 0 > > # coordinates of the best match > best_match = np.unravel_index( np.argmax(matching_result), matching_result.shape > ) > > See this example to better understand what matching_result should look > like: > http://scikit-image.org/docs/dev/auto_examples/plot_template.html > > Kind Regards, > > Kevin > > > On 10 Nov 2015, at 09:11, 'kwc' via scikit-image < > scikit-image at googlegroups.com> wrote: > > Hi, I am new to scikit-image. I would like to use the Template Matching > function in scikit-image. > > In my template image, there are some regions that I would like to ignore > or exclude from the template matching process. I plan to change those > regions to become transparent (alpha channel). Does the alpha channel have > any effect on the template matching result? > > Thanks. > > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout.
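For reference, Kevin's snippet above can be made self-contained as follows. The `image`, `template`, and mask are synthetic stand-ins invented for this sketch; as Stéfan points out, this masks the *result* array, not the template itself:

```python
import numpy as np
from skimage.feature import match_template

# synthetic stand-ins: a bright square in a dark image
image = np.zeros((50, 50))
image[20:28, 20:28] = 1.0
template = image[18:30, 18:30].copy()  # 12x12 cut-out containing the square

result = match_template(image, template, pad_input=False)

# hypothetical mask over the result: ignore matches in the first 10 columns
keep = np.zeros(result.shape, dtype=bool)
keep[:, 10:] = True
result[~keep] = 0

# (row, col) of the best remaining match
best_match = np.unravel_index(np.argmax(result), result.shape)
```

With `pad_input=False` the result has shape `image.shape - template.shape + 1`, so `best_match` is the top-left corner of the best window, here the position the template was cut from.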
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.keraudren at gmail.com Thu Nov 12 04:07:53 2015 From: kevin.keraudren at gmail.com (Kevin Keraudren) Date: Thu, 12 Nov 2015 09:07:53 +0000 Subject: Template matching with transparent regions In-Reply-To: References: <18a064ca-32e6-4916-bb12-dd8647415575@googlegroups.com> Message-ID: <901C8757-F3FF-4098-B1ED-65E4B5BE10B7@googlemail.com> Hi Stéfan, Thanks for correcting me! Looking at the template matching code, I think you will have difficulties making it work with non-square templates as long as you use fftconvolve(). You need to have a custom convolution code so that you can skip parts of the template as well as the corresponding parts of the image. However, what you want can be achieved in a brute-force manner with a few lines of Python (as long as you are working with 2D images of reasonable size!), using: from sklearn.feature_extraction.image import extract_patches_2d from scipy.spatial import distance See example attached. Kind Regards, Kevin > On 11 Nov 2015, at 08:08, Stéfan van der Walt wrote: > > I think kwc meant that he wanted parts of the template to be ignored. I have the same problem at the moment, so will have to investigate soon if no other dev informs us how :) > > Stéfan > > On Nov 10, 2015 12:26 PM, "'Kevin Keraudren' via scikit-image" > wrote: > Hi, > > What about simply masking out in the matching results the region you want to exclude: > > import numpy as np > from skimage.feature import match_template > > ...
> > matching_result = match_template( image, template, pad_input=False ) > > matching_result = np.array(matching_result) > > # mask out the region you want to exclude > matching_result[mask==0] = 0 > > # coordinates of the best match > best_match = np.unravel_index( np.argmax(matching_result), matching_result.shape ) > > See this example to better understand what matching_result should look like: > http://scikit-image.org/docs/dev/auto_examples/plot_template.html > > Kind Regards, > > Kevin > > >> On 10 Nov 2015, at 09:11, 'kwc' via scikit-image > wrote: >> >> Hi, I am new to scikit-image. I would like to use the Template Matching function in scikit-image. >> >> In my template image, there are some regions that I would like to ignore or exclude from the template matching process. I plan to change those regions to become transparent (alpha channel). Does the alpha channel have any effect on the template matching result? >> >> Thanks. >> >> >> -- >> You received this message because you are subscribed to the Google Groups "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com . >> For more options, visit https://groups.google.com/d/optout . > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com . > For more options, visit https://groups.google.com/d/optout . > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com . > For more options, visit https://groups.google.com/d/optout . -------------- next part -------------- An HTML attachment was scrubbed... 
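Kevin's brute-force idea (the attached script was scrubbed from the archive) can be sketched as follows. The data and the masked sum-of-squared-differences scoring are my assumptions, not code from the thread; the point is that comparing the template against every patch lets you simply zero out the ignored template pixels:

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d

# synthetic data: a bright square in a dark image
image = np.zeros((30, 30))
image[10:18, 10:18] = 1.0
template = image[8:20, 8:20].copy()           # 12x12 cut-out
tmask = np.ones(template.shape, dtype=bool)   # True where the template counts
tmask[:3, :3] = False                         # hypothetical ignored corner

# compare the template against every 12x12 patch of the image
patches = extract_patches_2d(image, template.shape)
ssd = (((patches - template) ** 2) * tmask).sum(axis=(1, 2))

# patches come out in row-major order, so the flat argmin
# unravels to the (row, col) of the best match
n_rows = image.shape[0] - template.shape[0] + 1
n_cols = image.shape[1] - template.shape[1] + 1
best = np.unravel_index(np.argmin(ssd), (n_rows, n_cols))
```

This is O(patches × template size), so as Kevin says it only suits 2D images of reasonable size.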
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: figure_1.png Type: image/png Size: 133334 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: match_template.py Type: text/x-python-script Size: 1530 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Sat Nov 14 17:35:10 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Sat, 14 Nov 2015 14:35:10 -0800 Subject: Finding pairs of images (homologous chromosomes) In-Reply-To: <907817f6-d601-4932-ba98-e09241022e68@googlegroups.com> References: <907817f6-d601-4932-ba98-e09241022e68@googlegroups.com> Message-ID: <87vb94b469.fsf@berkeley.edu> Hi Jean-Patrick On 2015-02-24 08:21:10, Jean-Patrick Pommier wrote: > I am trying to make pairs of images from the following set of images > (chromosomes sorted by size after rotation). The idea is to make a feature > vector for unsupervised classification (kmeans with 19 clusters) This is a *very* late reply, but I thought I'd mention that François Boulogne and Gaël Varoquaux have included a digit classifier in the skimage-demos repository, which may be helpful.
Best regards Stéfan From jeanpatrick.pommier at gmail.com Sun Nov 15 08:59:27 2015 From: jeanpatrick.pommier at gmail.com (Jean-Patrick Pommier) Date: Sun, 15 Nov 2015 14:59:27 +0100 Subject: Finding pairs of images (homologous chromosomes) In-Reply-To: <87vb94b469.fsf@berkeley.edu> References: <907817f6-d601-4932-ba98-e09241022e68@googlegroups.com> <87vb94b469.fsf@berkeley.edu> Message-ID: Thank you anyway, Jean-pat 2015-11-14 23:35 GMT+01:00 Stefan van der Walt : > Hi Jean-Patrick > > On 2015-02-24 08:21:10, Jean-Patrick Pommier < > jeanpatrick.pommier at gmail.com> wrote: > > I am trying to make pairs of images from the following set of images > > (chromosomes sorted by size after rotation). The idea is to make a > feature > > vector for unsupervised classification (kmeans with 19 clusters) > > This is a *very* late reply, but I thought I'd mention that François > Boulogne and Gaël Varoquaux have included a digit classifier in the > skimage-demos repository, which may be helpful. > > Best regards > Stéfan > > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/vYft2c3uFlk/unsubscribe. > To unsubscribe from this group and all its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -- http://dip4fish.blogspot.fr/ Dedicated to Digital Image Processing for FISH, QFISH and other things about the telomeres. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From matteo.niccoli at gmail.com Mon Nov 16 09:51:18 2015 From: matteo.niccoli at gmail.com (Matteo) Date: Mon, 16 Nov 2015 06:51:18 -0800 (PST) Subject: Issue with morphological filters In-Reply-To: <33118697-5377-432a-a5fe-b55135df54a9@googlegroups.com> References: <1427688200136.b53bcefa@Nodemailer> <33118697-5377-432a-a5fe-b55135df54a9@googlegroups.com> Message-ID: <29c732c9-6b6c-4b70-b43f-ef04f333a2ed@googlegroups.com> I never followed up on this (thanks Stéfan for reminding me): I never got to try regionprops, the last option suggested by Juan. In my final version (in this Geophysical tutorial notebook, cells 36-40) I ended up exporting the full edges as a full image instead of as filled contours. To remove the small objects I used (cell 38) scipy.ndimage.label and a mask. Thanks for all the suggestions, Matteo On Thursday, April 2, 2015 at 7:22:03 AM UTC-6, Matteo wrote: > OK > Thanks so much for your efforts Juan, I will take a look. > Matteo > > On Sunday, March 29, 2015 at 10:03:23 PM UTC-6, Juan Nunez-Iglesias wrote: > >> Hmm, I must say I don't know what's going on with either the >> reconstruction or the binary_fill_holes. (Originally I thought the image >> was inverted but you tried both polarities...) My advice would be to look >> at a few iterations of morphological reconstruction manually and see what's >> going on... >> >> Also, I would use the "grey" colormap, which is the most intuitive to >> look at (you used a reversed colormap for a couple of the images). >> >> Finally, it may be that you need to fill each "blob" independently. If >> so, have a look at skimage.measure.regionprops.filled_image. >> http://scikit-image.org/docs/dev/api/skimage.measure.html#regionprops >> >> Juan.
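The clean-up operations discussed in this thread (fill holes inside blobs, drop small objects) can be sketched end-to-end on synthetic data. This is a hedged sketch, not Matteo's actual pipeline, and note that `binary_fill_holes` lives in `scipy.ndimage`, not in `skimage.morphology`:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import morphology

# synthetic stand-in for the thresholded contour image:
# a blob with a hole in it, plus a small isolated speck
binary = np.zeros((40, 40), dtype=bool)
binary[5:25, 5:25] = True      # the blob
binary[12:16, 12:16] = False   # a hole inside the blob
binary[32:34, 32:34] = True    # a 4-pixel speck

filled = ndi.binary_fill_holes(binary)                          # hole is closed
cleaned = morphology.remove_small_objects(filled, min_size=10)  # speck is dropped
```

Working on a proper boolean array (rather than an RGB PNG read back from disk) sidesteps the dtype problems that came up later in the thread.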
>> >> >> >> On Sat, Mar 28, 2015 at 2:32 AM, Matteo wrote: >> >>> Hello Juan >>> >>> Here it is: >>> >>> http://nbviewer.ipython.org/urls/dl.dropbox.com/s/ancfxe2gx1fbyyp/morphology_test.ipynb?dl=0 >>> I get the same, odd results, with both ndimage's binary_fill_holes, and >>> reconstruction. Is it because of the structuring elements/masks? >>> Thanks for your help. >>> Matteo >>> >>> On Thursday, March 26, 2015 at 11:14:05 PM UTC-6, Juan Nunez-Iglesias >>> wrote: >>> >>>> Hi Matteo, >>>> >>>> Can you try putting this notebook up as a gist and pasting a link to >>>> the notebook? It's hard for me to follow all of the steps (and the polarity >>>> of the image) without the images inline. Is it just the inverse of what you >>>> want? And anyway why aren't you just using ndimage's binary_fill_holes? >>>> >>>> >>>> https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.morphology.binary_fill_holes.html >>>> >>>> Juan. >>>> >>>> >>>> >>>> >>>> On Fri, Mar 27, 2015 at 9:09 AM, Matteo wrote: >>>> >>>> Hello Juan >>>> >>>> Thanks so much for your suggestions.
>>>> Once I converted the image as you suggested: >>>> # import back image >>>> cfthdr=io.imread('filled_contour_THDR.png') >>>> cfthdr = color.rgb2gray(cfthdr) > 0.5 >>>> >>>> I get good results with opening: >>>> # clean it up with opening >>>> selem17 = disk(17) >>>> opened_thdr = opening(cfthdr, selem17)/255 >>>> # plot it >>>> fig = plt.figure(figsize=(5, 5)) >>>> ax = fig.add_subplot(1, 1, 1) >>>> ax.set_xticks([]) >>>> ax.set_yticks([]) >>>> plt.imshow(opened_thdr,cmap='bone') >>>> plt.show() >>>> # not bad >>>> >>>> >>>> With remove_small_objects the advantage is that it does not join blobs >>>> in the original: >>>> cfthdr_inv = ~cfthdr >>>> test=remove_small_objects(cfthdr,10000) >>>> # plot it >>>> fig = plt.figure(figsize=(5, 5)) >>>> ax = fig.add_subplot(1, 1, 1) >>>> ax.set_xticks([]) >>>> ax.set_yticks([]) >>>> plt.imshow(test,cmap='bone') >>>> plt.show() >>>> >>>> >>>> but with reconstruction done like this: >>>> # filling holes with morphological reconstruction >>>> seed = np.copy(cfthdr_inv) >>>> seed[1:-1, 1:-1] = cfthdr_inv.max() >>>> mask = cfthdr_inv >>>> filled = reconstruction(seed, mask, method='erosion') >>>> # plot it >>>> fig = plt.figure(figsize=(5, 5)) >>>> ax = fig.add_subplot(1, 1, 1) >>>> ax.set_xticks([]) >>>> ax.set_yticks([]) >>>> plt.imshow(filled,cmap='bone',vmin=cfthdr_inv.min(), vmax=cfthdr_inv. >>>> max()) >>>> plt.show() >>>> >>>> I get a completely white image. Do you have any suggestions as to why? >>>> >>>> Thanks again. Cheers, >>>> Matteo >>>> >>>> >>>> On Wednesday, March 25, 2015 at 6:29:43 PM UTC-6, Juan Nunez-Iglesias >>>> wrote: >>>> >>>> Hi Matteo, >>>> >>>> My guess is that even though you are looking at a "black and white" >>>> image, the png is actually an RGB png. Just check the output of >>>> "print(cfthdr.shape)". Should be straightforward to make it a binary image: >>>> >>>> from skimage import color >>>> cfthdr = color.rgb2gray(cfthdr) > 0.5 >>>> >>>> Then you should have a binary image.
(And inverting should be as simple >>>> as "cfthdr_inv = ~cfthdr") We have morphology.binary_fill_holes to do what >>>> you want. >>>> >>>> btw, there's also morphology.remove_small_objects, which does exactly >>>> what you did but as a function call. Finally, it looks like you are not >>>> using the latest version of scikit-image (0.11), so you might want to >>>> upgrade. >>>> >>>> Hope that helps! >>>> >>>> Juan. >>>> >>>> >>>> >>>> >>>> On Thu, Mar 26, 2015 at 8:48 AM, Matteo wrote: >>>> >>>> *Issues with morphological filters when trying to remove white holes in >>>> black objects in a binary image. Using opening or filling holes on >>>> inverted (or complement) of the original binary.* >>>> >>>> Hi there >>>> >>>> I have a series of derivatives calculated on geophysical data. >>>> >>>> Many of these derivatives have nice continuous maxima, so I treat them >>>> as images on which I do some cleanup with morphological filters. >>>> >>>> Here's one example of operations that I do routinely, and successfully: >>>> >>>> # threshold theta map using Otsu method >>>> >>>> thresh_th = threshold_otsu(theta) >>>> >>>> binary_th = theta > thresh_th >>>> >>>> # clean up small objects >>>> >>>> label_objects_th, nb_labels_th = sp.ndimage.label(binary_th) >>>> >>>> sizes_th = np.bincount(label_objects_th.ravel()) >>>> >>>> mask_sizes_th = sizes_th > 175 >>>> >>>> mask_sizes_th[0] = 0 >>>> >>>> binary_cleaned_th = mask_sizes_th[label_objects_th] >>>> >>>> # further enhance with morphological closing (dilation followed by an >>>> erosion) to remove small dark spots and connect small bright cracks >>>> >>>> # followed by an extra erosion >>>> >>>> selem = disk(1) >>>> >>>> closed_th = closing(binary_cleaned_th, selem)/255 >>>> >>>> eroded_th = erosion(closed_th, selem)/255 >>>> >>>> # Finally, extract lineaments using skeletonization >>>> >>>> skeleton_th=skeletonize(binary_th) >>>> >>>> skeleton_cleaned_th=skeletonize(binary_cleaned_th) >>>> >>>> # plot to compare >>>> >>>> fig
= plt.figure(figsize=(20, 7)) >>>> >>>> ax = fig.add_subplot(1, 2, 1) >>>> >>>> imshow(skeleton_th, cmap='bone_r', interpolation='none') >>>> >>>> ax2 = fig.add_subplot(1, 3, 2) >>>> >>>> imshow(skeleton_cleaned_th, cmap='bone_r', interpolation='none') >>>> >>>> ax.set_xticks([]) >>>> >>>> ax.set_yticks([]) >>>> >>>> ax2.set_xticks([]) >>>> ax2.set_yticks([]) >>>> >>>> Unfortunately I cannot share the data as it is proprietary, but I will >>>> for the next example, which is the one that does not work. >>>> >>>> There's one derivative that shows lots of detail but not continuous >>>> maxima. As a workaround I created filled contours in Matplotlib >>>> >>>> exported as an image. The image is attached. >>>> >>>> Now I want to import back the image and plot it to test: >>>> >>>> # import back image >>>> >>>> cfthdr=io.imread('filled_contour.png') >>>> >>>> # threshold using using Otsu method >>>> >>>> thresh_thdr = threshold_otsu(cfthdr) >>>> >>>> binary_thdr = cfthdr > thresh_thdr >>>> >>>> # plot it >>>> >>>> fig = plt.figure(figsize=(5, 5)) >>>> >>>> ax = fig.add_subplot(1, 1, 1) >>>> >>>> ax.set_xticks([]) >>>> >>>> ax.set_yticks([]) >>>> >>>> plt.imshow(binary_thdr, cmap='bone') >>>> >>>> plt.show() >>>> >>>> The above works without issues. >>>> >>>> >>>> >>>> Next I want to fill the white holes inside the black blobs. I thought >>>> of 2 strategies. >>>> >>>> The first would be to use opening; the second to invert the image, and >>>> then fill the holes as in here: >>>> >>>> http://scikit-image.org/docs/dev/auto_examples/plot_holes_and_peaks.html >>>> >>>> By the way, I found a similar example for opencv here >>>> >>>> >>>> http://stackoverflow.com/questions/10316057/filling-holes-inside-a-binary-object >>>> >>>> Let's start with opening. 
When I try: >>>> >>>> selem = disk(1) >>>> >>>> opened_thdr = opening(binary_thdr, selem) >>>> >>>> or: >>>> >>>> selem = disk(1) >>>> >>>> opened_thdr = opening(cfthdr, selem) >>>> >>>> I get an error message like this: >>>> >>>> --------------------------------------------------------------------------- >>>> >>>> >>>> ValueError Traceback (most recent call >>>> last) >>>> >>>> in () >>>> >>>> 1 #binary_thdr=img_as_float(binary_thdr,force_copy=False) >>>> >>>> ----> 2 opened_thdr = opening(binary_thdr, selem)/255 >>>> >>>> 3 >>>> >>>> ... >>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "scikit-image" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to scikit-image... at googlegroups.com. >>> For more options, visit https://groups.google.com/d/optout. >>> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteo.niccoli at gmail.com Mon Nov 16 10:03:30 2015 From: matteo.niccoli at gmail.com (Matteo) Date: Mon, 16 Nov 2015 07:03:30 -0800 (PST) Subject: Problems with coordinate ranges when doing conversion from LCH to LAB to RGB In-Reply-To: References: <25d96c44-56dd-4560-82a0-c03093ab4be3@googlegroups.com> <1412140555237.b62475a1@Nodemailer> <1f578d55-98ef-4460-a730-faaf1a777eed@googlegroups.com> <87sij6ruyf.fsf@sun.ac.za> Message-ID: <50135c69-09ba-4f3a-90b3-66e79ae584ea@googlegroups.com> I moved the notebook with the color conversion tests from RGB to LAB to LCH then back to LAB and to RGB to a stable location on GitHub. On Friday, October 3, 2014 at 8:13:54 AM UTC-6, Matteo wrote: > > Hi Stefan (and Juan) > > > I run this test last night. Nothing fancy, essentially I created a 16x16x3 > RGB azure image (RGB 0,153,255 or 0,0.6,1) and took it for a walk from RGB > to LAB to LCH then back to LAB and RGB again. 
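The roundtrip Matteo describes can be reproduced directly with skimage's color module. A minimal sketch, using the same 16x16 azure color as the notebook (the tolerance chosen for the closure check is my assumption):

```python
import numpy as np
from skimage import color

# 16x16 azure image, RGB (0, 0.6, 1), as in the notebook
rgb = np.zeros((16, 16, 3))
rgb[..., 1] = 0.6
rgb[..., 2] = 1.0

lab = color.rgb2lab(rgb)            # RGB -> LAB
lch = color.lab2lch(lab)            # LAB -> LCH
lab_back = color.lch2lab(lch)       # LCH -> LAB
rgb_back = color.lab2rgb(lab_back)  # LAB -> RGB

# how well the loop of transformations closes
max_err = np.abs(rgb - rgb_back).max()
```

For an in-gamut color like this one, the loop closes to within floating-point noise, consistent with Matteo's observation that the pure-red case only leaves residuals on the order of 1e-16.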
> > http://nbviewer.ipython.org/urls/dl.dropbox.com/s/44b9udiqz4npp0b/color_space_conversion_skimage.ipynb?dl=0 > Feel free to use this if you like as an example of color conversion. I > will be adding it to my GitHub anyway https://github.com/mycarta > > As you can see the loop of transformations closes precisely. > When I tried with pure red (RGB 1,0,0 or 255,0,0) the final RGB values are > all e-16 numbers, some negative. Not sure if that qualifies as deficiencies. > Certainly it points to the need to include documentation on the > coordinate ranges as Juan observed. > > From this I conclude that: > r, g, and b are in the range (0 1) > L is in the range (0 100) as Juan pointed out (already evident from my > color evaluation notebook), however > a and b must be in the range (-100 100) since a is small but positive and > b is large but negative (as expected) in my example > chroma c must be in the range (0 100) because it is the distance from the > polar axis so it can't be negative > h is in the range (0 2pi) as specified already in the documentation > > I hope this is useful. I'll be checking my original example with the new > ranges tonight. > Cheers > Matteo > > > > > > On Thursday, October 2, 2014 4:02:54 PM UTC-6, Stefan van der Walt wrote: > >> Hi Matteo >> >> On 2014-10-01 20:56:09, Matteo wrote: >> > A good test would be to convert a single colour, say red, from RGB to >> LAB, >> > to LCH, then back to LAB and RGB and check the values at each step. >> I'll >> > try tomorrow and post my results back for your information. >> >> Did you have any luck with that? Also, if you come up with any good >> test cases that point out deficiencies in the code, we'd be happy to >> include them in the test suite. >> >> Thanks for the link to the article--it's very enjoyable to learn more >> about color map perception in such a vividly illustrated way. >> >> Regards >> Stéfan >> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kanshuzhi at gmail.com Mon Nov 16 20:41:39 2015 From: kanshuzhi at gmail.com (Dai Yan) Date: Mon, 16 Nov 2015 17:41:39 -0800 (PST) Subject: Improving HoG In-Reply-To: References: Message-ID: Hello, Martin I have found that skimage's HOG doesn't have an "overlapping" feature. May I know whether your patches have been merged into the master tree, or could you please share your branch? Best Regards Dai Yan On Wednesday, January 28, 2015 at 3:18:34 PM UTC+8, Martin Savc wrote: > > I've been implementing my own HoG transform looking at different sources. > While the implementation in scikit-image seems to lack certain features > (multiple normalization schemes, general block overlap, Gaussian block > window, trillinear interpolation/weighting of bin assignments,...) these > don't seem to be that important, at least when applied to my current > problem (eye blink analysis). > > Most of these would increase complexity, giving the implementation a > complicated look, with little gain. I've also been looking into some > practical improvements - integral histogram, separating the cell-block > histogram feature to use it with other dense feature transforms such as > LBP, a HoG visualization function that would render the visualization at > higher resolutions that the original image. > > Would any of these be welcomed additions to scikit-image? > > Regards, > Martin Savc > PuppySaturation > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanshuzhi at gmail.com Mon Nov 16 20:52:36 2015 From: kanshuzhi at gmail.com (Dai Yan) Date: Mon, 16 Nov 2015 17:52:36 -0800 (PST) Subject: Improving HoG In-Reply-To: References: Message-ID: <5cf1ae50-5aa8-44fc-b36f-fded100d0420@googlegroups.com> Hello, Martin I am looking for a HOG implementation with an "overlap" feature. May I know whether you have committed your patches, or could you please share your branch? Thanks Best Regards Dai Yan On Wednesday, January 28, 2015 at 3:18:34 PM UTC+8, Martin Savc wrote:
> > I've been implementing my own HoG transform looking at different sources. > While the implementation in scikit-image seems to lack certain features > (multiple normalization schemes, general block overlap, Gaussian block > window, trillinear interpolation/weighting of bin assignments,...) these > don't seem to be that important, at least when applied to my current > problem (eye blink analysis). > > Most of these would increase complexity, giving the implementation a > complicated look, with little gain. I've also been looking into some > practical improvements - integral histogram, separating the cell-block > histogram feature to use it with other dense feature transforms such as > LBP, a HoG visualization function that would render the visualization at > higher resolutions that the original image. > > Would any of these be welcomed additions to scikit-image? > > Regards, > Martin Savc > PuppySaturation > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ejsaiet at alaska.edu Tue Nov 17 22:22:04 2015 From: ejsaiet at alaska.edu (Arctic_python) Date: Tue, 17 Nov 2015 19:22:04 -0800 (PST) Subject: measuring the longest thread of a skeleton Message-ID: Hello, Anyone has suggestions for an algorithm to measure the length of a skeleton line/thread(e.g http://scikit-image.org/docs/dev/auto_examples/plot_skeleton.html)? The context- I skeletonize a shape to infer its length. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From pratapgr8 at gmail.com Wed Nov 18 01:43:58 2015 From: pratapgr8 at gmail.com (Pratap Vardhan) Date: Tue, 17 Nov 2015 22:43:58 -0800 (PST) Subject: measuring the longest thread of a skeleton In-Reply-To: References: Message-ID: My first thought was what Juan suggested and seemed logical to do that. As an alternative, you could also (this may be an overfill and could be slower than network approach) try. 1. 
From every endpoint of the skeleton compute the distance transform (using flood-fill or neighbourhood methods). 2. Now the maximum of all of those distances will give you the longest path in the skeleton. This way you can have the trace path of the longest thread in the skeleton in image form itself. On Wednesday, November 18, 2015 at 8:52:05 AM UTC+5:30, Arctic_python wrote: > > Hello, > Anyone has suggestions for an algorithm to measure the length of a > skeleton line/thread(e.g > http://scikit-image.org/docs/dev/auto_examples/plot_skeleton.html)? > The context- I skeletonize a shape to infer its length. > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliusbierk at gmail.com Wed Nov 18 03:47:29 2015 From: juliusbierk at gmail.com (Julius Bier Kirkegaard) Date: Wed, 18 Nov 2015 08:47:29 +0000 Subject: measuring the longest thread of a skeleton In-Reply-To: References: Message-ID: I have some code lying around that will do this. It's not the most efficient way though, but if you just need a quick solution: def floyd_warshall(x,y): > dist = np.sqrt((x[:,np.newaxis]-x[np.newaxis,:])**2 + > (y[:,np.newaxis]-y[np.newaxis,:])**2) > d = np.array(dist) > d[dist>1.5] = np.inf # sqrt(2) < 1.5 < 2 > n = len(x) > for k in xrange(n): > kd = d[:,k,np.newaxis] + d[k,:] > d = np.minimum(d,kd) > return d > skel = np.argwhere(skel) > x, y = skel[:,0], skel[:,1] > d = np.max(floyd_warshall(x,y)) (if you have many separated skeletons it's worth doing each label independently) A better method is to find the two end points and use Dijkstra's algorithm on those. On 18 November 2015 at 06:43, Pratap Vardhan wrote: > My first thought was what Juan suggested and seemed logical to do that. > > As an alternative, you could also (this may be an overfill and could be > slower than network approach) try. > > 1. From every endpoints of skeleton compute the distance transform (using > flood-fill or neighbourhood methods). > 2.
Now the maximum distance for above all distances will give you > the longest path in skeleton. > > This way you can have the trace path of the longest thread in skeleton in > image form itself. > > > On Wednesday, November 18, 2015 at 8:52:05 AM UTC+5:30, Arctic_python > wrote: >> >> Hello, >> Anyone has suggestions for an algorithm to measure the length of a >> skeleton line/thread(e.g >> http://scikit-image.org/docs/dev/auto_examples/plot_skeleton.html)? >> The context- I skeletonize a shape to infer its length. >> Thanks >> > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ejsaiet at alaska.edu Wed Nov 18 15:07:31 2015 From: ejsaiet at alaska.edu (Eyal Saiet) Date: Wed, 18 Nov 2015 11:07:31 -0900 Subject: measuring the longest thread of a skeleton In-Reply-To: References: Message-ID: Thanks, I will look into the network approach. I guess I was naive to assume there is a common algorithm built into scikit-image to measure the length of the skeleton. On Tue, Nov 17, 2015 at 9:43 PM, Pratap Vardhan wrote: > My first thought was what Juan suggested and seemed logical to do that. > > As an alternative, you could also (this may be an overfill and could be > slower than network approach) try. > > 1. From every endpoints of skeleton compute the distance transform (using > flood-fill or neighbourhood methods). > 2. Now the maximum distance for above all distances will give you > the longest path in skeleton. > > This way you can have the trace path of the longest thread in skeleton in > image form itself.
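Julius's suggestion to use Dijkstra's algorithm can be sketched with scipy's csgraph module. The L-shaped skeleton and the 8-neighbour adjacency construction below are my assumptions, not code from the thread; the longest shortest path between any two skeleton pixels gives the thread length:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

# synthetic skeleton: an L-shaped, one-pixel-wide curve
skel = np.zeros((10, 10), dtype=bool)
skel[2, 2:8] = True   # horizontal arm
skel[2:9, 7] = True   # vertical arm

coords = np.argwhere(skel)
# adjacency: 8-connected neighbours, weighted by euclidean distance
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
adj = np.where((dist > 0) & (dist < 1.5), dist, 0.0)

# all-pairs shortest paths; the maximum is the skeleton's longest path
d = dijkstra(coo_matrix(adj), directed=False)
length = d[np.isfinite(d)].max()
```

For the L-shape above the longest path runs straight along both arms with one diagonal step at the corner, so the measured length is 9 + sqrt(2). For large skeletons, restricting `dijkstra` to the endpoint pixels (via its `indices` argument) avoids the all-pairs cost.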
> > On Wednesday, November 18, 2015 at 8:52:05 AM UTC+5:30, Arctic_python > wrote: >> >> Hello, >> Anyone has suggestions for an algorithm to measure the length of a >> skeleton line/thread(e.g >> http://scikit-image.org/docs/dev/auto_examples/plot_skeleton.html)? >> The context- I skeletonize a shape to infer its length. >> Thanks >> > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/wM-zMGL9dVI/unsubscribe. > To unsubscribe from this group and all its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -- Eyal Saiet Project manager Remote sensing and in-situ measurements Geophysical Institute University of Alaska Fairbanks Fairbanks, AK 99775 (907) 750 6555 (cell) ejsaiet at alaska.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From darleison.f at gmail.com Wed Nov 18 17:59:23 2015 From: darleison.f at gmail.com (Darleison Rodrigues) Date: Wed, 18 Nov 2015 14:59:23 -0800 (PST) Subject: Implementation of imtool for skimage Message-ID: <88259d1a-f206-4b3f-b057-774bed1ddc7a@googlegroups.com> Is someone developing this? I will start a project to develop a lot of image functions from MATLAB and I want to know who is working in this field. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Wed Nov 18 01:26:30 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Wed, 18 Nov 2015 17:26:30 +1100 Subject: measuring the longest thread of a skeleton In-Reply-To: References: Message-ID: Hey Arctic!
=) This is probably overkill, but you could build a networkx graph of the pixels of the skeleton using a variation of this recipe: http://ilovesymposia.com/2014/06/24/a-clever-use-of-scipys-ndimage-generic_filter-for-n-dimensional-image-processing/ You will need every pixel of the skeleton to be its own label. You can get this by using the np.arange function and setting to zero every pixel not in the skeleton, and then use networkx's diameter function to find the length of the longest path in the graph: https://networkx.github.io/documentation/latest/reference/generated/networkx.algorithms.distance_measures.diameter.html I hope that's a good enough outline to get you where you want to go! But post back to the list if you need more detail... Just a bit stretched for time right now. Juan. On Wed, Nov 18, 2015 at 2:22 PM, Arctic_python wrote: > Hello, > Anyone has suggestions for an algorithm to measure the length of a > skeleton line/thread(e.g > http://scikit-image.org/docs/dev/auto_examples/plot_skeleton.html)? > The context- I skeletonize a shape to infer its length. > Thanks > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ejsaiet at alaska.edu Wed Nov 18 23:02:52 2015 From: ejsaiet at alaska.edu (Arctic_python) Date: Wed, 18 Nov 2015 20:02:52 -0800 (PST) Subject: Counting objects using label in skimage.measure Message-ID: <18179150-16c2-4d34-b56f-8f3e6d91401f@googlegroups.com> Hello, I am trying to count the number of objects (that have "ones") in a binary array. Of course I do not expect to count holes.
But when I run the below code,

import numpy as np
from skimage import measure

file1 = '4.csv'
a = np.loadtxt(open(file1, 'rb'), delimiter=',', dtype=int)
# print(a.shape)

img = measure.label(a)
propsa = measure.regionprops(img)
length = len(propsa)
print('length=' + str(length))
for label in propsa:
    print(label.centroid)

returns

length=2
(214.23444957510378, 505.25546156532539)
(238.77173913043478, 740.28260869565213)

I get two objects. From reading the centroid coordinates (above), it seems it is counting the center of the white object and the cavity (can be seen in the image below). Why is this algorithm counting cavities and not only objects that are made of "ones"? Is there an argument to count only objects and not cavities? Attached is the csv file if you want to try for yourself. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 4.png Type: image/png Size: 3605 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 4.csv Type: text/csv Size: 853333 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Auto Generated Inline Image 1 Type: image/png Size: 10392 bytes Desc: not available URL: From jsch at demuc.de Wed Nov 18 23:34:26 2015 From: jsch at demuc.de (=?utf-8?Q?Johannes_Sch=C3=B6nberger?=) Date: Wed, 18 Nov 2015 23:34:26 -0500 Subject: Counting objects using label in skimage.measure In-Reply-To: <18179150-16c2-4d34-b56f-8f3e6d91401f@googlegroups.com> References: <18179150-16c2-4d34-b56f-8f3e6d91401f@googlegroups.com> Message-ID: Hi, set background=0 when calling the label function. Best, Johannes > On Nov 18, 2015, at 11:02 PM, Arctic_python wrote: > > Hello, > I am trying to count the number of objects(that has "ones") in a binary array. Of course I do not expect to count holes.
> But when I run the below code, > file1='4.csv' > a=np.loadtxt(open(file1,'rb'),delimiter=',',dtype=int) > #print (a.shape) > > img=measure.label(a) > propsa = measure.regionprops(img) > length = len(propsa) > print ('length='+str(length)) > for label in propsa: > print (label.centroid) > > returns > > length=2 > (214.23444957510378, 505.25546156532539) > (238.77173913043478, 740.28260869565213) > I get two objects. From reading the centroid coordinates(above), it seems it is counting the center of the white object and the cavity (can be seen in the image bellow). > Why is this algorithm counting cavities and not only objects that are of "ones"? Is there an argument to enforce only objects and not cavities? > Attached is the csv file if you want to try for yourself. > Thanks > > > > > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com . > To post to this group, send email to scikit-image at googlegroups.com . > To view this discussion on the web, visit https://groups.google.com/d/msgid/scikit-image/18179150-16c2-4d34-b56f-8f3e6d91401f%40googlegroups.com . > For more options, visit https://groups.google.com/d/optout . > <4.png><4.csv> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Wed Nov 18 20:08:36 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 19 Nov 2015 12:08:36 +1100 Subject: measuring the longest thread of a skeleton In-Reply-To: References: Message-ID: Eyal, not naive at all, it's a reasonable expectation! Skeletonization and related algorithms are a place where the library needs improving. If you come up with a good solution, we can help you get it into the library. See http://scikit-image.org/docs/stable/contribute.html . Thanks! Juan. 
On Thu, Nov 19, 2015 at 7:07 AM, Eyal Saiet wrote: > Thanks I will look into the network approach. > I guess I was naive to assume there is a common algorithm, built in > scikit-image to measure the length of the skeleton. > > On Tue, Nov 17, 2015 at 9:43 PM, Pratap Vardhan > wrote: > >> My first thought was what Juan suggested and seemed logical to do that. >> >> As an alternative, you could also (this may be an overfill and could be >> slower than network approach) try. >> >> 1. From every endpoints of skeleton compute the distance transform (using >> flood-fill or neighbourhood methods). >> 2. Now the maximum distance for above all distances will give you >> the longest path in skeleton. >> >> This way you can have the trace path of the longest thread in skeleton in >> image form itself. >> >> >> On Wednesday, November 18, 2015 at 8:52:05 AM UTC+5:30, Arctic_python >> wrote: >>> >>> Hello, >>> Anyone has suggestions for an algorithm to measure the length of a >>> skeleton line/thread(e.g >>> http://scikit-image.org/docs/dev/auto_examples/plot_skeleton.html)? >>> The context- I skeletonize a shape to infer its length. >>> Thanks >>> >> -- >> You received this message because you are subscribed to a topic in the >> Google Groups "scikit-image" group. >> To unsubscribe from this topic, visit >> https://groups.google.com/d/topic/scikit-image/wM-zMGL9dVI/unsubscribe. >> To unsubscribe from this group and all its topics, send an email to >> scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. >> > > > > -- > Eyal Saiet > > Project manager > Remote sensing and in-situ measurements > > Geophysical Institute > University of Alaska Fairbanks > Fairbanks, AK 99775 > (907) 750 6555 (cell) > > ejsaiet at alaska.edu > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. 
> To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Wed Nov 18 20:14:50 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 19 Nov 2015 12:14:50 +1100 Subject: Implementation of imtool for skimage In-Reply-To: <88259d1a-f206-4b3f-b057-774bed1ddc7a@googlegroups.com> References: <88259d1a-f206-4b3f-b057-774bed1ddc7a@googlegroups.com> Message-ID: Hi Darleison, Many of the functions in the imaging toolbox are already in scikit-image. If you implement additional functions, they would be very welcome in scikit-image! See http://scikit-image.org/docs/stable/contribute.html Please note this though: the code from Matlab, though sometimes visible, is *not* available under an open source license. That means that if you look at it and simply port it to Python, it is *illegal* to include it in scikit-image. When you write functions to match Matlab functionality, you have to write them from first principles, not from looking at the Matlab code. (Disclaimer: I am not a lawyer, but that is my interpretation and I think it matches the scikit-image policy. Others might chime in with different opinions.) Juan. On Thu, Nov 19, 2015 at 9:59 AM, Darleison Rodrigues wrote: > Someone developing this? > i will start a project to develop a lot of image functions from MATLAB and > i want know who is working in this field. > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ejsaiet at alaska.edu Thu Nov 19 17:55:32 2015 From: ejsaiet at alaska.edu (Arctic_python) Date: Thu, 19 Nov 2015 14:55:32 -0800 (PST) Subject: Counting objects using label in skimage.measure In-Reply-To: References: <18179150-16c2-4d34-b56f-8f3e6d91401f@googlegroups.com> Message-ID: Hello Johannes, thanks for the advice, but it did not work. When I set background=0, the script counted one object when there were two. Judging from the single object's centroid, prop=(214.59826983468628, 505.59264087219293), it missed the small dot at the bottom of the image. Trying to understand what the algorithm does, I tested the following code:

import numpy as np
from skimage import filters, morphology, measure

a = np.array(np.matrix('0 1 0 0 1;0 1 0 0 0; 0 0 0 0 0;0 0 0 0 1'))
print(a)
img = measure.label(a, background=0)
propsa = measure.regionprops(img)
length = len(propsa)
print('length=' + str(length))
for label in propsa:
    print(label.centroid)

returns

[[0 1 0 0 1]
 [0 1 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 1]]
length=2
(0.0, 4.0)
(3.0, 4.0)

When trying

a = np.array(np.matrix('0 1 0 0 1;0 1 0 0 0; 0 0 0 0 0;0 0 0 0 1'))
print(a)
img = measure.label(a)
propsa = measure.regionprops(img)
length = len(propsa)
print('length=' + str(length))
for label in propsa:
    print(label.centroid)

returns

[[0 1 0 0 1]
 [0 1 0 0 0]
 [0 0 0 0 0]
 [0 0 0 0 1]]
length=3
(0.5, 1.0)
(0.0, 4.0)
(3.0, 4.0)

I do not understand why the program counts two objects when background=0, and three objects when background is left at the default. Thanks for the help On Wednesday, November 18, 2015 at 7:34:30 PM UTC-9, Johannes Schönberger wrote: > > Hi, > > set background=0 when calling the label function. > > Best, Johannes > > On Nov 18, 2015, at 11:02 PM, Arctic_python > wrote: > > Hello, > I am trying to count the number of objects(that has "ones") in a binary > array. Of course I do not expect to count holes.
> But when I run the below code, > file1='4.csv' > a=np.loadtxt(open(file1,'rb'),delimiter=',',dtype=int) > #print (a.shape) > > img=measure.label(a) > propsa = measure.regionprops(img) > length = len(propsa) > print ('length='+str(length)) > for label in propsa: > print (label.centroid) > > returns > > length=2 > (214.23444957510378, 505.25546156532539) > (238.77173913043478, 740.28260869565213) > I get two objects. From reading the centroid coordinates(above), it seems > it is counting the center of the white object and the cavity (can be seen > in the image bellow). > Why is this algorithm counting cavities and not only objects that are of > "ones"? Is there an argument to enforce only objects and not cavities? > Attached is the csv file if you want to try for yourself. > Thanks > > > > > > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image... at googlegroups.com . > To post to this group, send email to scikit... at googlegroups.com > . > To view this discussion on the web, visit > https://groups.google.com/d/msgid/scikit-image/18179150-16c2-4d34-b56f-8f3e6d91401f%40googlegroups.com > > . > For more options, visit https://groups.google.com/d/optout. > <4.png><4.csv> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
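One plausible explanation for the discrepancy above (an inference from how old scikit-image releases numbered regions, worth verifying against 0.11 directly): `measure.label` used to start numbering at 0, and `regionprops` ignores non-positive labels. Without `background=0`, the background region itself took label 0 and was dropped, leaving the three objects; with `background=0`, the *first object* took label 0 and was dropped, leaving two. From scikit-image 0.12 onward labels start at 1, which removes the ambiguity. Counting 8-connected foreground components directly (with a made-up helper, no skimage required) confirms the expected answer for the small test array:

```python
from collections import deque
import numpy as np

def count_objects(a):
    """Number of 8-connected components of nonzero pixels; holes (zeros)
    are never visited, so cavities are not counted."""
    a = np.asarray(a)
    seen = np.zeros(a.shape, dtype=bool)
    count = 0
    for r, c in zip(*np.nonzero(a)):
        if seen[r, c]:
            continue
        count += 1                      # new component found; flood-fill it
        q = deque([(r, c)])
        seen[r, c] = True
        while q:
            y, x = q.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < a.shape[0] and 0 <= nx < a.shape[1]
                            and a[ny, nx] and not seen[ny, nx]):
                        seen[ny, nx] = True
                        q.append((ny, nx))
    return count

a = np.array([[0, 1, 0, 0, 1],
              [0, 1, 0, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 0, 0, 0, 1]])
print(count_objects(a))  # -> 3: the two-pixel bar and the two isolated dots
```

Run on the full 4.csv array, the same routine would report only the solid objects, since cavities are enclosed background and are never visited.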
Name: Auto Generated Inline Image 1 Type: image/png Size: 10971 bytes Desc: not available URL: From silvertrumpet999 at gmail.com Fri Nov 20 00:23:49 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Thu, 19 Nov 2015 21:23:49 -0800 (PST) Subject: Help wanted: implementation of 3D medial axis skeletonization In-Reply-To: <87si4eeikg.fsf@berkeley.edu> References: <6aba179a-caa8-43b2-b6c1-7820598a2c75@googlegroups.com> <1446517247801.ca5b9f11@Nodemailer> <20151103211809.GB229197@phare.normalesup.org> <87si4eeikg.fsf@berkeley.edu> Message-ID: <1ac2d847-8e1f-40cf-a022-da793e186020@googlegroups.com> It looks like the lobster and bonsai can be downloaded directly as raw volumes (8 bit only, but will serve these purposes perfectly well) here: http://www.volvis.org/ This simple wrapper for np.fromfile will load them

import numpy as np

def loadraw(rawfile, shape=None, dtype=np.uint8):
    """
    Load RAW volume to a NumPy array.

    Parameters
    ----------
    rawfile : string
        Path to *.raw volume.
    shape : tuple
        Shape of the volume. If not provided, output will be a rank-1
        stream which can be reshaped as desired.
    dtype : NumPy dtype
        Dtype of the raw image volume.
    """
    vol = np.fromfile(rawfile, dtype=dtype)
    if shape is not None:
        vol = vol.reshape(shape)
    return vol

For the lobster, use shape=(56, 324, 301) and recall the voxel spacing has a ratio of 1.4:1:1 For the bonsai, use shape=(256, 256, 256) and the volume is isotropic (1:1:1 spacing) On Monday, November 9, 2015 at 8:42:26 PM UTC-5, stefanv wrote: Hi Kevin > > On 2015-11-07 09:46:17, 'Kevin Keraudren' via scikit-image < > scikit-image at googlegroups.com> wrote: > > I don't want to volunteer for this project, but I just wanted to > > mention that the 3D skeletonization from ITK is easily accessible to > > Python through SimpleITK, see example below for the lobster > > dataset. SimpleITK could be used for comparison or validation of the > > proposed scikit-image algorithm. > > Thanks for the pointer.
In this case, one of the purposes of the > exercise is to stay away from a heavy dependency such as ITK. > > > PS: is there another way to load those *.pvm datasets in Python > > without converting them to raw and hardcoding the image dimension and > > pixel type? An skimage.io.imread() plugin? > > I have no idea about .pvm files, but perhaps we should start a set of > plugin gists on the wiki somewhere? > > St?fan > ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From h.benoudjit at gmail.com Fri Nov 20 08:20:15 2015 From: h.benoudjit at gmail.com (Hakim Benoudjit) Date: Fri, 20 Nov 2015 05:20:15 -0800 (PST) Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity) Message-ID: Hi, Is there a clustering algorithm implemented in *scikit-image *that perform the image clustering by taking into account the *spatial context *of the clustered pixel (its neighbourhood), besides its *pixel brightness*? For the time being, I'm clustering images by reshaping them as vectors of pixels intensities distributions, and then performing the *K-means *or *Gaussian mixture models* implemented in *scikit-learn*. But, I'm looking for a image clustering technique implemented (or could be implemented) in *scikit-image *that would consider the neighbourhood of a pixel when classifying it. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Fri Nov 20 18:20:02 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Fri, 20 Nov 2015 15:20:02 -0800 Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity) In-Reply-To: References: Message-ID: Hi Hakim Are you looking for a metric? Perhaps consider structural similarity index. 
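Stéfan's pointer can be made concrete: scikit-image ships a structural similarity implementation (the exact import path varies by version), but the core index from Wang et al. (2004) is simple enough to sketch in NumPy. This single-window, whole-array simplification is illustrative only, not skimage's sliding-window version:

```python
import numpy as np

def global_ssim(x, y, L=1.0):
    """Single-window SSIM over whole arrays, with the standard constants
    c1 = (0.01 L)^2 and c2 = (0.03 L)^2 for dynamic range L."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
a = rng.random((16, 16))
noisy = np.clip(a + 0.1 * rng.standard_normal((16, 16)), 0, 1)
print(round(global_ssim(a, a), 3))  # -> 1.0 (identical images)
print(global_ssim(a, noisy) < 1.0)  # -> True (similarity drops with noise)
```

The local, windowed form is what makes SSIM sensitive to spatial context, which is why it came up in this thread.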
Regards St?fan On Nov 20, 2015 22:20, "Hakim Benoudjit" wrote: > Hi, > > Is there a clustering algorithm implemented in *scikit-image *that > perform the image clustering by taking into account the *spatial context *of > the clustered pixel (its neighbourhood), besides its *pixel brightness*? > > For the time being, I'm clustering images by reshaping them as vectors of > pixels intensities distributions, and then performing the *K-means *or *Gaussian > mixture models* implemented in *scikit-learn*. But, I'm looking for a > image clustering technique implemented (or could be implemented) in *scikit-image > *that would consider the neighbourhood of a pixel when classifying it. > > Thanks. > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > To post to this group, send email to scikit-image at googlegroups.com. > To view this discussion on the web, visit > https://groups.google.com/d/msgid/scikit-image/d6eeb2b6-2abc-40c0-9c15-17185731f414%40googlegroups.com > > . > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From h.benoudjit at gmail.com Fri Nov 20 19:28:05 2015 From: h.benoudjit at gmail.com (Hakim Benoudjit) Date: Fri, 20 Nov 2015 16:28:05 -0800 (PST) Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity) In-Reply-To: References: Message-ID: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> Hi St?fan, Thanks for your reponse. What I'm looking for is a *spatial criteria* that encourages the *clustering algorithm* (K-means or others) to group together similar *neighbouring pixels* inside the same cluster. This will help avoid having persistent noise inside a cluster. 
Le vendredi 20 novembre 2015 13:20:15 UTC, Hakim Benoudjit a ?crit : > > Hi, > > Is there a clustering algorithm implemented in *scikit-image *that > perform the image clustering by taking into account the *spatial context *of > the clustered pixel (its neighbourhood), besides its *pixel brightness*? > > For the time being, I'm clustering images by reshaping them as vectors of > pixels intensities distributions, and then performing the *K-means *or *Gaussian > mixture models* implemented in *scikit-learn*. But, I'm looking for a > image clustering technique implemented (or could be implemented) in *scikit-image > *that would consider the neighbourhood of a pixel when classifying it. > > Thanks. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From h.benoudjit at gmail.com Fri Nov 20 20:23:27 2015 From: h.benoudjit at gmail.com (Hakim Benoudjit) Date: Fri, 20 Nov 2015 17:23:27 -0800 (PST) Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity) In-Reply-To: References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> Message-ID: Hi Juan, Thanks for your answer, this seems to be a nice algorithm for the denoising of speckle. But actually I'm looking for an image clustering (segmentation) technique instead (that would take into consideration the spatial context of pixels). Le samedi 21 novembre 2015 00:47:21 UTC, Juan Nunez-Iglesias a ?crit : > > Hey Hakim, > > The right answer here depends on your ultimate goal. If you're after > denoising, non-local means denoising (recently added to skimage) sounds > like exactly what you're after. > > Juan. > > On Sat, Nov 21, 2015 at 11:28 AM, Hakim Benoudjit > wrote: > >> Hi St?fan, >> >> Thanks for your reponse. >> What I'm looking for is a *spatial criteria* that encourages the *clustering >> algorithm* (K-means or others) to group together similar *neighbouring >> pixels* inside the same cluster. 
This will help avoid having persistent >> noise inside a cluster. >> >> Le vendredi 20 novembre 2015 13:20:15 UTC, Hakim Benoudjit a ?crit : >>> >>> Hi, >>> >>> Is there a clustering algorithm implemented in *scikit-image *that >>> perform the image clustering by taking into account the *spatial >>> context *of the clustered pixel (its neighbourhood), besides its *pixel >>> brightness*? >>> >>> For the time being, I'm clustering images by reshaping them as vectors >>> of pixels intensities distributions, and then performing the *K-means *or >>> *Gaussian mixture models* implemented in *scikit-learn*. But, I'm >>> looking for a image clustering technique implemented (or could be >>> implemented) in *scikit-image *that would consider the neighbourhood of >>> a pixel when classifying it. >>> >>> Thanks. >>> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image... at googlegroups.com . >> To post to this group, send email to scikit... at googlegroups.com >> . >> To view this discussion on the web, visit >> https://groups.google.com/d/msgid/scikit-image/0aad2045-b9da-442c-97bc-06c596b0469e%40googlegroups.com >> >> . >> >> For more options, visit https://groups.google.com/d/optout. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Fri Nov 20 19:46:58 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Sat, 21 Nov 2015 11:46:58 +1100 Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity) In-Reply-To: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> Message-ID: Hey Hakim, The right answer here depends on your ultimate goal. 
If you're after denoising, non-local means denoising (recently added to skimage) sounds like exactly what you're after. Juan. On Sat, Nov 21, 2015 at 11:28 AM, Hakim Benoudjit wrote: > Hi St?fan, > > Thanks for your reponse. > What I'm looking for is a *spatial criteria* that encourages the *clustering > algorithm* (K-means or others) to group together similar *neighbouring > pixels* inside the same cluster. This will help avoid having persistent > noise inside a cluster. > > Le vendredi 20 novembre 2015 13:20:15 UTC, Hakim Benoudjit a ?crit : >> >> Hi, >> >> Is there a clustering algorithm implemented in *scikit-image *that >> perform the image clustering by taking into account the *spatial context >> *of the clustered pixel (its neighbourhood), besides its *pixel >> brightness*? >> >> For the time being, I'm clustering images by reshaping them as vectors of >> pixels intensities distributions, and then performing the *K-means *or *Gaussian >> mixture models* implemented in *scikit-learn*. But, I'm looking for a >> image clustering technique implemented (or could be implemented) in *scikit-image >> *that would consider the neighbourhood of a pixel when classifying it. >> >> Thanks. >> > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > To post to this group, send email to scikit-image at googlegroups.com. > To view this discussion on the web, visit > https://groups.google.com/d/msgid/scikit-image/0aad2045-b9da-442c-97bc-06c596b0469e%40googlegroups.com > > . > > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jowulff at gmail.com Sat Nov 21 09:24:35 2015 From: jowulff at gmail.com (Jonas Wulff) Date: Sat, 21 Nov 2015 15:24:35 +0100 Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity) In-Reply-To: References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> Message-ID: Hi Hakim, Have you tried just adding the coordinates of a pixel to its features? For each pixel, the features would then be R,G,B,X,Y. From your description, that seems what you're looking for. So if you have an RGB image I (so that I.shape = (height,width,3)), you can do: y,x = np.mgrid[:height,:width] I_stacked = np.dstack((I,x,y)) data = I_stacked.reshape((-1,5)) ... and then use "data" as input to your clustering algorithm. You might want to scale / normalize the coordinates to fit the general range of your color values -- but in general, this should do what I think you're looking for. Cheers, -Jonas On Sat, Nov 21, 2015 at 2:23 AM, Hakim Benoudjit wrote: > Hi Juan, > > Thanks for your answer, this seems to be a nice algorithm for the > denoising of speckle. > But actually I'm looking for an image clustering (segmentation) technique > instead (that would take into consideration the spatial context of pixels). > > Le samedi 21 novembre 2015 00:47:21 UTC, Juan Nunez-Iglesias a ?crit : >> >> Hey Hakim, >> >> The right answer here depends on your ultimate goal. If you're after >> denoising, non-local means denoising (recently added to skimage) sounds >> like exactly what you're after. >> >> Juan. >> >> On Sat, Nov 21, 2015 at 11:28 AM, Hakim Benoudjit >> wrote: >> >>> Hi St?fan, >>> >>> Thanks for your reponse. >>> What I'm looking for is a *spatial criteria* that encourages the *clustering >>> algorithm* (K-means or others) to group together similar *neighbouring >>> pixels* inside the same cluster. This will help avoid having persistent >>> noise inside a cluster. 
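Jonas's snippet above, written out end-to-end with the normalisation he mentions (NumPy only; the image here is a random stand-in, and the clustering step itself would follow via scikit-learn on `data`):

```python
import numpy as np

rng = np.random.default_rng(0)
height, width = 32, 48
I = rng.random((height, width, 3))   # stand-in RGB image

y, x = np.mgrid[:height, :width]
I_stacked = np.dstack((I, x, y))     # per-pixel feature vector: R, G, B, X, Y
data = I_stacked.reshape((-1, 5))

# z-score each feature column so the coordinate columns (range 0..47)
# do not swamp the colour columns (range 0..1)
data = (data - data.mean(axis=0)) / data.std(axis=0)

print(data.shape)  # -> (1536, 5): one row per pixel
```

As Hakim reports further down the thread, plain z-scoring still lets the spatial terms dominate; scaling the coordinate columns by a smaller factor (a compactness-style weight) gives finer control over the colour/space trade-off.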
>>> >>> Le vendredi 20 novembre 2015 13:20:15 UTC, Hakim Benoudjit a ?crit : >>>> >>>> Hi, >>>> >>>> Is there a clustering algorithm implemented in *scikit-image *that >>>> perform the image clustering by taking into account the *spatial >>>> context *of the clustered pixel (its neighbourhood), besides its *pixel >>>> brightness*? >>>> >>>> For the time being, I'm clustering images by reshaping them as vectors >>>> of pixels intensities distributions, and then performing the *K-means *or >>>> *Gaussian mixture models* implemented in *scikit-learn*. But, I'm >>>> looking for a image clustering technique implemented (or could be >>>> implemented) in *scikit-image *that would consider the neighbourhood >>>> of a pixel when classifying it. >>>> >>>> Thanks. >>>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "scikit-image" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to scikit-image... at googlegroups.com. >>> To post to this group, send email to scikit... at googlegroups.com. >>> To view this discussion on the web, visit >>> https://groups.google.com/d/msgid/scikit-image/0aad2045-b9da-442c-97bc-06c596b0469e%40googlegroups.com >>> >>> . >>> >>> For more options, visit https://groups.google.com/d/optout. >>> >> >> -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > To post to this group, send email to scikit-image at googlegroups.com. > To view this discussion on the web, visit > https://groups.google.com/d/msgid/scikit-image/a2895510-2490-4ccf-a70a-20d67c74d2cd%40googlegroups.com > > . > > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jakethess at gmail.com Mon Nov 23 07:46:22 2015 From: jakethess at gmail.com (Iakovos Halegoua) Date: Mon, 23 Nov 2015 04:46:22 -0800 (PST) Subject: Radius-scale matching in Determinant of Hessian (DoH) blob detector Message-ID: <2ad35b8b-aeb2-4e18-9d7a-bbde72b4875e@googlegroups.com> Hello there, I'd like to point out to the documentation authors a minor error in the blob_doh description. In the Notes section it is stated that: "The radius of each blob is approximately sigma.". As I tested the DoH responses of synthetic blob images, it seemed that the radius of the blob is matched to the scale according to: r = sqrt(2) * s. Could you provide me with some reference on how you calculated the matching scale to be at r = s? I guess this is nothing serious, but it was a little confusing when I first encountered it :P -------------- next part -------------- An HTML attachment was scrubbed... URL: From h.benoudjit at gmail.com Mon Nov 23 19:19:39 2015 From: h.benoudjit at gmail.com (Hakim Benoudjit) Date: Mon, 23 Nov 2015 16:19:39 -0800 (PST) Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity) In-Reply-To: References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> Message-ID: <5e41ffef-c3a3-421f-b6e4-d5566b5c37c0@googlegroups.com> Hi Jonas, Thanks for your response. That's exactly what I've tried this week-end, by adding the (x, y) to gray-level intensity and giving the matrix of 3-components vector as input to k-means. As for the normalization, I applied this formula to each column (intensity, x, y): (value - mean) / std_dev. But, even with this normalization step, adding the (x, y) coordinates will influence the pixels on the left (resp. right) to be grouped together (See http://imgur.com/HxfkRig and original image taken from http://uk.mathworks.com/help/images/texture-segmentation-using-gabor-filters.html?refresh=true). 
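Returning to the blob_doh scale question above: the sqrt(2) factor measured on synthetic blobs is exactly what an ideal binary disk predicts for scale-normalised blob detectors. A sketch for the Laplacian-of-Gaussian case (the Hessian-determinant detector selects the same scale for this blob model, though skimage's exact convention is not verified here; the docstring's "radius is approximately sigma" may instead assume a Gaussian-profile blob, for which the selected sigma itself plays the role of the radius):

```latex
% Scale-normalised LoG response of a binary disk of radius r, at its centre,
% with G(\rho;\sigma) = e^{-\rho^2/(2\sigma^2)} / (2\pi\sigma^2).
% The divergence theorem turns the area integral into a boundary integral:
S(\sigma) = \sigma^2 \int_{|u|\le r} \nabla^2 G(u;\sigma)\,du
          = \sigma^2 \oint_{|u|=r} \frac{\partial G}{\partial \rho}\,ds
          = -\frac{r^2}{\sigma^2}\, e^{-r^2/(2\sigma^2)} .
% Substituting u = r^2/\sigma^2 gives |S| \propto u\,e^{-u/2}, maximised at
% u = 2, i.e. \sigma = r/\sqrt{2} and hence r = \sqrt{2}\,\sigma .
```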
Maybe I will need to find another normalization to apply to the (intensity, x, y) space.

Le lundi 23 novembre 2015 23:53:34 UTC, Jonas Wulff a écrit :
> Hi Hakim,
>
> Have you tried just adding the coordinates of a pixel to its features? For each pixel, the features would then be R,G,B,X,Y. From your description, that seems what you're looking for.
>
> So if you have an RGB image I (so that I.shape = (height,width,3)), you can do:
>
> y,x = np.mgrid[:height,:width]
> I_stacked = np.dstack((I,x,y))
> data = I_stacked.reshape((-1,5))
>
> ... and then use "data" as input to your clustering algorithm.
>
> You might want to scale / normalize the coordinates to fit the general range of your color values -- but in general, this should do what I think you're looking for.
>
> Cheers,
> -Jonas
>
> On Sat, Nov 21, 2015 at 2:23 AM, Hakim Benoudjit wrote:
>> Hi Juan,
>>
>> Thanks for your answer, this seems to be a nice algorithm for the denoising of speckle. But actually I'm looking for an image clustering (segmentation) technique instead, one that takes the spatial context of pixels into consideration.
>>
>> Le samedi 21 novembre 2015 00:47:21 UTC, Juan Nunez-Iglesias a écrit :
>>> Hey Hakim,
>>>
>>> The right answer here depends on your ultimate goal. If you're after denoising, non-local means denoising (recently added to skimage) sounds like exactly what you're after.
>>>
>>> Juan.
>>>
>>> On Sat, Nov 21, 2015 at 11:28 AM, Hakim Benoudjit wrote:
>>>> Hi Stéfan,
>>>>
>>>> Thanks for your response. What I'm looking for is a spatial criterion that encourages the clustering algorithm (k-means or others) to group similar neighbouring pixels into the same cluster. This will help avoid having persistent noise inside a cluster.
>>>>
>>>> Le vendredi 20 novembre 2015 13:20:15 UTC, Hakim Benoudjit a écrit :
>>>>> Hi,
>>>>>
>>>>> Is there a clustering algorithm implemented in scikit-image that performs the image clustering by taking into account the spatial context of the clustered pixel (its neighbourhood), besides its pixel brightness?
>>>>>
>>>>> For the time being, I'm clustering images by reshaping them as vectors of pixel intensities and then running the K-means or Gaussian mixture models implemented in scikit-learn. But I'm looking for an image clustering technique implemented (or that could be implemented) in scikit-image that would consider the neighbourhood of a pixel when classifying it.
>>>>>
>>>>> Thanks.
>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google Groups "scikit-image" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image... at googlegroups.com.
>>>> To post to this group, send email to scikit... at googlegroups.com.
>>>> [...]
>> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
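The recipe Hakim describes above -- one (intensity, x, y) feature vector per pixel, each column z-scored as (value - mean) / std_dev, then k-means -- can be sketched with NumPy and scikit-learn. The synthetic image and the cluster count below are illustrative assumptions, not part of the original discussion:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative stand-in for a real grayscale image: two bright squares
# on a dark background.
h, w = 64, 64
img = np.zeros((h, w))
img[8:24, 8:24] = 1.0
img[40:56, 40:56] = 1.0

# One (intensity, x, y) feature vector per pixel.
y, x = np.mgrid[:h, :w]
features = np.column_stack([img.ravel(), x.ravel(), y.ravel()]).astype(float)

# Z-score each column, as described above: (value - mean) / std_dev.
features = (features - features.mean(axis=0)) / features.std(axis=0)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
label_img = labels.reshape(h, w)
```

Whether the spatial columns should carry the same weight as intensity is exactly the normalization question raised above; multiplying x and y by a scale factor before clustering is one knob to experiment with.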
URL: 

From h.benoudjit at gmail.com Tue Nov 24 04:47:00 2015
From: h.benoudjit at gmail.com (Hakim Benoudjit)
Date: Tue, 24 Nov 2015 01:47:00 -0800 (PST)
Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity)
In-Reply-To: 
References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> <5e41ffef-c3a3-421f-b6e4-d5566b5c37c0@googlegroups.com>
Message-ID: <4db90f57-c3ac-4d7e-8746-c8d9a590b539@googlegroups.com>

Thanks Juan, I think you're right. I might have to read the SLIC paper to understand how to tune the "compactness" parameter.

Le mardi 24 novembre 2015 01:11:45 UTC, Juan Nunez-Iglesias a écrit :
> Incidentally, it seems you are just doing SLIC on a non-RGB image... which SLIC supports (skimage.segmentation.slic). The "compactness" parameter changes the weighting of intensity and space.
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jni.soma at gmail.com Mon Nov 23 20:11:24 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Tue, 24 Nov 2015 12:11:24 +1100
Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity)
In-Reply-To: <5e41ffef-c3a3-421f-b6e4-d5566b5c37c0@googlegroups.com>
References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> <5e41ffef-c3a3-421f-b6e4-d5566b5c37c0@googlegroups.com>
Message-ID: 

Incidentally, it seems you are just doing SLIC on a non-RGB image... which SLIC supports (skimage.segmentation.slic). The "compactness" parameter changes the weighting of intensity and space.

On Tue, Nov 24, 2015 at 11:19 AM, Hakim Benoudjit wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kmichael.aye at gmail.com Tue Nov 24 18:47:22 2015
From: kmichael.aye at gmail.com (Michael Aye)
Date: Tue, 24 Nov 2015 15:47:22 -0800 (PST)
Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity)
In-Reply-To: 
References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> <5e41ffef-c3a3-421f-b6e4-d5566b5c37c0@googlegroups.com>
Message-ID: 

As SLIC uses K-means, where one has to provide the number of clusters, I wonder what a SLIC implementation with DBSCAN could do, considering that it is free from the burden of defining the number of clusters. One would have to come up with a method of constraining `eps` and `min_samples`, but maybe that could be quite powerful.

On Monday, November 23, 2015 at 6:11:45 PM UTC-7, Juan Nunez-Iglesias wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefanv at berkeley.edu Tue Nov 24 21:20:56 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Tue, 24 Nov 2015 18:20:56 -0800
Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity)
In-Reply-To: <4db90f57-c3ac-4d7e-8746-c8d9a590b539@googlegroups.com>
References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> <5e41ffef-c3a3-421f-b6e4-d5566b5c37c0@googlegroups.com> <4db90f57-c3ac-4d7e-8746-c8d9a590b539@googlegroups.com>
Message-ID: <877fl6bz07.fsf@berkeley.edu>

On 2015-11-24 01:47:00, Hakim Benoudjit wrote:
> Thanks Juan, I think you're right.
> I might have to read the paper on the SLIC algorithm to understand how to tune the "compactness" parameter.

You can also use SLIC to label the image, and then compute features of each SLIC region. Or perhaps that is what is being suggested already, I wasn't sure.
Stéfan

From h.benoudjit at gmail.com Wed Nov 25 06:50:08 2015
From: h.benoudjit at gmail.com (Hakim Benoudjit)
Date: Wed, 25 Nov 2015 03:50:08 -0800 (PST)
Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity)
In-Reply-To: <877fl6bz07.fsf@berkeley.edu>
References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> <5e41ffef-c3a3-421f-b6e4-d5566b5c37c0@googlegroups.com> <4db90f57-c3ac-4d7e-8746-c8d9a590b539@googlegroups.com> <877fl6bz07.fsf@berkeley.edu>
Message-ID: <9c61e03a-a993-4d40-ad69-4ed189ab0a42@googlegroups.com>

Hi Stéfan,

Thanks for the suggestion. I haven't tried the SLIC algorithm yet, but from the resulting images it seems to segment images into small regions (superpixels) that could visually belong to the same object. In my case these superpixels would ideally need to be merged afterwards. Do you have an idea of how to achieve the subsequent merging?

Le mercredi 25 novembre 2015 02:21:00 UTC, stefanv a écrit :
> On 2015-11-24 01:47:00, Hakim Benoudjit wrote:
> [...]
> You can also use SLIC to label the image, and then compute features of each SLIC region. Or perhaps that is what is being suggested already, I wasn't sure.
>
> Stéfan

-------------- next part --------------
An HTML attachment was scrubbed...
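The subsequent merging asked about here can be sketched by over-segmenting with SLIC and then merging superpixels through a region adjacency graph (RAG) on mean colour. The sample image, segment count, and threshold are illustrative assumptions; note that the RAG helpers have lived in `skimage.future.graph` in older releases and in `skimage.graph` in newer ones, hence the guarded import:

```python
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic

# The RAG helpers moved modules across scikit-image releases.
try:
    from skimage.graph import rag_mean_color, cut_threshold
except ImportError:  # older scikit-image
    from skimage.future.graph import rag_mean_color, cut_threshold

img = astronaut()

# Over-segment into superpixels; `compactness` weights space vs. colour.
labels = slic(img, n_segments=400, compactness=30)

# Merge adjacent superpixels whose mean colours are closer than an
# (illustrative) threshold.
rag = rag_mean_color(img, labels)
merged = cut_threshold(labels, rag, 29)

print(len(np.unique(labels)), 'superpixels merged into',
      len(np.unique(merged)), 'regions')
```

This is essentially what the plot_rag_merge gallery example mentioned later in the thread demonstrates with a hierarchical merge instead of a fixed threshold.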
URL: 

From h.benoudjit at gmail.com Wed Nov 25 10:50:38 2015
From: h.benoudjit at gmail.com (Hakim Benoudjit)
Date: Wed, 25 Nov 2015 07:50:38 -0800 (PST)
Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity)
In-Reply-To: <20151125120232.GA989474@phare.normalesup.org>
References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> <5e41ffef-c3a3-421f-b6e4-d5566b5c37c0@googlegroups.com> <4db90f57-c3ac-4d7e-8746-c8d9a590b539@googlegroups.com> <877fl6bz07.fsf@berkeley.edu> <9c61e03a-a993-4d40-ad69-4ed189ab0a42@googlegroups.com> <20151125120232.GA989474@phare.normalesup.org>
Message-ID: <218d4669-0c23-4ffd-bbea-61f72e16bd63@googlegroups.com>

Thanks Emma, that's exactly what I was looking for.

Le mercredi 25 novembre 2015 12:02:34 UTC, Emmanuelle Gouillart a écrit :
> Hi Hakim,
>
> I think this example from the gallery does what you want: merging slic superpixels.
> http://scikit-image.org/docs/dev/auto_examples/plot_rag_merge.html#example-plot-rag-merge-py
>
> Best,
> Emma
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jni.soma at gmail.com Tue Nov 24 19:29:42 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Wed, 25 Nov 2015 11:29:42 +1100
Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity)
In-Reply-To: 
References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> <5e41ffef-c3a3-421f-b6e4-d5566b5c37c0@googlegroups.com>
Message-ID: 

Cool idea!

On Wed, Nov 25, 2015 at 10:47 AM, Michael Aye wrote:
> As SLIC uses K-means, where one has to provide the number of clusters, I wonder what a SLIC implementation with DBSCAN could do, considering that it is free from the burden of defining the number of clusters.
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
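Michael Aye's DBSCAN idea from earlier in the thread can be prototyped with scikit-learn on plain (intensity, x, y) pixel features. This is ordinary DBSCAN rather than a true SLIC variant, and `eps` / `min_samples` are hand-tuned for the toy image -- exactly the constraint problem he points out:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy image: one bright square on a dark background (illustrative only).
h, w = 32, 32
img = np.zeros((h, w))
img[8:20, 8:20] = 1.0

# (intensity, x, y) features; the factor 10 weights intensity vs. space.
y, x = np.mgrid[:h, :w]
features = np.column_stack([img.ravel() * 10.0, x.ravel(), y.ravel()])

# eps keeps 8-connected same-intensity pixels together (distance <= sqrt(2))
# while the intensity jump (distance ~10) separates square from background.
labels = DBSCAN(eps=1.5, min_samples=4).fit_predict(features)
label_img = labels.reshape(h, w)
```

No cluster count is supplied anywhere, which is the appeal; the open question remains how to choose `eps` and `min_samples` automatically for real images.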
URL: 

From emmanuelle.gouillart at nsup.org Wed Nov 25 07:02:32 2015
From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart)
Date: Wed, 25 Nov 2015 13:02:32 +0100
Subject: Clustering of an image by taking into account the spatial context of each pixel (besides its intensity)
In-Reply-To: <9c61e03a-a993-4d40-ad69-4ed189ab0a42@googlegroups.com>
References: <0aad2045-b9da-442c-97bc-06c596b0469e@googlegroups.com> <5e41ffef-c3a3-421f-b6e4-d5566b5c37c0@googlegroups.com> <4db90f57-c3ac-4d7e-8746-c8d9a590b539@googlegroups.com> <877fl6bz07.fsf@berkeley.edu> <9c61e03a-a993-4d40-ad69-4ed189ab0a42@googlegroups.com>
Message-ID: <20151125120232.GA989474@phare.normalesup.org>

Hi Hakim,

I think this example from the gallery does what you want: merging slic superpixels.
http://scikit-image.org/docs/dev/auto_examples/plot_rag_merge.html#example-plot-rag-merge-py

Best,
Emma

On Wed, Nov 25, 2015 at 03:50:08AM -0800, Hakim Benoudjit wrote:
> Hi Stéfan,
> Thanks for the suggestion. I haven't tried the SLIC algorithm yet, but from the resulting images it seems to segment images into small regions (superpixels) that could visually belong to the same object. In my case these superpixels would ideally need to be merged afterwards. Do you have an idea of how to achieve the subsequent merging?
> [...]
> Stéfan

From kshitijsaraogi at gmail.com  Sat Nov 28 07:14:38 2015
From: kshitijsaraogi at gmail.com (Kshitij Saraogi)
Date: Sat, 28 Nov 2015 04:14:38 -0800 (PST)
Subject: New to Contributing
Message-ID: <12db7f85-8526-471f-943b-cefca08c6c76@googlegroups.com>

Hello,

I am Kshitij Saraogi, a second-year undergraduate student at IIT Kharagpur.
I find scikit-image really intriguing and I would like to contribute to it.

I went through the Issues section on GitHub, but I am having a hard time
finding an issue to get started with.
Also, while going through the list of "Requested Features", I read about
Image Colorisation. I would like to work on this.

So, I would really appreciate it if someone could guide me.

Thanks
Kshitij Saraogi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vighneshbirodkar at gmail.com  Sat Nov 28 23:16:59 2015
From: vighneshbirodkar at gmail.com (Vighnesh Birodkar)
Date: Sat, 28 Nov 2015 20:16:59 -0800 (PST)
Subject: New to Contributing
In-Reply-To: <12db7f85-8526-471f-943b-cefca08c6c76@googlegroups.com>
References: <12db7f85-8526-471f-943b-cefca08c6c76@googlegroups.com>
Message-ID: 

Hello Kshitij,

https://github.com/scikit-image/scikit-image/issues/1645
seems like a very easy issue to fix.

Image Colorization would need a GUI built with the viewer module. But
before you can tackle that, it would be nice to handle some easy fixes
to get you acquainted with the code and conventions.

Thanks
Vighnesh

On Saturday, November 28, 2015 at 11:58:21 AM UTC-5, Kshitij Saraogi wrote:
> Hello,
>
> I am Kshitij Saraogi, a second year under-graduate student at IIT
> Kharagpur.
> I find scikit-image really intriguing and I would like to contribute to it.
>
> I went through the Issues section on GitHub but I am having a hard time
> finding an issue to get started.
> Also, while going through the list of "Requested Features", I read about
> Image Colorisation. I would like to work on this.
> So, I would really appreciate it if someone could guide me.
>
> Thanks
> Kshitij Saraogi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vighneshbirodkar at gmail.com  Sun Nov 29 01:31:37 2015
From: vighneshbirodkar at gmail.com (Vighnesh Birodkar)
Date: Sat, 28 Nov 2015 22:31:37 -0800 (PST)
Subject: Radius-scale matching in Determinant of Hessian (DoH) blob detector
In-Reply-To: <2ad35b8b-aeb2-4e18-9d7a-bbde72b4875e@googlegroups.com>
References: <2ad35b8b-aeb2-4e18-9d7a-bbde72b4875e@googlegroups.com>
Message-ID: <55b68e9d-d528-4149-a105-94f11a4d0fca@googlegroups.com>

Hello,

Could you provide an image and a short code example? If the radius-scale
matching were wrong, the unit tests would fail:
https://github.com/scikit-image/scikit-image/blob/master/skimage/feature/tests/test_blob.py

Thanks
Vighnesh

On Monday, November 23, 2015 at 1:21:39 PM UTC-5, Iakovos Halegoua wrote:
> Hello there,
>
> I'd like to point out to the documentation authors a minor error in the
> blob_doh description. In the Notes section it is stated that: "The radius
> of each blob is approximately sigma."
> As I tested the DoH responses of synthetic blob images, it seemed that the
> radius of the blob is matched to the scale according to r = sqrt(2) * s.
> Could you provide me with some reference on how you calculated the
> matching scale to be at r = s? I guess this is nothing serious, but it was
> a little confusing when I first encountered it :P
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kshitijsaraogi at gmail.com  Mon Nov 30 14:30:54 2015
From: kshitijsaraogi at gmail.com (Kshitij Saraogi)
Date: Mon, 30 Nov 2015 11:30:54 -0800 (PST)
Subject: New to Contributing
In-Reply-To: 
References: <12db7f85-8526-471f-943b-cefca08c6c76@googlegroups.com>
Message-ID: 

Hello Vighnesh,

I am working on that issue. Thanks for helping me out.
--------
Kshitij

On Sunday, November 29, 2015 at 9:46:59 AM UTC+5:30, Vighnesh Birodkar wrote:
> Hello Kshitij
>
> https://github.com/scikit-image/scikit-image/issues/1645
> seems like a very easy issue to fix.
>
> Image Colorization would need a GUI built with the viewer module. But
> before you can tackle that, it would be nice to handle some easy fixes to
> get you acquainted with the code and conventions.
>
> Thanks
> Vighnesh
>
> On Saturday, November 28, 2015 at 11:58:21 AM UTC-5, Kshitij Saraogi wrote:
>> Hello,
>>
>> I am Kshitij Saraogi, a second year under-graduate student at IIT
>> Kharagpur.
>> I find scikit-image really intriguing and I would like to contribute to it.
>>
>> I went through the Issues section on GitHub but I am having a hard time
>> finding an issue to get started.
>> Also, while going through the list of "Requested Features", I read about
>> Image Colorisation. I would like to work on this.
>>
>> So, I would really appreciate it if someone could guide me.
>>
>> Thanks
>> Kshitij Saraogi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
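The r = sqrt(2) * s observation from the blob_doh thread above can be reproduced without scikit-image at all. The sketch below builds a synthetic disk of known radius, evaluates a scale-normalized determinant-of-Hessian response at its center using SciPy's Gaussian derivative filters (not skimage's box-filter approximation, so values differ slightly from blob_doh), and locates the scale of peak response; the disk radius and sigma range are arbitrary test choices:

```python
import numpy as np
from scipy import ndimage

def doh_center_response(image, sigma):
    """Scale-normalized determinant of Hessian at the image center."""
    # Second-order Gaussian derivatives along each axis.
    Lxx = ndimage.gaussian_filter(image, sigma, order=(2, 0))
    Lyy = ndimage.gaussian_filter(image, sigma, order=(0, 2))
    Lxy = ndimage.gaussian_filter(image, sigma, order=(1, 1))
    det = Lxx * Lyy - Lxy ** 2
    c = image.shape[0] // 2
    return sigma ** 4 * det[c, c]  # sigma^4 is the DoH scale normalization

# Synthetic bright disk of known radius on a dark background.
size, radius = 201, 20.0
yy, xx = np.mgrid[:size, :size] - size // 2
disk = ((xx ** 2 + yy ** 2) <= radius ** 2).astype(float)

# Sweep scales and find where the center response peaks.
sigmas = np.linspace(5, 30, 101)
responses = [doh_center_response(disk, s) for s in sigmas]
best_sigma = sigmas[int(np.argmax(responses))]

print("peak scale:", best_sigma, " r / sqrt(2):", radius / np.sqrt(2))
```

For a circularly symmetric blob the mixed derivative vanishes at the center and Lxx = Lyy, so the normalized DoH reduces to the square of half the normalized Laplacian, whose extremum for a disk sits at sigma = r / sqrt(2) — i.e. r = sqrt(2) * sigma, consistent with Iakovos's measurement.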