From warmspringwinds at gmail.com Sun Mar 1 10:56:52 2015 From: warmspringwinds at gmail.com (Daniil Pakhomov) Date: Sun, 1 Mar 2015 07:56:52 -0800 (PST) Subject: Hessian-Laplace blob detector. Message-ID: <39809677-4893-4261-9220-b9cd9be8b580@googlegroups.com> Hello, I want to try to implement the Hessian-Laplace blob detector (as mentioned in the requested features on the GitHub page). Can someone give me a list of the relevant papers I could use to implement it? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From warmspringwinds at gmail.com Sun Mar 1 11:03:21 2015 From: warmspringwinds at gmail.com (Daniil Pakhomov) Date: Sun, 1 Mar 2015 08:03:21 -0800 (PST) Subject: Hessian-Laplace blob detector. Message-ID: <9282855f-5e28-425c-9022-64e99f8d00cc@googlegroups.com> Hello, I would like to implement this detector. Can someone give me a list of papers that may help? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Mon Mar 2 12:26:08 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Mon, 2 Mar 2015 12:26:08 -0500 Subject: regionprops - displaying region properties In-Reply-To: <46469c78-2cfb-4c8c-913a-a639745c4ab9@googlegroups.com> References: <46469c78-2cfb-4c8c-913a-a639745c4ab9@googlegroups.com> Message-ID: <8B196A79-5ED9-48DB-ADA6-1C57EAFA3944@demuc.de> That sounds great. Would you be willing to work on integrating this into skimage? Thanks. > On Feb 26, 2015, at 11:51 AM, ciaran.robb at googlemail.com wrote: > > Hi > Adding to my own post but hey.... > > I have since written my own code which allows visualising of region properties (e.g. area, eccentricity, etc.) via a colormap; if anyone is interested, let me know! > > Ciaran > > On Sunday, February 1, 2015 at 11:45:44 PM UTC, ciara... at googlemail.com wrote: > Hello everyone, > > I have recently been attempting to modify some existing skimage code to display regionprops for a labeled image (e.g. 
area or eccentricity) > > I initially tried to translate a vectorized bit of old MATLAB code I had, but gave up on that and decided to alter the existing label2rgb skimage function > > I am attempting to change each label value to its area property value, similar to the label2rgb "avg" function. > > so I have: > labels = a labeled image > > out = np.zeros_like(labels) #a blank array > labels2 = np.unique(labels) #a vector of label vals > out = np.zeros_like(labels) > Props = regionprops(labels, ['Area']) > bg_label=0 > bg = (labels2 == bg_label) > if bg.any(): > labels2 = labels2[labels2 != bg_label] > out[bg] = 0 > for label in labels2: > mask = (labels == label).nonzero() > color = Props[label].area > out[mask] = color > but the "out" props image does not correspond to the correct area values. > Can anyone help me with this? > It also throws the following error: > "list index out of range" > It would certainly be useful to have a way to view the spatial distribution of label properties in this way - perhaps in a future skimage version? > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From ciaran.robb at googlemail.com Mon Mar 2 18:02:54 2015 From: ciaran.robb at googlemail.com (ciaran.robb at googlemail.com) Date: Mon, 2 Mar 2015 15:02:54 -0800 (PST) Subject: regionprops - displaying region properties In-Reply-To: <8B196A79-5ED9-48DB-ADA6-1C57EAFA3944@demuc.de> References: <46469c78-2cfb-4c8c-913a-a639745c4ab9@googlegroups.com> <8B196A79-5ED9-48DB-ADA6-1C57EAFA3944@demuc.de> Message-ID: <5b40325e-aff4-4b49-9533-7722efba9905@googlegroups.com> Hi Johannes, Yeah of course. Would it be best placed in the color module? Ciaran On Monday, March 2, 2015 at 5:26:12 PM UTC, Johannes Schönberger wrote: > > That sounds great. 
Would you be willing to work on integrating this into > skimage? > > Thanks. > > > On Feb 26, 2015, at 11:51 AM, ciara... at googlemail.com > wrote: > > > > Hi > > Adding to my own post but hey.... > > > > I have since written my own code which allows visualising of region > properties (eg area, eccentricity etc) via colormap, if anyone is > interested let me know! > > > > Ciaran > > > > On Sunday, February 1, 2015 at 11:45:44 PM UTC, ciara... at googlemail.com > wrote: > > Hello everyone, > > > > I have recently been attempting to modify some existing skimage code to > display regionprops for a labeled image (e.g. area or eccentricity) > > > > I initially tried to translate a vectorized bit of old matlab code I > had, but gave up on that and decided to alter the existing label2rgb > skimage function > > > > I am attempting to change each label value to it's area property value > similar to the label2rgb "avg" function. > > > > so I have: > > labels = a labeled image > > > > out = np.zeros_like(labels) #a blank array > > labels2 = np.unique(labels) #a vector of label vals > > out = np.zeros_like(labels) > > Props = regionprops(labels, ['Area']) > > bg_label=0 > > bg = (labels2 == bg_label) > > if bg.any(): > > labels2 = labels2[labels2 != bg_label] > > out[bg] = 0 > > for label in labels2: > > mask = (labels == label).nonzero() > > color = Props[label].area > > out[mask] = color > > but the "out" props image does not correspond to the correct area > values? > > Can anyone help me with this? > > It also throws the following error: > > "list index out of range" > > It would certainly be useful to have a way to view the spatial > distribution of label properties in this way - perhaps in a future skimage > version? > > > > > > -- > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image... at googlegroups.com . 
> > For more options, visit https://groups.google.com/d/optout. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steven.silvester at gmail.com Mon Mar 2 20:03:47 2015 From: steven.silvester at gmail.com (Steven Silvester) Date: Mon, 2 Mar 2015 17:03:47 -0800 (PST) Subject: Building 0.11dev - issues In-Reply-To: <9b878619-0b6f-4d39-af1a-b97f384940e1@googlegroups.com> References: <9b878619-0b6f-4d39-af1a-b97f384940e1@googlegroups.com> Message-ID: Ben, It looks like you may need to delete the files (other than the .git folder), do a `git checkout .`, and then try `pip install .`. Regards, Steve On Monday, March 2, 2015 at 11:05:08 AM UTC-6, Benjamin Cichy wrote: > > Hi all, > > I am attempting this install on Windows with > > >pip install . > as per the instructions. > > The last compiler I have access to, is Visual Studio 10, so according to > scikit-learn, and digging through the compiler script, it should be the > last one recognized for .dll compilation before Python 3.5. > > I have the following errors right at the end of the build. This is under a > fresh Anaconda install, with all the packages updated. > creating build\temp.win-amd64-3.4\Release\skimage\_shared > > > C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\cl.exe /c > /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\Anaconda3\lib\core\include > -IC:\Anaconda3\include -IC:\Anaconda3\include /Tcskimage\_shared\geometry.c > /Fobuild\temp.win-amd64-3.4\Release\skimagej > Found executable C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\ > BIN\amd64\cl.exe > C:\Anaconda3\lib\site-packages\setuptools-12.2-py3.4.egg\setuptools\dist. 
> py:282: UserWarning: Normalizing '0.11dev' to '0.11.dev > > error: Command "C:\Program Files (x86)\Microsoft Visual Studio > 10.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG > -IC-packages\numpy\core\include -IC:\Anaconda3\include > -IC:\Anaconda3\include /Tcskimage\_shared\geometry.c > /Fobuild\temp.win-amd64-3.4ared\geometry.obj" failed with exit status 2 > > geometry.c > > C:\Anaconda3\include\pyconfig.h(68) : fatal error C1083: Cannot open > include file: 'io.h': No such file or directory > ---------------------------------------- > Rolling back uninstall of scikit-image > Command "C:\Anaconda3\python.exe -c "import setuptools, > tokenize;__file__='C:\\cygwin64\\tmp\\pip-jkks4ooy-build\\setup.py';execenize, > 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" > install --record C:\cygwin64\tmp\pip-xqr2l87y-recor > --single-version-externally-managed --compile" failed with error code 1 in > C:\cygwin64\tmp\pip-jkks4ooy-build > > > Any suggestions? > -Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin.cichy at gmail.com Mon Mar 2 21:21:37 2015 From: benjamin.cichy at gmail.com (Benjamin Cichy) Date: Mon, 2 Mar 2015 18:21:37 -0800 (PST) Subject: Building 0.11dev - issues In-Reply-To: <9b878619-0b6f-4d39-af1a-b97f384940e1@googlegroups.com> References: <9b878619-0b6f-4d39-af1a-b97f384940e1@googlegroups.com> Message-ID: <3686a09f-e35f-45ff-8a39-2e03669bcf66@googlegroups.com> So it looks like this is a bit obscure, and I probably should have noticed, when io.h was missing, that a larger global problem was occurring. So I did solve this, and it will be of benefit for whoever searches on a missing X.h file. I searched for other failures in builds with msvc, and the advice was to reinstall Visual Studio 10, which is used for the current builds. Re-installing VC10 resulted in the same failure. 
I downloaded a VC10 cleaning tool, and tried to remove anything VC10 related from the registry, but some keys must be shared with other install. Anyway, the final solution is as follows: 1. Create a Windows virtual image 2. Install a clean VC10 on the Win7 + 3. Copy the VC10 folder and overwrite the VC10 folder on the target computer The difference in missing files ended up being 1GB+, so that's a pretty significant problem for anyone who has ever removed VC10 from their machine. -Ben On Monday, March 2, 2015 at 9:05:08 AM UTC-8, Benjamin Cichy wrote: > > Hi all, > > I am attempting this install on Windows with > > >pip install . > as per the instructions. > > The last compiler I have access to, is Visual Studio 10, so according to > scikit-learn, and digging through the compiler script, it should be the > last one recognized for .dll compilation before Python 3.5. > > I have the following errors right at the end of the build. This is under a > fresh Anaconda install, with all the packages updated. > creating build\temp.win-amd64-3.4\Release\skimage\_shared > > > C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\cl.exe /c > /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\Anaconda3\lib\core\include > -IC:\Anaconda3\include -IC:\Anaconda3\include /Tcskimage\_shared\geometry.c > /Fobuild\temp.win-amd64-3.4\Release\skimagej > Found executable C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\ > BIN\amd64\cl.exe > C:\Anaconda3\lib\site-packages\setuptools-12.2-py3.4.egg\setuptools\dist. 
> py:282: UserWarning: Normalizing '0.11dev' to '0.11.dev > > error: Command "C:\Program Files (x86)\Microsoft Visual Studio > 10.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG > -IC-packages\numpy\core\include -IC:\Anaconda3\include > -IC:\Anaconda3\include /Tcskimage\_shared\geometry.c > /Fobuild\temp.win-amd64-3.4ared\geometry.obj" failed with exit status 2 > > geometry.c > > C:\Anaconda3\include\pyconfig.h(68) : fatal error C1083: Cannot open > include file: 'io.h': No such file or directory > ---------------------------------------- > Rolling back uninstall of scikit-image > Command "C:\Anaconda3\python.exe -c "import setuptools, > tokenize;__file__='C:\\cygwin64\\tmp\\pip-jkks4ooy-build\\setup.py';execenize, > 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" > install --record C:\cygwin64\tmp\pip-xqr2l87y-recor > --single-version-externally-managed --compile" failed with error code 1 in > C:\cygwin64\tmp\pip-jkks4ooy-build > > > Any suggestions? > -Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Mon Mar 2 18:38:15 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Mon, 2 Mar 2015 18:38:15 -0500 Subject: regionprops - displaying region properties In-Reply-To: <5b40325e-aff4-4b49-9533-7722efba9905@googlegroups.com> References: <46469c78-2cfb-4c8c-913a-a639745c4ab9@googlegroups.com> <8B196A79-5ED9-48DB-ADA6-1C57EAFA3944@demuc.de> <5b40325e-aff4-4b49-9533-7722efba9905@googlegroups.com> Message-ID: Maybe, there is a way to elegantly integrate this into the RegionProperty class? Could you share your current implementation, so we can decide for a good strategy? > On Mar 2, 2015, at 6:02 PM, ciaran.robb at googlemail.com wrote: > > Hi Johannes, > > Yeah of course. Would it be best placed in module color? > > Ciaran > > On Monday, March 2, 2015 at 5:26:12 PM UTC, Johannes Sch?nberger wrote: > That sounds great. 
Would you be willing to work on integrating this into skimage? > > Thanks. > > > On Feb 26, 2015, at 11:51 AM, ciara... at googlemail.com wrote: > > > > Hi > > Adding to my own post but hey.... > > > > I have since written my own code which allows visualising of region properties (eg area, eccentricity etc) via colormap, if anyone is interested let me know! > > > > Ciaran > > > > On Sunday, February 1, 2015 at 11:45:44 PM UTC, ciara... at googlemail.com wrote: > > Hello everyone, > > > > I have recently been attempting to modify some existing skimage code to display regionprops for a labeled image (e.g. area or eccentricity) > > > > I initially tried to translate a vectorized bit of old matlab code I had, but gave up on that and decided to alter the existing label2rgb skimage function > > > > I am attempting to change each label value to it's area property value similar to the label2rgb "avg" function. > > > > so I have: > > labels = a labeled image > > > > out = np.zeros_like(labels) #a blank array > > labels2 = np.unique(labels) #a vector of label vals > > out = np.zeros_like(labels) > > Props = regionprops(labels, ['Area']) > > bg_label=0 > > bg = (labels2 == bg_label) > > if bg.any(): > > labels2 = labels2[labels2 != bg_label] > > out[bg] = 0 > > for label in labels2: > > mask = (labels == label).nonzero() > > color = Props[label].area > > out[mask] = color > > but the "out" props image does not correspond to the correct area values? > > Can anyone help me with this? > > It also throws the following error: > > "list index out of range" > > It would certainly be useful to have a way to view the spatial distribution of label properties in this way - perhaps in a future skimage version? > > > > > > -- > > You received this message because you are subscribed to the Google Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image... at googlegroups.com. 
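[Editor's note] The "list index out of range" in Ciaran's snippet comes from `Props[label]`: `regionprops` returns a plain list with one entry per label (in sorted label order), not a structure indexed by label value, so after dropping the background the label numbers no longer match the list positions. A minimal sketch of the lookup-based fix, using the modern `regionprops` call signature (the 0.11-era `['Area']` properties argument is no longer needed); the toy `labels` array is made up for illustration:

```python
import numpy as np
from skimage.measure import regionprops

# Toy labeled image standing in for Ciaran's data: two rectangles on a
# zero background (label 0).
labels = np.zeros((10, 10), dtype=int)
labels[1:4, 1:4] = 1    # region of area 9
labels[6:9, 5:10] = 2   # region of area 15

# regionprops returns a list ordered by label value, *not* a structure
# indexed by label, so build an explicit label -> property lookup.
area_of = {p.label: p.area for p in regionprops(labels)}

# Paint every pixel of each region with that region's area; background
# pixels stay 0 because label 0 never appears in the lookup.
out = np.zeros_like(labels, dtype=float)
for lab, area in area_of.items():
    out[labels == lab] = area

print(out[2, 2], out[7, 7])  # 9.0 15.0
```

Keying on `p.label` keeps the mapping correct even when labels are non-contiguous or the background label has been removed, which is exactly the case the original loop mishandles.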
> > For more options, visit https://groups.google.com/d/optout. > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From vighneshbirodkar at gmail.com Tue Mar 3 00:18:22 2015 From: vighneshbirodkar at gmail.com (Vighnesh Birodkar) Date: Mon, 2 Mar 2015 21:18:22 -0800 (PST) Subject: Hessian-Laplace blob detector. In-Reply-To: <9282855f-5e28-425c-9022-64e99f8d00cc@googlegroups.com> References: <9282855f-5e28-425c-9022-64e99f8d00cc@googlegroups.com> Message-ID: Hello, The Hessian-Laplace blob detector is described in [1]. Also, see `skimage.feature.blob_doh` and `skimage.feature.blob_log`. [1]: http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/mikolajczyk_ijcv2004.pdf Thanks Vighnesh On Monday, March 2, 2015 at 10:35:09 PM UTC+5:30, Daniil Pakhomov wrote: > > Hello, > > I would like to implement this detector. > > Can someone give me a list of papers that may help. > > Thank you. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin.cichy at gmail.com Tue Mar 3 16:23:26 2015 From: benjamin.cichy at gmail.com (Benjamin Cichy) Date: Tue, 3 Mar 2015 13:23:26 -0800 (PST) Subject: Building 0.11dev - issues In-Reply-To: <3686a09f-e35f-45ff-8a39-2e03669bcf66@googlegroups.com> References: <9b878619-0b6f-4d39-af1a-b97f384940e1@googlegroups.com> <3686a09f-e35f-45ff-8a39-2e03669bcf66@googlegroups.com> Message-ID: <4e86099b-98f1-41bf-a416-076662ec227d@googlegroups.com> Just to be completely thorough, I used this to install SP1 as well and the 7.1 SDK. SP1 install instructions There are some installer issues for the redistributable packages that cause it to fail, and give a pretty poor reason in the log. 
-Ben On Monday, March 2, 2015 at 6:21:37 PM UTC-8, Benjamin Cichy wrote: > > So it looks like this is a bit obscure, and I probably should have noticed > that when the io.h was missing, that a grete global problem was occuring. > > So I did solve this, and it will be of benefit for whoever searches on the > on a missing X.h file. > > I searched for other failures in builds with msvc and it said to reinstall > Visual Studio 10, which is used for the current builds. Re-installing VC10 > resulted in the the same failure. > I downloaded a VC10 cleaning tool, and tried to remove anything VC10 > related from the registry, but some keys must be shared with other install. > > Anyway, the final solution is as follows: > > 1. Create a Windows virtual image > 2. Install a clean VC10 on the Win7 + > 3. Copy the VC10 folder and overwrite the VC10 folder on the target > computer > > The difference in missing files ended up being 1GB+, so that's a pretty > significant problem for anyone who has ever removed VC10 from their > machine. > > -Ben > > On Monday, March 2, 2015 at 9:05:08 AM UTC-8, Benjamin Cichy wrote: >> >> Hi all, >> >> I am attempting this install on Windows with >> >> >pip install . >> as per the instructions. >> >> The last compiler I have access to, is Visual Studio 10, so according to >> scikit-learn, and digging through the compiler script, it should be the >> last one recognized for .dll compilation before Python 3.5. >> >> I have the following errors right at the end of the build. This is under >> a fresh Anaconda install, with all the packages updated. 
>> creating build\temp.win-amd64-3.4\Release\skimage\_shared >> >> >> C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\cl.exe >> /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\Anaconda3\lib\core\include >> -IC:\Anaconda3\include -IC:\Anaconda3\include /Tcskimage\_shared\geometry >> .c /Fobuild\temp.win-amd64-3.4\Release\skimagej >> Found executable C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\ >> BIN\amd64\cl.exe >> C:\Anaconda3\lib\site-packages\setuptools-12.2-py3.4.egg\setuptools\dist. >> py:282: UserWarning: Normalizing '0.11dev' to '0.11.dev >> >> error: Command "C:\Program Files (x86)\Microsoft Visual Studio >> 10.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG >> -IC-packages\numpy\core\include -IC:\Anaconda3\include >> -IC:\Anaconda3\include /Tcskimage\_shared\geometry.c >> /Fobuild\temp.win-amd64-3.4ared\geometry.obj" failed with exit status 2 >> >> geometry.c >> >> C:\Anaconda3\include\pyconfig.h(68) : fatal error C1083: Cannot open >> include file: 'io.h': No such file or directory >> ---------------------------------------- >> Rolling back uninstall of scikit-image >> Command "C:\Anaconda3\python.exe -c "import setuptools, >> tokenize;__file__='C:\\cygwin64\\tmp\\pip-jkks4ooy-build\\setup.py';execenize, >> 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" >> install --record C:\cygwin64\tmp\pip-xqr2l87y-recor >> --single-version-externally-managed --compile" failed with error code 1 in >> C:\cygwin64\tmp\pip-jkks4ooy-build >> >> >> Any suggestions? >> -Ben >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefanv at berkeley.edu Tue Mar 3 16:33:52 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Tue, 03 Mar 2015 13:33:52 -0800 Subject: Building 0.11dev - issues In-Reply-To: <4e86099b-98f1-41bf-a416-076662ec227d@googlegroups.com> References: <9b878619-0b6f-4d39-af1a-b97f384940e1@googlegroups.com> <3686a09f-e35f-45ff-8a39-2e03669bcf66@googlegroups.com> <4e86099b-98f1-41bf-a416-076662ec227d@googlegroups.com> Message-ID: <87lhjd4vgf.fsf@berkeley.edu> Hi Benjamin On 2015-03-03 13:23:26, Benjamin Cichy wrote: > Just to be completely thorough, I used this to install SP1 as > well and the 7.1 SDK. > > SP1 install instructions > > > There are some installer issues for the redistributable packages > that cause it to fail, and give a pretty poor reason in the > log. It would be fantastic if we could bring out wheels for the 0.11 release that I am in the process of tagging. Do you have a suitable build setup to help with that? I wonder if we can set up an Appveyor instance to do it, similar to what Matthew Brett has done for the OSX wheels, but I haven't looked into it. Stéfan From warmspringwinds at gmail.com Wed Mar 4 02:14:20 2015 From: warmspringwinds at gmail.com (Daniil Pakhomov) Date: Tue, 3 Mar 2015 23:14:20 -0800 (PST) Subject: Hessian-Laplace blob detector. In-Reply-To: References: <9282855f-5e28-425c-9022-64e99f8d00cc@googlegroups.com> Message-ID: <156c8d88-1dec-4273-b5ce-ce09b7752c7a@googlegroups.com> Hello, Vighnesh Birodkar. Thank you for your reply. I have read through some of the related papers and I have a question. As it seems to me, I will have to compute the Hessian at different sigmas (to find probable feature points) and also the Laplacian of Gaussian at different sigmas (to check whether or not the points found in the previous step are local minima in scale space). What do you think about efficiency? Because if I have to compute two image cubes, it will take some time. On Tuesday, 3 March 2015 at 6:18:22 UTC+1, Vighnesh Birodkar wrote: > > Hello, > > The Hessian-Laplace blob detector is described in [1]. > Also, see `skimage.feature.blob_doh` and `skimage.feature.blob_log`. > > [1]: > http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/mikolajczyk_ijcv2004.pdf > > Thanks > Vighnesh > > > On Monday, March 2, 2015 at 10:35:09 PM UTC+5:30, Daniil Pakhomov wrote: >> >> Hello, >> >> I would like to implement this detector. >> >> Can someone give me a list of papers that may help. >> >> Thank you. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warmspringwinds at gmail.com Wed Mar 4 02:19:22 2015 From: warmspringwinds at gmail.com (Daniil Pakhomov) Date: Tue, 3 Mar 2015 23:19:22 -0800 (PST) Subject: Face detection In-Reply-To: References: Message-ID: Hello, Is there still interest in the implementation? I have experience with face detection. And I can do it as a GSoC project. Thank you. On Thursday, 28 March 2013 at 10:35:31 UTC+1, Stefan van der Walt wrote: > > Hi everyone > > I've been interested in getting face detection into skimage for a > while. This morning, Nathan Faggian reminded me that the highly > popular Viola-Jones detector is patent encumbered (yes, if you're not > careful you can use patented code in packages like OpenCV). However, > the following link seems to suggest that we can work around that by > training our own classifier with different features: > > > http://rafaelmizrahi.blogspot.com/2007/02/intel-opencv-face-detection-license.html > > If there's any interest in working on this, or if you already have an > algorithm available, please get in touch. > > Stéfan > -------------- next part -------------- An HTML attachment was scrubbed... 
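[Editor's note] Before implementing Hessian-Laplace from scratch, the existing detectors Vighnesh points to can be exercised directly. A minimal sketch on a synthetic image (all sigma and threshold values here are arbitrary illustration choices, not recommended defaults):

```python
import numpy as np
from skimage.feature import blob_log

# Synthetic test image: a single bright Gaussian blob (sigma = 5) at the
# center of a 64x64 frame.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / (2 * 5.0 ** 2))

# blob_log searches a stack of scale-normalized Laplacian-of-Gaussian
# responses over num_sigma scales; each returned row is (row, col, sigma).
blobs = blob_log(img, min_sigma=2, max_sigma=10, num_sigma=9, threshold=0.1)
print(blobs)  # expect a single detection near (32, 32) with sigma close to 5
```

On the efficiency question: `blob_log` already builds exactly the kind of scale-space cube Daniil describes, so reusing its Gaussian filtering (or a difference-of-Gaussians approximation, as in `blob_dog`) for the Laplacian check would avoid computing the second cube independently.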
URL: From stefanv at berkeley.edu Wed Mar 4 02:38:38 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Tue, 03 Mar 2015 23:38:38 -0800 Subject: Face detection In-Reply-To: References: Message-ID: <877fux43gh.fsf@berkeley.edu> Hi Daniil On 2015-03-03 23:19:22, Daniil Pakhomov wrote: > Is there still interest in the implementation? There is definitely still interest. > I have experience with face detection. And I can do it as a GSoC > project. I would consider doing a summer of code (although I have to make sure we are registered with the PSF), but note that we only do GSoCs with students who have contributed PRs to the project *before the start of the GSoC*. Regards Stéfan From warmspringwinds at gmail.com Wed Mar 4 03:25:09 2015 From: warmspringwinds at gmail.com (Daniil Pakhomov) Date: Wed, 4 Mar 2015 00:25:09 -0800 (PST) Subject: Face detection In-Reply-To: References: Message-ID: <26100dbd-941f-4759-a220-529756cd3c14@googlegroups.com> Thank you for the really fast response :) Great! Yes, I know about this and I am working on it right now: https://groups.google.com/forum/?utm_source=digest&utm_medium=email#!topic/scikit-image/ghIYwQFubEU When is the deadline for getting the PR? On Thursday, 28 March 2013 at 10:35:31 UTC+1, Stefan van der Walt wrote: > > Hi everyone > > I've been interested in getting face detection into skimage for a > while. This morning, Nathan Faggian reminded me that the highly > popular Viola-Jones detector is patent encumbered (yes, if you're not > careful you can use patented code in packages like OpenCV). However, > the following link seems to suggest that we can work around that by > training our own classifier with different features: > > > http://rafaelmizrahi.blogspot.com/2007/02/intel-opencv-face-detection-license.html > > If there's any interest in working on this, or if you already have an > algorithm available, please get in touch. 
> > St?fan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Wed Mar 4 03:50:48 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Wed, 04 Mar 2015 00:50:48 -0800 Subject: Face detection In-Reply-To: <26100dbd-941f-4759-a220-529756cd3c14@googlegroups.com> References: <26100dbd-941f-4759-a220-529756cd3c14@googlegroups.com> Message-ID: <87twy12ljr.fsf@berkeley.edu> On 2015-03-04 00:25:09, Daniil Pakhomov wrote: > When is the deadline for getting the PR? There is no deadline as such. We want to get to know you, your coding style and your way of interacting with the team before GSoC. So, the more we get to engage, the better the chances of doing a project this summer! From stefanv at berkeley.edu Wed Mar 4 04:17:42 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Wed, 04 Mar 2015 01:17:42 -0800 Subject: GSoC 2015 mentors Message-ID: <87mw3t2kax.fsf@berkeley.edu> Hi everyone Who would be interested in mentoring Google Summer of Code for scikit-image this year? Projects include porting ndimage to Cython, dynamic time warping, and possibly face detection. Regards St?fan From jni.soma at gmail.com Wed Mar 4 08:28:22 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Wed, 04 Mar 2015 05:28:22 -0800 (PST) Subject: GSoC 2015 mentors In-Reply-To: <87mw3t2kax.fsf@berkeley.edu> References: <87mw3t2kax.fsf@berkeley.edu> Message-ID: <1425475701867.2ce75c22@Nodemailer> Hi St?fan, As I've mentioned before, this year I'd like to play a smaller role in GSoC. I won't register as a mentor. (Though I love my GSoC'14 t-shirt! =D) Having said that, I have a very strong interest in the ndimage port! Juan. On Wed, Mar 4, 2015 at 8:17 PM, Stefan van der Walt wrote: > Hi everyone Who would be interested in mentoring Google Summer of > Code for scikit-image this year? Projects include porting > ndimage to Cython, dynamic time warping, and possibly face > detection. 
Regards Stéfan > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steven.silvester at gmail.com Wed Mar 4 09:07:16 2015 From: steven.silvester at gmail.com (Steven Silvester) Date: Wed, 4 Mar 2015 06:07:16 -0800 (PST) Subject: Wikipedia Page Message-ID: <88ed2f28-f61c-4a71-a215-4e307ed0fe70@googlegroups.com> Hi all, I created a Wikipedia page for us today: https://en.wikipedia.org/wiki/Scikit-image. Any updates are welcome! I was not able to upload our logo, as I am not a "confirmed user". Is anyone in that camp? Regards, Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaime.frio at gmail.com Wed Mar 4 11:59:22 2015 From: jaime.frio at gmail.com (Jaime Fernández del Río) Date: Wed, 4 Mar 2015 08:59:22 -0800 Subject: GSoC 2015 mentors In-Reply-To: <87mw3t2kax.fsf@berkeley.edu> References: <87mw3t2kax.fsf@berkeley.edu> Message-ID: On Wed, Mar 4, 2015 at 1:17 AM, Stefan van der Walt wrote: > Hi everyone Who would be interested in mentoring Google Summer of Code > for scikit-image this year? Projects include porting ndimage to Cython, > dynamic time warping, and possibly face detection. Regards Stéfan I wouldn't mind getting involved in the ndimage port to Cython, as in co-mentoring it or helping out in any other way you see fit. I haven't done much in skimage, but I am a NumPy developer and have done some work in ndimage. If nothing else, I actually understand what most of the ndimage C code is doing, which is probably a useful skill for the job! Jaime -- (\__/) ( O.o) ( > <) This is Rabbit. Copy Rabbit into your signature and help him with his plans for world domination. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Wed Mar 4 15:04:22 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Wed, 04 Mar 2015 12:04:22 -0800 Subject: GSoC 2015 mentors In-Reply-To: References: <87mw3t2kax.fsf@berkeley.edu> Message-ID: <874mq034xl.fsf@berkeley.edu> Hi Jaime On 2015-03-04 08:59:22, Jaime Fern?ndez del R?o wrote: > I wouldn't mind getting involved in the ndimage port to Cython, > as in co-mentoring it or helping out any other way you see fit, > . > > I haven't done much in skimage, but am a numpy developer and > have done some stuff in ndimage. If nothing else, I actually > understand what most of the ndimage C code is doing, which is > probably a useful skill for the job! I've been following your work on NumPy, and I would love to have you on board! I'll send you the signup form offline. St?fan From stefanv at berkeley.edu Wed Mar 4 15:05:38 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Wed, 04 Mar 2015 12:05:38 -0800 Subject: GSoC 2015 mentors In-Reply-To: <87mw3t2kax.fsf@berkeley.edu> References: <87mw3t2kax.fsf@berkeley.edu> Message-ID: <87385k34vh.fsf@berkeley.edu> On 2015-03-04 01:17:42, Stefan van der Walt wrote: > Who would be interested in mentoring Google Summer of Code for > scikit-image this year? Projects include porting ndimage to > Cython, dynamic time warping, and possibly face detection. We need one last backup mentor for GSoC to continue (as 3rd backup mentor, you won't be expected to do much, unless I happen to disappear into the void). St?fan From warmspringwinds at gmail.com Wed Mar 4 17:37:59 2015 From: warmspringwinds at gmail.com (Daniil Pakhomov) Date: Wed, 4 Mar 2015 14:37:59 -0800 (PST) Subject: Hessian-Laplace blob detector. 
In-Reply-To: <39809677-4893-4261-9220-b9cd9be8b580@googlegroups.com> References: <39809677-4893-4261-9220-b9cd9be8b580@googlegroups.com> Message-ID: Now I have a better-formed question: do you think it is also feasible to approximate the Laplacian of Gaussian with Haar wavelets? On Monday, 2 March 2015 at 18:05:09 UTC+1, Daniil Pakhomov wrote: > > Hello, > > I want to try to implement the Hessian-Laplace blob detector (as mentioned > in the requested features on the GitHub page). > > Can someone give me a list of the relevant papers I could use to > implement it? > > Thank you. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnodietz86 at googlemail.com Wed Mar 4 17:44:59 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Wed, 4 Mar 2015 14:44:59 -0800 (PST) Subject: hough ellipse fit inaccurate? Message-ID: Hello, I'm working on a project where I need to test various methods to fit ellipses as accurately as possible. The Hough ellipse fit from scikit-image is quite inaccurate for my images with perfect ellipses, as you can see in the examples. The white ellipse is my edge image. The red ones are the fitted ellipses. Why is there always an offset, although my source image has perfect ellipses? I tried to vary the parameters, but without success. Thank you so far. Best regards Arno -------------- next part -------------- An HTML attachment was scrubbed... 
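[Editor's note] On Daniil's Haar-wavelet question above: the usual (SURF-style) trick is to replace Gaussian-derivative kernels with box filters evaluated in constant time via an integral image. A toy sketch of the idea in plain NumPy — a center-surround difference of boxes as a crude stand-in for the (inverted) Laplacian of Gaussian; all sizes are made-up illustration values, not the SURF constants:

```python
import numpy as np

def integral_image(img):
    """Cumulative sums over both axes; any box sum then costs O(1)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] recovered from the integral image ii."""
    s = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        s = s - ii[r0 - 1, c1 - 1]
    if c0 > 0:
        s = s - ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s = s + ii[r0 - 1, c0 - 1]
    return s

def dob_response(ii, r, c, inner=2, outer=6):
    """Difference-of-boxes (center vs. surround) response at (r, c)."""
    core = box_sum(ii, r - inner, c - inner, r + inner, c + inner)
    full = box_sum(ii, r - outer, c - outer, r + outer, c + outer)
    n_core = (2 * inner) ** 2
    n_ring = (2 * outer) ** 2 - n_core
    # Mean of the surround ring minus mean of the core: strongly negative
    # on a bright blob, ~0 on flat regions.
    return (full - core) / n_ring - core / n_core

img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0           # a bright 4x4 blob
ii = integral_image(img)
print(dob_response(ii, 16, 16))   # -1.0
```

The cost per pixel is a handful of lookups regardless of filter size, which is what makes the approximation attractive for the multi-sigma scale-space search discussed earlier in the thread.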
URL: From stefanv at berkeley.edu Wed Mar 4 17:56:19 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Wed, 4 Mar 2015 14:56:19 -0800 Subject: GSoC 2015 mentors In-Reply-To: <96ED6E54-C5AF-45F3-9765-67A00946D2F4@demuc.de> References: <87mw3t2kax.fsf@berkeley.edu> <87385k34vh.fsf@berkeley.edu> <96ED6E54-C5AF-45F3-9765-67A00946D2F4@demuc.de> Message-ID: On Wed, Mar 4, 2015 at 2:21 PM, Johannes Schoenberger wrote: > Sorry, I'll be too busy over the summer myself, but I am happy to function > as a backup mentor, if occasional reviews of PRs and sporadic participation > in discussions are okay with you. > Good enough! Please fill out the mentor signup form here: http://goo.gl/forms/PMXVM1CUAS Stéfan -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Wed Mar 4 18:20:06 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Wed, 4 Mar 2015 15:20:06 -0800 Subject: hough ellipse fit inaccurate? In-Reply-To: References: Message-ID: Hi Arno On Wed, Mar 4, 2015 at 2:44 PM, Arno Dietz wrote: > I'm working on a project where I need to test various > methods to fit ellipses > as accurate as > possible. The hough ellipse fit from scikit-image for my images with > perfect Ellipses is quite inaccurate as you can see in the examples. > The white ellipse is my edge image. The red are the fitted ones. > Please provide us with a minimal code snippet, then we can see where the problem is. Thanks Stéfan -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnodietz86 at googlemail.com Wed Mar 4 18:49:52 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Wed, 4 Mar 2015 15:49:52 -0800 (PST) Subject: hough ellipse fit inaccurate? In-Reply-To: References: Message-ID: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> Ok sorry. 
Here is my code: from skimage import color > from skimage.filter import canny > from skimage.transform import hough_ellipse > from skimage.draw import ellipse_perimeter > from skimage import io > from skimage.viewer import ImageViewer > # load image > img = io.imread('ellipse.png') > cimg = color.gray2rgb(img) > # edges and ellipse fit > edges = canny(img, sigma=0.1, low_threshold=0.55, high_threshold=0.8) > result = hough_ellipse(edges, accuracy=4, threshold=25, min_size=47, > max_size=60) > result.sort(order='accumulator') > # Estimated parameters for the ellipse > best = result[-1] > yc = int(best[1]) > xc = int(best[2]) > a = int(best[3]) > b = int(best[4]) > orientation = best[5] > # Draw the ellipse on the original image > cy, cx = ellipse_perimeter(yc, xc, a, b, orientation) > cimg[cy, cx] = (0, 0, 255) > # Draw the edge (white) and the resulting ellipse (red) > edges = color.gray2rgb(edges) > edges[cy, cx] = (250, 0, 0) > viewer = ImageViewer(edges) > viewer.show() I noticed that the ellipse center is detected only with half-pixel accuracy. Maybe this is the problem? Is there a possibility to get the ellipse center with sub-pixel accuracy? regards Arno > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ellipse.png Type: image/png Size: 836 bytes Desc: not available URL: From jsch at demuc.de Wed Mar 4 17:21:30 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Wed, 4 Mar 2015 17:21:30 -0500 Subject: GSoC 2015 mentors In-Reply-To: <87385k34vh.fsf@berkeley.edu> References: <87mw3t2kax.fsf@berkeley.edu> <87385k34vh.fsf@berkeley.edu> Message-ID: <96ED6E54-C5AF-45F3-9765-67A00946D2F4@demuc.de> Hi Stefan, Sorry, I'll be too busy over the summer myself, but I am happy to function as a backup mentor, if occasional reviews of PRs and sporadic participation in discussions are okay with you. 
Best, Johannes > On Mar 4, 2015, at 3:05 PM, Stefan van der Walt wrote: > > On 2015-03-04 01:17:42, Stefan van der Walt wrote: >> Who would be interested in mentoring Google Summer of Code for scikit-image this year? Projects include porting ndimage to Cython, dynamic time warping, and possibly face detection. > > We need one last backup mentor for GSoC to continue (as 3rd backup mentor, you won't be expected to do much, unless I happen to disappear into the void). > > Stéfan > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From jsch at demuc.de Wed Mar 4 18:25:21 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Wed, 4 Mar 2015 18:25:21 -0500 Subject: GSoC 2015 mentors In-Reply-To: References: <87mw3t2kax.fsf@berkeley.edu> <87385k34vh.fsf@berkeley.edu> <96ED6E54-C5AF-45F3-9765-67A00946D2F4@demuc.de> Message-ID: Done. > On Mar 4, 2015, at 5:56 PM, Stéfan van der Walt wrote: > > On Wed, Mar 4, 2015 at 2:21 PM, Johannes Schoenberger wrote: > Sorry, I'll be too busy over the summer myself, but I am happy to function as a backup mentor, if occasional reviews of PRs and sporadic participation in discussions are okay with you. > > Good enough! Please fill out the mentor signup form here: > > http://goo.gl/forms/PMXVM1CUAS > > Stéfan > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From jsch at demuc.de Wed Mar 4 19:32:26 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Wed, 4 Mar 2015 19:32:26 -0500 Subject: hough ellipse fit inaccurate? 
In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> Message-ID: Third, you could fit an ellipse using RANSAC. How does this approach work for you: http://stackoverflow.com/questions/28281742/fitting-a-circle-to-a-binary-image/28289147#28289147 > On Mar 4, 2015, at 7:24 PM, Kevin Keraudren wrote: > > A second source of inaccuracy comes from your input ellipse: it is not a perfect ellipse because you drew it using anti-aliasing. > > On Thu, Mar 5, 2015 at 12:21 AM, Kevin Keraudren wrote: > Hi Arno, > > The first source of inaccuracy comes from your code, you need to round the values instead of truncating them: > > #yc = int(best[1]) > #xc = int(best[2]) > #a = int(best[3]) > #b = int(best[4]) > > yc = int(round(best[1])) > xc = int(round(best[2])) > a = int(round(best[3])) > b = int(round(best[4])) > > See resulting image attached. > > Kind regards, > > Kevin > > > > On Wed, Mar 4, 2015 at 11:49 PM, Arno Dietz wrote: > > Ok sorry. Here is my code: > > from skimage import color > from skimage.filter import canny > from skimage.transform import hough_ellipse > from skimage.draw import ellipse_perimeter > from skimage import io > from skimage.viewer import ImageViewer > # load image > img = io.imread('ellipse.png') > cimg = color.gray2rgb(img) > # edges and ellipse fit > edges = canny(img, sigma=0.1, low_threshold=0.55, high_threshold=0.8) > result = hough_ellipse(edges, accuracy=4, threshold=25, min_size=47, max_size=60) > result.sort(order='accumulator') > # Estimated parameters for the ellipse > best = result[-1] > yc = int(best[1]) > xc = int(best[2]) > a = int(best[3]) > b = int(best[4]) > orientation = best[5] > # Draw the ellipse on the original image > cy, cx = ellipse_perimeter(yc, xc, a, b, orientation) > cimg[cy, cx] = (0, 0, 255) > # Draw the edge (white) and the resulting ellipse (red) > edges = color.gray2rgb(edges) > edges[cy, cx] = (250, 0, 0) > viewer = ImageViewer(edges) > viewer.show() > > I noticed, that the ellipse 
center is detected only in half pixel accuracy. Maybe this is the Problem? Is there a possibility to get the ellipse center with sub-pixel accuracy? > > regards Arno > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From kevin.keraudren at googlemail.com Wed Mar 4 19:21:47 2015 From: kevin.keraudren at googlemail.com (Kevin Keraudren) Date: Thu, 5 Mar 2015 00:21:47 +0000 Subject: hough ellipse fit inaccurate? In-Reply-To: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> Message-ID: Hi Arno, The first source of inaccuracy comes from your code, you need to round the values instead of truncating them: #yc = int(best[1]) #xc = int(best[2]) #a = int(best[3]) #b = int(best[4]) yc = int(round(best[1])) xc = int(round(best[2])) a = int(round(best[3])) b = int(round(best[4])) See resulting image attached. Kind regards, Kevin On Wed, Mar 4, 2015 at 11:49 PM, Arno Dietz wrote: > > Ok sorry. 
Here is my code: > > from skimage import color >> from skimage.filter import canny >> from skimage.transform import hough_ellipse >> from skimage.draw import ellipse_perimeter >> from skimage import io >> from skimage.viewer import ImageViewer >> # load image >> img = io.imread('ellipse.png') >> cimg = color.gray2rgb(img) >> # edges and ellipse fit >> edges = canny(img, sigma=0.1, low_threshold=0.55, high_threshold=0.8) >> result = hough_ellipse(edges, accuracy=4, threshold=25, min_size=47, >> max_size=60) >> result.sort(order='accumulator') >> # Estimated parameters for the ellipse >> best = result[-1] >> yc = int(best[1]) >> xc = int(best[2]) >> a = int(best[3]) >> b = int(best[4]) >> orientation = best[5] >> # Draw the ellipse on the original image >> cy, cx = ellipse_perimeter(yc, xc, a, b, orientation) >> cimg[cy, cx] = (0, 0, 255) >> # Draw the edge (white) and the resulting ellipse (red) >> edges = color.gray2rgb(edges) >> edges[cy, cx] = (250, 0, 0) >> viewer = ImageViewer(edges) >> viewer.show() > > > I noticed, that the ellipse center is detected only in half pixel > accuracy. Maybe this is the Problem? Is there a possibility to get the > ellipse center with sub-pixel accuracy? > > regards Arno > >> -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: res1.png Type: image/png Size: 1943 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: res2.png Type: image/png Size: 1751 bytes Desc: not available URL: From kevin.keraudren at googlemail.com Wed Mar 4 19:24:46 2015 From: kevin.keraudren at googlemail.com (Kevin Keraudren) Date: Thu, 5 Mar 2015 00:24:46 +0000 Subject: hough ellipse fit inaccurate? In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> Message-ID: A second source of inaccuracy comes from your input ellipse: it is not a perfect ellipse because you drew it using anti-aliasing. On Thu, Mar 5, 2015 at 12:21 AM, Kevin Keraudren < kevin.keraudren at googlemail.com> wrote: > Hi Arno, > > The first source of inaccuracy comes from your code, you need to round the > values instead of truncating them: > > #yc = int(best[1]) > > > #xc = int(best[2]) > > > #a = int(best[3]) > > > #b = int(best[4]) > > > > yc = int(round(best[1])) > > xc = int(round(best[2])) > > a = int(round(best[3])) > > b = int(round(best[4])) > > See resulting image attached. > > Kind regards, > > Kevin > > > > On Wed, Mar 4, 2015 at 11:49 PM, Arno Dietz > wrote: > >> >> Ok sorry. 
Here is my code: >> >> from skimage import color >>> from skimage.filter import canny >>> from skimage.transform import hough_ellipse >>> from skimage.draw import ellipse_perimeter >>> from skimage import io >>> from skimage.viewer import ImageViewer >>> # load image >>> img = io.imread('ellipse.png') >>> cimg = color.gray2rgb(img) >>> # edges and ellipse fit >>> edges = canny(img, sigma=0.1, low_threshold=0.55, high_threshold=0.8) >>> result = hough_ellipse(edges, accuracy=4, threshold=25, min_size=47, >>> max_size=60) >>> result.sort(order='accumulator') >>> # Estimated parameters for the ellipse >>> best = result[-1] >>> yc = int(best[1]) >>> xc = int(best[2]) >>> a = int(best[3]) >>> b = int(best[4]) >>> orientation = best[5] >>> # Draw the ellipse on the original image >>> cy, cx = ellipse_perimeter(yc, xc, a, b, orientation) >>> cimg[cy, cx] = (0, 0, 255) >>> # Draw the edge (white) and the resulting ellipse (red) >>> edges = color.gray2rgb(edges) >>> edges[cy, cx] = (250, 0, 0) >>> viewer = ImageViewer(edges) >>> viewer.show() >> >> >> I noticed, that the ellipse center is detected only in half pixel >> accuracy. Maybe this is the Problem? Is there a possibility to get the >> ellipse center with sub-pixel accuracy? >> >> regards Arno >> >>> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vighneshbirodkar at gmail.com Thu Mar 5 04:32:18 2015 From: vighneshbirodkar at gmail.com (Vighnesh Birodkar) Date: Thu, 5 Mar 2015 01:32:18 -0800 (PST) Subject: Hessian-Laplace blob detector. 
In-Reply-To: References: <39809677-4893-4261-9220-b9cd9be8b580@googlegroups.com> Message-ID: Hello Daniil Unfortunately, I am not well-informed enough to comment on Haar wavelets; I will definitely read up, though. Constructing 2 image cubes is not required: you construct only one, the determinant-of-Hessian image cube, to determine the (x, y) coordinates of the maxima. Once that is done, for only those (x, y) points you compute the Laplacian of Gaussian at different scales and find the scale-space maxima for only those points. This gives us the best of both approaches. Thanks Vighnesh On Thursday, March 5, 2015 at 4:07:59 AM UTC+5:30, Daniil Pakhomov wrote: > > Now I have a more well-formed question: > Do you think it is also feasible to approximate laplacian of gaussian with > haar wavelets? > > On Monday, 2 March 2015 at 18:05:09 UTC+1, Daniil Pakhomov wrote: >> >> Hello, >> >> I want to try to implement Hessian-Laplace blob detector (as mentioned in >> requested features on github page). >> >> Can someone give me the list of corresponding papers, using which I can >> implement it. >> >> Thank you. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnodietz86 at googlemail.com Thu Mar 5 04:53:06 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Thu, 5 Mar 2015 01:53:06 -0800 (PST) Subject: hough ellipse fit inaccurate? In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> Message-ID: <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> Thank you very much Kevin and Johannes. I see the rounding problem, but it is just for the ellipse drawing. In my actual code I use the ellipse center, i.e. best[1] and best[2], without rounding. This still produces much more inaccurate ellipse center results than other methods like center of mass, for example, although I also use the anti-aliased input image. 
So is there any possibility to get more accurate results from the hough ellipse fit approach? If not, this is also ok, I just want to be on the safe side that it's not my fault. In that case I will have a look at the suggested approach from Johannes. On Thursday, 5 March 2015 at 01:21:48 UTC+1, Kevin Keraudren wrote: > > Hi Arno, > > The first source of inaccuracy comes from your code, you need to round the > values instead of truncating them: > > #yc = int(best[1]) > > > #xc = int(best[2]) > > > #a = int(best[3]) > > > #b = int(best[4]) > > > > yc = int(round(best[1])) > > xc = int(round(best[2])) > > a = int(round(best[3])) > > b = int(round(best[4])) > > See resulting image attached. > > Kind regards, > > Kevin > > > A second source of inaccuracy comes from your input ellipse: it is not a > perfect ellipse because you drew it using anti-aliasing. Third, you could fit an ellipse using RANSAC. How does this approach work > for you: > http://stackoverflow.com/questions/28281742/fitting-a-circle-to-a-binary-image/28289147#28289147 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Thu Mar 5 04:57:53 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Thu, 05 Mar 2015 01:57:53 -0800 Subject: ANN: scikit-image 0.11 Message-ID: <878ufbu5pa.fsf@berkeley.edu> Announcement: scikit-image 0.11.0 ================================= We're happy to announce the release of scikit-image v0.11.0! scikit-image is an image processing toolbox for SciPy that includes algorithms for segmentation, geometric transformations, color space manipulation, analysis, filtering, morphology, feature detection, and more. For more information, examples, and documentation, please visit our website: http://scikit-image.org Highlights ---------- For this release, we merged over 200 pull requests with bug fixes, cleanups, improved documentation and new features. 
Highlights include: - Region Adjacency Graphs - Color distance RAGs (#1031) - Threshold Cut on RAGs (#1031) - Similarity RAGs (#1080) - Normalized Cut on RAGs (#1080) - RAG drawing (#1087) - Hierarchical merging (#1100) - Sub-pixel shift registration (#1066) - Non-local means denoising (#874) - Sliding window histogram (#1127) - More illuminants in color conversion (#1130) - Handling of CMYK images (#1360) - `stop_probability` for RANSAC (#1176) - Li thresholding (#1376) - Signed edge operators (#1240) - Full ndarray support for `peak_local_max` (#1355) - Improve conditioning of geometric transformations (#1319) - Standardize handling of multi-image files (#1200) - Ellipse structuring element (#1298) - Multi-line drawing tool (#1065), line handle style (#1179) - Point in polygon testing (#1123) - Rotation around a specified center (#1168) - Add `shape` option to drawing functions (#1222) - Faster regionprops (#1351) - `skimage.future` package (#1365) - More robust I/O module (#1189) API Changes ----------- - The ``skimage.filter`` subpackage has been renamed to ``skimage.filters``. - Some edge detectors returned values greater than 1--their results are now appropriately scaled with a factor of ``sqrt(2)``. Contributors to this release ---------------------------- (Listed alphabetically by last name) - Fedor Baart - Vighnesh Birodkar - François Boulogne - Nelson Brown - Alexey Buzmakov - Julien Coste - Phil Elson - Adam Feuer - Jim Fienup - Geoffrey French - Emmanuelle Gouillart - Charles Harris - Jonathan Helmus - Alexander Iacchetta - Ivana Kajić - Kevin Keraudren - Almar Klein - Gregory R. Lee - Jeremy Metz - Stuart Mumford - Damian Nadales - Pablo Márquez Neila - Juan Nunez-Iglesias - Rebecca Roisin - Jasper St. Pierre - Jacopo Sabbatini - Michael Sarahan - Salvatore Scaramuzzino - Phil Schaf - Johannes Schönberger - Tim Seifert - Arve Seljebu - Steven Silvester - Julian Taylor - Matěj Týč 
- Alexey Umnov - Pratap Vardhan - Stefan van der Walt - Joshua Warner - Tony S Yu From muditjain18011995 at gmail.com Thu Mar 5 06:06:05 2015 From: muditjain18011995 at gmail.com (Mudit Jain) Date: Thu, 5 Mar 2015 03:06:05 -0800 (PST) Subject: GSoC - Dynamic Time Warping Message-ID: <76af707c-26f9-4a4a-81bb-6813cadaac7b@googlegroups.com> Hello everyone, I am Mudit, a third year undergraduate student from Birla Institute of Technology and Sciences, India. I came across the idea of implementing a Dynamic Time Warping library on the Ideas page and would like to pursue the project for GSoC 2015. I have read the paper regarding the Adaptive Feature Based Dynamic Time Warping library that was attached to the Ideas page, and I have a thorough understanding of the system to be implemented. I have attached a summary regarding the same, along with a procedure to implement the methodology stated in the paper. I understand that the dynamic programming implementation is similar to that of the edit distance problem. The summary also has a description of the workflow for the algorithm used. I am presently going through the DTW package in R. Some links that I am following are: http://www.jstatsoft.org/v31/i07/paper It would be highly appreciated if someone could give more insight regarding the project and what the role of a person would be if he/she is selected for it. I would also appreciate links that can improve my understanding of various topics. Cheers Mudit -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Summary_AFBDTW_GSOC_2015.pdf Type: application/pdf Size: 219627 bytes Desc: not available URL: From arnodietz86 at googlemail.com Thu Mar 5 06:09:34 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Thu, 5 Mar 2015 03:09:34 -0800 (PST) Subject: hough ellipse fit inaccurate? 
In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> Message-ID: <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> Ok sorry, maybe I need to explain my project better. I have an Autodesk Maya model of a simplified eyeball with the pupil (see image). I render images of the eyeball from different angles and try to detect the pupil center as accurately as possible. Since I know the geometry and rotation of the eyeball, I can calculate the mapping of the real center of the pupil onto my virtual Maya camera sensor. I verified the validity of my calculation with several other methods of ellipse center detection (center of mass, distance transform, OpenCV ellipse fit, starburst), where the errors between my calculation and measurement are sometimes less than 0.02 pixels. But nevertheless I want to try the hough ellipse approach, because it may be more robust against noise or other errors I want to simulate later. And so far only the hough ellipse approach is quite inaccurate, so I was wondering why. I think the reason could be that the ellipse center detection has only half-pixel accuracy while my other approaches have sub-pixel accuracy. I think posting the calculation code would be too much, but I am quite sure the calculation is right. Kind regards Arno -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: eye.png Type: image/png Size: 45471 bytes Desc: not available URL: From jni.soma at gmail.com Thu Mar 5 09:02:56 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 05 Mar 2015 06:02:56 -0800 (PST) Subject: ANN: scikit-image 0.11 In-Reply-To: <878ufbu5pa.fsf@berkeley.edu> References: <878ufbu5pa.fsf@berkeley.edu> Message-ID: <1425564176214.d9e580b6@Nodemailer> Wooooo! Thanks Stéfan for putting the release together! 
I must admit I was burned out after the morphology, dimension, and data types docs. I'm happy you finally carried the baton past the finish line! So many goodies in here! On Thu, Mar 5, 2015 at 8:57 PM, Stefan van der Walt wrote: > Announcement: scikit-image 0.11.0 > ================================= > We're happy to announce the release of scikit-image v0.11.0! > scikit-image is an image processing toolbox for SciPy that includes algorithms > for segmentation, geometric transformations, color space manipulation, > analysis, filtering, morphology, feature detection, and more. > For more information, examples, and documentation, please visit our website: > http://scikit-image.org > Highlights > ---------- > For this release, we merged over 200 pull requests with bug fixes, > cleanups, improved documentation and new features. Highlights > include: > - Region Adjacency Graphs > - Color distance RAGs (#1031) > - Threshold Cut on RAGs (#1031) > - Similarity RAGs (#1080) > - Normalized Cut on RAGs (#1080) > - RAG drawing (#1087) > - Hierarchical merging (#1100) > - Sub-pixel shift registration (#1066) > - Non-local means denoising (#874) > - Sliding window histogram (#1127) > - More illuminants in color conversion (#1130) > - Handling of CMYK images (#1360) > - `stop_probability` for RANSAC (#1176) > - Li thresholding (#1376) > - Signed edge operators (#1240) > - Full ndarray support for `peak_local_max` (#1355) > - Improve conditioning of geometric transformations (#1319) > - Standardize handling of multi-image files (#1200) > - Ellipse structuring element (#1298) > - Multi-line drawing tool (#1065), line handle style (#1179) > - Point in polygon testing (#1123) > - Rotation around a specified center (#1168) > - Add `shape` option to drawing functions (#1222) > - Faster regionprops (#1351) > - `skimage.future` package (#1365) > - More robust I/O module (#1189) > API Changes > ----------- > - The ``skimage.filter`` subpackage has been renamed to ``skimage.filters``. 
> - Some edge detectors returned values greater than 1--their results are now > appropriately scaled with a factor of ``sqrt(2)``. > Contributors to this release > ---------------------------- > (Listed alphabetically by last name) > - Fedor Baart > - Vighnesh Birodkar > - Fran?ois Boulogne > - Nelson Brown > - Alexey Buzmakov > - Julien Coste > - Phil Elson > - Adam Feuer > - Jim Fienup > - Geoffrey French > - Emmanuelle Gouillart > - Charles Harris > - Jonathan Helmus > - Alexander Iacchetta > - Ivana Kaji? > - Kevin Keraudren > - Almar Klein > - Gregory R. Lee > - Jeremy Metz > - Stuart Mumford > - Damian Nadales > - Pablo M?rquez Neila > - Juan Nunez-Iglesias > - Rebecca Roisin > - Jasper St. Pierre > - Jacopo Sabbatini > - Michael Sarahan > - Salvatore Scaramuzzino > - Phil Schaf > - Johannes Sch?nberger > - Tim Seifert > - Arve Seljebu > - Steven Silvester > - Julian Taylor > - Mat?j T?? > - Alexey Umnov > - Pratap Vardhan > - Stefan van der Walt > - Joshua Warner > - Tony S Yu > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.keraudren at googlemail.com Thu Mar 5 05:07:57 2015 From: kevin.keraudren at googlemail.com (Kevin Keraudren) Date: Thu, 5 Mar 2015 10:07:57 +0000 Subject: hough ellipse fit inaccurate? In-Reply-To: <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> Message-ID: Hi Arno, In order to stay on the safe side, why don't you post your actual code, with a test case highlighting the error you measure between the true centre of the ellipse and the detected one? 
Kind regards, Kevin On Thu, Mar 5, 2015 at 9:53 AM, Arno Dietz wrote: > Thank you very much Kevin and Johannes. > > I see the rounding Problem, but it is just for the ellipse drawing. In my > actual code I just use the Ellipse center like best [1] and best[2] without > rounding. This still produces much more inaccurate ellipse center results > than other methods like center of mass for example, althoug > h I also use > the anti-aliased input image. So is there any possibility to get more > accurate results from the hough ellipse fit approach? If not, this is also > ok, I just want to be on the safe side that it's not my fault. In that > case I will have a look at the suggested approach from Johannes. > > Am Donnerstag, 5. M?rz 2015 01:21:48 UTC+1 schrieb Kevin Keraudren: >> >> Hi Arno, >> >> The first source of inaccuracy comes from your code, you need to round >> the values instead of truncating them: >> >> #yc = int(best[1]) >> >> >> #xc = int(best[2]) >> >> >> #a = int(best[3]) >> >> >> #b = int(best[4]) >> >> >> >> yc = int(round(best[1])) >> >> xc = int(round(best[2])) >> >> a = int(round(best[3])) >> >> b = int(round(best[4])) >> >> See resulting image attached. >> >> Kind regards, >> >> Kevin >> >> >> A second source of inaccuracy comes from your input ellipse: it is not a >> perfect ellipse because you drew it using anti-aliasing. > > > > Third, you could fit an ellipse using RANSAC. How does this approach work >> for you: http://stackoverflow.com/questions/28281742/fitting-a- >> circle-to-a-binary-image/28289147#28289147 > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kevin.keraudren at googlemail.com Thu Mar 5 05:10:45 2015 From: kevin.keraudren at googlemail.com (Kevin Keraudren) Date: Thu, 5 Mar 2015 10:10:45 +0000 Subject: hough ellipse fit inaccurate? In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> Message-ID: By the way, your ground truth is the values you used when drawing the ellipse, not the values you detect with a different detection method. On Thu, Mar 5, 2015 at 10:07 AM, Kevin Keraudren < kevin.keraudren at googlemail.com> wrote: > Hi Arno, > In order to stay on the safe side, why don't you post your actual code, > with a test case highlighting the error you measure between the true centre > of the ellipse and the detected one? > Kind regards, > Kevin > > On Thu, Mar 5, 2015 at 9:53 AM, Arno Dietz > wrote: > >> Thank you very much Kevin and Johannes. >> >> I see the rounding Problem, but it is just for the ellipse drawing. In my >> actual code I just use the Ellipse center like best [1] and best[2] without >> rounding. This still produces much more inaccurate ellipse center results >> than other methods like center of mass for example, althoug >> h I also use >> the anti-aliased input image. So is there any possibility to get more >> accurate results from the hough ellipse fit approach? If not, this is also >> ok, I just want to be on the safe side that it's not my fault. In that >> case I will have a look at the suggested approach from Johannes. >> >> Am Donnerstag, 5. 
M?rz 2015 01:21:48 UTC+1 schrieb Kevin Keraudren: >>> >>> Hi Arno, >>> >>> The first source of inaccuracy comes from your code, you need to round >>> the values instead of truncating them: >>> >>> #yc = int(best[1]) >>> >>> >>> #xc = int(best[2]) >>> >>> >>> #a = int(best[3]) >>> >>> >>> #b = int(best[4]) >>> >>> >>> >>> yc = int(round(best[1])) >>> >>> xc = int(round(best[2])) >>> >>> a = int(round(best[3])) >>> >>> b = int(round(best[4])) >>> >>> See resulting image attached. >>> >>> Kind regards, >>> >>> Kevin >>> >>> >>> A second source of inaccuracy comes from your input ellipse: it is not a >>> perfect ellipse because you drew it using anti-aliasing. >> >> >> >> Third, you could fit an ellipse using RANSAC. How does this approach work >>> for you: http://stackoverflow.com/questions/28281742/fitting-a- >>> circle-to-a-binary-image/28289147#28289147 >> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnodietz86 at googlemail.com Thu Mar 5 19:22:55 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Thu, 5 Mar 2015 16:22:55 -0800 (PST) Subject: hough ellipse fit inaccurate? In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> Message-ID: Hi Kevin and Johannes, no the accuracy parameter has no effect on my measure accuracy. Yes I tried opencv fitellipse and it is much more accurate. But I want to test several methods and I heard that the hough transform is quite robust. The eyeball is a sphere and the Pupil a flat circle on the flat white plane. 
Since I view the circle from different angles it always appears as an ellipse. So circle detection is not an option. OK, thank you... it seems the skimage hough ellipse fit just isn't that accurate. @Johannes: I tried the EllipseModel with ransac from your link today and I like it very much. But I have some problems with greater angles. I always have a set of 25 images where the eye is looking at different targets on a display. I plot the difference between the true center and the measured center in one error diagram (see images). When the camera is located directly in front of the eye, so the angles are not too big, it works fine with low errors (see image 1). But when I move the camera down so that the pupil appears more elliptical I always get some outliers with bigger errors like Point 2, 7, 12, etc. (see image 2). I tried to vary the parameters (min_samples, residual_threshold and max_trials) but there are always some outliers, every time on different images. Do you have an idea where this comes from? Thank you so far. Kind regards Arno -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ellipse_errors1.jpg Type: image/jpeg Size: 35176 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ellipse_errors2.jpg Type: image/jpeg Size: 27691 bytes Desc: not available URL: From jsch at demuc.de Thu Mar 5 18:06:46 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Thu, 5 Mar 2015 18:06:46 -0500 Subject: hough ellipse fit inaccurate? In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> Message-ID: Well, there is no reason to use `fit ellipse` from OpenCV; you can use `skimage.measure.EllipseModel`.
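A minimal sketch of the direct least-squares fit with `skimage.measure.EllipseModel` mentioned above; the synthetic ellipse, its parameter values, and the noise level are made up for illustration and are not taken from the actual eye images:

```python
import numpy as np
from skimage.measure import EllipseModel

# Synthetic noisy ellipse: centre (120, 75), semi-axes 40 and 25, tilt 0.3 rad
t = np.linspace(0, 2 * np.pi, 200)
xc, yc, a, b, theta = 120.0, 75.0, 40.0, 25.0, 0.3
x = xc + a * np.cos(theta) * np.cos(t) - b * np.sin(theta) * np.sin(t)
y = yc + a * np.sin(theta) * np.cos(t) + b * np.cos(theta) * np.sin(t)
rng = np.random.default_rng(0)
points = np.column_stack([x, y]) + rng.normal(scale=0.5, size=(200, 2))

model = EllipseModel()
assert model.estimate(points)   # least-squares fit, no voting grid involved
fit_xc, fit_yc = model.params[0], model.params[1]
print(fit_xc, fit_yc)           # sub-pixel estimate of the centre
```

Unlike the Hough accumulator, nothing here is quantized to pixel bins, so the recovered centre is a continuous (sub-pixel) estimate.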
> On Mar 5, 2015, at 5:38 PM, Kevin Keraudren wrote: > > Hi Arno, > > Looking at the code, I would ask: Did your score improve by setting accuracy=1 ? > > https://github.com/scikit-image/scikit-image/blob/master/skimage/transform/_hough_transform.pyx > > Considering that you are asking for accuracy below half a pixel, I would not be surprised if the voting process of the Hough transform is not that accurate. A least-square fitting (Opencv fitellipse) might be more accurate than a voting process for a perfect ellipse. > > Aren't the eyeball and the pupil both balls? If you slice them in any way, wouldn't you obtain disks? So why detecting elllipses and not circles? Maybe hough_circle will be more accurate. > > Sorry I cannot provide any proof or certitude on how accurate hough_ellipse is. > > Kind regards, > > Kevin > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From kevin.keraudren at googlemail.com Thu Mar 5 17:38:12 2015 From: kevin.keraudren at googlemail.com (Kevin Keraudren) Date: Thu, 5 Mar 2015 22:38:12 +0000 Subject: hough ellipse fit inaccurate? In-Reply-To: <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> Message-ID: Hi Arno, Looking at the code, I would ask: Did your score improve by setting accuracy=1 ? https://github.com/scikit-image/scikit-image/blob/master/skimage/transform/_hough_transform.pyx Considering that you are asking for accuracy below half a pixel, I would not be surprised if the voting process of the Hough transform is not that accurate. 
A least-squares fit (OpenCV fitellipse) might be more accurate than a voting process for a perfect ellipse. Aren't the eyeball and the pupil both balls? If you slice them in any way, wouldn't you obtain disks? So why detect ellipses and not circles? Maybe hough_circle will be more accurate. Sorry I cannot provide any proof or certitude on how accurate hough_ellipse is. Kind regards, Kevin -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Fri Mar 6 23:52:00 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Fri, 6 Mar 2015 23:52:00 -0500 Subject: hough ellipse fit inaccurate? In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> Message-ID: Hi Arno, So, I just figured that there was a bug in the most recent addition of RANSAC. The iteration terminated early, even if stop_probability was set to 1. Should be fixed in https://github.com/scikit-image/scikit-image/pull/1411 You may want to update your local installation with that changeset, and let RANSAC run for a sufficient number of iterations to get reliable estimates. Best, Johannes > On Mar 5, 2015, at 7:22 PM, Arno Dietz wrote: > > Hi Kevin and Johannes, > > no the accuracy parameter has no effect on my measure accuracy. > Yes I tried opencv fitellipse and it is much more accurate. But I want to test several methods and I heard that the hough transform is quite robust. > The eyeball is a sphere and the Pupil a flat circle on the flat white plane. Since I view the circle from different angles it always appears as a ellipse. So circle detection is not an option. > Ok Thank you..it seams the skimage hough ellipse fit just isn't that accurate. > > @Johannes: I tried the EllipseModel with ransac from your link today and I like it very much. > But I have some problems with greater angles.
I always have a set of 25 images where the eye is looking at different targets on a display. I plot the difference between the true center and the measured center in one error diagram (see images). > When the camera is located directly in front of the eye, so the angles are not to big, it works fine with low errors (see image 1). > But when I move the camera down so that the pupil appear more elliptical I always get some outliers with bigger errors like Point 2, 7, 12, etc. (see image 2). > I tried to vary the parameters (min_samples, residual_threshold and max_trials) but there are always some outliers, but every time at different images. > Do you have an idea where this comes from? > > Thank you so far. > > Kind regards > Arno > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > From arnodietz86 at googlemail.com Sat Mar 7 05:31:37 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Sat, 7 Mar 2015 02:31:37 -0800 (PST) Subject: hough ellipse fit inaccurate? In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> Message-ID: <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> Hi Johannes, thank you for your support. I have just recognized I have installed scikit-image 0.10.1, since I use Anaconda. How can I update scikit-image to 0.11? And how to update the changeset you mentioned? Can I just replace the files in "..\Anaconda\Lib\site-packages\skimage" ? Sorry I'm a beginner in programming. Best regards, Arno -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arnodietz86 at googlemail.com Sat Mar 7 06:19:29 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Sat, 7 Mar 2015 03:19:29 -0800 (PST) Subject: hough ellipse fit inaccurate? In-Reply-To: <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> Message-ID: <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> OK, I just downloaded the latest version from "https://github.com/scikit-image/scikit-image/zipball/master" and ran "pip install .". Then I changed the files "fit.py", "test_fit.py" and "_geometric.py" from your github link. Is this correct? It doesn't seem to solve my problem since I still have some outliers (see image). Regards, Arno -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ransac_ellipse_outlier.png Type: image/png Size: 25077 bytes Desc: not available URL: From arnodietz86 at googlemail.com Sat Mar 7 08:32:03 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Sat, 7 Mar 2015 05:32:03 -0800 (PST) Subject: hough ellipse fit inaccurate? In-Reply-To: <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> Message-ID: Do you mean the max_trials parameter?
At the moment I use these: "model, inliers = measure.ransac(coords, measure.EllipseModel, min_samples=10, residual_threshold=1, max_trials=100)" I varied these parameters (min_samples=5 to 40, residual_threshold=0.005 to 10, max_trials=10 to 400) but with no success. The images with outliers remain the same with equal parameters, but with different parameters the outliers appear on different images. Sometimes a warning message also appears, but it is actually random when it occurs: C:\Anaconda\lib\site-packages\scipy\optimize\minpack.py:419: RuntimeWarning: Number of calls to function has reached maxfev = 2600. warnings.warn(errors[info][0], RuntimeWarning) Another interesting fact: when I use a starburst algorithm (like this) to detect my points for ellipse fitting instead of the canny edge detector, it seems to work fine mostly without outliers. I think the only difference is that my starburst algorithm generates far fewer points (about 300) than canny. Regards, Arno -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Sat Mar 7 08:10:15 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Sat, 7 Mar 2015 08:10:15 -0500 Subject: hough ellipse fit inaccurate? In-Reply-To: <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> Message-ID: For how many iterations are you running RANSAC? > On Mar 7, 2015, at 6:19 AM, Arno Dietz wrote: > > Ok I just downloaded the latest version from "https://github.com/scikit-image/scikit-image/zipball/master" and run "pip install .". > Then I changed the files "fit.py", "test_fit.py" and "_geometric.py" from your github link. > Is this correct?
It doesn't seem to solve my probleme since I still have some outliers (see Image). > > Regards, > Arno > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > From jsch at demuc.de Sat Mar 7 08:12:04 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Sat, 7 Mar 2015 08:12:04 -0500 Subject: hough ellipse fit inaccurate? In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> Message-ID: <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> Another question: Is it still random images for which you see the outliers? > On Mar 7, 2015, at 8:10 AM, Johannes Schoenberger wrote: > > For how many iterations are you running RANSAC? > >> On Mar 7, 2015, at 6:19 AM, Arno Dietz wrote: >> >> Ok I just downloaded the latest version from "https://github.com/scikit-image/scikit-image/zipball/master" and run "pip install .". >> Then I changed the files "fit.py", "test_fit.py" and "_geometric.py" from your github link. >> Is this correct? It doesn't seem to solve my probleme since I still have some outliers (see Image). >> >> Regards, >> Arno >> >> -- >> You received this message because you are subscribed to the Google Groups "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. >> > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. 
> To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From arnodietz86 at googlemail.com Sat Mar 7 11:30:23 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Sat, 7 Mar 2015 08:30:23 -0800 (PST) Subject: hough ellipse fit inaccurate? In-Reply-To: <5E085A92-AB38-472F-99F3-150E604D4CD8@demuc.de> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> <5E085A92-AB38-472F-99F3-150E604D4CD8@demuc.de> Message-ID: <5c1155c0-bfd7-4b45-80e5-65d860d90a8c@googlegroups.com> Okay. But I also tried your parameters without success. It was hard work but I created a minimal example from my code (see attachment). It takes quite a long time to run but I would be really thankful if you could take a look. Regards, Arno -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ransac_ellipse_fit.zip Type: application/zip Size: 53456 bytes Desc: not available URL: From jsch at demuc.de Sat Mar 7 08:47:14 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Sat, 7 Mar 2015 08:47:14 -0500 Subject: hough ellipse fit inaccurate? In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> Message-ID: <5E085A92-AB38-472F-99F3-150E604D4CD8@demuc.de> No, that's not good.
You need 5 points to estimate an ellipse model, and you should stick to the minimal sample size with RANSAC. Otherwise, you have to sample exponentially more to converge to a confident correct solution. Try something like: min_samples=5, max_trials>200 (depending on the outlier ratio of your edge points this may have to increase significantly), residual_threshold>2 (depending on the spread of your edge points, excluding the outlier points) Hope this helps; otherwise, the only thing that would help is to share your images and a code snippet. Best, Johannes > On Mar 7, 2015, at 8:32 AM, Arno Dietz wrote: > > Do you mean the max_trials parameter? At the moment I use these: "model, inliers = measure.ransac(coords, measure.EllipseModel, min_samples=10, residual_threshold=1, max_trials=100)" > I varied these parameters (min_samples=5 to 40, residual_threshold=0.005 to 10, max_trials=10 to 400) but with no success. > The images with outliers remain the same with equal parameters but with different parameters the outliers appear on different images. > > Sometimes there also appear a warning message but in this case it is actually random, when it occurs: > C:\Anaconda\lib\site-packages\scipy\optimize\minpack.py:419: RuntimeWarning: Number of calls to function has reached maxfev = 2600. > warnings.warn(errors[info][0], RuntimeWarning) > > Another interesting fact, when I use a starburst algorithm (like this) to detect my points for ellipse fitting instead of the canny edge detector, it seems to work fine mostly without outliers. I think the only difference is, that my starburst algorithm generates much less points (about 300) then canny. > > Regards, > Arno > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout.
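A sketch of the parameter choice suggested above (`min_samples=5`, larger `max_trials`, looser `residual_threshold`), applied to made-up edge points with injected outliers; the geometry, noise level, and outlier count below are purely illustrative:

```python
import numpy as np
from skimage.measure import EllipseModel, ransac

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 300)
# Inlier edge points on an ellipse centred at (100, 60)
x = 100 + 35 * np.cos(t)
y = 60 + 20 * np.sin(t)
points = np.column_stack([x, y]) + rng.normal(scale=0.3, size=(300, 2))
# Inject gross outliers, standing in for eyelid/reflection edges from canny
outliers = rng.uniform(low=0, high=200, size=(60, 2))
data = np.vstack([points, outliers])

# min_samples=5 is the minimal set for an ellipse; larger samples make
# drawing an all-inlier subset exponentially less likely per trial.
model, inliers = ransac(data, EllipseModel, min_samples=5,
                        residual_threshold=2, max_trials=300)
print(model.params[:2], inliers.sum())
```

With roughly 17% outliers, the probability that a random 5-point sample is all inliers is about 0.83**5 ≈ 0.39, so 300 trials gives ample opportunity to hit a clean sample; with min_samples=10 that per-trial probability drops to about 0.15, which is why the sample size matters so much.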
From arnodietz86 at googlemail.com Sat Mar 7 14:48:33 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Sat, 7 Mar 2015 11:48:33 -0800 (PST) Subject: hough ellipse fit inaccurate? In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> <5E085A92-AB38-472F-99F3-150E604D4CD8@demuc.de> <5c1155c0-bfd7-4b45-80> Message-ID: <2cba4027-77ce-4187-ab0a-54467864b562@googlegroups.com> Hm I don't know. Do you have the file "true_coords.pickle" in the directory? So I think the simplest way is to post the true coordinates here so you can just copy and paste instead of loading it. true_coords = np.float32([[116.16552734, 56.91558838], [119.50671387, 50.36520386], [120.07568359, 47.97659302], [118.15393066, 51.35003662], [113.87670898, 58.54443359], [115.45068359, 67.83599854], [121.19805908, 61.79907227], [122.86755371, 58.47949219], [122.07769775, 59.83483887], [117.28759766, 65.25402832], [118.67297363, 74.89511108], [123.27319336, 70.87173462], [124.70935059, 69.62966919], [122.70861816, 70.76901245], [116.82958984, 76.34967041], [118.11944580, 85.86563110], [124.74987793, 82.02990723], [127.60803223, 80.19348145], [125.04113770, 81.24456787], [120.22363281, 83.61611938], [122.88574219, 93.01083374], [128.35363770, 91.51263428], [129.56744385, 89.71978760], [126.35644531, 92.14715576], [120.37841797, 93.98809814]]) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Sat Mar 7 13:57:28 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Sat, 7 Mar 2015 13:57:28 -0500 Subject: hough ellipse fit inaccurate? 
In-Reply-To: <5c1155c0-bfd7-4b45-80e5-65d860d90a8c@googlegroups.com> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> <5E085A92-AB38-472F-99F3-150E604D4CD8@demuc.de> <5c1155c0-bfd7-4b45-80 e5-65d860d90a8c@googlegroups.com> Message-ID: I tried to run this, but I get: Traceback (most recent call last): File "ellipse_fit.py", line 41, in true_coords = pickle.load(file) ImportError: No module named multiarray > On Mar 7, 2015, at 11:30 AM, Arno Dietz wrote: > > Okay. But I also tried your parameters without success. > It was hard work but I created a minimal example from my code (see attachment). It takes quite a long time to run but I would be really thankful if you could take a look. > > regards, > Arno > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > From warmspringwinds at gmail.com Sat Mar 7 18:15:47 2015 From: warmspringwinds at gmail.com (Daniil Pakhomov) Date: Sat, 7 Mar 2015 15:15:47 -0800 (PST) Subject: Hessian-Laplace blob detector. In-Reply-To: <39809677-4893-4261-9220-b9cd9be8b580@googlegroups.com> References: <39809677-4893-4261-9220-b9cd9be8b580@googlegroups.com> Message-ID: <8233cb95-0f3f-4558-bd34-53625b315f08@googlegroups.com> Great! Thank you. What I am thinking about is to take your _hessian_matrix_det() and make it also return d_xx + d_yy for each element. So, on the output I will get determinant of Hessian and also a Laplacian. 
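The idea sketched above (reusing the second-order derivatives for both the determinant and the Laplacian) can be illustrated with `skimage.feature.hessian_matrix`. Note this is an editorial sketch, not the `_hessian_matrix_det()` code under discussion, and the normalization factors reflect the usual scale-space convention as I understand it: the plain d_xx + d_yy still needs a sigma**2 factor to become the scale-normalized Laplacian, and the determinant a sigma**4 factor:

```python
import numpy as np
from skimage.data import coins
from skimage.feature import hessian_matrix

image = coins().astype(float) / 255.0
sigma = 8.0
# Second-order derivatives of the Gaussian-smoothed image at scale sigma
Hrr, Hrc, Hcc = hessian_matrix(image, sigma=sigma, order='rc')

# Scale-normalized determinant of Hessian and Laplacian responses
doh = sigma ** 4 * (Hrr * Hcc - Hrc ** 2)
laplacian = sigma ** 2 * (Hrr + Hcc)
print(doh.shape, laplacian.shape)
```

A Hessian-Laplace detector would then localize blobs spatially on the determinant response and select the characteristic scale where the normalized Laplacian peaks.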
I have a small question: d_xx + d_yy will be a scale-normalized Laplacian in your notation? On Monday, 2 March 2015 at 18:05:09 UTC+1, Daniil Pakhomov wrote: > > Hello, > > I want to try to implement Hessian-Laplace blob detector (as mentioned in > requested features on github page). > > Can someone give me the list of corresponding papers, using which I can > implement it. > > Thank you. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rockingsumes at gmail.com Sun Mar 8 11:17:53 2015 From: rockingsumes at gmail.com (Sumesh K.C.) Date: Sun, 8 Mar 2015 08:17:53 -0700 (PDT) Subject: Extracting/Isolating Segmented Objects in scikit-image Message-ID: Hello Everyone, I've just started working with the scikit-image python package for image processing and segmentation. It's cool to learn. I'm having problems extracting, or simply isolating, the segmented objects after running segmentation like quickshift, watershed, slic. How can I do this? Need your help! Thank you in advance! -------------- next part -------------- An HTML attachment was scrubbed... URL: From warmspringwinds at gmail.com Sun Mar 8 17:01:12 2015 From: warmspringwinds at gmail.com (Daniil Pakhomov) Date: Sun, 8 Mar 2015 14:01:12 -0700 (PDT) Subject: Hessian-Laplace blob detector. In-Reply-To: <39809677-4893-4261-9220-b9cd9be8b580@googlegroups.com> References: <39809677-4893-4261-9220-b9cd9be8b580@googlegroups.com> Message-ID: <1ad04c45-e4a5-4796-8e85-429b13e90b69@googlegroups.com> Really sorry for spamming you with questions. No more need to answer. I implemented this detector and it works as fast as your determinant of Hessian approach implementation. It passes all your tests and works better with coins() images (it doesn't detect a false coin as determinant of Hessian does in the example). May I ask you to do a review of my code later? Thank you. On Monday, 2 March 2015 at 18:05:09 UTC+1,
Daniil Pakhomov wrote: > > Hello, > > I want to try to implement Hessian-Laplace blob detector (as mentioned in > requested features on github page). > > Can someone give me the list of corresponding papers, using which I can > implement it. > > Thank you. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Sun Mar 8 19:52:32 2015 From: stefanv at berkeley.edu (Stéfan van der Walt) Date: Sun, 8 Mar 2015 16:52:32 -0700 Subject: Extracting/Isolating Segmented Objects in scikit-image In-Reply-To: References: Message-ID: Hi Sumesh On Sun, Mar 8, 2015 at 8:17 AM, Sumesh K.C. wrote: > I'm getting problem in extracting or simply isolating the segmented objects after running segmentation like quickshift, watershed, slic. Please have a look at the tutorial at https://github.com/scikit-image/skimage-tutorials Regards Stéfan From silvertrumpet999 at gmail.com Sun Mar 8 20:39:24 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Sun, 8 Mar 2015 17:39:24 -0700 (PDT) Subject: Hessian-Laplace blob detector. In-Reply-To: <1ad04c45-e4a5-4796-8e85-429b13e90b69@googlegroups.com> References: <39809677-4893-4261-9220-b9cd9be8b580@googlegroups.com> <1ad04c45-e4a5-4796-8e85-429b13e90b69@googlegroups.com> Message-ID: <2165ac74-8580-4ebc-9431-9b32749d3522@googlegroups.com> We'd welcome this as a PR on GitHub. That would be the ideal place for code review, etc. On Sunday, March 8, 2015 at 4:01:12 PM UTC-5, Daniil Pakhomov wrote: > > Really sorry for spamming you with questions. > No more need to answer. > I implemented this detector and it works as fast as your determinant of > Hessian approach implementation. > It passes all you tests and works better with coin() images (it doesn't > detect a false coin as determinant of Hessian does in the example). > > May I ask you to do a review of my code later? > > Thank you. > > On Monday, 2 March 2015 at 18:05:09 UTC+1,
Daniil Pakhomov > wrote: >> >> Hello, >> >> I want to try to implement Hessian-Laplace blob detector (as mentioned in >> requested features on github page). >> >> Can someone give me the list of corresponding papers, using which I can >> implement it. >> >> Thank you. >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From vighneshbirodkar at gmail.com Mon Mar 9 02:48:45 2015 From: vighneshbirodkar at gmail.com (Vighnesh Birodkar) Date: Sun, 8 Mar 2015 23:48:45 -0700 (PDT) Subject: Hessian-Laplace blob detector. In-Reply-To: References: <39809677-4893-4261-9220-b9cd9be8b580@googlegroups.com> <1ad04c45-e4a5-4796-8e85-429b13e90b69@googlegroups.com> <2165ac74-8580-4ebc-9431-9b32749d3522@googlegroups.com> Message-ID: <92a216fa-2ab1-4197-8390-b6521d835a5f@googlegroups.com> Hey Daniil A good technical discussion is always welcome. It's never considered spamming. Thanks Vighnesh On Monday, March 9, 2015 at 6:29:28 AM UTC+5:30, Daniil Pakhomov wrote: > > Thanks. > I've sent it. > > 2015-03-09 1:39 GMT+01:00 Josh Warner > >: > >> We'd welcome this as a PR on GitHub. That would be the ideal place for >> code review, etc. >> >> On Sunday, March 8, 2015 at 4:01:12 PM UTC-5, Daniil Pakhomov wrote: >>> >>> Really sorry for spamming you with questions. >>> No more need to answer. >>> I implemented this detector and it works as fast as your determinant of >>> Hessian approach implementation. >>> It passes all you tests and works better with coin() images (it doesn't >>> detect a false coin as determinant of Hessian does in the example). >>> >>> May I ask you to do a review of my code later? >>> >>> Thank you. >>> >>> On Monday, 2 March 2015 at 18:05:09 UTC+1, Daniil >>> Pakhomov wrote: >>>> >>>> Hello, >>>> >>>> I want to try to implement Hessian-Laplace blob detector (as mentioned >>>> in requested features on github page). >>>> >>>> Can someone give me the list of corresponding papers, using which I can
>>>> >>>> Thank you. >>>> >>> -- >> You received this message because you are subscribed to a topic in the >> Google Groups "scikit-image" group. >> To unsubscribe from this topic, visit >> https://groups.google.com/d/topic/scikit-image/ghIYwQFubEU/unsubscribe. >> To unsubscribe from this group and all its topics, send an email to >> scikit-image... at googlegroups.com . >> For more options, visit https://groups.google.com/d/optout. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warmspringwinds at gmail.com Sun Mar 8 20:59:27 2015 From: warmspringwinds at gmail.com (Daniil Pakhomov) Date: Mon, 9 Mar 2015 01:59:27 +0100 Subject: Hessian-Laplace blob detector. In-Reply-To: <2165ac74-8580-4ebc-9431-9b32749d3522@googlegroups.com> References: <39809677-4893-4261-9220-b9cd9be8b580@googlegroups.com> <1ad04c45-e4a5-4796-8e85-429b13e90b69@googlegroups.com> <2165ac74-8580-4ebc-9431-9b32749d3522@googlegroups.com> Message-ID: Thanks. I've sent it. 2015-03-09 1:39 GMT+01:00 Josh Warner : > We'd welcome this as a PR on GitHub. That would be the ideal place for > code review, etc. > > On Sunday, March 8, 2015 at 4:01:12 PM UTC-5, Daniil Pakhomov wrote: >> >> Really sorry for spamming you with questions. >> No more need to answer. >> I implemented this detector and it works as fast as your determinant of >> Hessian approach implementation. >> It passes all you tests and works better with coin() images (it doesn't >> detect a false coin as determinant of Hessian does in the example). >> >> May I ask you to do a review of my code later? >> >> Thank you. >> >> On Monday, 2 March 2015 at 18:05:09 UTC+1, Daniil Pakhomov >> wrote: >>> >>> Hello, >>> >>> I want to try to implement Hessian-Laplace blob detector (as mentioned >>> in requested features on github page). >>> >>> Can someone give me the list of corresponding papers, using which I can >>> implement it. >>> >>> Thank you.
>>> >> -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/ghIYwQFubEU/unsubscribe. > To unsubscribe from this group and all its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rockingsumes at gmail.com Mon Mar 9 12:50:54 2015 From: rockingsumes at gmail.com (Sumesh K.C.) Date: Mon, 9 Mar 2015 09:50:54 -0700 (PDT) Subject: Extracting/Isolating Segmented Objects in scikit-image In-Reply-To: References: Message-ID: <5de89d49-bef5-4a97-a88a-1a57260fd56b@googlegroups.com> > > Thank you, Stéfan and Emma! > It really helped me to extract the objects after segmentation! (skimage.measure and other subpackages) I'm trying to perform indirect georeferencing of images, where the objects of interest are the markers which represent GCPs (Ground Control Points). These markers will be segmented, and the centroids of the segmented markers will then correspond to the positions of the GCPs in the image. Right now, I'm using the SLIC algorithm (Simple Linear Iterative Clustering) (skimage.segmentation.slic). In your opinion, which segmentation algorithm would give a better result? (even though the segmentation algorithm to be used depends upon the type/quality of the image) Again, thank you very much! Waiting for your response! -------------- next part -------------- An HTML attachment was scrubbed...
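A minimal sketch of the isolate-then-measure workflow discussed in this thread, using `skimage.measure.label` and `regionprops`; the toy "segmentation" below is made up to stand in for real detected markers:

```python
import numpy as np
from skimage.measure import label, regionprops

# Toy binary segmentation: two square "markers" on a zero background
segmented = np.zeros((50, 50), dtype=int)
segmented[5:10, 5:10] = 1
segmented[30:40, 20:30] = 1

labeled = label(segmented)               # connected components; background stays 0
regions = regionprops(labeled)
for region in regions:
    isolated = labeled == region.label   # boolean mask isolating one object
    print(region.label, region.area, region.centroid)
```

The `centroid` property gives sub-pixel (row, col) coordinates, which is exactly what is needed when the marker centroids serve as GCP positions.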
URL: From ug201310004 at iitj.ac.in Mon Mar 9 12:52:48 2015 From: ug201310004 at iitj.ac.in (AMAN singh) Date: Mon, 9 Mar 2015 09:52:48 -0700 (PDT) Subject: GSoC: Rewriting scipy.ndimage in Cython Message-ID: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com> Hi developers My name is Aman Singh and I am currently a second-year undergraduate student of the Computer Science department at the Indian Institute of Technology, Jodhpur. I want to participate in GSoC'15 and the project I am aiming for is *porting scipy.ndimage to cython*. I have been following scipy for the last few months and have also made some contributions. I came across this project on their GSoC'15 ideas page and found it interesting. I have done some research in the last week on my part. I am going through the Cython documentation, the scipy lecture on github and Richard's work from GSoC'14, in which he ported the cluster package to cython. While going through the module scipy.ndimage I also found that Thouis Jones had already ported a function, ndimage.label(), to cython. I can use that as a reference for the rest of the project. Please tell me whether I am on the right track or not. If you can suggest some resources which would be helpful to me in understanding the project, I would be highly obliged. Also, I would like to know how much of ndimage is to be ported under this project, since it is a big module. Kindly provide me some suggestions and guide me through this. Regards, Aman Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From emmanuelle.gouillart at nsup.org Mon Mar 9 08:26:18 2015 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Mon, 9 Mar 2015 13:26:18 +0100 Subject: Extracting/Isolating Segmented Objects in scikit-image In-Reply-To: References: Message-ID: <20150309122618.GC952164@phare.normalesup.org> Hi Sumesh, also see http://scikit-image.org/docs/dev/auto_examples/plot_label.html#example-plot-label-py with the use of "label".
Cheers, Emma On Sun, Mar 08, 2015 at 04:52:32PM -0700, Stéfan van der Walt wrote: > Hi Sumesh > On Sun, Mar 8, 2015 at 8:17 AM, Sumesh K.C. wrote: > > I'm having a problem extracting, or simply isolating, the segmented objects after running segmentation like quickshift, watershed, slic. > Please have a look at the tutorial at > https://github.com/scikit-image/skimage-tutorials > Regards > Stéfan From tsyu80 at gmail.com Mon Mar 9 20:19:30 2015 From: tsyu80 at gmail.com (Tony Yu) Date: Mon, 9 Mar 2015 19:19:30 -0500 Subject: SciPy 2015 Conference: Call for proposals Message-ID: SciPy 2015 is requesting proposals for tutorials, talks, and posters for this year's conference. Note that there's a mini-symposium on "Visualization, Vision and Imaging", so this year will be particularly relevant for people here. Cheers! -Tony ---- *SciPy 2015 Conference (Scientific Computing with Python) Call for Proposals is Open: Submit Your Tutorial and Talk Ideas ASAP!* SciPy 2015, the fourteenth annual Scientific Computing with Python conference, will be held July 6-12, 2015 in Austin, Texas. SciPy is a community dedicated to the advancement of scientific computing through open source Python software for mathematics, science, and engineering. The annual SciPy Conference brings together over 500 participants from industry, academia, and government to showcase their latest projects, learn from skilled users and developers, and collaborate on code development. The full program will consist of two days of tutorials followed by three days of presentations, and concludes with two days of developer sprints. More info is available on the conference website: http://www.scipy2015.scipy.org; you can also sign up for mailing list updates. Registration is expected to open at the end of March. We encourage you to submit tutorial or talk proposals in the categories below; please also share with others whom you'd like to see participate! *SCIPY TUTORIAL SESSION PROPOSALS - requested by March 16, 2015* The SciPy experience kicks off with two days of tutorials. These sessions provide extremely affordable access to expert training, and consistently receive fantastic feedback from participants. We're looking for submissions on topics from introductory to advanced - we'll have attendees across the gamut looking to learn. Whether you are a major contributor to a scientific Python library or an expert-level user, this is a great opportunity to share your knowledge, and stipends are available. *Submit Your Tutorial Proposal* on the SciPy 2015 website: http://scipy2015.scipy.org *SCIPY TALK AND POSTER SUBMISSIONS - requested by April 1, 2015* SciPy 2015 will include 3 major topic tracks and 7 mini-symposia tracks. *Submit Your Talk Proposal* on the SciPy 2015 website: http://scipy2015.scipy.org Major topic tracks include: - Scientific Computing in Python (General track) - Python in Data Science - Quantitative and Computational Social Sciences. Mini-symposia will include the applications of Python in: - Astronomy and astrophysics - Computational life and medical sciences - Engineering - Geographic information systems (GIS) - Geophysics - Oceanography and meteorology - Visualization, vision and imaging -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Tue Mar 10 02:21:54 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Mon, 9 Mar 2015 23:21:54 -0700 Subject: GSoC: Rewriting scipy.ndimage in Cython In-Reply-To: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com> References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com> Message-ID: Hi Aman On Mon, Mar 9, 2015 at 9:52 AM, AMAN singh wrote: > Please tell me whether I am on the right track or not. If you can suggest > some resources that would help me understand the project, I > would be highly obliged.
Also, I would like to know how much of > ndimage is to be ported under this project, since it is a big module. > Kindly provide me some suggestions and guide me through this. Thanks for your interest in GSoC 2015! Please have a look at the issues for scikit-image, and try and submit a few PRs so that we can work together and get to know you a bit better. Thanks! Stéfan From giorgosragos at gmail.com Tue Mar 10 06:41:18 2015 From: giorgosragos at gmail.com (GiorgosR) Date: Tue, 10 Mar 2015 03:41:18 -0700 (PDT) Subject: extract_edges_after_segmentation Message-ID: <3db4de3d-e143-4cff-8c0f-8eb5d13af167@googlegroups.com> Hi there, First of all, many thanks for this very nice and useful package! I just started using the skimage library, and I do some image processing on very noisy angiographic data. After some image enhancement of the obtained images, I applied the canny edge detection algorithm implemented in skimage and came up with the attached image (after dilation and erosion). Do you have any advice on how I could extract only the vessel edges automatically from the image? I looked at examples with labels or binary hole filling, but they do not work well for my problem. Many thanks in advance, Giorgos -------------- next part -------------- An HTML attachment was scrubbed... URL: From claiborne.morton at gmail.com Tue Mar 10 14:12:27 2015 From: claiborne.morton at gmail.com (Claiborne Morton) Date: Tue, 10 Mar 2015 11:12:27 -0700 (PDT) Subject: Equivalent of watershed for cutting connected components of an image of particles? In-Reply-To: References: Message-ID: Hey guys, I'm following up on Adam's behalf, but this is an example of an image we are working with in trying to separate cells that are touching each other. Also, you can see the top middle particle has a crescent shape, but is actually a healthy red blood cell that has been segmented incorrectly because of glare. Is there a way to connect the two tips of the shape so that I could then run "binary_fill_holes()" to correctly segment the cell? Thanks! On Wednesday, February 18, 2015 at 7:04:10 PM UTC-5, Adam Hughes wrote: > Hi, > > In ImageJ, one can select watershedding to break up connected regions of > particles. Are there any examples of using watershed in this capacity in > scikit image? All of the examples I see seem to use watershedding to do > segmentation, not to break connected particles in an already-segmented > black and white image. > > Also, is there a straightforward way to remove particles on a the edge of > an image? Sorry, googling is failing me, but I know this is possible. > > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: RBC_Example.png Type: image/png Size: 17295 bytes Desc: not available URL: From stefanv at berkeley.edu Tue Mar 10 14:49:28 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Tue, 10 Mar 2015 11:49:28 -0700 Subject: hough ellipse fit inaccurate? In-Reply-To: <7004AA9B-CF59-4E3D-B978-9D5FEE90EB5B@demuc.de> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> <5E085A92-AB38-472F-99F3-150E604D4CD8@demuc.de> <2cba4027-77ce-4187-ab0a-54467864b562@googlegroups.com> <7004AA9B-CF59-4E3D-B978-9D5FEE90EB5B@demuc.de> Message-ID: It may be a good idea to upsample your image before doing canny, because edges lie in between pixels, and can only be accurately marked with enough resolution.
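The upsample-then-canny idea can be sketched like this; the synthetic disc, scale factor, and sigma are illustrative choices, not values from the thread:

```python
import numpy as np
from skimage.transform import rescale
from skimage.feature import canny

# Synthetic disc whose edge we want to localise accurately.
rr, cc = np.mgrid[0:64, 0:64]
img = (((rr - 32.0) ** 2 + (cc - 32.0) ** 2) < 20.0 ** 2).astype(float)

# Upsample first: the edge now spans several pixels, so canny can
# mark it with subpixel accuracy relative to the original grid.
factor = 4
big = rescale(img, factor, order=3)
edges = canny(big, sigma=3)

# Map edge coordinates back to the original pixel grid.
coords = np.column_stack(np.nonzero(edges)) / factor
radii = np.hypot(coords[:, 0] - 32, coords[:, 1] - 32)
```

For the disc above, the recovered edge radii cluster around the true radius of 20 original pixels.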
From jsch at demuc.de Tue Mar 10 14:18:41 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Tue, 10 Mar 2015 14:18:41 -0400 Subject: hough ellipse fit inaccurate? In-Reply-To: <2cba4027-77ce-4187-ab0a-54467864b562@googlegroups.com> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> <5E085A92-AB38-472F-99F3-150E604D4CD8@demuc.de> <5c1155c0-bfd7-4b45-80 > <2cba4027-77ce-4187-ab0a-54467864b562@googlegroups.com> Message-ID: <7004AA9B-CF59-4E3D-B978-9D5FEE90EB5B@demuc.de> I just looked at it, and it seems like this is caused by canny - you probably want to focus on optimizing that part. (0.4px error is also not that bad) > On Mar 7, 2015, at 2:48 PM, Arno Dietz wrote: > > Hm I don't know. Do you have the file "true_coords.pickle" in the directory? So I think the simplest way is to post the true coordinates here so you can just copy and paste instead of loading it. 
> > true_coords = np.float32([[116.16552734, 56.91558838], > [119.50671387, 50.36520386], > [120.07568359, 47.97659302], > [118.15393066, 51.35003662], > [113.87670898, 58.54443359], > [115.45068359, 67.83599854], > [121.19805908, 61.79907227], > [122.86755371, 58.47949219], > [122.07769775, 59.83483887], > [117.28759766, 65.25402832], > [118.67297363, 74.89511108], > [123.27319336, 70.87173462], > [124.70935059, 69.62966919], > [122.70861816, 70.76901245], > [116.82958984, 76.34967041], > [118.11944580, 85.86563110], > [124.74987793, 82.02990723], > [127.60803223, 80.19348145], > [125.04113770, 81.24456787], > [120.22363281, 83.61611938], > [122.88574219, 93.01083374], > [128.35363770, 91.51263428], > [129.56744385, 89.71978760], > [126.35644531, 92.14715576], > [120.37841797, 93.98809814]]) > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From jsch at demuc.de Tue Mar 10 15:20:05 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Tue, 10 Mar 2015 15:20:05 -0400 Subject: hough ellipse fit inaccurate? In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> <5E085A92-AB38-472F-99F3-150E604D4CD8@demuc.de> <2cba4027-77ce-4187-ab0a-54467864b562@googlegroups.com> <7004AA9B-CF59-4E3D-B978-9D5FEE90EB5B@demuc.de> Message-ID: <67EC0A3D-E3ED-4799-AEAC-6D43F52CA7D9@demuc.de> @Stefan, good idea! I am curious to know whether this solves your problem. 
> On Mar 10, 2015, at 2:49 PM, St?fan van der Walt wrote: > > It may be a good idea to upsample your image before doing canny, > because edges lie in between pixels, and can only be accurately marked > with enough resolution. > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From stefanv at berkeley.edu Tue Mar 10 18:38:56 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Tue, 10 Mar 2015 15:38:56 -0700 Subject: extract_edges_after_segmentation In-Reply-To: <3db4de3d-e143-4cff-8c0f-8eb5d13af167@googlegroups.com> References: <3db4de3d-e143-4cff-8c0f-8eb5d13af167@googlegroups.com> Message-ID: Hi Giorgos On Tue, Mar 10, 2015 at 3:41 AM, GiorgosR wrote: > Do you have any advice on how could I extract only the vessel edges automatically from the image? I looked in examples with labels or binary hole filling but it does not work well for my problem. I think once you have gotten to the edge representation it may already be too late. What does your input data look like? St?fan From jni.soma at gmail.com Tue Mar 10 18:52:01 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Tue, 10 Mar 2015 15:52:01 -0700 (PDT) Subject: Equivalent of watershed for cutting connected components of an image of particles? In-Reply-To: References: Message-ID: <1426027921405.e0e5dc6@Nodemailer> You could do a morphology.closing. That's kind of why it's called that. =D Obviously you don't want to run it on the whole image, but I presume you're doing classification on the regionprops objects, so you could do the closing on each object individually. 
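The closing suggestion can be sketched on a toy shape: a hollow "cell" outline whose wall has a gap, standing in for the glare-broken crescent (all sizes here are made up). Closing with a footprint wider than the gap seals it, after which binary_fill_holes can fill the interior:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import binary_closing, disk

# Hollow outline with a 4-px break in its right wall.
cell = np.zeros((40, 40), dtype=bool)
cell[10:30, 10:30] = True
cell[14:26, 14:26] = False   # hollow interior
cell[18:22, 26:30] = False   # the break (glare gap)

# fill_holes alone does nothing: the interior leaks out through the gap.
leaky = ndi.binary_fill_holes(cell)

# Closing with a disk wider than the gap seals the break first.
closed = binary_closing(cell, disk(3))
filled = ndi.binary_fill_holes(closed)
```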
On Wed, Mar 11, 2015 at 5:12 AM, Claiborne Morton wrote: > Hey guys, Im following up on Adam's behalf, but this is an example of an > image we are working with in trying to separate cells that are touching > each other. > Also you can see the top middle particle has a crescent shape, but is > actually a healthy red blood cell that has been segmented incorrectly > because of glare. Is that a way to connect the two tips of the shape so > that I could then run "binary_fill_holes()" to correctly segment the cell. > Thanks! > On Wednesday, February 18, 2015 at 7:04:10 PM UTC-5, Adam Hughes wrote: >> Hi, >> >> In ImageJ, one can select watershedding to break up connected regions of >> particles. Are there any examples of using watershed in this capacity in >> scikit image? All of the examples I see seem to use watershedding to do >> segmentation, not to break connected particles in an already-segmented >> black and white image. >> >> Also, is there a straightforward way to remove particles on a the edge of >> an image? Sorry, googling is failing me, but I know this is possible. >> >> Thanks >> > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnodietz86 at googlemail.com Wed Mar 11 06:18:52 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Wed, 11 Mar 2015 03:18:52 -0700 (PDT) Subject: hough ellipse fit inaccurate? 
In-Reply-To: <67EC0A3D-E3ED-4799-AEAC-6D43F52CA7D9@demuc.de> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> <5E085A92-AB38-472F-99F3-150E604D4CD8@demuc.de> <2cba4027-77ce-4187-ab0a-54467864b562@googlegroups.com> <7004AA9B-CF59-4E3D-B978-9D5FEE90EB5B@demuc.de> <67EC0A3D-E3ED-4799-AEAC-6D43F52CA7D9@demuc.de> Message-ID: <22ab8d1a-8ad5-45fa-a427-316685f3a0fa@googlegroups.com> Hi, ok, it seems reasonable that it's caused by canny, because with another edge detection method (starburst) the ellipse fit works fine. Certainly 0.4 px isn't too bad, but my aim is very high accuracy, and the outliers are clearly systematic errors, so they should be avoidable. Upsampling sounds like a good idea. I tried it like this:

...
img_upsampled = cv2.resize(img, (0, 0), fx=8.0, fy=8.0, interpolation=cv2.INTER_CUBIC)
ret, thresh = cv2.threshold(img_upsampled, 20, 255, cv2.THRESH_BINARY_INV)
img = canny(thresh, sigma=3).astype(np.uint8)
img[img > 0] = 255
coords = np.column_stack(np.nonzero(img))
model, inliers = measure.ransac(coords, measure.EllipseModel, min_samples=5, residual_threshold=1, max_trials=200)
cx = model.params[1] / 8.0
cy = model.params[0] / 8.0

But there are still a lot of outliers. The upsampled canny image doesn't look too good (see image). I also tried without thresholding and with different interpolation methods, but without success. Regards, Arno -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: canny_upsampled.png Type: image/png Size: 7627 bytes Desc: not available URL: From stefanv at berkeley.edu Wed Mar 11 14:43:50 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Wed, 11 Mar 2015 11:43:50 -0700 Subject: hough ellipse fit inaccurate? In-Reply-To: <22ab8d1a-8ad5-45fa-a427-316685f3a0fa@googlegroups.com> References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> <5E085A92-AB38-472F-99F3-150E604D4CD8@demuc.de> <2cba4027-77ce-4187-ab0a-54467864b562@googlegroups.com> <7004AA9B-CF59-4E3D-B978-9D5FEE90EB5B@demuc.de> <67EC0A3D-E3ED-4799-AEAC-6D43F52CA7D9@demuc.de> <22ab8d1a-8ad5-45fa-a427-316685f3a0fa@googlegroups.com> Message-ID: Hi Arno On Wed, Mar 11, 2015 at 3:18 AM, Arno Dietz wrote: > ok it seams reasonable that its caused by canny because with other edge > detection method (starburst) the ellipse fit works fine. > Certainly 0.4 px isn't too bad. But my aim is a very high accuracy and the > outliers are clearly systematic errors so they should be avoidable. Do you suspect that there is something wrong with our implementation of Canny? Or can it be improved? If so, it would be well worth investigating further! St?fan From stefanv at berkeley.edu Wed Mar 11 15:23:36 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Wed, 11 Mar 2015 12:23:36 -0700 Subject: SciPy 2015 Conference: Call for proposals In-Reply-To: References: Message-ID: On Mon, Mar 9, 2015 at 5:19 PM, Tony Yu wrote: > SciPy 2015 is requesting proposals for tutorials, talks, and posters for > this year's conference. Note that there's a mini-symposium on > "Visualization, Vision and Imaging", so this year will be particularly > relevant for people here. 
I am interested in presenting a tutorial this year, but will be gone most of June, so I need a partner in crime to share in the fun. Any volunteers? Stéfan From silvertrumpet999 at gmail.com Wed Mar 11 19:39:42 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Wed, 11 Mar 2015 16:39:42 -0700 (PDT) Subject: SciPy 2015 Conference: Call for proposals In-Reply-To: References: Message-ID: I'm definitely in this year! Blocked off the entire week for SciPy. Josh On Wednesday, March 11, 2015 at 2:24:02 PM UTC-5, stefanv wrote: > > On Mon, Mar 9, 2015 at 5:19 PM, Tony Yu wrote: > > SciPy 2015 is requesting proposals for tutorials, talks, and posters for > > this year's conference. Note that there's a mini-symposium on > > "Visualization, Vision and Imaging", so this year will be particularly > > relevant for people here. > > I am interested in presenting a tutorial this year, but will be gone > most of June, so I need a partner in crime to share in the fun. Any > volunteers? > > Stéfan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From claiborne.morton at gmail.com Wed Mar 11 17:15:33 2015 From: claiborne.morton at gmail.com (Claiborne Morton) Date: Wed, 11 Mar 2015 17:15:33 -0400 Subject: Equivalent of watershed for cutting connected components of an image of particles? In-Reply-To: References: Message-ID: Hey, thanks for the help, here are a few other issues we are running into. When a sickle cell is in contact with a regular cell, we cannot find a way to separate the two. Also, the bottom-middle circle is a healthy blood cell that is on its side. The watershed function tends to break these cells into two or more partitions when they should not be separated. Any idea on how to fix these problems?
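One standard recipe for both failure modes (touching cells left merged, and single cells shattered) is marker-controlled watershed on the distance transform, where peak_local_max's min_distance and threshold_abs control how many markers survive. A self-contained sketch with two synthetic touching discs (all sizes are illustrative):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Two overlapping discs standing in for touching cells.
yy, xx = np.mgrid[0:80, 0:80]
mask = (((yy - 40) ** 2 + (xx - 28) ** 2) < 15 ** 2) | \
       (((yy - 40) ** 2 + (xx - 52) ** 2) < 15 ** 2)

# On the distance map, each cell centre is a peak. threshold_abs
# suppresses shallow peaks (small fragments); min_distance stops a
# single cell from producing two markers and being split in two.
distance = ndi.distance_transform_edt(mask)
peaks = peak_local_max(distance, min_distance=10, threshold_abs=8, labels=mask)

markers = np.zeros(mask.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

labels = watershed(-distance, markers, mask=mask)
```

Raising min_distance (or smoothing the distance map) is the usual knob to turn when watershed breaks one cell into several pieces.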
On Tue, Mar 10, 2015 at 2:12 PM, Claiborne Morton < claiborne.morton at gmail.com> wrote: > Hey guys, Im following up on Adam's behalf, but this is an example of an > image we are working with in trying to separate cells that are touching > each other. > Also you can see the top middle particle has a crescent shape, but is > actually a healthy red blood cell that has been segmented incorrectly > because of glare. Is that a way to connect the two tips of the shape so > that I could then run "binary_fill_holes()" to correctly segment the cell. > Thanks! > > > On Wednesday, February 18, 2015 at 7:04:10 PM UTC-5, Adam Hughes wrote: > >> Hi, >> >> In ImageJ, one can select watershedding to break up connected regions of >> particles. Are there any examples of using watershed in this capacity in >> scikit image? All of the examples I see seem to use watershedding to do >> segmentation, not to break connected particles in an already-segmented >> black and white image. >> >> Also, is there a straightforward way to remove particles on a the edge of >> an image? Sorry, googling is failing me, but I know this is possible. >> >> Thanks >> > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/VL6SZTWvAz8/unsubscribe. > To unsubscribe from this group and all its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: WS Errors.png Type: image/png Size: 75805 bytes Desc: not available URL: From tcaswell at gmail.com Wed Mar 11 19:01:39 2015 From: tcaswell at gmail.com (Thomas Caswell) Date: Wed, 11 Mar 2015 23:01:39 +0000 Subject: Equivalent of watershed for cutting connected components of an image of particles? References: Message-ID: Jumping in from the peanut gallery, can you reliably identify when the segmentation has gone sideways? Looking at the second moment, area to bounding box area, or some other compactness measure? If you can get away with it, you could just drop the offending cells. If not, then you can try eroding the joined cells until they split into multiple segments. Tom On Wed, Mar 11, 2015, 17:15 Claiborne Morton wrote: > Hey thanks for the help, here are a few other issues we are running into. > When a sickle cell is in contact with a regular cell, we cannot find a way > to separate the two. Also bottom-middle circle is of a healthy blood cell > that is on its side. The watershed function tends to break these cells into > two or more partitions when the should not be separated. > Any idea on how to fix these problems? > > > On Tue, Mar 10, 2015 at 2:12 PM, Claiborne Morton < > claiborne.morton at gmail.com> wrote: > >> Hey guys, Im following up on Adam's behalf, but this is an example of an >> image we are working with in trying to separate cells that are touching >> each other. >> Also you can see the top middle particle has a crescent shape, but is >> actually a healthy red blood cell that has been segmented incorrectly >> because of glare. Is that a way to connect the two tips of the shape so >> that I could then run "binary_fill_holes()" to correctly segment the cell. >> Thanks! >> >> >> On Wednesday, February 18, 2015 at 7:04:10 PM UTC-5, Adam Hughes wrote: >> >>> Hi, >>> >>> In ImageJ, one can select watershedding to break up connected regions of >>> particles. Are there any examples of using watershed in this capacity in >>> scikit image?
All of the examples I see seem to use watershedding to do >>> segmentation, not to break connected particles in an already-segmented >>> black and white image. >>> >>> Also, is there a straightforward way to remove particles on a the edge >>> of an image? Sorry, googling is failing me, but I know this is possible. >>> >>> Thanks >>> >> -- >> You received this message because you are subscribed to a topic in the >> Google Groups "scikit-image" group. >> To unsubscribe from this topic, visit >> https://groups.google.com/d/topic/scikit-image/VL6SZTWvAz8/unsubscribe. >> To unsubscribe from this group and all its topics, send an email to >> scikit-image+unsubscribe at googlegroups.com. > > >> For more options, visit https://groups.google.com/d/optout. >> > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Thu Mar 12 04:23:42 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Thu, 12 Mar 2015 01:23:42 -0700 Subject: SciPy 2015 Conference: Call for proposals In-Reply-To: References: Message-ID: On Wed, Mar 11, 2015 at 4:39 PM, Josh Warner wrote: > I'm definitely in this year! Blocked off the entire week for SciPy. Excellent, Josh! http://scipy2015.scipy.org/ehome/115969/259288/?& It's a 4 hour tutorial, which is quite lengthy and will mean having to add some extra material. Do you have any bandwidth to work on an abstract? The deadline is the 16th, I think. St?fan From arnodietz86 at googlemail.com Thu Mar 12 06:15:03 2015 From: arnodietz86 at googlemail.com (Arno Dietz) Date: Thu, 12 Mar 2015 03:15:03 -0700 (PDT) Subject: hough ellipse fit inaccurate? 
In-Reply-To: References: <8b232a1c-052c-44fb-8ed6-5d8bd5380761@googlegroups.com> <497ca5a4-2874-4650-8cde-bd160f1125d0@googlegroups.com> <762d8ac0-1081-44aa-9420-af88f4cc590d@googlegroups.com> <057812d8-80cd-4e8c-91e5-40bbf0dd6f8a@googlegroups.com> <982c4c08-fbbd-4db1-8a25-a210fee7f9f4@googlegroups.com> <22D5E74A-176B-4241-9AB6-299EC4A21DD7@demuc.de> <5E085A92-AB38-472F-99F3-150E604D4CD8@demuc.de> <2cba4027-77ce-4187-ab0a-54467864b562@googlegroups.com> <7004AA9B-CF59-4E3D-B978-9D5FEE90EB5B@demuc.de> <67EC0A3D-E3ED-4799-AEAC-6D43F52CA7D9@demuc.de> <22ab8d1a-8ad5-45fa-a427-316685f3a0fa@googlegroups.com> Message-ID: Hi Stéfan, I don't know if I did something wrong, but I think there are not that many ways to go wrong with canny. I also just realised that when I use cv2.fitEllipse() with the same canny input, the ellipses are detected very accurately, with no outliers. So maybe canny is not the problem? Regards, Arno -------------- next part -------------- An HTML attachment was scrubbed... URL: From hughesadam87 at gmail.com Thu Mar 12 13:09:11 2015 From: hughesadam87 at gmail.com (Adam Hughes) Date: Thu, 12 Mar 2015 10:09:11 -0700 (PDT) Subject: Equivalent of watershed for cutting connected components of an image of particles? In-Reply-To: References: Message-ID: <94f8d82b-c978-4f68-b490-defcb00c8925@googlegroups.com> Thomas, Unfortunately, the cells of interest are the sickle cells, so isolating cells on their side and sickle cells is really important. If anything, it would be better to toss out the healthy cells. When you say the "segmentation has gone sideways", what do you mean exactly? Juan, What would binary closing do in particular? I didn't understand what you were saying. On Wednesday, March 11, 2015 at 7:01:54 PM UTC-4, Thomas Caswell wrote: > > Jumping in from the peanut gallery, can you reliable identify when the > segmentation has gone sideways? Looking at the second moment, area to > bounding box area, or some other compactness measure?
> > If you can get away with it, you could just drop the offending cells. If > not, then you can try eroding the joined cells until they split into > multiple segments. > > Tom > > On Wed, Mar 11, 2015, 17:15 Claiborne Morton > wrote: > >> Hey thanks for the help, here are a few other issues we are running into. >> When a sickle cell is in contact with a regular cell, we cannot find a way >> to separate the two. Also bottom-middle circle is of a healthy blood cell >> that is on its side. The watershed function tends to break these cells into >> two or more partitions when the should not be separated. >> Any idea on how to fix these problems? >> >> >> On Tue, Mar 10, 2015 at 2:12 PM, Claiborne Morton > > wrote: >> >>> Hey guys, Im following up on Adam's behalf, but this is an example of an >>> image we are working with in trying to separate cells that are touching >>> each other. >>> Also you can see the top middle particle has a crescent shape, but is >>> actually a healthy red blood cell that has been segmented incorrectly >>> because of glare. Is that a way to connect the two tips of the shape so >>> that I could then run "binary_fill_holes()" to correctly segment the cell. >>> Thanks! >>> >>> >>> On Wednesday, February 18, 2015 at 7:04:10 PM UTC-5, Adam Hughes wrote: >>> >>>> Hi, >>>> >>>> In ImageJ, one can select watershedding to break up connected regions >>>> of particles. Are there any examples of using watershed in this capacity >>>> in scikit image? All of the examples I see seem to use watershedding to >>>> do segmentation, not to break connected particles in an already-segmented >>>> black and white image. >>>> >>>> Also, is there a straightforward way to remove particles on a the edge >>>> of an image? Sorry, googling is failing me, but I know this is possible. >>>> >>>> Thanks >>>> >>> -- >>> You received this message because you are subscribed to a topic in the >>> Google Groups "scikit-image" group. 
>>> To unsubscribe from this topic, visit >>> https://groups.google.com/d/topic/scikit-image/VL6SZTWvAz8/unsubscribe. >>> To unsubscribe from this group and all its topics, send an email to >>> scikit-image... at googlegroups.com . >> >> >>> For more options, visit https://groups.google.com/d/optout. >>> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image... at googlegroups.com . >> For more options, visit https://groups.google.com/d/optout. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From claiborne.morton at gmail.com Thu Mar 12 14:15:49 2015 From: claiborne.morton at gmail.com (Claiborne Morton) Date: Thu, 12 Mar 2015 11:15:49 -0700 (PDT) Subject: peak_local_max() Question. Message-ID: <12b951f2-c51a-4c31-9bd3-a0aa9ec993b4@googlegroups.com> Hey guys, I am currently using this function for water-shedding in a project of mine. I am a bit curious as to what the parameter threshold_abs is actually doing. The website here: http://scikit-image.org/docs/dev/api/skimage.feature.html#skimage.feature.peak_local_max says "Minimum intensity of peaks", but I am not sure how this applies to a binary image where the intensity of a pixel is either one or zero. When I set the parameter equal to an integer, say 12, it removes the smaller particles from the image, which is nice because that is something I need to do, but I am just not sure why it does this. Below is a comparison of the results with and without the parameter. Could someone please explain what is happening? Images are attached for: With threshold_abs = 12 Without threshold_abs (I assume the default is threshold_abs = zero?) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Thresh_12.png Type: image/png Size: 19071 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Without_Thresh.png Type: image/png Size: 20263 bytes Desc: not available URL: From siggin at gmail.com Thu Mar 12 15:05:42 2015 From: siggin at gmail.com (Sigmund) Date: Thu, 12 Mar 2015 12:05:42 -0700 (PDT) Subject: peak_local_max() Question. In-Reply-To: <12b951f2-c51a-4c31-9bd3-a0aa9ec993b4@googlegroups.com> References: <12b951f2-c51a-4c31-9bd3-a0aa9ec993b4@googlegroups.com> Message-ID: funny! Struggling with the same function at the same time. All I can say. Yes, it behaves funny and it stops being funny when you change line 141 in Lib\site-packages\skimage\feature\peak.py the max() has to be a min() Siggi -------------- next part -------------- An HTML attachment was scrubbed... URL: From siggin at gmail.com Thu Mar 12 15:42:05 2015 From: siggin at gmail.com (Sigmund) Date: Thu, 12 Mar 2015 12:42:05 -0700 (PDT) Subject: peak_local_max() Question. In-Reply-To: References: <12b951f2-c51a-4c31-9bd3-a0aa9ec993b4@googlegroups.com> Message-ID: On Thursday, March 12, 2015 at 8:05:42 PM UTC+1, Sigmund wrote: > funny! Struggling with the same function at the same time. > All I can say. Yes, it behaves funny and it stops being funny when you > change line 141 in Lib\site-packages\skimage\feature\peak.py > the max() has to be a min() > but only if you want your absolute threshold to be smaller than the default relative threshold which is 0.1 . In my opinion it would be better if the options "threshold_abs" and "threshold_rel" can't coexist. btw I'm referring to version='0.9.3' siggi -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Thu Mar 12 18:10:55 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Thu, 12 Mar 2015 15:10:55 -0700 Subject: peak_local_max() Question. 
In-Reply-To: References: <12b951f2-c51a-4c31-9bd3-a0aa9ec993b4@googlegroups.com> Message-ID: On Thu, Mar 12, 2015 at 12:42 PM, Sigmund wrote: > > > On Thursday, March 12, 2015 at 8:05:42 PM UTC+1, Sigmund wrote: >> >> funny! Struggling with the same function at the same time. >> All I can say. Yes, it behaves funny and it stops being funny when you >> change line 141 in Lib\site-packages\skimage\feature\peak.py >> the max() has to be a min() > > > but only if you want your absolute threshold to be smaller than the default > relative threshold which is 0.1 . > > In my opinion it would be better if the options "threshold_abs" and > "threshold_rel" can't coexist. It is known that the API of this function is problematic, and I'd like to see it fixed. But it means we will need someone to do some careful evaluation and propose fixes. Sigmund, Claiborne, would you be interested in helping us figure this one out? Stéfan From silvertrumpet999 at gmail.com Thu Mar 12 18:43:56 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Thu, 12 Mar 2015 15:43:56 -0700 (PDT) Subject: SciPy 2015 Conference: Call for proposals In-Reply-To: References: Message-ID: <19eeb037-3b86-413b-bfa4-0f258295c30a@googlegroups.com> I can toss it around this weekend. Do we have a starting point, or should we work from scratch? Josh On Thursday, March 12, 2015 at 3:24:06 AM UTC-5, stefanv wrote: > > On Wed, Mar 11, 2015 at 4:39 PM, Josh Warner wrote: > > I'm definitely in this year! Blocked off the entire week for SciPy. > > Excellent, Josh! > > http://scipy2015.scipy.org/ehome/115969/259288/?& > > It's a 4 hour tutorial, which is quite lengthy and will mean having to > add some extra material. > > Do you have any bandwidth to work on an abstract? The deadline is the > 16th, I think. > > Stéfan > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yutaxsato at gmail.com Thu Mar 12 03:04:13 2015 From: yutaxsato at gmail.com (Yuta Sato) Date: Thu, 12 Mar 2015 16:04:13 +0900 Subject: Range of beta values in segmentation algorithm? Message-ID: In the following skimage.segmentation.random_walker algorithm: What is the range of 'beta' values that can be supplied? I am working with a single band 8bit unsigned image. Is it 0 to 255? skimage.segmentation.random_walker(data, labels, beta=130, mode='bf', tol=0.001, copy=True,multichannel=False, return_full_prob=False, spacing=None) beta : float [Penalization coefficient for the random walker motion (the greater beta, the more difficult the diffusion)] Thanks for your support. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tcaswell at gmail.com Thu Mar 12 13:16:03 2015 From: tcaswell at gmail.com (Thomas Caswell) Date: Thu, 12 Mar 2015 17:16:03 +0000 Subject: Equivalent of watershed for cutting connected components of an image of particles? In-Reply-To: <94f8d82b-c978-4f68-b490-defcb00c8925@googlegroups.com> References: <94f8d82b-c978-4f68-b490-defcb00c8925@googlegroups.com> Message-ID: By 'sideways' I mean "didn't work right". On Thu, Mar 12, 2015 at 1:09 PM Adam Hughes wrote: > Thomas, > > Unfortunately, the cells of interest are the sickle cells, so isolating > cells on their side and sickle cells is really important. If anything, it > would be better to toss out the healthy cells. When you say the > "segmentation has gone sideways", what do you mean exactly? > > Juan, > > What would binary closing do in particular? I didn't understand what you > were saying > > > On Wednesday, March 11, 2015 at 7:01:54 PM UTC-4, Thomas Caswell wrote: > >> Jumping in from the peanut gallery, can you reliable identify when the >> segmentation has gone sideways? Looking at the second moment, area to >> bounding box area, or some other compactness measure? >> >> If you can get away with it, you could just drop the offending cells. 
If >> not, then you can try eroding the joined cells until they split into >> multiple segments. >> >> Tom >> >> On Wed, Mar 11, 2015, 17:15 Claiborne Morton >> wrote: >> > Hey thanks for the help, here are a few other issues we are running into. >>> When a sickle cell is in contact with a regular cell, we cannot find a way >>> to separate the two. Also bottom-middle circle is of a healthy blood cell >>> that is on its side. The watershed function tends to break these cells into >>> two or more partitions when the should not be separated. >>> Any idea on how to fix these problems? >>> >>> >>> On Tue, Mar 10, 2015 at 2:12 PM, Claiborne Morton >> > wrote: >>> >>>> Hey guys, Im following up on Adam's behalf, but this is an example of >>>> an image we are working with in trying to separate cells that are touching >>>> each other. >>>> Also you can see the top middle particle has a crescent shape, but is >>>> actually a healthy red blood cell that has been segmented incorrectly >>>> because of glare. Is that a way to connect the two tips of the shape so >>>> that I could then run "binary_fill_holes()" to correctly segment the cell. >>>> Thanks! >>>> >>>> >>>> On Wednesday, February 18, 2015 at 7:04:10 PM UTC-5, Adam Hughes wrote: >>>> >>>>> Hi, >>>>> >>>>> In ImageJ, one can select watershedding to break up connected regions >>>>> of particles. Are there any examples of using watershed in this capacity >>>>> in scikit image? All of the examples I see seem to use watershedding to >>>>> do segmentation, not to break connected particles in an already-segmented >>>>> black and white image. >>>>> >>>>> Also, is there a straightforward way to remove particles on a the edge >>>>> of an image? Sorry, googling is failing me, but I know this is possible. >>>>> >>>>> Thanks >>>>> >>>> -- >>>> You received this message because you are subscribed to a topic in the >>>> Google Groups "scikit-image" group. 
>>>> To unsubscribe from this topic, visit https://groups.google.com/d/ >>>> topic/scikit-image/VL6SZTWvAz8/unsubscribe. >>>> >>> To unsubscribe from this group and all its topics, send an email to >>>> scikit-image... at googlegroups.com. >>> >>> >>>> For more options, visit https://groups.google.com/d/optout. >>>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "scikit-image" group. >>> >> To unsubscribe from this group and stop receiving emails from it, send an >>> email to scikit-image... at googlegroups.com. >> >> >>> For more options, visit https://groups.google.com/d/optout. >>> >> -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dzungng89 at gmail.com Thu Mar 12 20:27:58 2015 From: dzungng89 at gmail.com (Dzung Nguyen) Date: Thu, 12 Mar 2015 17:27:58 -0700 (PDT) Subject: Steerable pyramid Message-ID: Hi all, I implemented Steerable pyramid (similar to Gabor transform). Would skimage community be interested in this? I am thinking of adding API for image transforms, and have all popular transform out there? (orthogonal, Gabor, steerable etc) https://github.com/andreydung/Steerable-filter -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Thu Mar 12 20:48:35 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Thu, 12 Mar 2015 17:48:35 -0700 (PDT) Subject: Steerable pyramid In-Reply-To: References: Message-ID: <1c82516d-08a5-46d4-b8fb-4dc4dcb8328d@googlegroups.com> We have Gabor filters implemented in `skimage.filters`, but IMO I'd be open to adding alternative perceptual filters. Looks like nice clean work! 
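[Editor's note] Josh's pointer to the Gabor filters is worth unpacking for readers new to perceptual filters: a Gabor filter is a complex sinusoid (the carrier) modulated by a Gaussian envelope, and steerable pyramids generalize the same idea across orientations and scales. Below is a minimal numpy sketch of such a kernel — an illustration only, not the `skimage.filters` implementation; the normalization and parameter names are my own choices:

```python
import numpy as np

def gabor_kernel(frequency, theta=0.0, sigma=3.0, size=15):
    """Return a complex Gabor kernel: a complex sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the sinusoid oscillates along direction `theta`.
    rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * 2.0 * np.pi * frequency * rot)
    kernel = envelope * carrier
    # Normalize so the magnitudes sum to 1.
    return kernel / np.abs(kernel).sum()

kernel = gabor_kernel(frequency=0.25)
print(kernel.shape)  # (15, 15)
```

Convolving an image with the real and imaginary parts of this kernel (e.g. via `scipy.ndimage.convolve`) gives the even- and odd-phase responses at orientation `theta`.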
On Thursday, March 12, 2015 at 7:29:18 PM UTC-5, Dzung Nguyen wrote: > > Hi all, > > I implemented Steerable pyramid (similar to Gabor transform). Would > skimage community be interested in this? I am thinking of adding API for > image transforms, and have all popular transform out there? (orthogonal, > Gabor, steerable etc) > > https://github.com/andreydung/Steerable-filter > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Thu Mar 12 22:03:01 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 12 Mar 2015 19:03:01 -0700 (PDT) Subject: SciPy 2015 Conference: Call for proposals In-Reply-To: <19eeb037-3b86-413b-bfa4-0f258295c30a@googlegroups.com> References: <19eeb037-3b86-413b-bfa4-0f258295c30a@googlegroups.com> Message-ID: <1426212181175.ae302573@Nodemailer> Josh, did you get my email with last year's proposal? On Fri, Mar 13, 2015 at 9:43 AM, Josh Warner wrote: > I can toss it around this weekend. Do we have a starting point, or should > we work from scratch? > Josh > On Thursday, March 12, 2015 at 3:24:06 AM UTC-5, stefanv wrote: >> >> On Wed, Mar 11, 2015 at 4:39 PM, Josh Warner wrote: >> > I'm definitely in this year! Blocked off the entire week for SciPy. >> >> Excellent, Josh! >> >> http://scipy2015.scipy.org/ehome/115969/259288/?& >> >> It's a 4 hour tutorial, which is quite lengthy and will mean having to >> add some extra material. >> >> Do you have any bandwidth to work on an abstract? The deadline is the >> 16th, I think. >> >> St?fan >> > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jni.soma at gmail.com Thu Mar 12 22:51:30 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 12 Mar 2015 19:51:30 -0700 (PDT) Subject: Equivalent of watershed for cutting connected components of an image of particles? In-Reply-To: <94f8d82b-c978-4f68-b490-defcb00c8925@googlegroups.com> References: <94f8d82b-c978-4f68-b490-defcb00c8925@googlegroups.com> Message-ID: <1426215089779.b1771889@Nodemailer> Hey Adam, "Closing" is a dilation followed by an erosion. If you have "gaps" that are smaller than the footprint (aka structuring element) of the operation, they will be "closed". =) (e.g. a C turning into an O if the distance between the tips of the C is small enough.) See a small example here: http://nbviewer.ipython.org/github/jni/skimage-tutorials/blob/master/scipy-2014/solved/3_morphological_operations.ipynb Though come to think of it we should change the shape to be a C rather than an O with a thin bit. Juan. On Fri, Mar 13, 2015 at 4:09 AM, Adam Hughes wrote: > Thomas, > Unfortunately, the cells of interest are the sickle cells, so isolating > cells on their side and sickle cells is really important. If anything, it > would be better to toss out the healthy cells. When you say the > "segmentation has gone sideways", what do you mean exactly? > Juan, > What would binary closing do in particular? I didn't understand what you > were saying > On Wednesday, March 11, 2015 at 7:01:54 PM UTC-4, Thomas Caswell wrote: >> >> Jumping in from the peanut gallery, can you reliable identify when the >> segmentation has gone sideways? Looking at the second moment, area to >> bounding box area, or some other compactness measure? >> >> If you can get away with it, you could just drop the offending cells. If >> not, then you can try eroding the joined cells until they split into >> multiple segments. >> >> Tom >> >> On Wed, Mar 11, 2015, 17:15 Claiborne Morton > > wrote: >> >>> Hey thanks for the help, here are a few other issues we are running into. 
>>> When a sickle cell is in contact with a regular cell, we cannot find a way >>> to separate the two. Also bottom-middle circle is of a healthy blood cell >>> that is on its side. The watershed function tends to break these cells into >>> two or more partitions when the should not be separated. >>> Any idea on how to fix these problems? >>> >>> >>> On Tue, Mar 10, 2015 at 2:12 PM, Claiborne Morton >> > wrote: >>> >>>> Hey guys, Im following up on Adam's behalf, but this is an example of an >>>> image we are working with in trying to separate cells that are touching >>>> each other. >>>> Also you can see the top middle particle has a crescent shape, but is >>>> actually a healthy red blood cell that has been segmented incorrectly >>>> because of glare. Is that a way to connect the two tips of the shape so >>>> that I could then run "binary_fill_holes()" to correctly segment the cell. >>>> Thanks! >>>> >>>> >>>> On Wednesday, February 18, 2015 at 7:04:10 PM UTC-5, Adam Hughes wrote: >>>> >>>>> Hi, >>>>> >>>>> In ImageJ, one can select watershedding to break up connected regions >>>>> of particles. Are there any examples of using watershed in this capacity >>>>> in scikit image? All of the examples I see seem to use watershedding to >>>>> do segmentation, not to break connected particles in an already-segmented >>>>> black and white image. >>>>> >>>>> Also, is there a straightforward way to remove particles on a the edge >>>>> of an image? Sorry, googling is failing me, but I know this is possible. >>>>> >>>>> Thanks >>>>> >>>> -- >>>> You received this message because you are subscribed to a topic in the >>>> Google Groups "scikit-image" group. >>>> To unsubscribe from this topic, visit >>>> https://groups.google.com/d/topic/scikit-image/VL6SZTWvAz8/unsubscribe. >>>> To unsubscribe from this group and all its topics, send an email to >>>> scikit-image... at googlegroups.com . >>> >>> >>>> For more options, visit https://groups.google.com/d/optout. 
>>>> >>> -- >>> You received this message because you are subscribed to the Google Groups >>> "scikit-image" group. >>> To unsubscribe from this group and stop receiving emails from it, send an >>> email to scikit-image... at googlegroups.com . >>> For more options, visit https://groups.google.com/d/optout. >>> >> > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dzungng89 at gmail.com Fri Mar 13 16:09:27 2015 From: dzungng89 at gmail.com (Dzung Nguyen) Date: Fri, 13 Mar 2015 13:09:27 -0700 (PDT) Subject: Steerable pyramid In-Reply-To: <1c82516d-08a5-46d4-b8fb-4dc4dcb8328d@googlegroups.com> References: <1c82516d-08a5-46d4-b8fb-4dc4dcb8328d@googlegroups.com> Message-ID: <842195c1-5efe-412a-9bcf-47806e017012@googlegroups.com> I created a PR here: https://github.com/scikit-image/scikit-image/pull/1425 On Thursday, March 12, 2015 at 7:48:35 PM UTC-5, Josh Warner wrote: > > We have Gabor filters implemented in `skimage.filters`, but IMO I'd be > open to adding alternative perceptual filters. > > Looks like nice clean work! > > > On Thursday, March 12, 2015 at 7:29:18 PM UTC-5, Dzung Nguyen wrote: >> >> Hi all, >> >> I implemented Steerable pyramid (similar to Gabor transform). Would >> skimage community be interested in this? I am thinking of adding API for >> image transforms, and have all popular transform out there? (orthogonal, >> Gabor, steerable etc) >> >> https://github.com/andreydung/Steerable-filter >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yutaxsato at gmail.com Fri Mar 13 04:19:29 2015 From: yutaxsato at gmail.com (Yuta Sato) Date: Fri, 13 Mar 2015 17:19:29 +0900 Subject: Apply segmentation to a large binary image Message-ID: Dear SKIMAGE Developers and Users: I want to use the following algorithm in a large binary image that does not fit into my PC memory. So, I am thinking to split my large image into tiles and apply algorithm one by one. However, the original border definition change when I do it in parts. I need the result as applied in original full image. How can I do it efficiently? skimage.segmentation.clear_border(image, buffer_size=0, bgval=0) Thanks for your ideas. -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Fri Mar 13 21:22:56 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Fri, 13 Mar 2015 18:22:56 -0700 (PDT) Subject: SciPy 2015 Conference: Call for proposals In-Reply-To: <1426212181175.ae302573@Nodemailer> References: <19eeb037-3b86-413b-bfa4-0f258295c30a@googlegroups.com> <1426212181175.ae302573@Nodemailer> Message-ID: <90faf374-f354-47e3-b59f-92ea003be013@googlegroups.com> Got it, thanks - just now getting time to deal with it ;) On Thursday, March 12, 2015 at 9:03:03 PM UTC-5, Juan Nunez-Iglesias wrote: > > Josh, did you get my email with last year's proposal? > > > On Fri, Mar 13, 2015 at 9:43 AM, Josh Warner wrote: > >> I can toss it around this weekend. Do we have a starting point, or should >> we work from scratch? >> >> Josh >> >> >> On Thursday, March 12, 2015 at 3:24:06 AM UTC-5, stefanv wrote: >>> >>> On Wed, Mar 11, 2015 at 4:39 PM, Josh Warner wrote: >>> > I'm definitely in this year! Blocked off the entire week for SciPy. >>> >>> Excellent, Josh! >>> >>> http://scipy2015.scipy.org/ehome/115969/259288/?& >>> >>> It's a 4 hour tutorial, which is quite lengthy and will mean having to >>> add some extra material. >>> >>> Do you have any bandwidth to work on an abstract? 
The deadline is the >>> 16th, I think. >>> >>> St?fan >>> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Fri Mar 13 22:04:10 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Fri, 13 Mar 2015 19:04:10 -0700 (PDT) Subject: Apply segmentation to a large binary image In-Reply-To: References: Message-ID: <1426298649642.e76b8ca@Nodemailer> Hey Yuta, You'll need to do some stitching out-of-core. That's a really tricky problem and I don't have any ready-made solutions for you. The solution will depend on the nature of your segments. The only thing I would recommend is that you use a format such as HDF5 (you can use the excellent h5py library) that allows random access into the underlying disk data. Other than that, as I said, to my knowledge you'll have to develop your own stitching: segment *overlapping* tiles independently in memory, and when it comes time to write to disk, load the tile and overlapping tiles that have already been segmented, and resolve label mapping then... Generally, think of it this way: tile i has already been segmented and written out. We now want to write out tile j, which overlaps tile i. Labels from tile i that intersect labels from tile j in the overlap region should be matched. labels in tile j that *don't* intersect tile i should be relabelled to ensure they are unique with respect to tile i. Of course this gets a bit more complicated in 2D or 3D... Juan. On Fri, Mar 13, 2015 at 7:20 PM, Yuta Sato wrote: > Dear SKIMAGE Developers and Users: > I want to use the following algorithm in a large binary image that does not > fit into my PC memory. 
So, I am thinking to split my large image into tiles > and apply algorithm one by one. However, the original border definition > change when I do it in parts. I need the result as applied in original full > image. How can I do it efficiently? > skimage.segmentation.clear_border(image, buffer_size=0, bgval=0) > Thanks for your ideas. > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahmed.osman99 at gmail.com Sat Mar 14 14:15:28 2015 From: ahmed.osman99 at gmail.com (Ahmed Osman) Date: Sat, 14 Mar 2015 11:15:28 -0700 (PDT) Subject: COSFIRE Filters feature Message-ID: <27532c40-db96-47ea-983f-fd161729d197@googlegroups.com> Hi All, I am planning to implement the COSFIRE feature. I am new to contributing to scikit-image, so any form of mentorship is really valuable. Can some one let me know what I need to do for a smooth pull request merge? thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Sat Mar 14 15:58:43 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Sat, 14 Mar 2015 12:58:43 -0700 (PDT) Subject: COSFIRE Filters feature In-Reply-To: <27532c40-db96-47ea-983f-fd161729d197@googlegroups.com> References: <27532c40-db96-47ea-983f-fd161729d197@googlegroups.com> Message-ID: <647374d4-2190-4398-8089-afd167eeeb44@googlegroups.com> Hi Ahmed, Making your first contribution can feel like a challenge. We?ve tried to provide a guide to this that covers the best practices and also tells you *why* it?s important to do these things, instead of just providing a magic set of steps/Git commands. 
The guide is located here: http://scikit-image.org/docs/dev/contribute.html#development-process If you have questions beyond the scope of that process, or need something explained, we?d be happy to help. Regards, Josh On Saturday, March 14, 2015 at 2:53:40 PM UTC-5, Ahmed Osman wrote: Hi All, > > I am planning to implement the COSFIRE feature. I am new to contributing > to scikit-image, so any form of mentorship is really valuable. Can some one > let me know what I need to do for a smooth pull request merge? > > thanks > ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Sat Mar 14 19:24:54 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Sat, 14 Mar 2015 16:24:54 -0700 (PDT) Subject: Apply segmentation to a large binary image In-Reply-To: <1426298649642.e76b8ca@Nodemailer> References: <1426298649642.e76b8ca@Nodemailer> Message-ID: Would it be possible to generalize / refactor `clear_border` to a function which removes all points connected to a specific pixel/voxel? That would greatly simplify the work needed here. I thought we had some sort of `remove_object` functionality like this, but I don't see it. Josh On Friday, March 13, 2015 at 9:04:12 PM UTC-5, Juan Nunez-Iglesias wrote: > > Hey Yuta, > > You'll need to do some stitching out-of-core. That's a really tricky > problem and I don't have any ready-made solutions for you. The solution > will depend on the nature of your segments. The only thing I would > recommend is that you use a format such as HDF5 (you can use the excellent > h5py library) that allows random access into the underlying disk data. > > Other than that, as I said, to my knowledge you'll have to develop your > own stitching: segment *overlapping* tiles independently in memory, and > when it comes time to write to disk, load the tile and overlapping tiles > that have already been segmented, and resolve label mapping then... 
> > Generally, think of it this way: tile i has already been segmented and > written out. We now want to write out tile j, which overlaps tile i. Labels > from tile i that intersect labels from tile j in the overlap region should > be matched. labels in tile j that *don't* intersect tile i should be > relabelled to ensure they are unique with respect to tile i. > > Of course this gets a bit more complicated in 2D or 3D... > > Juan. > > > > > On Fri, Mar 13, 2015 at 7:20 PM, Yuta Sato wrote: > >> Dear SKIMAGE Developers and Users: >> >> I want to use the following algorithm in a large binary image that does >> not fit into my PC memory. So, I am thinking to split my large image into >> tiles and apply algorithm one by one. However, the original border >> definition change when I do it in parts. I need the result as applied in >> original full image. How can I do it efficiently? >> >> skimage.segmentation.clear_border(image, buffer_size=0, bgval=0) >> >> Thanks for your ideas. >> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Sat Mar 14 21:18:16 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Sat, 14 Mar 2015 18:18:16 -0700 (PDT) Subject: Apply segmentation to a large binary image In-Reply-To: References: Message-ID: <1426382296175.071e7ea3@Nodemailer> Josh, you might be thinking of morphology.remove_small_objects, but that is O(image.size), rather than O(sum(image == label)), which is what you are after. In fact we would need a flood-fill algorithm, which we don't have... That would be a fantastic addition. 
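[Editor's note] Since skimage had no flood fill at the time Juan wrote this, here is a minimal sketch of the seed-based removal being discussed: a breadth-first flood fill that zeroes out the 4-connected component containing a seed pixel, touching only O(component size) pixels rather than O(image.size). The function name and the choice of 4-connectivity are mine, not a skimage API:

```python
from collections import deque

import numpy as np

def flood_delete(image, seed):
    """Zero out the 4-connected component of nonzero pixels containing `seed`.

    Cost is proportional to the size of the component, not the image.
    """
    out = image.copy()
    if not out[seed]:
        return out  # seed is background; nothing to remove
    queue = deque([seed])
    out[seed] = 0
    while queue:
        r, c = queue.popleft()
        # Visit the 4-connected neighbours still marked foreground.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < out.shape[0] and 0 <= nc < out.shape[1] and out[nr, nc]:
                out[nr, nc] = 0
                queue.append((nr, nc))
    return out

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1]], dtype=np.uint8)
print(flood_delete(img, (0, 0)))
# The component containing (0, 0) is removed; the one in the last column survives.
```

Running this over every border pixel would reproduce `clear_border` one component at a time, which is what makes out-of-core tiling tractable.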
On Sun, Mar 15, 2015 at 10:24 AM, Josh Warner wrote: > Would it be possible to generalize / refactor `clear_border` to a function > which removes all points connected to a specific pixel/voxel? That would > greatly simplify the work needed here. > I thought we had some sort of `remove_object` functionality like this, but > I don't see it. > Josh > On Friday, March 13, 2015 at 9:04:12 PM UTC-5, Juan Nunez-Iglesias wrote: >> >> Hey Yuta, >> >> You'll need to do some stitching out-of-core. That's a really tricky >> problem and I don't have any ready-made solutions for you. The solution >> will depend on the nature of your segments. The only thing I would >> recommend is that you use a format such as HDF5 (you can use the excellent >> h5py library) that allows random access into the underlying disk data. >> >> Other than that, as I said, to my knowledge you'll have to develop your >> own stitching: segment *overlapping* tiles independently in memory, and >> when it comes time to write to disk, load the tile and overlapping tiles >> that have already been segmented, and resolve label mapping then... >> >> Generally, think of it this way: tile i has already been segmented and >> written out. We now want to write out tile j, which overlaps tile i. Labels >> from tile i that intersect labels from tile j in the overlap region should >> be matched. labels in tile j that *don't* intersect tile i should be >> relabelled to ensure they are unique with respect to tile i. >> >> Of course this gets a bit more complicated in 2D or 3D... >> >> Juan. >> >> >> >> >> On Fri, Mar 13, 2015 at 7:20 PM, Yuta Sato wrote: >> >>> Dear SKIMAGE Developers and Users: >>> >>> I want to use the following algorithm in a large binary image that does >>> not fit into my PC memory. So, I am thinking to split my large image into >>> tiles and apply algorithm one by one. However, the original border >>> definition change when I do it in parts. I need the result as applied in >>> original full image. 
How can I do it efficiently? >>> >>> skimage.segmentation.clear_border(image, buffer_size=0, bgval=0) >>> >>> Thanks for your ideas. >>> >>> -- >>> You received this message because you are subscribed to the Google Groups >>> "scikit-image" group. >>> To unsubscribe from this group and stop receiving emails from it, send an >>> email to scikit-image+unsubscribe at googlegroups.com. >>> For more options, visit https://groups.google.com/d/optout. >>> >> >> > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Mar 15 13:45:36 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 15 Mar 2015 18:45:36 +0100 Subject: GSoC: Rewriting scipy.ndimage in Cython In-Reply-To: References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com> Message-ID: On Tue, Mar 10, 2015 at 7:21 AM, St?fan van der Walt wrote: > Hi Aman > > On Mon, Mar 9, 2015 at 9:52 AM, AMAN singh wrote: > > Please tell me whether I am on right track or not. If you can suggest me > > some resources which will be helpful to me in understanding the project, > I > > would be highly obliged. Also, I would like to know that how much part of > > ndimage is to be ported under this project since it is a big module. > > Kindly provide me some suggestions and guide me through this. > Hi Aman, the idea is to port the whole module. I think you should make a plan for that. We are aware that it's a large job, and whether or not it's feasible to complete all of ndimage within one GSoC depends on how fast you will go. Compared to porting scipy.cluster last year I'd guess that ndimage is >2x more work. 
However, Richard last year implemented new features in addition to completing the port, so for a fast student I expect it to be possible to complete the whole module. I would expect the main challenge to be to make the Cython version (close to) as fast as the current C code. > Thanks for your interest in GSoC 2015! Please have a look at the > issues for scikit-image, and try and submit a few PRs so that we can > work together and get to know you a bit better. > @all: it's maybe good to know that Aman has already submitted 5 PRs to Scipy (4 small ones merged, 1 larger one for which the bottleneck is on our side): https://github.com/scipy/scipy/pulls?q=is%3Apr+author%3Abewithaman+is%3Aclosed @Aman: the majority of expertise and mentoring power will likely come from the scikit-image devs, so it would be good to submit a few scikit-image PRs as Stefan says. Feel free to ping me - I read the scikit-image mailing list but not Github activity. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaeldmueller7 at gmail.com Mon Mar 16 15:49:16 2015 From: michaeldmueller7 at gmail.com (Michael Mueller) Date: Mon, 16 Mar 2015 12:49:16 -0700 (PDT) Subject: GSOC introduction: Dynamic time warping Message-ID: <2d12029c-8d6b-4a4e-a941-64f7114abbaf@googlegroups.com> Hello everyone, My name is Michael Mueller and I am a first-year math and computer science major at the University of Massachusetts, Amherst. Last year, I participated in GSOC with AstroPy , an open-source astronomy library in Python. My project involved writing a C/Cython text file reader (and writer) to increase the speed of ASCII capabilities in `astropy.io.ascii`. My blog from the summer is viewable here . This year, having used scikit-image recently for image processing as part of a research project, I'm interested in submitting a proposal to work with scikit-image on the dynamic time warping project. 
While I have no previous knowledge of dynamic time warping, I found a nice introduction to the subject and am excited to read more about it. Any other references I should take a look at? I also noticed that the GSOC page mentions that students should have a solid PR merged, which I plan to start. Are there any particular issues that might be good to tackle, or any good entry points to get used to the scikit-image code base? Cheers, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Tue Mar 17 02:15:40 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Mon, 16 Mar 2015 23:15:40 -0700 Subject: GSOC introduction: Dynamic time warping In-Reply-To: <2d12029c-8d6b-4a4e-a941-64f7114abbaf@googlegroups.com> References: <2d12029c-8d6b-4a4e-a941-64f7114abbaf@googlegroups.com> Message-ID: Hi Michael On Mon, Mar 16, 2015 at 12:49 PM, Michael Mueller wrote: > This year, having used scikit-image recently for image processing as part of > a research project, I'm interested in submitting a proposal to work with > scikit-image on the dynamic time warping project. While I have no previous > knowledge of dynamic time warping, I found a nice introduction to the > subject and am excited to read more about it. Any other references I should > take a look at? Thanks for the introduction and for your interest! I would very much like to see a good implementation of DTW in skimage. On the GSoC page, there are two references available--the R dtw paper is a good place to start. > I also noticed that the GSOC page mentions that students should have a solid > PR merged, which I plan to start. Are there any particular issues that might > be good to tackle, or any good entry points to get used to the scikit-image > code base? No specific issues come to mind; I'd suggest browsing the issues list on GitHub and picking some low hanging fruit. 
You don't need to implement anything major, we just want to work with you, doing code review etc., so that we know one another a bit better by the time the GSoC applications roll in.

Best regards
Stéfan

From claiborne.morton at gmail.com Thu Mar 19 18:07:57 2015
From: claiborne.morton at gmail.com (Claiborne Morton)
Date: Thu, 19 Mar 2015 18:07:57 -0400
Subject: Water-shedding non-circular particles
Message-ID:

Hey guys, I'm still having trouble finding ways to separate touching particles if they are not both circular. Further, when dealing with elliptical shapes, a single particle tends to incorrectly get cut in half. Any ideas how I could change parameters in the watershed function to correct for this? Attached are a few problem cases so you can see examples.

Thanks,
Clay

[image: Inline image 1]

-------------- next part --------------
A non-text attachment was scrubbed...
Name: unnamed.png
Type: image/png
Size: 75805 bytes
Desc: not available

From emmanuelle.gouillart at nsup.org Thu Mar 19 18:24:08 2015
From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart)
Date: Thu, 19 Mar 2015 23:24:08 +0100
Subject: Water-shedding non-circular particles
Message-ID: <20150319222408.GE223357@phare.normalesup.org>

Hi Clay,

how do you select the markers and the elevation function used in the watershed algorithm? Could you include the code that results in the segmentation you attached?

Cheers,
Emmanuelle

On Thu, Mar 19, 2015 at 06:07:57PM -0400, Claiborne Morton wrote:
> Hey guys, I'm still having trouble finding ways to separate touching particles
> if they are not both circular. Further, when dealing with elliptical shapes, a
> single particle tends to incorrectly get cut in half. Any ideas how I could
> change parameters in the watershed function to correct for this? Attached
> are a few problem cases so you can see examples.
> Thanks,
> Clay
> Inline image 1

From stefanv at berkeley.edu Mon Mar 23 04:31:44 2015
From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=)
Date: Mon, 23 Mar 2015 01:31:44 -0700
Subject: GSoC: Rewriting scipy.ndimage in Cython
References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com>
Message-ID:

Hi folks,

On Sun, Mar 15, 2015 at 10:45 AM, Ralf Gommers wrote:
> @all: it's maybe good to know that Aman has already submitted 5 PRs to Scipy
> (4 small ones merged, 1 larger one for which the bottleneck is on our side):
> https://github.com/scipy/scipy/pulls?q=is%3Apr+author%3Abewithaman+is%3Aclosed

I wasn't aware -- thanks for the heads-up!

Stéfan

From ferdinand.greiss at gmail.com Mon Mar 23 11:15:10 2015
From: ferdinand.greiss at gmail.com (Ferdinand Greiss)
Date: Mon, 23 Mar 2015 08:15:10 -0700 (PDT)
Subject: Dynamic intensity scaling for CollectionViewer
Message-ID: <913659af-9db7-4a2f-815b-c5ad913a863a@googlegroups.com>

Hello,

Thanks for your amazing project on image processing.

I was wondering whether it would be straightforward to implement automatic rescaling of intensity (or the y axis on the LineProfile widget) in order to account for bleaching in image sequences. Otherwise the line plot will vanish below the pre-computed value of the first image and I won't be able to see much after the first few images.

Thanks for any help and/or suggestions.
Ferdinand
From stefanv at berkeley.edu Mon Mar 23 18:56:48 2015
From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=)
Date: Mon, 23 Mar 2015 15:56:48 -0700
Subject: Dynamic intensity scaling for CollectionViewer
In-Reply-To: <913659af-9db7-4a2f-815b-c5ad913a863a@googlegroups.com>
References: <913659af-9db7-4a2f-815b-c5ad913a863a@googlegroups.com>
Message-ID:

Hi Ferdinand

On Mon, Mar 23, 2015 at 8:15 AM, Ferdinand Greiss wrote:
> I was wondering whether it would be straightforward to implement automatic
> rescaling of intensity (or y axis on LineProfile widget) in order to account
> for bleaching in image sequences. Otherwise the line plot will vanish below
> the pre-computed value of the first image and I won't be able to see much
> after the first few images.

I think that should be pretty simple. Can you share a code snippet of how you are using it currently?

Stéfan

From stefanv at berkeley.edu Mon Mar 23 19:21:16 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Mon, 23 Mar 2015 16:21:16 -0700
Subject: Light Exhibit
Message-ID: <87h9tbz4er.fsf@berkeley.edu>

Hi everyone

Here's a call for participation in an image exhibition taking place in Paris and San Francisco, among others.

"""
Light in all of its forms allows us to communicate, entertain, explore, and understand the world we inhabit and the Universe we live in. This exhibit shows you some examples of the myriad of wonderful things that light can do, and how it plays a critical role in our lives every day.

WE ARE LOOKING FOR IMAGES THAT:

- Illustrate different aspects/phenomena of light in nature, such as lensing, reflection, refraction, atomic collisions, shadows, etc. Examples might include sunsets showing a distorted Sun, auroras, rainbows, sunrises, light rays, lightning, Sun halos, moon dogs, etc.
- We are also looking for natural photos of biologic importance tied to light, such as sea turtle hatchlings, migrating birds, butterflies, bats at dusk, photosynthesis, sunflowers, etc.
- Microscopic images and medically-themed images such as brain scans, images of cells, etc.
- Also needed: Earth at Night scenes, including our views of the Milky Way at night, "light pollution" themed images, etc.
- We have pinned examples to Pinterest: http://www.pinterest.com/kimberlyarcand/light-beyond-the-bulb-core-30%2B-images-for-iyl/
"""

http://lightexhibit.org/about.html

Stéfan

From stefanv at berkeley.edu Mon Mar 23 19:21:55 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Mon, 23 Mar 2015 16:21:55 -0700
Subject: SpaceX public photo archive
Message-ID: <87fv8vz4do.fsf@berkeley.edu>

Hi everyone

A heads up from Josh Warner on Gitter that SpaceX now has an enormous photo collection in the public domain: https://www.flickr.com/photos/spacexphotos

Stéfan

From jni.soma at gmail.com Mon Mar 23 19:35:04 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Mon, 23 Mar 2015 16:35:04 -0700 (PDT)
Subject: SpaceX public photo archive
In-Reply-To: <87fv8vz4do.fsf@berkeley.edu>
References: <87fv8vz4do.fsf@berkeley.edu>
Message-ID: <1427153704339.b931dc1c@Nodemailer>

Very cool!

On Tue, Mar 24, 2015 at 10:22 AM, Stefan van der Walt wrote:
> Hi everyone
> A heads up from Josh Warner on Gitter that SpaceX now has an
> enormous photo collection in the public domain:
> https://www.flickr.com/photos/spacexphotos
> Stéfan
From steven.silvester at gmail.com Mon Mar 23 20:42:29 2015
From: steven.silvester at gmail.com (Steven Silvester)
Date: Mon, 23 Mar 2015 17:42:29 -0700 (PDT)
Subject: Dynamic intensity scaling for CollectionViewer
In-Reply-To: <913659af-9db7-4a2f-815b-c5ad913a863a@googlegroups.com>
References: <913659af-9db7-4a2f-815b-c5ad913a863a@googlegroups.com>
Message-ID: <72c413f2-760b-4800-8124-96c899c3b7df@googlegroups.com>

Ferdinand,

You could pass all of the images through skimage.exposure.rescale_intensity prior to adding them to the ImageCollection. See also http://fiji.sc/Bleach_Correction for a list of other pre-processing ideas.

Regards,
Steve

On Monday, March 23, 2015 at 12:17:02 PM UTC-5, Ferdinand Greiss wrote:
> Hello,
>
> Thanks for your amazing project on image processing.
>
> I was wondering whether it would be straightforward to implement
> automatic rescaling of intensity (or y axis on LineProfile widget) in order
> to account for bleaching in image sequences. Otherwise the line plot will
> vanish below the pre-computed value of the first image and I won't be able
> to see much after the first few images.
>
> Thanks for any help and/or suggestions.
> Ferdinand

From stefanv at berkeley.edu Mon Mar 23 20:58:38 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Mon, 23 Mar 2015 17:58:38 -0700
Subject: Fiscal sponsorship
Message-ID: <87zj73xlc1.fsf@berkeley.edu>

Hi everyone

scikit-image is getting enough traction that we can consider finding some development sponsorships (think, e.g., of getting the team together for sprints, etc.). Along that vein, I would like to propose that we sign a fiscal sponsorship agreement with NumFocus to manage the legal aspects of any funds raised.

Please let me know if you have any concerns.
Regards
Stéfan

From ferdinand.greiss at gmail.com Tue Mar 24 05:09:55 2015
From: ferdinand.greiss at gmail.com (Ferdinand Greiss)
Date: Tue, 24 Mar 2015 02:09:55 -0700 (PDT)
Subject: Dynamic intensity scaling for CollectionViewer
References: <913659af-9db7-4a2f-815b-c5ad913a863a@googlegroups.com>
Message-ID: <6ef4a775-b17e-4cf1-a9c8-205954cab7be@googlegroups.com>

from skimage.viewer import CollectionViewer
from skimage.viewer.plugins.lineprofile import LineProfile
import numpy as np

stack = np.random.randn(100, 50, 50)
stack *= np.exp(-0.1*np.arange(stack.shape[0]))[:, np.newaxis, np.newaxis]

viewer = CollectionViewer(stack)
viewer += LineProfile()
viewer.show()

I actually did change the source code of tifffile.py written by Christoph Gohlke on line 4660 to include rescaling for the min and max of the current image in the image stack (by pressing r). But it'd be nice to still be able to use your plugins...

Thanks for your prompt answer...
Ferdinand

On Monday, March 23, 2015 at 23:57:11 UTC+1, stefanv wrote:
>
> Hi Ferdinand
>
> On Mon, Mar 23, 2015 at 8:15 AM, Ferdinand Greiss wrote:
> > I was wondering whether it would be straightforward to implement automatic
> > rescaling of intensity (or y axis on LineProfile widget) in order to account
> > for bleaching in image sequences. Otherwise the line plot will vanish below
> > the pre-computed value of the first image and I won't be able to see much
> > after the first few images.
>
> I think that should be pretty simple. Can you share a code snippet of
> how you are using it currently?
>
> Stéfan
From claiborne.morton at gmail.com Tue Mar 24 16:42:36 2015
From: claiborne.morton at gmail.com (Claiborne Morton)
Date: Tue, 24 Mar 2015 13:42:36 -0700 (PDT)
Subject: Water-shedding non-circular particles
In-Reply-To: <20150319222408.GE223357@phare.normalesup.org>
References: <20150319222408.GE223357@phare.normalesup.org>
Message-ID: <1d6981e0-56b5-44fe-976f-da49f9c1ce0f@googlegroups.com>

Hey, sorry for getting back to you so late. Here is the code I am using as well as the segmentation (attached). I'm a little new to this, so I do not know what you mean by "selecting the markers and elevation function". Do you see any reason why I am getting these errors?

import skimage.morphology as morphology
from skimage.segmentation import random_walker
from skimage.morphology import watershed
from skimage.feature import peak_local_max

image = binary_filled

# Now we want to separate the two objects in image
# Generate the markers as local maxima of the distance
# to the background
from scipy import ndimage
distance = ndimage.distance_transform_edt(image)
local_maxi = peak_local_max(distance, indices=False,
                            footprint=np.ones((20, 20)), labels=image)
markers = morphology.label(local_maxi)
labels_ws = watershed(-distance, markers, mask=image)

Thanks again!
Clay

On Thursday, March 19, 2015 at 6:24:10 PM UTC-4, Emmanuelle Gouillart wrote:
>
> Hi Clay,
>
> how do you select the markers and the elevation function used in the
> watershed algorithm? Could you include the code that results in the
> segmentation you attached?
>
> Cheers,
> Emmanuelle
>
> On Thu, Mar 19, 2015 at 06:07:57PM -0400, Claiborne Morton wrote:
> > Hey guys, I'm still having trouble finding ways to separate touching particles
> > if they are not both circular. Further, when dealing with elliptical shapes, a
> > single particle tends to incorrectly get cut in half. Any ideas how I could
> > change parameters in the watershed function to correct for this?
> > Attached are a few problem cases so you can see examples.
> >
> > Thanks,
> > Clay
> > Inline image 1

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Original Binary.png
Type: image/png
Size: 20220 bytes
Desc: not available
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Watershedding.png
Type: image/png
Size: 426579 bytes
Desc: not available

From stefanv at berkeley.edu Tue Mar 24 16:43:27 2015
From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=)
Date: Tue, 24 Mar 2015 13:43:27 -0700
Subject: Dynamic intensity scaling for CollectionViewer
In-Reply-To: <6ef4a775-b17e-4cf1-a9c8-205954cab7be@googlegroups.com>
References: <913659af-9db7-4a2f-815b-c5ad913a863a@googlegroups.com> <6ef4a775-b17e-4cf1-a9c8-205954cab7be@googlegroups.com>
Message-ID:

Hi Ferdinand

On Tue, Mar 24, 2015 at 2:09 AM, Ferdinand Greiss wrote:
> stack = np.random.randn(100, 50, 50)
> stack *= np.exp(-0.1*np.arange(stack.shape[0]))[:, np.newaxis, np.newaxis]

When you construct your stack, you can simply run ``skimage.exposure.rescale_intensity`` on each image as Steven suggested. If you want that to happen automatically, you can even specify a loading function in ``skimage.io.ImageCollection``, as the ``load_func`` parameter. Otherwise, simply load the collection and iterate over it.

Regards
Stéfan

From tcaswell at gmail.com Tue Mar 24 11:30:15 2015
From: tcaswell at gmail.com (Thomas Caswell)
Date: Tue, 24 Mar 2015 15:30:15 +0000
Subject: Dynamic intensity scaling for CollectionViewer
In-Reply-To: <6ef4a775-b17e-4cf1-a9c8-205954cab7be@googlegroups.com>
References: <913659af-9db7-4a2f-815b-c5ad913a863a@googlegroups.com> <6ef4a775-b17e-4cf1-a9c8-205954cab7be@googlegroups.com>
Message-ID:

I strongly suggest finding a solution that does not involve changing tifffile.py.
Managing that sort of change will very quickly become unsustainable.

Tom

On Tue, Mar 24, 2015 at 5:17 AM Ferdinand Greiss wrote:
>
> from skimage.viewer import CollectionViewer
> from skimage.viewer.plugins.lineprofile import LineProfile
> import numpy as np
>
> stack = np.random.randn(100, 50, 50)
> stack *= np.exp(-0.1*np.arange(stack.shape[0]))[:, np.newaxis, np.newaxis]
>
> viewer = CollectionViewer(stack)
> viewer += LineProfile()
> viewer.show()
>
> I actually did change the source code of tifffile.py written by Christoph
> Gohlke on line 4660 to include rescaling for min and max of current image
> in image stack (by pressing r). But it'd be nice to still be able to use
> your plugins...
>
> Thanks for your prompt answer...
> Ferdinand
>
> On Monday, March 23, 2015 at 23:57:11 UTC+1, stefanv wrote:
>>
>> Hi Ferdinand
>>
>> On Mon, Mar 23, 2015 at 8:15 AM, Ferdinand Greiss wrote:
>> > I was wondering whether it would be straightforward to implement automatic
>> > rescaling of intensity (or y axis on LineProfile widget) in order to account
>> > for bleaching in image sequences. Otherwise the line plot will vanish below
>> > the pre-computed value of the first image and I won't be able to see much
>> > after the first few images.
>>
>> I think that should be pretty simple. Can you share a code snippet of
>> how you are using it currently?
>>
>> Stéfan
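[Editorial note: Steven's and Stéfan's suggestion in this thread -- rescale each frame independently before viewing -- can be sketched without the viewer at all. This numpy-only version mimics what mapping skimage.exposure.rescale_intensity over the stack would do (an illustration, not skimage's exact code):]

```python
import numpy as np

def rescale_frames(stack):
    """Stretch each frame of a (t, rows, cols) stack to [0, 1] independently.

    Per-frame min/max rescaling cancels the overall intensity decay
    (bleaching), so later frames stay visible in the viewer.
    """
    out = np.empty(stack.shape, dtype=float)
    for i, frame in enumerate(stack):
        lo, hi = frame.min(), frame.max()
        out[i] = (frame - lo) / (hi - lo) if hi > lo else 0.0
    return out

# Simulated bleaching stack, as in Ferdinand's snippet:
stack = np.random.rand(100, 50, 50)
stack *= np.exp(-0.1 * np.arange(stack.shape[0]))[:, None, None]
rescaled = rescale_frames(stack)
```

Passing a function like this as the ``load_func`` when building the collection would give every frame the full display range automatically.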
From ug201310004 at iitj.ac.in Tue Mar 24 20:34:55 2015
From: ug201310004 at iitj.ac.in (AMAN singh)
Date: Tue, 24 Mar 2015 17:34:55 -0700 (PDT)
Subject: GSoC: Rewriting scipy.ndimage in Cython
In-Reply-To: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com>
References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com>
Message-ID: <6307eb39-c271-4f74-b0cf-05fca6065c9e@googlegroups.com>

Hi Everyone

I have made a basic draft of my proposal here. Please review it and suggest modifications. @Ralf and @stefanv, thanks for the suggestions.

Regards,
Aman

On Tuesday, March 10, 2015 at 6:54:06 AM UTC+5:30, AMAN singh wrote:
>
> Hi developers
>
> My name is Aman Singh and I am currently a second-year undergraduate
> student of the Computer Science department at the Indian Institute of
> Technology, Jodhpur. I want to participate in GSoC'15 and the project I am
> aiming for is *porting scipy.ndimage to Cython*. I have been following
> scipy for the last few months and have also made some contributions. I came
> across this project on their GSoC'15 ideas page and found it interesting.
> I have done some research in the last week on my part. I am going through
> the Cython documentation, the scipy lecture on GitHub and Richard's work
> from GSoC'14, in which he ported the cluster package to Cython. While going
> through the module scipy.ndimage I also found that Thouis Jones had already
> ported a function, ndimage.label(), to Cython. I can use that as a
> reference for the rest of the project.
>
> Please tell me whether I am on the right track or not. If you can suggest
> some resources which will be helpful to me in understanding the project, I
> would be highly obliged. Also, I would like to know how much of ndimage is
> to be ported under this project, since it is a big module. Kindly provide
> me some suggestions and guide me through this.
>
> Regards,
>
> Aman Singh
From jni.soma at gmail.com Tue Mar 24 20:36:21 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Tue, 24 Mar 2015 17:36:21 -0700 (PDT)
Subject: Water-shedding non-circular particles
In-Reply-To: <1d6981e0-56b5-44fe-976f-da49f9c1ce0f@googlegroups.com>
References: <1d6981e0-56b5-44fe-976f-da49f9c1ce0f@googlegroups.com>
Message-ID: <1427243781024.1fb0dfbf@Nodemailer>

Hi all,

Incidentally, Clay got a nice answer on the ImageJ site pointing to the "watershed irregular structures" [1] plugin. It's a pretty clever idea that I hope we can get into scikit-image. I asked Jan Brocher to relicense as BSD and he's done so! I'll be working on porting this sometime soon.

Clay, it's really difficult to get perfect segmentation using simple image processing primitives, so I'm actually quite impressed at the accuracy you have already! Anyway, it looks like a few local maxima are getting suppressed (e.g. your rightmost example). There have been some recent developments on that function that I haven't been following closely, but that's one promising place for parameter tuning. I would also run remove_small_objects to get rid of some of the chaff.

Do you have the original (non-binary) image? Doing some more sophisticated edge detection there might help you as well.

Juan.

[1] http://fiji.sc/BioVoxxel_Toolbox#Watershed_Irregular_Structures

On Wed, Mar 25, 2015 at 7:42 AM, Claiborne Morton wrote:
> Hey, sorry for getting back to you so late. Here is the code I am using as
> well as the segmentation (attached). I'm a little new to this, so do not
> know what you mean by "selecting the markers and elevation function". Do
> you see any reason why I am getting these errors?
> import skimage.morphology as morphology
> from skimage.segmentation import random_walker
> from skimage.morphology import watershed
> from skimage.feature import peak_local_max
>
> image = binary_filled
>
> # Now we want to separate the two objects in image
> # Generate the markers as local maxima of the distance
> # to the background
> from scipy import ndimage
> distance = ndimage.distance_transform_edt(image)
> local_maxi = peak_local_max(distance, indices=False,
>                             footprint=np.ones((20, 20)), labels=image)
> markers = morphology.label(local_maxi)
> labels_ws = watershed(-distance, markers, mask=image)
>
> Thanks again!
> Clay
>
> On Thursday, March 19, 2015 at 6:24:10 PM UTC-4, Emmanuelle Gouillart wrote:
>>
>> Hi Clay,
>>
>> how do you select the markers and the elevation function used in the
>> watershed algorithm? Could you include the code that results in the
>> segmentation you attached?
>>
>> Cheers,
>> Emmanuelle
>>
>> On Thu, Mar 19, 2015 at 06:07:57PM -0400, Claiborne Morton wrote:
>> > Hey guys, I'm still having trouble finding ways to separate touching particles
>> > if they are not both circular. Further, when dealing with elliptical shapes, a
>> > single particle tends to incorrectly get cut in half. Any ideas how I could
>> > change parameters in the watershed function to correct for this? Attached
>> > are a few problem cases so you can see examples.
>>
>> > Thanks,
>> > Clay
>> > Inline image 1
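[Editorial note: on Juan's point about suppressed maxima -- the footprint passed to peak_local_max sets how far apart two markers must be, so an over-large footprint merges the seeds of adjacent particles while a too-small one splits elongated ones. A dependency-free, brute-force sketch of that idea (for illustration only; real code should keep using peak_local_max on the distance map):]

```python
import numpy as np

def local_maxima(dist, size):
    """Mark pixels that attain the maximum of their (size x size) window.

    A stand-in for peak_local_max on a distance map: a larger window
    suppresses the weaker of two nearby peaks, yielding fewer watershed
    markers and therefore fewer split particles.
    """
    r = size // 2
    padded = np.pad(dist, r, mode='constant')
    out = np.zeros(dist.shape, dtype=bool)
    for i in range(dist.shape[0]):
        for j in range(dist.shape[1]):
            window = padded[i:i + size, j:j + size]
            out[i, j] = dist[i, j] > 0 and dist[i, j] == window.max()
    return out

# Two unequal peaks on a 1 x 10 distance map:
d = np.zeros((1, 10))
d[0, 2], d[0, 5] = 2.0, 3.0
print(local_maxima(d, 3).sum())  # -> 2: both peaks become markers
print(local_maxima(d, 9).sum())  # -> 1: the weaker peak is suppressed
```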
From stefanv at berkeley.edu Tue Mar 24 21:00:33 2015
From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=)
Date: Tue, 24 Mar 2015 18:00:33 -0700
Subject: GSoC: Rewriting scipy.ndimage in Cython
In-Reply-To: <6307eb39-c271-4f74-b0cf-05fca6065c9e@googlegroups.com>
References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com> <6307eb39-c271-4f74-b0cf-05fca6065c9e@googlegroups.com>
Message-ID:

Hi Aman

On Tue, Mar 24, 2015 at 5:34 PM, AMAN singh wrote:
> I have made a basic draft of my proposal here.
> Please review it and suggest modifications.

I would suggest that, instead of filling out the leaves of the tree, we start by fully porting one piece of functionality. It would be good if you could construct at least a list of top-level functions to be ported. The timeline is currently a bit vague.

Regards
Stéfan

From jni.soma at gmail.com Tue Mar 24 22:04:13 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Tue, 24 Mar 2015 19:04:13 -0700 (PDT)
Subject: Fiscal sponsorship
In-Reply-To: <87zj73xlc1.fsf@berkeley.edu>
References: <87zj73xlc1.fsf@berkeley.edu>
Message-ID: <1427249053051.9ade7485@Nodemailer>

I don't understand anything about the legal aspects of this (what's a "fiscal sponsorship agreement"?), but I do trust you to get this right and I absolutely love the idea! Especially if we make the first sprint in Hawaii, as was vaguely (inadvertently?) mooted a few weeks ago on Gitter. =P

On Tue, Mar 24, 2015 at 11:58 AM, Stefan van der Walt wrote:
> Hi everyone
> scikit-image is getting enough traction that we can consider
> finding some development sponsorships (think, e.g., of getting the
> team together for sprints, etc.). Along that vein, I would like
> to propose that we sign a fiscal sponsorship agreement with
> NumFocus to manage the legal aspects of any funds raised.
> Please let me know if you have any concerns.
> Regards
> Stéfan

From dzungng89 at gmail.com Tue Mar 24 22:44:33 2015
From: dzungng89 at gmail.com (Dzung Nguyen)
Date: Tue, 24 Mar 2015 19:44:33 -0700 (PDT)
Subject: Range of a,b in Lab color space
Message-ID:

What is the range of the a, b coordinates in Lab space?

From silvertrumpet999 at gmail.com Tue Mar 24 22:50:48 2015
From: silvertrumpet999 at gmail.com (Josh Warner)
Date: Tue, 24 Mar 2015 19:50:48 -0700 (PDT)
Subject: Fiscal sponsorship
In-Reply-To: <87zj73xlc1.fsf@berkeley.edu>
References: <87zj73xlc1.fsf@berkeley.edu>
Message-ID: <72b3a4dc-0105-4be3-8a7e-b248fc758ed0@googlegroups.com>

NumFocus is the right approach. You've got my support, Stéfan!

Josh

On Monday, March 23, 2015 at 7:58:41 PM UTC-5, stefanv wrote:
>
> Hi everyone
>
> scikit-image is getting enough traction that we can consider
> finding some development sponsorships (think, e.g., of getting the
> team together for sprints, etc.). Along that vein, I would like
> to propose that we sign a fiscal sponsorship agreement with
> NumFocus to manage the legal aspects of any funds raised.
>
> Please let me know if you have any concerns.
>
> Regards
> Stéfan
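[Editorial note: on Dzung's Lab question above -- L is conventionally in [0, 100], while a and b are unbounded in principle; for colors inside the sRGB gamut they fall roughly in a ∈ [-87, 99] and b ∈ [-108, 95] (libraries often nominally quote [-128, 127]). A self-contained sketch of the standard sRGB → XYZ (D65) → CIELAB pipeline, the same computation skimage.color.rgb2lab performs, so the extremes can be probed directly:]

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert one sRGB triple (floats in [0, 1]) to CIELAB (L, a, b)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to get linear RGB.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ under D65 (standard sRGB matrix).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ lin
    white = M @ np.ones(3)  # D65 white point, derived from the same matrix
    t = xyz / white
    # CIELAB cube-root nonlinearity with its linear toe.
    delta = 6 / 29
    f = np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b

print(srgb_to_lab([0.0, 1.0, 0.0]))  # pure green: a strongly negative (~ -86)
```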
From jaime.frio at gmail.com Tue Mar 24 23:26:54 2015
From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=)
Date: Tue, 24 Mar 2015 20:26:54 -0700
Subject: GSoC: Rewriting scipy.ndimage in Cython
In-Reply-To: <6307eb39-c271-4f74-b0cf-05fca6065c9e@googlegroups.com>
References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com> <6307eb39-c271-4f74-b0cf-05fca6065c9e@googlegroups.com>
Message-ID:

On Tue, Mar 24, 2015 at 5:34 PM, AMAN singh wrote:
> Hi Everyone
>
> I have made a basic draft of my proposal here.
> Please review it and suggest modifications.

Hi Aman,

This may not be 100% true for all the functionality, but I believe that the gist of the ndimage module is in the 4-5 object-like constructs in ni_support, namely:

- NI_Iterator, in its three flavors: point, subspace and line iterator,
- NI_LineBuffer, and
- NI_FilterIterator.

Closely linked to this is the choice of a method to deal with multiple dtypes, a question for which I don't think there is an obvious answer. Since performance is critical, you may want to take a look at bottleneck's use of templates that are pre-processed before cythonizing and compiling.

If you get these right, then rather than the leaves of the tree, you will have built a solid foundation, more like the trunk: porting all the other modules is then going to mostly be little more than an exercise in translation. So I would suggest that you devote more time to getting these fundamental questions right, as some trial and error is going to be inevitable.

Jaime

--
(\__/)
( O.o)
( > <) This is Bunny. Copy Bunny into your signature and help him in his plans for world domination.
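[Editorial note: to make Jaime's first bullet concrete -- NI_PointIterator is essentially a hand-rolled n-dimensional counter that the C code advances instead of relying on numpy's own iterators. A tiny Python analogue of that "trunk" piece, illustrative only; the actual port would do this in Cython over raw strides:]

```python
import numpy as np

def point_iterate(shape):
    """Yield every index of an n-D array in C order by carrying coordinates,
    much as ndimage's point iterator advances through an array."""
    coords = [0] * len(shape)
    while True:
        yield tuple(coords)
        # Increment the last axis; on overflow, reset it and carry leftward.
        for axis in range(len(shape) - 1, -1, -1):
            coords[axis] += 1
            if coords[axis] < shape[axis]:
                break
            coords[axis] = 0
        else:
            return  # every axis overflowed: iteration is complete

# Matches numpy's own C-order index iterator:
print(list(point_iterate((2, 2))))  # -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```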
From ralf.gommers at gmail.com Wed Mar 25 03:13:57 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Wed, 25 Mar 2015 08:13:57 +0100
Subject: GSoC: Rewriting scipy.ndimage in Cython
References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com> <6307eb39-c271-4f74-b0cf-05fca6065c9e@googlegroups.com>
Message-ID:

On Wed, Mar 25, 2015 at 4:26 AM, Jaime Fernández del Río <jaime.frio at gmail.com> wrote:
> On Tue, Mar 24, 2015 at 5:34 PM, AMAN singh wrote:
>> Hi Everyone
>>
>> I have made a basic draft of my proposal here.
>> Please review it and suggest modifications.
>
> Hi Aman,
>
> This may not be 100% true for all the functionality, but I believe that
> the gist of the ndimage module is in the 4-5 object-like constructs in
> ni_support, namely:
>
> - NI_Iterator, in its three flavors: point, subspace and line iterator,
> - NI_LineBuffer, and
> - NI_FilterIterator.
>
> Closely linked to this is the choice of a method to deal with multiple
> dtypes, a question for which I don't think there is an obvious answer.
> Since performance is critical, you may want to take a look at bottleneck's
> use of templates that are pre-processed before cythonizing and compiling.
>
> If you get these right, then rather than the leaves of the tree, you will
> have built a solid foundation, more like the trunk: porting all the
> other modules is then going to mostly be little more than an exercise in
> translation. So I would suggest that you devote more time to getting these
> fundamental questions right, as some trial and error is going to be
> inevitable.

This sounds like great advice.

Comments on the timeline:

- week 1, "reading through the code": I think you should cover, and will have covered, this in the community bonding period. At the start of week 1 you should be at the point where you start tackling the problem, probably by doing what Jaime says above.
- unit tests, docs and benchmarks: these cannot be separated from writing code. Each PR should have decent unit test coverage and a decent docstring. Plus, since performance is critical, you have to benchmark your code as you go. The only thing you could reserve time for at the end is writing some longer documentation (maybe a tutorial), benchmarks in ASV format (see https://github.com/scipy/scipy/tree/master/benchmarks) and some minor cleanups.

Regarding "I will also use better algorithms when possible to improve the time complexity of the functions": it is important not to mix porting code from C to Cython with changing the algorithm, because when the output of a function doesn't match the current Scipy output you don't know whether porting or algorithm changes are the cause.

Cheers,
Ralf

From georgeshattab at gmail.com Wed Mar 25 12:01:50 2015
From: georgeshattab at gmail.com (Georges H)
Date: Wed, 25 Mar 2015 09:01:50 -0700 (PDT)
Subject: Example: Scikit-image and trackpy (bubble tracking in foams)
In-Reply-To: <5478733C.9040703@sciunto.org>
References: <546E2D64.8030208@sciunto.org> <20141120192756.GB28567@phare.normalesup.org> <871tona3bo.fsf@sun.ac.za> <5478733C.9040703@sciunto.org>
Message-ID: <61516b5e-5e05-402b-9deb-015393249ef0@googlegroups.com>

Hey everyone,

I have looked into this one earlier and bookmarked it out of curiosity, but the link François posted is no longer working. Can I find this demo somewhere?

Much appreciated.

On Friday, 28 November 2014 14:07:27 UTC+1, François Boulogne wrote:
>
> > Francois, it'd be fantastic to have this one in there as well.
>
> Sure ! :)
>
> --
> François Boulogne.
> http://www.sciunto.org
> GPG: 32D5F22F
From nelle.varoquaux at gmail.com Wed Mar 25 06:57:40 2015
From: nelle.varoquaux at gmail.com (Nelle Varoquaux)
Date: Wed, 25 Mar 2015 11:57:40 +0100
Subject: Fwd: ANN: SciPy (Scientific Python) 2015 Call for Proposals & Registration Open - tutorial & talk submissions due April 1st
In-Reply-To: <44e64802-74d6-4294-ae8d-29b2e5bdd283@googlegroups.com>
References: <44e64802-74d6-4294-ae8d-29b2e5bdd283@googlegroups.com>
Message-ID:

Hello everyone,

(I apologize for the cross-posting.)

This is a quick reminder that the call for submissions for SciPy 2015 is open but due April 1st! There are only 7 days left to submit a proposal.

Thanks,
Nelle

---------- Forwarded message ----------
From: Courtenay Godshall
Date: 19 March 2015 at 04:46
Subject: ANN: SciPy (Scientific Python) 2015 Call for Proposals & Registration Open - tutorial & talk submissions due April 1st
To: python-list at python.org

**SciPy 2015 Conference (Scientific Computing with Python) Call for Proposals: Submit Your Tutorial and Talk Ideas by April 1, 2015 at http://scipy2015.scipy.org.**

SciPy 2015, the fourteenth annual Scientific Computing with Python conference, will be held July 6-12, 2015 in Austin, Texas. SciPy is a community dedicated to the advancement of scientific computing through open source Python software for mathematics, science, and engineering. The annual SciPy Conference brings together over 500 participants from industry, academia, and government to showcase their latest projects, learn from skilled users and developers, and collaborate on code development. The full program will consist of two days of tutorials, followed by three days of presentations, and concludes with two days of developer sprints. More info is available on the conference website at http://scipy2015.scipy.org; you can also sign up on the website for mailing list updates or follow @scipyconf on Twitter.
We hope you'll join us - early bird registration is open until May 15, 2015 at http://scipy2015.scipy.org/ehome/115969/259272/?& We encourage you to submit tutorial or talk proposals in the categories below; please also share with others who you'd like to see participate! Submit via the conference website @ http://scipy2015.scipy.org. *SCIPY TUTORIAL SESSION PROPOSALS - DEADLINE EXTENDED TO WED APRIL 1, 2015* The SciPy experience kicks off with two days of tutorials. These sessions provide extremely affordable access to expert training, and consistently receive fantastic feedback from participants. We're looking for submissions on topics from introductory to advanced - we'll have attendees across the gamut looking to learn. Whether you are a major contributor to a scientific Python library or an expert-level user, this is a great opportunity to share your knowledge and stipends are available. Submit Your Tutorial Proposal on the SciPy 2015 website: http://scipy2015.scipy.org *SCIPY TALK AND POSTER SUBMISSIONS - DUE April 1, 2015* SciPy 2015 will include 3 major topic tracks and 7 mini-symposia tracks. Submit Your Talk Proposal on the SciPy 2015 website: http://scipy2015.scipy.org Major topic tracks include: - Scientific Computing in Python (General track) - Python in Data Science - Quantitative and Computational Social Sciences Mini-symposia will include the applications of Python in: - Astronomy and astrophysics - Computational life and medical sciences - Engineering - Geographic information systems (GIS) - Geophysics - Oceanography and meteorology - Visualization, vision and imaging If you have any questions or comments, feel free to contact us at: scipy-organizers at scipy.org. 
--
https://mail.python.org/mailman/listinfo/python-list

From kwiechen1 at gmail.com Wed Mar 25 16:08:22 2015
From: kwiechen1 at gmail.com (Kai Wiechen)
Date: Wed, 25 Mar 2015 13:08:22 -0700 (PDT)
Subject: Water-shedding non-circular particles
In-Reply-To: References:
Message-ID: <5ae5bc7e-eca6-4f2c-bbac-b95ac8c4b8ce@googlegroups.com>

I have a problem closely related to this (segmenting touching irregular nuclei in histology images). Is it possible to get the original (RGB ??) image to test something?

Best regards,
Kai

On Thursday, March 19, 2015 at 11:07:58 PM UTC+1, Claiborne Morton wrote:
> Hey guys, I'm still having trouble finding ways to separate touching
> particles if they are not both circular. Further, when dealing with
> elliptical shapes, a single particle tends to incorrectly get cut in half.
> Any ideas how I could change parameters in the water-shedding function to
> correct for this? Attached are a few problem cases so you can see examples.
>
> Thanks,
> Clay
> [image: Inline image 1]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matteo.niccoli at gmail.com Wed Mar 25 16:13:20 2015
From: matteo.niccoli at gmail.com (Matteo)
Date: Wed, 25 Mar 2015 13:13:20 -0700 (PDT)
Subject: Issue with morphological filters
Message-ID:

*Issues with morphological filters when trying to remove white holes in black objects in binary images, using opening, or filling holes on the inverse (or complement) of the original binary.*

Hi there,

I have a series of derivatives calculated on geophysical data. Many of these derivatives have nice continuous maxima, so I treat them as images on which I do some cleanup with morphological filters.
Here's one example of operations that I do routinely, and successfully:

# threshold theta map using Otsu method
thresh_th = threshold_otsu(theta)
binary_th = theta > thresh_th

# clean up small objects
label_objects_th, nb_labels_th = sp.ndimage.label(binary_th)
sizes_th = np.bincount(label_objects_th.ravel())
mask_sizes_th = sizes_th > 175
mask_sizes_th[0] = 0
binary_cleaned_th = mask_sizes_th[label_objects_th]

# further enhance with morphological closing (dilation followed by an erosion)
# to remove small dark spots and connect small bright cracks,
# followed by an extra erosion
selem = disk(1)
closed_th = closing(binary_cleaned_th, selem)/255
eroded_th = erosion(closed_th, selem)/255

# finally, extract lineaments using skeletonization
skeleton_th = skeletonize(binary_th)
skeleton_cleaned_th = skeletonize(binary_cleaned_th)

# plot to compare
fig = plt.figure(figsize=(20, 7))
ax = fig.add_subplot(1, 2, 1)
imshow(skeleton_th, cmap='bone_r', interpolation='none')
ax2 = fig.add_subplot(1, 3, 2)
imshow(skeleton_cleaned_th, cmap='bone_r', interpolation='none')
ax.set_xticks([])
ax.set_yticks([])
ax2.set_xticks([])
ax2.set_yticks([])

Unfortunately I cannot share the data as it is proprietary, but I will for the next example, which is the one that does not work. There's one derivative that shows lots of detail but not continuous maxima. As a workaround I created filled contours in Matplotlib and exported them as an image. The image is attached. Now I want to import the image back and plot it to test:

# import back image
cfthdr = io.imread('filled_contour.png')

# threshold using Otsu method
thresh_thdr = threshold_otsu(cfthdr)
binary_thdr = cfthdr > thresh_thdr

# plot it
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(1, 1, 1)
ax.set_xticks([])
ax.set_yticks([])
plt.imshow(binary_thdr, cmap='bone')
plt.show()

The above works without issues. Next I want to fill the white holes inside the black blobs. I thought of 2 strategies.
The first would be to use opening; the second to invert the image, and then fill the holes as in here:
http://scikit-image.org/docs/dev/auto_examples/plot_holes_and_peaks.html
By the way, I found a similar example for opencv here:
http://stackoverflow.com/questions/10316057/filling-holes-inside-a-binary-object

Let's start with opening. When I try:

selem = disk(1)
opened_thdr = opening(binary_thdr, selem)

or:

selem = disk(1)
opened_thdr = opening(cfthdr, selem)

I get an error message like this:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
 in ()
      1 #binary_thdr=img_as_float(binary_thdr,force_copy=False)
----> 2 opened_thdr = opening(binary_thdr, selem)/255
      3
      4 # plot it
      5 fig = plt.figure(figsize=(5, 5))

C:\...\skimage\morphology\grey.pyc in opening(image, selem, out)
    160     shift_y = True if (h % 2) == 0 else False
    161
--> 162     eroded = erosion(image, selem)
    163     out = dilation(eroded, selem, out=out, shift_x=shift_x, shift_y=shift_y)
    164     return out

C:\...\skimage\morphology\grey.pyc in erosion(image, selem, out, shift_x, shift_y)
     58     selem = img_as_ubyte(selem)
     59     return cmorph._erode(image, selem, out=out,
---> 60                          shift_x=shift_x, shift_y=shift_y)
     61
     62

C:\...\skimage\morphology\cmorph.pyd in skimage.morphology.cmorph._erode (skimage\morphology\cmorph.c:2658)()

ValueError: Buffer has wrong number of dimensions (expected 2, got 3)
---------------------------------------------------------------------------

Any idea of what is going on and how I can fix it?

As for inverting (or finding the complement) and then hole filling, that would be my preferred option. However, I have not been able to invert the image. I tried numpy.invert, adapting the last example from here:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.invert.html
I tried something like this: http://stackoverflow.com/a/16724700
and this: http://stackoverflow.com/a/2498909
But none of these methods worked.
Is there a way in scikit.image to do that, and if not, do you have any suggestions?

Thank you,
Matteo

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: filled_contours.png
Type: image/png
Size: 19400 bytes
Desc: not available
URL:

From fboulogne at sciunto.org Wed Mar 25 12:02:28 2015
From: fboulogne at sciunto.org (=?UTF-8?B?RnJhbsOnb2lzIEJvdWxvZ25l?=)
Date: Wed, 25 Mar 2015 17:02:28 +0100
Subject: Example: Scikit-image and trackpy (bubble tracking in foams)
In-Reply-To: <61516b5e-5e05-402b-9deb-015393249ef0@googlegroups.com>
References: <546E2D64.8030208@sciunto.org> <20141120192756.GB28567@phare.normalesup.org> <871tona3bo.fsf@sun.ac.za> <5478733C.9040703@sciunto.org> <61516b5e-5e05-402b-9deb-015393249ef0@googlegroups.com>
Message-ID: <5512DC14.6090008@sciunto.org>

On 25/03/2015 17:01, Georges H wrote:
> Hey everyone,
> I have looked into this one earlier and bookmarked it out of curiosity
> but the link François posted is no longer working.
> Can I find this demo somewhere?

The repository is there: https://github.com/soft-matter/trackpy-examples
And you can also browse on this page: http://nbviewer.ipython.org/github/soft-matter/trackpy-examples/tree/master/notebooks/
The one you are looking for is there: http://nbviewer.ipython.org/github/soft-matter/trackpy-examples/blob/master/notebooks/custom-feature-detection.ipynb

Best,
--
François Boulogne.
http://www.sciunto.org
GPG: 32D5F22F

From jni.soma at gmail.com Wed Mar 25 20:29:41 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Wed, 25 Mar 2015 17:29:41 -0700 (PDT)
Subject: Issue with morphological filters
In-Reply-To: References:
Message-ID: <1427329781457.b04670a1@Nodemailer>

Hi Matteo,

My guess is that even though you are looking at a "black and white" image, the png is actually an RGB png. Just check the output of "print(cfthdr.shape)".
Should be straightforward to make it a binary image: from skimage import color cfthdr = color.rgb2gray(cfthdr) > 0.5 Then you should have a binary image. (And inverting should be as simple as "cfthdr_inv = ~cfthdr") We have morphology.binary_fill_holes to do what you want. btw, there's also morphology.remove_small_objects, which does exactly what you did but as a function call. Finally, it looks like you are not using the latest version of scikit-image (0.11), so you might want to upgrade. Hope that helps! Juan. On Thu, Mar 26, 2015 at 8:48 AM, Matteo wrote: > *Issues with morphological filters when trying to remove white holes in > black objects in a binary images. Using opening or filling holes on > inverted (or complement) of the original binary.* > Hi there > I have a series of derivatives calculated on geophysical data. > Many of these derivatives have nice continuous maxima, so I treat them as > images on which I do some cleanup with morphological filter. > Here's one example of operations that I do routinely, and successfully: > # threshold theta map using Otsu method > thresh_th = threshold_otsu(theta) > binary_th = theta > thresh_th > # clean up small objects > label_objects_th, nb_labels_th = sp.ndimage.label(binary_th) > sizes_th = np.bincount(label_objects_th.ravel()) > mask_sizes_th = sizes_th > 175 > mask_sizes_th[0] = 0 > binary_cleaned_th = mask_sizes_th[label_objects_th] > # further enhance with morphological closing (dilation followed by an > erosion) to remove small dark spots and connect small bright cracks > # followed by an extra erosion > selem = disk(1) > closed_th = closing(binary_cleaned_th, selem)/255 > eroded_th = erosion(closed_th, selem)/255 > # Finally, extract lienaments using skeletonization > skeleton_th=skeletonize(binary_th) > skeleton_cleaned_th=skeletonize(binary_cleaned_th) > # plot to compare > fig = plt.figure(figsize=(20, 7)) > ax = fig.add_subplot(1, 2, 1) > imshow(skeleton_th, cmap='bone_r', interpolation='none') > ax2 = 
fig.add_subplot(1, 3, 2) > imshow(skeleton_cleaned_th, cmap='bone_r', interpolation='none') > ax.set_xticks([]) > ax.set_yticks([]) > ax2.set_xticks([]) > ax2.set_yticks([]) > Unfortunately I cannot share the data as it is proprietary, but I will for > the next example, which is the one that does not work. > There's one derivative that shows lots of detail but not continuous maxima. > As a workaround I created filled contours in Matplotlib > exported as an image. The image is attached. > Now I want to import back the image and plot it to test: > # import back image > cfthdr=io.imread('filled_contour.png') > # threshold using using Otsu method > thresh_thdr = threshold_otsu(cfthdr) > binary_thdr = cfthdr > thresh_thdr > # plot it > fig = plt.figure(figsize=(5, 5)) > ax = fig.add_subplot(1, 1, 1) > ax.set_xticks([]) > ax.set_yticks([]) > plt.imshow(binary_thdr, cmap='bone') > plt.show() > The above works without issues. > > Next I want to fill the white holes inside the black blobs. I thought of 2 > strategies. > The first would be to use opening; the second to invert the image, and then > fill the holes as in here: > http://scikit-image.org/docs/dev/auto_examples/plot_holes_and_peaks.html > By the way, I found a similar example for opencv here > http://stackoverflow.com/questions/10316057/filling-holes-inside-a-binary-object > > Let's start with opening. 
When I try: > selem = disk(1) > opened_thdr = opening(binary_thdr, selem) > or: > selem = disk(1) > opened_thdr = opening(cfthdr, selem) > I get an error message like this: > --------------------------------------------------------------------------- > ValueError Traceback (most recent call last) > in () > 1 #binary_thdr=img_as_float(binary_thdr,force_copy=False) > ----> 2 opened_thdr = opening(binary_thdr, selem)/255 > 3 > 4 # plot it > 5 fig = plt.figure(figsize=(5, 5)) > C:\...\skimage\morphology\grey.pyc in opening(image, selem, out) > 160 shift_y = True if (h % 2) == 0 else False > 161 > --> 162 eroded = erosion(image, selem) > 163 out = dilation(eroded, selem, out=out, shift_x=shift_x, > shift_y=shift_y) > 164 return out > C:\...\skimage\morphology\grey.pyc in erosion(image, selem, out, shift_x, > shift_y) > 58 selem = img_as_ubyte(selem) > 59 return cmorph._erode(image, selem, out=out, > ---> 60 shift_x=shift_x, shift_y=shift_y) > 61 > 62 > C:\...\skimage\morphology\cmorph.pyd in skimage.morphology.cmorph._erode > (skimage\morphology\cmorph.c:2658)() > ValueError: Buffer has wrong number of dimensions (expected 2, got 3) > --------------------------------------------------------------------------- > Any idea of what is going on and how I can fix it? > > As for inverting (or finding the complement) and then hole filling, that > would be my preferred option. > However, I have not been able to invert the image. I tried numpy.invert, > adapting the last example from here: > http://docs.scipy.org/doc/numpy/reference/generated/numpy.invert.html > I tried something like this: > http://stackoverflow.com/a/16724700 > and this: > http://stackoverflow.com/a/2498909 > But none of these methods worked. Is there a way in scikit.image to do > that, and if not, do you have any suggestions? > Thank you, > Matteo > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. 
> To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefanv at berkeley.edu Wed Mar 25 21:57:27 2015
From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=)
Date: Wed, 25 Mar 2015 18:57:27 -0700
Subject: Fwd: Opening for Full-Time Postdoctoral Researchers
In-Reply-To: References:
Message-ID:

Please see below for a postdoc opportunity at my group.

---------- Forwarded message ----------

Full-Time Postdoctoral Researcher Positions Open

We are now accepting applications for full-time postdoctoral researchers at BIDS. Successful applicants will join our current cohort of fellows in helping make data analysis easier in the research sciences. We are looking for postdoctoral researchers with excellent credentials in their fields as well as strong interests in advancing data-analysis approaches with a community of like-minded individuals from across campus. Your data does not have to be "big data" for you to be eligible for the program. If you find your data-related problems to be at all unique and challenging, we welcome you to work with us. In particular, we are interested in applications from cross-disciplinary groups and from individuals exploring topics that may broaden the research diversity of our community (e.g., genetics, economics, and the Internet of Things). Each postdoctoral researcher will become part of and contribute to a growing ecosystem that brings together faculty, other postdoctoral scholars, students, staff, and alumni to form a strong network that assists researchers in advancing data-analysis methods and inquiry, expanding and building new software and analytics tools, sharing best practices, and more. To learn more about this position's requirements and responsibilities, please visit the application page.
In addition, if you know anyone who would be interested in this program, please feel free to forward this message along.

Twitter Website
Berkeley Institute for Data Science (BIDS) · 190 Doe Library · Berkeley, CA 94720 · USA
[image: Email Marketing Powered by MailChimp]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefanv at berkeley.edu Wed Mar 25 22:09:58 2015
From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=)
Date: Wed, 25 Mar 2015 19:09:58 -0700
Subject: Website designer volunteer
Message-ID:

Hi folks,

I know it's a bit of a long shot, but I'd like to find a volunteer to work on the layout and readability of our website. If you know of anyone interested in doing design work the same way we do software development, please let me know. They will have to be able to work with our current workflow, so good technical chops are a must.

Thanks,
Stéfan

From stefanv at berkeley.edu Wed Mar 25 22:12:23 2015
From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=)
Date: Wed, 25 Mar 2015 19:12:23 -0700
Subject: Digest for scikit-image@googlegroups.com - 2 updates in 2 topics
In-Reply-To: References: <90e6ba6138909244b20511fc75f7@google.com>
Message-ID:

On Wed, Mar 25, 2015 at 6:40 AM, Raphael Okoye wrote:
> Can one implement SIFT and SURF functionalities available in openCV
> in scikit-image? Thanks

Unfortunately SIFT and SURF are patent encumbered, so we cannot include them in scikit-image. That said, I have file readers available for the format output by the provided SIFT and SURF binaries. You can also use SIFT and SURF from OpenCV inter-operatively with scikit-image. We also have other kinds of similar feature detectors, such as CenSURE, implemented (see `skimage.feature`).
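To illustrate the CenSURE detector mentioned just above, here is a rough sketch using `skimage.feature.CENSURE` on a synthetic image. The blob layout and parameter values are made up for the example, and the exact signature may vary between scikit-image versions, so treat this as a starting point rather than a recipe:

```python
# Sketch: patent-free keypoint detection with CENSURE (skimage.feature).
# A synthetic grayscale image is used so no sample data is required.
import numpy as np
from skimage.feature import CENSURE

# build a test image with four bright square "blobs"
image = np.zeros((200, 200))
for r, c in [(40, 40), (40, 140), (140, 40), (140, 140)]:
    image[r:r + 20, c:c + 20] = 1.0

detector = CENSURE(min_scale=1, max_scale=6, mode='DoB')
detector.detect(image)

# keypoints is an (N, 2) array of (row, col) coordinates;
# scales holds the detection scale of each keypoint
print(detector.keypoints.shape)
```

The same `detect` / `keypoints` pattern applies to real images after converting them to 2-D grayscale.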
Stéfan

From raphael at aims.ac.za Wed Mar 25 09:40:13 2015
From: raphael at aims.ac.za (Raphael Okoye)
Date: Wed, 25 Mar 2015 22:40:13 +0900
Subject: Digest for scikit-image@googlegroups.com - 2 updates in 2 topics
In-Reply-To: <90e6ba6138909244b20511fc75f7@google.com>
References: <90e6ba6138909244b20511fc75f7@google.com>
Message-ID:

Hi guys,

Can one implement SIFT and SURF functionalities available in openCV in scikit-image? Thanks

Raphael

On 24/03/2015, scikit-image at googlegroups.com wrote:
> =============================================================================
> Today's topic summary
> =============================================================================
>
> Group: scikit-image at googlegroups.com
> URL: https://groups.google.com/forum/?utm_source=digest&utm_medium=email#!forum/scikit-image/topics
>
> - Dynamic intensity scaling for CollectionViewer [1 Update]
> http://groups.google.com/group/scikit-image/t/8aee8a915f035440
> - GSoC: Rewriting scipy.ndimage in Cython [1 Update]
> http://groups.google.com/group/scikit-image/t/5c1c936b30cba4d4
>
> =============================================================================
> Topic: Dynamic intensity scaling for CollectionViewer
> URL: http://groups.google.com/group/scikit-image/t/8aee8a915f035440
> =============================================================================
>
> ---------- 1 of 1 ----------
> From: Ferdinand Greiss
> Date: Mar 23 08:15AM -0700
> URL: http://groups.google.com/group/scikit-image/msg/1d6c8e510adc4334
>
> Hello,
>
> Thanks for your amazing project on image processing.
>
> I was wondering whether it would be straightforward to implement automatic
> rescaling of intensity (or the y axis on the LineProfile widget) in order to
> account for bleaching in image sequences. Otherwise the line plot will
> vanish below the pre-computed value of the first image and I won't be able
> to see much after the first few images.
>
> Thanks for any help or/and suggestions.
> Ferdinand
>
> =============================================================================
> Topic: GSoC: Rewriting scipy.ndimage in Cython
> URL: http://groups.google.com/group/scikit-image/t/5c1c936b30cba4d4
> =============================================================================
>
> ---------- 1 of 1 ----------
> From: "Stéfan van der Walt"
> Date: Mar 23 01:31AM -0700
> URL: http://groups.google.com/group/scikit-image/msg/cbacf3d255b16d93
>
> Hi folks,
>
>> @all: it's maybe good to know that Aman has already submitted 5 PRs to
>> Scipy (4 small ones merged, 1 larger one for which the bottleneck is on our
>> side):
>> https://github.com/scipy/scipy/pulls?q=is%3Apr+author%3Abewithaman+is%3Aclosed
>
> I wasn't aware--thanks for the heads-up!
>
> Stéfan
>
> --
> You have received this digest because you're subscribed to updates for this
> group. You can change your settings on the group membership page:
> https://groups.google.com/forum/?utm_source=digest&utm_medium=email#!forum/scikit-image/join
> To unsubscribe from this group and stop receiving emails from it, send an
> email to scikit-image+unsubscribe at googlegroups.com.

From warmspringwinds at gmail.com Thu Mar 26 07:42:41 2015
From: warmspringwinds at gmail.com (Daniil Pakhomov)
Date: Thu, 26 Mar 2015 04:42:41 -0700 (PDT)
Subject: Face detection
In-Reply-To: References:
Message-ID: <5a3dc76e-b3c4-4d61-a539-9aa7080907b6@googlegroups.com>

Hello, Stefan. Could I ask you to review my proposal, please? Because I have a little confusion about how we can avoid the patent. Thank you.

On Thursday, March 28, 2013 at 10:35:31 AM UTC+1, Stefan van der Walt wrote:
> Hi everyone
>
> I've been interested in getting face detection into skimage for a
> while. This morning, Nathan Faggian reminded me that the highly
> popular Viola-Jones detector is patent encumbered (yes, if you're not
> careful you can use patented code in packages like OpenCV).
However,
> the following link seems to suggest that we can work around that by
> training our own classifier with different features:
>
> http://rafaelmizrahi.blogspot.com/2007/02/intel-opencv-face-detection-license.html
>
> If there's any interest in working on this, or if you already have an
> algorithm available, please get in touch.
>
> Stéfan

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From warmspringwinds at gmail.com Thu Mar 26 07:43:09 2015
From: warmspringwinds at gmail.com (Daniil Pakhomov)
Date: Thu, 26 Mar 2015 04:43:09 -0700 (PDT)
Subject: Face detection
In-Reply-To: References:
Message-ID: <9eb53b2f-8cfd-4639-8a74-20cce86923df@googlegroups.com>

Here it is: https://docs.google.com/document/d/19omiz31ewE-oWcveSCq7hSFWRsnRPjF39np6yL_PqYQ/edit?usp=sharing

On Thursday, March 28, 2013 at 10:35:31 AM UTC+1, Stefan van der Walt wrote:
> Hi everyone
>
> I've been interested in getting face detection into skimage for a
> while. This morning, Nathan Faggian reminded me that the highly
> popular Viola-Jones detector is patent encumbered (yes, if you're not
> careful you can use patented code in packages like OpenCV). However,
> the following link seems to suggest that we can work around that by
> training our own classifier with different features:
>
> http://rafaelmizrahi.blogspot.com/2007/02/intel-opencv-face-detection-license.html
>
> If there's any interest in working on this, or if you already have an
> algorithm available, please get in touch.
>
> Stéfan

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From google at terre-adelie.org Thu Mar 26 02:28:12 2015
From: google at terre-adelie.org (=?ISO-8859-1?Q?J=E9r=F4me?= Kieffer)
Date: Thu, 26 Mar 2015 07:28:12 +0100
Subject: Digest for scikit-image@googlegroups.com - 2 updates in 2 topics
In-Reply-To: References: <90e6ba6138909244b20511fc75f7@google.com>
Message-ID: <20150326072812.4709e189aac8df67ce9dc3fd@terre-adelie.org>

On Wed, 25 Mar 2015 22:40:13 +0900 Raphael Okoye wrote:
> hi guys,
>
> Can one implement SIFT and SURF functionalities available in openCV
> in scikit-image? Thanks

Hi,

I worked a bit on it ... Here are Python wrappers to C++ libraries (included): https://github.com/kif/imageAlignment
This is the SIFT version on GPU (OpenCL, so it also runs multithreaded on CPU): https://github.com/kif/sift_pyocl
The patent issue mentioned by Stefan applies even if the code is GPL or MIT...

HTH,
--
Jérôme Kieffer

From ug201310004 at iitj.ac.in Thu Mar 26 15:40:56 2015
From: ug201310004 at iitj.ac.in (AMAN singh)
Date: Thu, 26 Mar 2015 12:40:56 -0700 (PDT)
Subject: GSoC: Rewriting scipy.ndimage in Cython
In-Reply-To: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com>
References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com>
Message-ID: <3e5a8631-5918-4602-a341-1c835bbf5299@googlegroups.com>

Thank you everyone for your insightful comments. I have tried to incorporate your suggestions in the proposal. Kindly have a look at the new proposal here and suggest improvements. Thanks once again.

Regards,
Aman Singh

On Tuesday, March 10, 2015 at 6:54:06 AM UTC+5:30, AMAN singh wrote:
> Hi developers
>
> My name is Aman Singh and I am currently a second year undergraduate
> student of the Computer Science department at the Indian Institute of Technology,
> Jodhpur. I want to participate in GSoC'15 and the project I am aiming for
> is *porting scipy.ndimage to cython*. I have been following scipy for the
> last few months and have also made some contributions.
I came across this > project on their GSoC'15 ideas' page and found it interesting. > I have done some research in the last week on my part. I am going through > Cython documentation, scipy lecture on github and Richard's work of GSoC' > 14 in which he ported cluster package to cython. While going through the > module scipy.ndimage I also found that Thouis Jones had already ported a > function ndimage.label() to cython. I can use that as a reference for > the rest of the project. > > Please tell me whether I am on right track or not. If you can suggest me > some resources which will be helpful to me in understanding the project, I > would be highly obliged. Also, I would like to know that how much part of > ndimage is to be ported under this project since it is a big module. > Kindly provide me some suggestions and guide me through this. > > Regards, > > Aman Singh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From claiborne.morton at gmail.com Thu Mar 26 16:42:14 2015 From: claiborne.morton at gmail.com (Claiborne Morton) Date: Thu, 26 Mar 2015 13:42:14 -0700 (PDT) Subject: Water-shedding non-circular particles In-Reply-To: References: Message-ID: <7c637b51-4f34-42de-8e44-762faee45173@googlegroups.com> Hey thanks for all the help, here is the original image. Also I am removing the smallest particles later on in the process using a function that does the removal based on the average size of healthy (highly circular) cells, which is why I had not removed them in the images I have already posted. On Thursday, March 19, 2015 at 6:07:58 PM UTC-4, Claiborne Morton wrote: > > Hey guys, Im still having trouble finding ways to separate touching > particles if the are not both circular. Further when dealing with > elliptical shapes, a single particle tends to incorrectly get cut in half. > Any ideas how I could change parameters in the water-shedding function to > correct for this? Attached are a few problem cases so you can see examples. 
> > Thanks, > Clay > [image: Inline image 1] > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image_2.png Type: image/png Size: 2153000 bytes Desc: not available URL: From claiborne.morton at gmail.com Thu Mar 26 16:45:54 2015 From: claiborne.morton at gmail.com (Claiborne Morton) Date: Thu, 26 Mar 2015 13:45:54 -0700 (PDT) Subject: Water-shedding non-circular particles In-Reply-To: References: Message-ID: <5576f949-1c76-4272-991d-1a2195ea7554@googlegroups.com> Also Juan, When you say, "There have been some recent developments on that function," are you referring to the peak_local_max() function? On Thursday, March 19, 2015 at 6:07:58 PM UTC-4, Claiborne Morton wrote: > > Hey guys, Im still having trouble finding ways to separate touching > particles if the are not both circular. Further when dealing with > elliptical shapes, a single particle tends to incorrectly get cut in half. > Any ideas how I could change parameters in the water-shedding function to > correct for this? Attached are a few problem cases so you can see examples. > > Thanks, > Clay > [image: Inline image 1] > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matteo.niccoli at gmail.com Thu Mar 26 18:09:55 2015 From: matteo.niccoli at gmail.com (Matteo) Date: Thu, 26 Mar 2015 15:09:55 -0700 (PDT) Subject: Issue with morphological filters In-Reply-To: <1427329781457.b04670a1@Nodemailer> References: <1427329781457.b04670a1@Nodemailer> Message-ID: <0d584f88-e74a-461b-8654-e99739bdfbf3@googlegroups.com> Hello Juan Thanks so much for your suggestions. 
Once I converted the image as you suggested:

# import back image
cfthdr = io.imread('filled_contour_THDR.png')
cfthdr = color.rgb2gray(cfthdr) > 0.5

I get good results with opening:

# clean it up with opening
selem17 = disk(17)
opened_thdr = opening(cfthdr, selem17)/255

# plot it
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(1, 1, 1)
ax.set_xticks([])
ax.set_yticks([])
plt.imshow(opened_thdr, cmap='bone')
plt.show()
# not bad

With remove_small_objects the advantage is that it does not join blobs in the original:

cfthdr_inv = ~cfthdr
test = remove_small_objects(cfthdr, 10000)

# plot it
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(1, 1, 1)
ax.set_xticks([])
ax.set_yticks([])
plt.imshow(test, cmap='bone')
plt.show()

but with reconstruction done like this:

# filling holes with morphological reconstruction
seed = np.copy(cfthdr_inv)
seed[1:-1, 1:-1] = cfthdr_inv.max()
mask = cfthdr_inv
filled = reconstruction(seed, mask, method='erosion')

# plot it
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(1, 1, 1)
ax.set_xticks([])
ax.set_yticks([])
plt.imshow(filled, cmap='bone', vmin=cfthdr_inv.min(), vmax=cfthdr_inv.max())
plt.show()

I get a completely white image. Do you have any suggestions as to why?

Thanks again.
Cheers,
Matteo

On Wednesday, March 25, 2015 at 6:29:43 PM UTC-6, Juan Nunez-Iglesias wrote:
> Hi Matteo,
>
> My guess is that even though you are looking at a "black and white" image,
> the png is actually an RGB png. Just check the output of
> "print(cfthdr.shape)". Should be straightforward to make it a binary image:
>
> from skimage import color
> cfthdr = color.rgb2gray(cfthdr) > 0.5
>
> Then you should have a binary image. (And inverting should be as simple as
> "cfthdr_inv = ~cfthdr") We have morphology.binary_fill_holes to do what you
> want.
>
> btw, there's also morphology.remove_small_objects, which does exactly what
> you did but as a function call.
Finally, it looks like you are not using > the latest version of scikit-image (0.11), so you might want to upgrade. > > Hope that helps! > > Juan. > > > > > On Thu, Mar 26, 2015 at 8:48 AM, Matteo > wrote: > >> *Issues with morphological filters when trying to remove white holes >> in black objects in a binary images. Using opening or filling holes on >> inverted (or complement) of the original binary.* >> >> Hi there >> >> I have a series of derivatives calculated on geophysical data. >> >> Many of these derivatives have nice continuous maxima, so I treat them as >> images on which I do some cleanup with morphological filter. >> >> Here's one example of operations that I do routinely, and successfully: >> >> # threshold theta map using Otsu method >> >> thresh_th = threshold_otsu(theta) >> >> binary_th = theta > thresh_th >> >> # clean up small objects >> >> label_objects_th, nb_labels_th = sp.ndimage.label(binary_th) >> >> sizes_th = np.bincount(label_objects_th.ravel()) >> >> mask_sizes_th = sizes_th > 175 >> >> mask_sizes_th[0] = 0 >> >> binary_cleaned_th = mask_sizes_th[label_objects_th] >> >> # further enhance with morphological closing (dilation followed by an >> erosion) to remove small dark spots and connect small bright cracks >> >> # followed by an extra erosion >> >> selem = disk(1) >> >> closed_th = closing(binary_cleaned_th, selem)/255 >> >> eroded_th = erosion(closed_th, selem)/255 >> >> # Finally, extract lienaments using skeletonization >> >> skeleton_th=skeletonize(binary_th) >> >> skeleton_cleaned_th=skeletonize(binary_cleaned_th) >> >> # plot to compare >> >> fig = plt.figure(figsize=(20, 7)) >> >> ax = fig.add_subplot(1, 2, 1) >> >> imshow(skeleton_th, cmap='bone_r', interpolation='none') >> >> ax2 = fig.add_subplot(1, 3, 2) >> >> imshow(skeleton_cleaned_th, cmap='bone_r', interpolation='none') >> >> ax.set_xticks([]) >> >> ax.set_yticks([]) >> >> ax2.set_xticks([]) >> ax2.set_yticks([]) >> >> Unfortunately I cannot share the data as it is 
proprietary, but I will >> for the next example, which is the one that does not work. >> >> There's one derivative that shows lots of detail but not continuous >> maxima. As a workaround I created filled contours in Matplotlib >> >> exported as an image. The image is attached. >> >> Now I want to import back the image and plot it to test: >> >> # import back image >> >> cfthdr=io.imread('filled_contour.png') >> >> # threshold using using Otsu method >> >> thresh_thdr = threshold_otsu(cfthdr) >> >> binary_thdr = cfthdr > thresh_thdr >> >> # plot it >> >> fig = plt.figure(figsize=(5, 5)) >> >> ax = fig.add_subplot(1, 1, 1) >> >> ax.set_xticks([]) >> >> ax.set_yticks([]) >> >> plt.imshow(binary_thdr, cmap='bone') >> >> plt.show() >> >> The above works without issues. >> >> >> >> Next I want to fill the white holes inside the black blobs. I thought of >> 2 strategies. >> >> The first would be to use opening; the second to invert the image, and >> then fill the holes as in here: >> >> http://scikit-image.org/docs/dev/auto_examples/plot_holes_and_peaks.html >> >> By the way, I found a similar example for opencv here >> >> >> http://stackoverflow.com/questions/10316057/filling-holes-inside-a-binary-object >> >> Let's start with opening. 
When I try: >> >> selem = disk(1) >> >> opened_thdr = opening(binary_thdr, selem) >> >> or: >> >> selem = disk(1) >> >> opened_thdr = opening(cfthdr, selem) >> >> I get an error message like this: >> >> --------------------------------------------------------------------------- >> >> >> ValueError Traceback (most recent call >> last) >> >> in () >> >> 1 #binary_thdr=img_as_float(binary_thdr,force_copy=False) >> >> ----> 2 opened_thdr = opening(binary_thdr, selem)/255 >> >> 3 >> >> 4 # plot it >> >> 5 fig = plt.figure(figsize=(5, 5)) >> >> C:\...\skimage\morphology\grey.pyc in opening(image, selem, out) >> >> 160 shift_y = True if (h % 2) == 0 else False >> >> 161 >> >> --> 162 eroded = erosion(image, selem) >> >> 163 out = dilation(eroded, selem, out=out, shift_x=shift_x, >> shift_y=shift_y) >> >> 164 return out >> >> C:\...\skimage\morphology\grey.pyc in erosion(image, selem, out, shift_x, >> shift_y) >> >> 58 selem = img_as_ubyte(selem) >> >> 59 return cmorph._erode(image, selem, out=out, >> >> ---> 60 shift_x=shift_x, shift_y=shift_y) >> >> 61 >> >> 62 >> >> C:\...\skimage\morphology\cmorph.pyd in skimage.morphology.cmorph._erode >> (skimage\morphology\cmorph.c:2658)() >> >> ValueError: Buffer has wrong number of dimensions (expected 2, got 3) >> >> --------------------------------------------------------------------------- >> >> >> Any idea of what is going on and how I can fix it? >> >> >> >> As for inverting (or finding the complement) and then hole filling, that >> would be my preferred option. >> >> However, I have not been able to invert the image. I tried numpy.invert, >> adapting the last example from here: >> >> http://docs.scipy.org/doc/numpy/reference/generated/numpy.invert.html >> >> I tried something like this: >> >> http://stackoverflow.com/a/16724700 >> >> and this: >> >> http://stackoverflow.com/a/2498909 >> >> But none of these methods worked. Is there a way in scikit.image to do >> that, and if not, do you have any suggestions? 
>> >> Thank you, >> >> Matteo >> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image... at googlegroups.com . >> For more options, visit https://groups.google.com/d/optout. >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: filled_contours_opening.png Type: image/png Size: 14599 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: filled_contours_removed_small_objects.png Type: image/png Size: 14472 bytes Desc: not available URL: From hughesadam87 at gmail.com Thu Mar 26 19:16:12 2015 From: hughesadam87 at gmail.com (Adam Hughes) Date: Thu, 26 Mar 2015 16:16:12 -0700 (PDT) Subject: The right way to access red channel in Lena Message-ID: <761e6902-b10e-45d7-951f-d5ffe143975c@googlegroups.com> I'm trying to build a filter to show only the red channel in the lena image. I defined two masks: (1,0,0) (255,0,0) Oddly, the (255,0,0) gives me the correct plot when doing imshow(), but (1,0,0) doesn't. Why does the (1,0,0) mask lead to light regions being dark and dark regions being light? Here's a working example: %pylab inline from skimage.data import lena lena = lena() f, (ax1, ax2) = plt.subplots(1,2, figsize=(8,6)) ax1.imshow(lena*(1,0,0)) ax2.imshow(lena*(255,0,0)) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From silvertrumpet999 at gmail.com Thu Mar 26 19:29:34 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Thu, 26 Mar 2015 16:29:34 -0700 (PDT) Subject: The right way to access red channel in Lena In-Reply-To: <761e6902-b10e-45d7-951f-d5ffe143975c@googlegroups.com> References: <761e6902-b10e-45d7-951f-d5ffe143975c@googlegroups.com> Message-ID: <6efe8a8d-b892-4088-b3c0-97c9dddcd4dd@googlegroups.com> Hi Adam, You can slice out just the red channel with `lena[..., 0]`, which is equivalent to `lena[:, :, 0]`. The result will be a rank 2 array representing the red channel. This ability is made possible by NumPy. From hughesadam87 at gmail.com Thu Mar 26 19:31:58 2015 From: hughesadam87 at gmail.com (Adam Hughes) Date: Thu, 26 Mar 2015 16:31:58 -0700 (PDT) Subject: The right way to access red channel in Lena In-Reply-To: <761e6902-b10e-45d7-951f-d5ffe143975c@googlegroups.com> References: <761e6902-b10e-45d7-951f-d5ffe143975c@googlegroups.com> Message-ID: I don't think the image came through. Let me attach it On Thursday, March 26, 2015 at 7:16:12 PM UTC-4, Adam Hughes wrote: > > I'm trying to build a filter to show only the red channel in the lena > image. I defined two masks: > > (1,0,0) > (255,0,0) > > Oddly, the (255,0,0) gives me the correct plot when doing imshow(), but > (1,0,0) doesn't. Why does the (1,0,0) mask lead to light regions being > dark and dark regions being light? Here's a working example: > > %pylab inline > from skimage.data import lena > > lena = lena() > > f, (ax1, ax2) = plt.subplots(1,2, figsize=(8,6)) > > ax1.imshow(lena*(1,0,0)) > ax2.imshow(lena*(255,0,0)) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: red_lena.png
Type: image/png
Size: 99444 bytes
Desc: not available
URL: 

From stefanv at berkeley.edu Thu Mar 26 20:38:55 2015
From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=)
Date: Thu, 26 Mar 2015 17:38:55 -0700
Subject: The right way to access red channel in Lena
In-Reply-To: 
References: <761e6902-b10e-45d7-951f-d5ffe143975c@googlegroups.com>
Message-ID: 

Hi Adam

On Thu, Mar 26, 2015 at 4:31 PM, Adam Hughes wrote:

> ax1.imshow(lena*(1,0,0))
> ax2.imshow(lena*(255,0,0))
>

The multiplication turns your image's dtype into uint64, which causes the scaling problems.

I prefer to work with float images, always ensuring that they are in [0, 1] and then specifying vmin=0 and vmax=1 to matplotlib.

Regards
Stéfan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hughesadam87 at gmail.com Thu Mar 26 20:27:01 2015
From: hughesadam87 at gmail.com (Adam Hughes)
Date: Thu, 26 Mar 2015 20:27:01 -0400
Subject: The right way to access red channel in Lena
In-Reply-To: 
References: <761e6902-b10e-45d7-951f-d5ffe143975c@googlegroups.com>
Message-ID: 

Thanks Josh. I actually would normally do it that way, but I stumbled on this behavior when we were trying to build filters. For example, scale the red channel by 50% via multiplying by (0.5, 1, 1). Just curious what's going on, and why multiplication by (1,0,0) doesn't work.

On Thu, Mar 26, 2015 at 7:31 PM, Adam Hughes wrote:

> I don't think the image came through. Let me attach it
>
> On Thursday, March 26, 2015 at 7:16:12 PM UTC-4, Adam Hughes wrote:
>>
>> I'm trying to build a filter to show only the red channel in the lena
>> image. I defined two masks:
>>
>> (1,0,0)
>> (255,0,0)
>>
>> Oddly, the (255,0,0) gives me the correct plot when doing imshow(), but
>> (1,0,0) doesn't. Why does the (1,0,0) mask lead to light regions being
>> dark and dark regions being light?
Here's a working example:
>>
>> %pylab inline
>> from skimage.data import lena
>>
>> lena = lena()
>>
>> f, (ax1, ax2) = plt.subplots(1,2, figsize=(8,6))
>>
>> ax1.imshow(lena*(1,0,0))
>> ax2.imshow(lena*(255,0,0))
>>
>> --
> You received this message because you are subscribed to a topic in the
> Google Groups "scikit-image" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/scikit-image/JzmfEbBJKYU/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> scikit-image+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hughesadam87 at gmail.com Thu Mar 26 20:40:35 2015
From: hughesadam87 at gmail.com (Adam Hughes)
Date: Thu, 26 Mar 2015 20:40:35 -0400
Subject: The right way to access red channel in Lena
In-Reply-To: 
References: <761e6902-b10e-45d7-951f-d5ffe143975c@googlegroups.com>
Message-ID: 

Thanks!

On Thu, Mar 26, 2015 at 8:38 PM, Stéfan van der Walt wrote:

> Hi Adam
>
> On Thu, Mar 26, 2015 at 4:31 PM, Adam Hughes
> wrote:
>
>> ax1.imshow(lena*(1,0,0))
>> ax2.imshow(lena*(255,0,0))
>>
>
> The multiplication turns your image's dtype into uint64, which causes the
> scaling problems.
>
> I prefer to work with float images, always ensuring that they are in [0, 1]
> and then specifying vmin=0 and vmax=1 to matplotlib.
>
> Regards
> Stéfan
>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "scikit-image" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/scikit-image/JzmfEbBJKYU/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> scikit-image+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
-------------- next part --------------
An HTML attachment was scrubbed...
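[Editor's note] The dtype promotion Stéfan describes can be reproduced without the lena image. Below is a minimal NumPy-only sketch; the random array is a made-up stand-in for an RGB uint8 image, and the exact promoted dtype can vary by platform:

```python
import numpy as np

# Stand-in for an RGB uint8 image such as lena()
rgb = np.random.randint(0, 256, size=(4, 4, 3)).astype(np.uint8)

# Multiplying by a Python tuple promotes the result to a wide integer type,
# so matplotlib no longer treats it as a [0, 255] uint8 image.
masked = rgb * (1, 0, 0)
print(masked.dtype)  # a wide integer type (e.g. int64), not uint8

# Working in float and keeping values in [0, 1] avoids the surprise;
# this also covers Adam's channel-scaling filter idea:
scaled = (rgb / 255.0) * (0.5, 1.0, 1.0)  # damp the red channel by 50%
print(scaled.dtype)  # float64, values still within [0, 1]
```

With real images, converting to float first (e.g. via skimage's img_as_float) and passing vmin=0, vmax=1 to imshow keeps the display scaling fixed, as suggested above.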
URL: From jni.soma at gmail.com Fri Mar 27 01:14:04 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 26 Mar 2015 22:14:04 -0700 (PDT) Subject: Issue with morphological filters In-Reply-To: <0d584f88-e74a-461b-8654-e99739bdfbf3@googlegroups.com> References: <0d584f88-e74a-461b-8654-e99739bdfbf3@googlegroups.com> Message-ID: <1427433243664.0ffc2fe5@Nodemailer> Hi Matteo, Can you try putting this notebook up as a gist and pasting a link to the notebook? It's hard for me to follow all of the steps (and the polarity of the image) without the images inline. Is it just the inverse of what you want? And anyway why aren't you just using ndimage's binary_fill_holes? https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.morphology.binary_fill_holes.html Juan. On Fri, Mar 27, 2015 at 9:09 AM, Matteo wrote: > Hello Juan > Thanks so much for your suggestions. > Once I convertedthe image as you suggested: > # import back image > cfthdr=io.imread('filled_contour_THDR.png') > cfthdr = color.rgb2gray(cfthdr) > 0.5 > I get good results with opening: > # clean it up with opening > selem17 = disk(17) > opened_thdr = opening(cfthdr, selem17)/255 > # plot it > fig = plt.figure(figsize=(5, 5)) > ax = fig.add_subplot(1, 1, 1) > ax.set_xticks([]) > ax.set_yticks([]) > plt.imshow(opened_thdr,cmap='bone') > plt.show() > # not bad > With remove_small_objects the advantage is that it does not join blobs in > the original: > cfthdr_inv = ~cfthdr > test=remove_small_objects(cfthdr,10000) > # plot it > fig = plt.figure(figsize=(5, 5)) > ax = fig.add_subplot(1, 1, 1) > ax.set_xticks([]) > ax.set_yticks([]) > plt.imshow(test,cmap='bone') > plt.show() > but with reconstruction done as this: > # filling holes with morphological reconstruction > seed = np.copy(cfthdr_inv) > seed[1:-1, 1:-1] = cfthdr_inv.max() > mask = cfthdr_inv > filled = reconstruction(seed, mask, method='erosion') > # plot it > fig = plt.figure(figsize=(5, 5)) > ax = fig.add_subplot(1, 1, 1) > 
ax.set_xticks([]) > ax.set_yticks([]) > plt.imshow(filled,cmap='bone',vmin=cfthdr_inv.min(), vmax=cfthdr_inv.max()) > plt.show() > I get a completely white image. Do you have any suggestions as to why? > Thank again. Cheers, > Matteo > On Wednesday, March 25, 2015 at 6:29:43 PM UTC-6, Juan Nunez-Iglesias wrote: >> Hi Matteo, >> >> My guess is that even though you are looking at a "black and white" image, >> the png is actually an RGB png. Just check the output of >> "print(cfthdr.shape)". Should be straightforward to make it a binary image: >> >> from skimage import color >> cfthdr = color.rgb2gray(cfthdr) > 0.5 >> >> Then you should have a binary image. (And inverting should be as simple as >> "cfthdr_inv = ~cfthdr") We have morphology.binary_fill_holes to do what you >> want. >> >> btw, there's also morphology.remove_small_objects, which does exactly what >> you did but as a function call. Finally, it looks like you are not using >> the latest version of scikit-image (0.11), so you might want to upgrade. >> >> Hope that helps! >> >> Juan. >> >> >> >> >> On Thu, Mar 26, 2015 at 8:48 AM, Matteo > > wrote: >> >>> *Issues with morphological filters when trying to remove white holes >>> in black objects in a binary images. Using opening or filling holes on >>> inverted (or complement) of the original binary.* >>> >>> Hi there >>> >>> I have a series of derivatives calculated on geophysical data. >>> >>> Many of these derivatives have nice continuous maxima, so I treat them as >>> images on which I do some cleanup with morphological filter. 
>>> >>> Here's one example of operations that I do routinely, and successfully: >>> >>> # threshold theta map using Otsu method >>> >>> thresh_th = threshold_otsu(theta) >>> >>> binary_th = theta > thresh_th >>> >>> # clean up small objects >>> >>> label_objects_th, nb_labels_th = sp.ndimage.label(binary_th) >>> >>> sizes_th = np.bincount(label_objects_th.ravel()) >>> >>> mask_sizes_th = sizes_th > 175 >>> >>> mask_sizes_th[0] = 0 >>> >>> binary_cleaned_th = mask_sizes_th[label_objects_th] >>> >>> # further enhance with morphological closing (dilation followed by an >>> erosion) to remove small dark spots and connect small bright cracks >>> >>> # followed by an extra erosion >>> >>> selem = disk(1) >>> >>> closed_th = closing(binary_cleaned_th, selem)/255 >>> >>> eroded_th = erosion(closed_th, selem)/255 >>> >>> # Finally, extract lienaments using skeletonization >>> >>> skeleton_th=skeletonize(binary_th) >>> >>> skeleton_cleaned_th=skeletonize(binary_cleaned_th) >>> >>> # plot to compare >>> >>> fig = plt.figure(figsize=(20, 7)) >>> >>> ax = fig.add_subplot(1, 2, 1) >>> >>> imshow(skeleton_th, cmap='bone_r', interpolation='none') >>> >>> ax2 = fig.add_subplot(1, 3, 2) >>> >>> imshow(skeleton_cleaned_th, cmap='bone_r', interpolation='none') >>> >>> ax.set_xticks([]) >>> >>> ax.set_yticks([]) >>> >>> ax2.set_xticks([]) >>> ax2.set_yticks([]) >>> >>> Unfortunately I cannot share the data as it is proprietary, but I will >>> for the next example, which is the one that does not work. >>> >>> There's one derivative that shows lots of detail but not continuous >>> maxima. As a workaround I created filled contours in Matplotlib >>> >>> exported as an image. The image is attached. 
>>> >>> Now I want to import back the image and plot it to test: >>> >>> # import back image >>> >>> cfthdr=io.imread('filled_contour.png') >>> >>> # threshold using using Otsu method >>> >>> thresh_thdr = threshold_otsu(cfthdr) >>> >>> binary_thdr = cfthdr > thresh_thdr >>> >>> # plot it >>> >>> fig = plt.figure(figsize=(5, 5)) >>> >>> ax = fig.add_subplot(1, 1, 1) >>> >>> ax.set_xticks([]) >>> >>> ax.set_yticks([]) >>> >>> plt.imshow(binary_thdr, cmap='bone') >>> >>> plt.show() >>> >>> The above works without issues. >>> >>> >>> >>> Next I want to fill the white holes inside the black blobs. I thought of >>> 2 strategies. >>> >>> The first would be to use opening; the second to invert the image, and >>> then fill the holes as in here: >>> >>> http://scikit-image.org/docs/dev/auto_examples/plot_holes_and_peaks.html >>> >>> By the way, I found a similar example for opencv here >>> >>> >>> http://stackoverflow.com/questions/10316057/filling-holes-inside-a-binary-object >>> >>> Let's start with opening. 
When I try: >>> >>> selem = disk(1) >>> >>> opened_thdr = opening(binary_thdr, selem) >>> >>> or: >>> >>> selem = disk(1) >>> >>> opened_thdr = opening(cfthdr, selem) >>> >>> I get an error message like this: >>> >>> --------------------------------------------------------------------------- >>> >>> >>> ValueError Traceback (most recent call >>> last) >>> >>> in () >>> >>> 1 #binary_thdr=img_as_float(binary_thdr,force_copy=False) >>> >>> ----> 2 opened_thdr = opening(binary_thdr, selem)/255 >>> >>> 3 >>> >>> 4 # plot it >>> >>> 5 fig = plt.figure(figsize=(5, 5)) >>> >>> C:\...\skimage\morphology\grey.pyc in opening(image, selem, out) >>> >>> 160 shift_y = True if (h % 2) == 0 else False >>> >>> 161 >>> >>> --> 162 eroded = erosion(image, selem) >>> >>> 163 out = dilation(eroded, selem, out=out, shift_x=shift_x, >>> shift_y=shift_y) >>> >>> 164 return out >>> >>> C:\...\skimage\morphology\grey.pyc in erosion(image, selem, out, shift_x, >>> shift_y) >>> >>> 58 selem = img_as_ubyte(selem) >>> >>> 59 return cmorph._erode(image, selem, out=out, >>> >>> ---> 60 shift_x=shift_x, shift_y=shift_y) >>> >>> 61 >>> >>> 62 >>> >>> C:\...\skimage\morphology\cmorph.pyd in skimage.morphology.cmorph._erode >>> (skimage\morphology\cmorph.c:2658)() >>> >>> ValueError: Buffer has wrong number of dimensions (expected 2, got 3) >>> >>> --------------------------------------------------------------------------- >>> >>> >>> Any idea of what is going on and how I can fix it? >>> >>> >>> >>> As for inverting (or finding the complement) and then hole filling, that >>> would be my preferred option. >>> >>> However, I have not been able to invert the image. I tried numpy.invert, >>> adapting the last example from here: >>> >>> http://docs.scipy.org/doc/numpy/reference/generated/numpy.invert.html >>> >>> I tried something like this: >>> >>> http://stackoverflow.com/a/16724700 >>> >>> and this: >>> >>> http://stackoverflow.com/a/2498909 >>> >>> But none of these methods worked. 
Is there a way in scikit.image to do >>> that, and if not, do you have any suggestions? >>> >>> Thank you, >>> >>> Matteo >>> >>> -- >>> You received this message because you are subscribed to the Google Groups >>> "scikit-image" group. >>> To unsubscribe from this group and stop receiving emails from it, send an >>> email to scikit-image... at googlegroups.com . >>> For more options, visit https://groups.google.com/d/optout. >>> >> >> >> > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Fri Mar 27 01:20:29 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 26 Mar 2015 22:20:29 -0700 (PDT) Subject: Water-shedding non-circular particles In-Reply-To: <5576f949-1c76-4272-991d-1a2195ea7554@googlegroups.com> References: <5576f949-1c76-4272-991d-1a2195ea7554@googlegroups.com> Message-ID: <1427433629305.bf4d117f@Nodemailer> Yes, that's what I mean. I haven't used it so I can't offer guidance there but you might want to fiddle with the parameters / update your scikit-image to the github master version. Regarding your source images, it looks like there is a nice signal separating touching cells. You might want to run an edge detector (e.g. Ilastik) or even just morphologically opening your images prior to thresholding. Juan. On Fri, Mar 27, 2015 at 7:45 AM, Claiborne Morton wrote: > Also Juan, > When you say, "There have been some recent developments on that function," > are you referring to the peak_local_max() function? > On Thursday, March 19, 2015 at 6:07:58 PM UTC-4, Claiborne Morton wrote: >> >> Hey guys, Im still having trouble finding ways to separate touching >> particles if the are not both circular. 
Further when dealing with >> elliptical shapes, a single particle tends to incorrectly get cut in half. >> Any ideas how I could change parameters in the water-shedding function to >> correct for this? Attached are a few problem cases so you can see examples. >> >> Thanks, >> Clay >> [image: Inline image 1] >> >> >> > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaime.frio at gmail.com Fri Mar 27 10:04:10 2015 From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=) Date: Fri, 27 Mar 2015 07:04:10 -0700 Subject: GSoC: Rewriting scipy.ndimage in Cython In-Reply-To: References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com> <3e5a8631-5918-4602-a341-1c835bbf5299@googlegroups.com> Message-ID: On Fri, Mar 27, 2015 at 2:27 AM, Ralf Gommers wrote: > > > On Thu, Mar 26, 2015 at 8:40 PM, AMAN singh > wrote: > >> Thank you everyone for your insightful comments. >> I have tried to incorporate your suggestion in the proposal. Kindly have >> a look at the new proposal here >> >> and suggest the improvements. >> > > Hi Aman, this looks quite good to me. For the timeline I think it will > take longer to get the iterators right and shorter to port the last > functions at the end - once you get the hang of it you'll be able to do the > last ones quickly I expect. > That sounds about right. I think that breaking down the schedule to what function will be ported what week is little more than wishful thinking, and that keeping things at the file level would make more sense. But I think you are getting your proposal there. 
One idea that just crossed my mind: checking your implementation of the iterators and other stuff in support.c for correctness and performance is going to be an important part of the project. Perhaps it is a good idea to identify, either now or very early on the project, a few current ndimage top level functions that use each of those objects, if possible without interaction with the others, and build a sequence that could look something like (I am making this up in a hurry, so don't take the actual function names proposed too seriously, although they may actually make sense): Port NI_PointIterator -> Port NI_CenterOfMass, benchmark and test Port NI_LineBuffer -> Port NI_UniformFilter1D, benchmark and test ... This would very likely extend the time you will need to implement all the items in support.c. But by the time you were finished with that we would both have high confidence that things were going well, plus a "Rosetta Stone" that should make it a breeze to finish the job, both for you and anyone else. We would also have an intermediate milestone (everything in support ported plus a working example of each being used, with correctness and performance verified), that would be a worthy deliverable on its own: if we are terribly miscalculating task duration, and everything slips and is delayed, getting there could still be considered a success, since it would make finishing the job for others much, much simpler. One little concern of mine, and the questions don't really go to Aman, but to the scipy devs: the Cython docs on fused types have a big fat warning at the top on support still being experimental. Also, this is going to bump the version requirements for Cython to a very recent one. Are we OK with this? Similarly, you suggest using Cython's prange to parallelize computations. I haven't seen OpenMP used anywhere in NumPy or SciPy, and have the feeling that parallel implementations are left out on purpose. 
Am I right, or would parallelizing where possible be OK?

Jaime

--
(\__/)
( O.o)
( > <)
This is Bunny. Copy Bunny into your signature and help him with his plans for world domination.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matteo.niccoli at gmail.com Fri Mar 27 11:32:39 2015
From: matteo.niccoli at gmail.com (Matteo)
Date: Fri, 27 Mar 2015 08:32:39 -0700 (PDT)
Subject: Issue with morphological filters
In-Reply-To: <1427433243664.0ffc2fe5@Nodemailer>
References: <0d584f88-e74a-461b-8654-e99739bdfbf3@googlegroups.com> <1427433243664.0ffc2fe5@Nodemailer>
Message-ID: 

Hello Juan

Here it is:
http://nbviewer.ipython.org/urls/dl.dropbox.com/s/ancfxe2gx1fbyyp/morphology_test.ipynb?dl=0

I get the same odd results with both ndimage's binary_fill_holes and reconstruction. Is it because of the structuring elements/masks?

Thanks for your help.
Matteo

On Thursday, March 26, 2015 at 11:14:05 PM UTC-6, Juan Nunez-Iglesias wrote:
> Hi Matteo,
>
> Can you try putting this notebook up as a gist and pasting a link to the
> notebook? It's hard for me to follow all of the steps (and the polarity of
> the image) without the images inline. Is it just the inverse of what you
> want? And anyway why aren't you just using ndimage's binary_fill_holes?
>
> https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.morphology.binary_fill_holes.html
>
> Juan.
>
> On Fri, Mar 27, 2015 at 9:09 AM, Matteo
> wrote:
>
> Hello Juan
>
> Thanks so much for your suggestions.
> Once I convertedthe image as you suggested: > # import back image > cfthdr=io.imread('filled_contour_THDR.png') > cfthdr = color.rgb2gray(cfthdr) > 0.5 > > I get good results with opening: > # clean it up with opening > selem17 = disk(17) > opened_thdr = opening(cfthdr, selem17)/255 > # plot it > fig = plt.figure(figsize=(5, 5)) > ax = fig.add_subplot(1, 1, 1) > ax.set_xticks([]) > ax.set_yticks([]) > plt.imshow(opened_thdr,cmap='bone') > plt.show() > # not bad > > > With remove_small_objects the advantage is that it does not join blobs in > the original: > cfthdr_inv = ~cfthdr > test=remove_small_objects(cfthdr,10000) > # plot it > fig = plt.figure(figsize=(5, 5)) > ax = fig.add_subplot(1, 1, 1) > ax.set_xticks([]) > ax.set_yticks([]) > plt.imshow(test,cmap='bone') > plt.show() > > > but with reconstruction done as this: > # filling holes with morphological reconstruction > seed = np.copy(cfthdr_inv) > seed[1:-1, 1:-1] = cfthdr_inv.max() > mask = cfthdr_inv > filled = reconstruction(seed, mask, method='erosion') > # plot it > fig = plt.figure(figsize=(5, 5)) > ax = fig.add_subplot(1, 1, 1) > ax.set_xticks([]) > ax.set_yticks([]) > plt.imshow(filled,cmap='bone',vmin=cfthdr_inv.min(), vmax=cfthdr_inv.max > ()) > plt.show() > > I get a completely white image. Do you have any suggestions as to why? > > Thank again. Cheers, > Matteo > > > On Wednesday, March 25, 2015 at 6:29:43 PM UTC-6, Juan Nunez-Iglesias > wrote: > > Hi Matteo, > > My guess is that even though you are looking at a "black and white" image, > the png is actually an RGB png. Just check the output of > "print(cfthdr.shape)". Should be straightforward to make it a binary image: > > from skimage import color > cfthdr = color.rgb2gray(cfthdr) > 0.5 > > Then you should have a binary image. (And inverting should be as simple as > "cfthdr_inv = ~cfthdr") We have morphology.binary_fill_holes to do what you > want. 
> > btw, there's also morphology.remove_small_objects, which does exactly what > you did but as a function call. Finally, it looks like you are not using > the latest version of scikit-image (0.11), so you might want to upgrade. > > Hope that helps! > > Juan. > > > > > On Thu, Mar 26, 2015 at 8:48 AM, Matteo wrote: > > *Issues with morphological filters when trying to remove white holes in > black objects in a binary images. Using opening or filling holes on > inverted (or complement) of the original binary.* > > Hi there > > I have a series of derivatives calculated on geophysical data. > > Many of these derivatives have nice continuous maxima, so I treat them as > images on which I do some cleanup with morphological filter. > > Here's one example of operations that I do routinely, and successfully: > > # threshold theta map using Otsu method > > thresh_th = threshold_otsu(theta) > > binary_th = theta > thresh_th > > # clean up small objects > > label_objects_th, nb_labels_th = sp.ndimage.label(binary_th) > > sizes_th = np.bincount(label_objects_th.ravel()) > > mask_sizes_th = sizes_th > 175 > > mask_sizes_th[0] = 0 > > binary_cleaned_th = mask_sizes_th[label_objects_th] > > # further enhance with morphological closing (dilation followed by an > erosion) to remove small dark spots and connect small bright cracks > > # followed by an extra erosion > > selem = disk(1) > > closed_th = closing(binary_cleaned_th, selem)/255 > > eroded_th = erosion(closed_th, selem)/255 > > # Finally, extract lienaments using skeletonization > > skeleton_th=skeletonize(binary_th) > > skeleton_cleaned_th=skeletonize(binary_cleaned_th) > > # plot to compare > > fig = plt.figure(figsize=(20, 7)) > > ax = fig.add_subplot(1, 2, 1) > > imshow(skeleton_th, cmap='bone_r', interpolation='none') > > ax2 = fig.add_subplot(1, 3, 2) > > imshow(skeleton_cleaned_th, cmap='bone_r', interpolation='none') > > ax.set_xticks([]) > > ax.set_yticks([]) > > ax2.set_xticks([]) > ax2.set_yticks([]) > > 
Unfortunately I cannot share the data as it is proprietary, but I will > for the next example, which is the one that does not work. > > There's one derivative that shows lots of detail but not continuous > maxima. As a workaround I created filled contours in Matplotlib > > exported as an image. The image is attached. > > Now I want to import back the image and plot it to test: > > # import back image > > cfthdr=io.imread('filled_contour.png') > > # threshold using using Otsu method > > thresh_thdr = threshold_otsu(cfthdr) > > binary_thdr = cfthdr > thresh_thdr > > # plot it > > fig = plt.figure(figsize=(5, 5)) > > ax = fig.add_subplot(1, 1, 1) > > ax.set_xticks([]) > > ax.set_yticks([]) > > plt.imshow(binary_thdr, cmap='bone') > > plt.show() > > The above works without issues. > > > > Next I want to fill the white holes inside the black blobs. I thought of 2 > strategies. > > The first would be to use opening; the second to invert the image, and > then fill the holes as in here: > > http://scikit-image.org/docs/dev/auto_examples/plot_holes_and_peaks.html > > By the way, I found a similar example for opencv here > > > http://stackoverflow.com/questions/10316057/filling-holes-inside-a-binary-object > > Let's start with opening. When I try: > > selem = disk(1) > > opened_thdr = opening(binary_thdr, selem) > > or: > > selem = disk(1) > > opened_thdr = opening(cfthdr, selem) > > I get an error message like this: > > --------------------------------------------------------------------------- > > > ValueError Traceback (most recent call > last) > > in () > > 1 #binary_thdr=img_as_float(binary_thdr,force_copy=False) > > ----> 2 opened_thdr = opening(binary_thdr, selem)/255 > > 3 > > ... -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Fri Mar 27 05:27:21 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 27 Mar 2015 10:27:21 +0100 Subject: GSoC: Rewriting scipy.ndimage in Cython In-Reply-To: <3e5a8631-5918-4602-a341-1c835bbf5299@googlegroups.com> References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com> <3e5a8631-5918-4602-a341-1c835bbf5299@googlegroups.com> Message-ID: On Thu, Mar 26, 2015 at 8:40 PM, AMAN singh wrote: > Thank you everyone for your insightful comments. > I have tried to incorporate your suggestion in the proposal. Kindly have > a look at the new proposal here > > and suggest the improvements. > Hi Aman, this looks quite good to me. For the timeline I think it will take longer to get the iterators right and shorter to port the last functions at the end - once you get the hang of it you'll be able to do the last ones quickly I expect. Cheers, Ralf > Thanks once again. > Regards, > > Aman Singh > > > > On Tuesday, March 10, 2015 at 6:54:06 AM UTC+5:30, AMAN singh wrote: > >> Hi developers >> >> My name is Aman Singh and I am currently a second year undergraduate >> student of Computer Science department at Indian Institute of Technology, >> Jodhpur. I want to participate in GSoC'15 and the project I am aiming for >> is *porting scipy.ndimage to cython*. I have been following scipy for >> the last few months and have also made some contributions. I came across >> this project on their GSoC'15 ideas' page and found it interesting. >> I have done some research in the last week on my part. I am going through >> Cython documentation, scipy lecture on github and Richard's work of GSoC' >> 14 in which he ported cluster package to cython. While going through the >> module scipy.ndimage I also found that Thouis Jones had already ported a >> function ndimage.label() to cython. I can use that as a reference for >> the rest of the project. >> >> Please tell me whether I am on right track or not. 
If you can suggest >> some resources which would help me understand the project, I >> would be highly obliged. Also, I would like to know how much of >> ndimage is to be ported under this project, since it is a big module. >> Kindly provide me some suggestions and guide me through this. >> >> Regards, >> >> Aman Singh >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Sat Mar 28 02:01:01 2015 From: tsyu80 at gmail.com (Tony Yu) Date: Sat, 28 Mar 2015 01:01:01 -0500 Subject: Website designer volunteer In-Reply-To: References: Message-ID: Hey Stéfan, Do you have some ideas for what improvements you're looking for? -Tony On Wed, Mar 25, 2015 at 9:09 PM, Stéfan van der Walt wrote: > Hi folks, > > I know it's a bit of a long shot, but I'd like to find a volunteer to > work on the layout and readability of our website. > > If you know of anyone interested in doing design work the same way we > do software development, please let me know. They will have to be > able to work with our current workflow, so good technical chops are a > must. > > Thanks, > Stéfan > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kwiechen1 at gmail.com Sat Mar 28 06:50:53 2015 From: kwiechen1 at gmail.com (Kai Wiechen) Date: Sat, 28 Mar 2015 03:50:53 -0700 (PDT) Subject: Water-shedding non-circular particles In-Reply-To: <7c637b51-4f34-42de-8e44-762faee45173@googlegroups.com> References: <7c637b51-4f34-42de-8e44-762faee45173@googlegroups.com> Message-ID: <5c7c42db-8362-4a14-8c83-744a308edd30@googlegroups.com> In order to separate nuclei from H&E or H&DAB stained images I have tried color deconvolution (a slightly modified variant from skimage.color) to get the hematoxylin part. It should be possible to extract the eosin (red colored) part to remove nuclei and particles of thrombocytes prior to watershed segmentation. However, it seems to be necessary for color deconvolution to have a neutral and not saturated background. Can you provide test images not saturated (see histogram attached) and blank field images? Kai On Thursday, March 26, 2015 at 21:42:14 UTC+1, Claiborne Morton wrote: > Hey thanks for all the help, here is the original image. Also I am > removing the smallest particles later on in the process using a function > that does the removal based on the average size of healthy (highly > circular) cells, which is why I had not removed them in the images I have > already posted. > > > > > > On Thursday, March 19, 2015 at 6:07:58 PM UTC-4, Claiborne Morton wrote: >> >> Hey guys, I'm still having trouble finding ways to separate touching >> particles if they are not both circular. Further, when dealing with >> elliptical shapes, a single particle tends to incorrectly get cut in half. >> Any ideas how I could change parameters in the water-shedding function to >> correct for this? Attached are a few problem cases so you can see examples. >> >> Thanks, >> Clay >> [image: Inline image 1] >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dzungng89 at gmail.com Sat Mar 28 15:30:28 2015 From: dzungng89 at gmail.com (Dzung Nguyen) Date: Sat, 28 Mar 2015 12:30:28 -0700 (PDT) Subject: Lab Color space vs RGB Message-ID: Is it true that the Lab color space is larger than the RGB color space? For example, there exists a color in Lab that can't be reproduced in RGB space (outside the monitor's gamut)? How does the function rgb2lab deal with this case? From the code, I saw that there's always a one-to-one mapping between Lab and RGB. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dzungng89 at gmail.com Sun Mar 29 11:24:26 2015 From: dzungng89 at gmail.com (Dzung Nguyen) Date: Sun, 29 Mar 2015 08:24:26 -0700 (PDT) Subject: Lab Color space vs RGB In-Reply-To: References: Message-ID: A follow-up question is: Which rendering intent is used when converting from XYZ to RGB? (AbsoluteColorimetric, Perceptual, RelativeColorimetric or Saturation). The formula is presented in this link, but I couldn't interpret it. http://www.easyrgb.com/index.php?X=MATH&H=01#text1 On Saturday, March 28, 2015 at 10:37:13 PM UTC-5, Dzung Nguyen wrote: > > Is it true that the Lab color space is larger than the RGB color space? For > example, there exists a color in Lab that can't be reproduced in RGB space > (outside the monitor's gamut)? How does the function rgb2lab deal with this > case? From the code, I saw that there's always a one-to-one mapping > between Lab and RGB. > > > -------------- next part -------------- An HTML attachment was scrubbed... 
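On the gamut question, a small illustration: Lab does cover colors that sRGB cannot represent, and converting such a color toward RGB produces components outside [0, 1]. The sketch below hand-codes the standard CIE Lab → XYZ → linear sRGB math (D65 white point) for single colors; the function names are invented for this example, and the remark about skimage clipping is my reading of its `xyz2rgb` code, so treat it as an assumption rather than documented behavior:

```python
def lab_to_xyz(L, a, b):
    """CIE L*a*b* -> XYZ for one color, D65 reference white, 2-degree observer."""
    xn, yn, zn = 0.95047, 1.0, 1.08883   # D65 white point
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    delta = 6.0 / 29.0
    def f_inv(t):
        # inverse of the Lab companding function
        return t ** 3 if t > delta else 3.0 * delta ** 2 * (t - 4.0 / 29.0)
    return xn * f_inv(fx), yn * f_inv(fy), zn * f_inv(fz)

def xyz_to_linear_srgb(x, y, z):
    """XYZ -> linear (pre-gamma) sRGB; out-of-gamut colors fall outside [0, 1]."""
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return r, g, b

# A highly saturated green, L*=50, a*=-100, b*=0: perfectly valid in Lab,
# but outside the sRGB gamut, so its linear red component comes out negative.
r, g, b = xyz_to_linear_srgb(*lab_to_xyz(50.0, -100.0, 0.0))
```

If I read the skimage code path correctly, lab2rgb goes through xyz2rgb, which clips such values into [0, 1], so the round trip is not truly one-to-one at the gamut boundary; that clipping is effectively a colorimetric mapping with hard clamping, not one of the ICC rendering intents.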
URL: From ciaran.robb at googlemail.com Sun Mar 29 12:01:48 2015 From: ciaran.robb at googlemail.com (ciaran.robb at googlemail.com) Date: Sun, 29 Mar 2015 09:01:48 -0700 (PDT) Subject: regionprops - displaying region properties In-Reply-To: References: <46469c78-2cfb-4c8c-913a-a639745c4ab9@googlegroups.com> <8B196A79-5ED9-48DB-ADA6-1C57EAFA3944@demuc.de> <5b40325e-aff4-4b49-9533-7722efba9905@googlegroups.com> Message-ID: <75ac8c79-3953-43d6-bed7-61b207251fd5@googlegroups.com> Hi, Sorry for the delay, I have been bogged down with writing papers! I have attached an IPython notebook with an example routine using one of the skimage.data images. Ciaran On Monday, March 2, 2015 at 11:38:21 PM UTC, Johannes Schönberger wrote: > > Maybe, there is a way to elegantly integrate this into the RegionProperty > class? > > Could you share your current implementation, so we can decide for a good > strategy? > > > On Mar 2, 2015, at 6:02 PM, ciara... at googlemail.com > wrote: > > > > Hi Johannes, > > > > Yeah of course. Would it be best placed in module color? > > > > Ciaran > > > > On Monday, March 2, 2015 at 5:26:12 PM UTC, Johannes Schönberger wrote: > > That sounds great. Would you be willing to work on integrating this into > skimage? > > > > Thanks. > > > > > On Feb 26, 2015, at 11:51 AM, ciara... at googlemail.com wrote: > > > > > > Hi > > > Adding to my own post but hey.... > > > > > > I have since written my own code which allows visualising of region > properties (eg area, eccentricity etc) via colormap, if anyone is > interested let me know! > > > > > > Ciaran > > > > > > On Sunday, February 1, 2015 at 11:45:44 PM UTC, > ciara... at googlemail.com wrote: > > > Hello everyone, > > > > > > I have recently been attempting to modify some existing skimage code > to display regionprops for a labeled image (e.g. 
area or eccentricity) > > > > > > I initially tried to translate a vectorized bit of old matlab code I > had, but gave up on that and decided to alter the existing label2rgb > skimage function > > > > > > I am attempting to change each label value to its area property value > similar to the label2rgb "avg" function. > > > > > > so I have: > > > labels = a labeled image > > > > > > out = np.zeros_like(labels) #a blank array > > > labels2 = np.unique(labels) #a vector of label vals > > > out = np.zeros_like(labels) > > > Props = regionprops(labels, ['Area']) > > > bg_label=0 > > > bg = (labels2 == bg_label) > > > if bg.any(): > > > labels2 = labels2[labels2 != bg_label] > > > out[bg] = 0 > > > for label in labels2: > > > mask = (labels == label).nonzero() > > > color = Props[label].area > > > out[mask] = color > > > but the "out" props image does not correspond to the correct area > values? > > > Can anyone help me with this? > > > It also throws the following error: > > > "list index out of range" > > > It would certainly be useful to have a way to view the spatial > distribution of label properties in this way - perhaps in a future skimage > version? > > > > > > > > > -- > > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image... at googlegroups.com. > > > For more options, visit https://groups.google.com/d/optout. > > > > > > -- > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image... at googlegroups.com . > > For more options, visit https://groups.google.com/d/optout. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
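The "list index out of range" in the snippet quoted above is consistent with indexing the `regionprops` result (a 0-based list) with label values (which start at 1); `Props[label - 1]` would fix the loop. For the area property specifically, the loop can be replaced entirely by a `np.bincount` lookup table. The helper name `property_map` below is made up for this sketch and is not part of skimage:

```python
import numpy as np

def property_map(labels):
    """Paint each labeled region with its own area (background label 0 -> 0)."""
    # bincount over the flattened label image counts pixels per label value,
    # which is exactly the 'area' property, indexed directly by label.
    areas = np.bincount(np.ravel(labels))
    areas[0] = 0                  # keep the background at zero
    return areas[labels]          # fancy indexing broadcasts areas back onto pixels
```

For properties other than area, the same lookup-table trick should work with regionprops itself, e.g. (assuming the attribute-style API) `lut = np.zeros(labels.max() + 1); for p in regionprops(labels): lut[p.label] = p.eccentricity; out = lut[labels]`.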
Name: Display Region Props Demo.ipynb Type: application/octet-stream Size: 284603 bytes Desc: not available URL: From jni.soma at gmail.com Sun Mar 29 22:55:39 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Sun, 29 Mar 2015 19:55:39 -0700 (PDT) Subject: regionprops - displaying region properties In-Reply-To: <75ac8c79-3953-43d6-bed7-61b207251fd5@googlegroups.com> References: <75ac8c79-3953-43d6-bed7-61b207251fd5@googlegroups.com> Message-ID: <1427684138597.93d1d37b@Nodemailer> Hi Ciaran, Probably the preferred way of sharing IPython notebooks is to post them to gist.github.com and share the link from nbviewer. There's actually a gist extension to IPython notebook that will do this automagically for you: http://nbviewer.ipython.org/gist/minrk/4982809 (Read the whole article before doing anything!) Juan. On Mon, Mar 30, 2015 at 3:01 AM, null wrote: > Hi, > Sorry for the delay, I have been bogged down with writing papers! > I have attached an IPython notebook with an example routine using one of > the skimage.data images. > Ciaran > On Monday, March 2, 2015 at 11:38:21 PM UTC, Johannes Schönberger wrote: >> >> Maybe, there is a way to elegantly integrate this into the RegionProperty >> class? >> >> Could you share your current implementation, so we can decide for a good >> strategy? >> >> > On Mar 2, 2015, at 6:02 PM, ciara... at googlemail.com >> wrote: >> > >> > Hi Johannes, >> > >> > Yeah of course. Would it be best placed in module color? >> > >> > Ciaran >> > >> > On Monday, March 2, 2015 at 5:26:12 PM UTC, Johannes Schönberger wrote: >> > That sounds great. Would you be willing to work on integrating this into >> skimage? >> > >> > Thanks. >> > >> > > On Feb 26, 2015, at 11:51 AM, ciara... at googlemail.com wrote: >> > > >> > > Hi >> > > Adding to my own post but hey.... >> > > >> > > I have since written my own code which allows visualising of region >> properties (eg area, eccentricity etc) via colormap, if anyone is >> interested let me know! 
>> > > >> > > Ciaran >> > > >> > > On Sunday, February 1, 2015 at 11:45:44 PM UTC, >> ciara... at googlemail.com wrote: >> > > Hello everyone, >> > > >> > > I have recently been attempting to modify some existing skimage code >> to display regionprops for a labeled image (e.g. area or eccentricity) >> > > >> > > I initially tried to translate a vectorized bit of old matlab code I >> had, but gave up on that and decided to alter the existing label2rgb >> skimage function >> > > >> > > I am attempting to change each label value to it's area property value >> similar to the label2rgb "avg" function. >> > > >> > > so I have: >> > > labels = a labeled image >> > > >> > > out = np.zeros_like(labels) #a blank array >> > > labels2 = np.unique(labels) #a vector of label vals >> > > out = np.zeros_like(labels) >> > > Props = regionprops(labels, ['Area']) >> > > bg_label=0 >> > > bg = (labels2 == bg_label) >> > > if bg.any(): >> > > labels2 = labels2[labels2 != bg_label] >> > > out[bg] = 0 >> > > for label in labels2: >> > > mask = (labels == label).nonzero() >> > > color = Props[label].area >> > > out[mask] = color >> > > but the "out" props image does not correspond to the correct area >> values? >> > > Can anyone help me with this? >> > > It also throws the following error: >> > > "list index out of range" >> > > It would certainly be useful to have a way to view the spatial >> distribution of label properties in this way - perhaps in a future skimage >> version? >> > > >> > > >> > > -- >> > > You received this message because you are subscribed to the Google >> Groups "scikit-image" group. >> > > To unsubscribe from this group and stop receiving emails from it, send >> an email to scikit-image... at googlegroups.com. >> > > For more options, visit https://groups.google.com/d/optout. >> > >> > >> > -- >> > You received this message because you are subscribed to the Google >> Groups "scikit-image" group. 
>> > To unsubscribe from this group and stop receiving emails from it, send >> an email to scikit-image... at googlegroups.com . >> > For more options, visit https://groups.google.com/d/optout. >> >> > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Mon Mar 30 00:03:20 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Sun, 29 Mar 2015 21:03:20 -0700 (PDT) Subject: Issue with morphological filters In-Reply-To: References: Message-ID: <1427688200136.b53bcefa@Nodemailer> Hmm, I must say I don't know what's going on with either the reconstruction or the binary_fill_holes. (Originally I thought the image was inverted but you tried both polarities...) My advice would be to look at a few iterations of morphological reconstruction manually and see what's going on... Also, I would use the "grey" colormap, which is the most intuitive to look at (you used a reversed colormap for a couple of the images). Finally, it may be that you need to fill each "blob" independently. If so, have a look at skimage.measure.regionprops.filled_image. http://scikit-image.org/docs/dev/api/skimage.measure.html#regionprops Juan. On Sat, Mar 28, 2015 at 2:32 AM, Matteo wrote: > Hello Juan > Here it is: > http://nbviewer.ipython.org/urls/dl.dropbox.com/s/ancfxe2gx1fbyyp/morphology_test.ipynb?dl=0 > I get the same odd results with both ndimage's binary_fill_holes and > reconstruction. Is it because of the structuring elements/masks? > Thanks for your help. 
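For reference, hole filling itself needs no structuring element: a hole is simply a background region not connected to the image border. The sketch below implements that definition directly with a border-seeded flood fill in pure NumPy/Python; it is the same idea `scipy.ndimage.binary_fill_holes` implements far more efficiently, and `fill_holes` is a made-up name. It is not a diagnosis of the all-white reconstruction result, though one thing worth checking (an assumption on my part) is whether, after inverting, the "holes" touch the image border, in which case nothing counts as a hole:

```python
import numpy as np
from collections import deque

def fill_holes(binary):
    """Set to True any False (background) region not connected to the border."""
    binary = np.asarray(binary, dtype=bool)
    h, w = binary.shape
    reachable = np.zeros_like(binary)
    q = deque()
    # Seed the flood fill with every background pixel on the border.
    for r in range(h):
        for c in (0, w - 1):
            if not binary[r, c] and not reachable[r, c]:
                reachable[r, c] = True
                q.append((r, c))
    for c in range(w):
        for r in (0, h - 1):
            if not binary[r, c] and not reachable[r, c]:
                reachable[r, c] = True
                q.append((r, c))
    # 4-connected BFS through the background.
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not binary[rr, cc] and not reachable[rr, cc]:
                reachable[rr, cc] = True
                q.append((rr, cc))
    # Holes are the background pixels the border flood never reached.
    return binary | ~reachable
```

On a 5x5 image containing a 3x3 square with a one-pixel hole in the middle, the hole is filled while the border-connected background stays untouched.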
> Matteo > On Thursday, March 26, 2015 at 11:14:05 PM UTC-6, Juan Nunez-Iglesias wrote: >> Hi Matteo, >> >> Can you try putting this notebook up as a gist and pasting a link to the >> notebook? It's hard for me to follow all of the steps (and the polarity of >> the image) without the images inline. Is it just the inverse of what you >> want? And anyway why aren't you just using ndimage's binary_fill_holes? >> >> >> https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.morphology.binary_fill_holes.html >> >> Juan. >> >> >> >> >> On Fri, Mar 27, 2015 at 9:09 AM, Matteo > > wrote: >> >> Hello Juan >> >> Thanks so much for your suggestions. >> Once I converted the image as you suggested: >> # import back image >> cfthdr=io.imread('filled_contour_THDR.png') >> cfthdr = color.rgb2gray(cfthdr) > 0.5 >> >> I get good results with opening: >> # clean it up with opening >> selem17 = disk(17) >> opened_thdr = opening(cfthdr, selem17)/255 >> # plot it >> fig = plt.figure(figsize=(5, 5)) >> ax = fig.add_subplot(1, 1, 1) >> ax.set_xticks([]) >> ax.set_yticks([]) >> plt.imshow(opened_thdr,cmap='bone') >> plt.show() >> # not bad >> >> >> With remove_small_objects the advantage is that it does not join blobs in >> the original: >> cfthdr_inv = ~cfthdr >> test=remove_small_objects(cfthdr,10000) >> # plot it >> fig = plt.figure(figsize=(5, 5)) >> ax = fig.add_subplot(1, 1, 1) >> ax.set_xticks([]) >> ax.set_yticks([]) >> plt.imshow(test,cmap='bone') >> plt.show() >> >> >> but with reconstruction done like this: >> # filling holes with morphological reconstruction >> seed = np.copy(cfthdr_inv) >> seed[1:-1, 1:-1] = cfthdr_inv.max() >> mask = cfthdr_inv >> filled = reconstruction(seed, mask, method='erosion') >> # plot it >> fig = plt.figure(figsize=(5, 5)) >> ax = fig.add_subplot(1, 1, 1) >> ax.set_xticks([]) >> ax.set_yticks([]) >> plt.imshow(filled,cmap='bone',vmin=cfthdr_inv.min(), vmax=cfthdr_inv.max()) >> plt.show() >> >> I get a completely white image. 
Do you have any suggestions as to why? >> >> Thanks again. Cheers, >> Matteo >> >> >> On Wednesday, March 25, 2015 at 6:29:43 PM UTC-6, Juan Nunez-Iglesias >> wrote: >> >> Hi Matteo, >> >> My guess is that even though you are looking at a "black and white" image, >> the png is actually an RGB png. Just check the output of >> "print(cfthdr.shape)". Should be straightforward to make it a binary image: >> >> from skimage import color >> cfthdr = color.rgb2gray(cfthdr) > 0.5 >> >> Then you should have a binary image. (And inverting should be as simple as >> "cfthdr_inv = ~cfthdr") We have morphology.binary_fill_holes to do what you >> want. >> >> btw, there's also morphology.remove_small_objects, which does exactly what >> you did but as a function call. Finally, it looks like you are not using >> the latest version of scikit-image (0.11), so you might want to upgrade. >> >> Hope that helps! >> >> Juan. >> >> >> >> >> On Thu, Mar 26, 2015 at 8:48 AM, Matteo wrote: >> >> *Issues with morphological filters when trying to remove white holes in >> black objects in a binary image. Using opening or filling holes on the >> inverted (or complement) of the original binary.* >> >> Hi there >> >> I have a series of derivatives calculated on geophysical data. >> >> Many of these derivatives have nice continuous maxima, so I treat them as >> images on which I do some cleanup with morphological filters. 
>> >> Here's one example of operations that I do routinely, and successfully: >> >> # threshold theta map using Otsu method >> >> thresh_th = threshold_otsu(theta) >> >> binary_th = theta > thresh_th >> >> # clean up small objects >> >> label_objects_th, nb_labels_th = sp.ndimage.label(binary_th) >> >> sizes_th = np.bincount(label_objects_th.ravel()) >> >> mask_sizes_th = sizes_th > 175 >> >> mask_sizes_th[0] = 0 >> >> binary_cleaned_th = mask_sizes_th[label_objects_th] >> >> # further enhance with morphological closing (dilation followed by an >> erosion) to remove small dark spots and connect small bright cracks >> >> # followed by an extra erosion >> >> selem = disk(1) >> >> closed_th = closing(binary_cleaned_th, selem)/255 >> >> eroded_th = erosion(closed_th, selem)/255 >> >> # Finally, extract lineaments using skeletonization >> >> skeleton_th=skeletonize(binary_th) >> >> skeleton_cleaned_th=skeletonize(binary_cleaned_th) >> >> # plot to compare >> >> fig = plt.figure(figsize=(20, 7)) >> >> ax = fig.add_subplot(1, 2, 1) >> >> imshow(skeleton_th, cmap='bone_r', interpolation='none') >> >> ax2 = fig.add_subplot(1, 2, 2) >> >> imshow(skeleton_cleaned_th, cmap='bone_r', interpolation='none') >> >> ax.set_xticks([]) >> >> ax.set_yticks([]) >> >> ax2.set_xticks([]) >> ax2.set_yticks([]) >> >> Unfortunately I cannot share the data as it is proprietary, but I will >> for the next example, which is the one that does not work. >> >> There's one derivative that shows lots of detail but not continuous >> maxima. As a workaround I created filled contours in Matplotlib >> >> exported as an image. The image is attached. 
>> >> Now I want to import back the image and plot it to test: >> >> # import back image >> >> cfthdr=io.imread('filled_contour.png') >> >> # threshold using using Otsu method >> >> thresh_thdr = threshold_otsu(cfthdr) >> >> binary_thdr = cfthdr > thresh_thdr >> >> # plot it >> >> fig = plt.figure(figsize=(5, 5)) >> >> ax = fig.add_subplot(1, 1, 1) >> >> ax.set_xticks([]) >> >> ax.set_yticks([]) >> >> plt.imshow(binary_thdr, cmap='bone') >> >> plt.show() >> >> The above works without issues. >> >> >> >> Next I want to fill the white holes inside the black blobs. I thought of 2 >> strategies. >> >> The first would be to use opening; the second to invert the image, and >> then fill the holes as in here: >> >> http://scikit-image.org/docs/dev/auto_examples/plot_holes_and_peaks.html >> >> By the way, I found a similar example for opencv here >> >> >> http://stackoverflow.com/questions/10316057/filling-holes-inside-a-binary-object >> >> Let's start with opening. When I try: >> >> selem = disk(1) >> >> opened_thdr = opening(binary_thdr, selem) >> >> or: >> >> selem = disk(1) >> >> opened_thdr = opening(cfthdr, selem) >> >> I get an error message like this: >> >> --------------------------------------------------------------------------- >> >> >> ValueError Traceback (most recent call >> last) >> >> in () >> >> 1 #binary_thdr=img_as_float(binary_thdr,force_copy=False) >> >> ----> 2 opened_thdr = opening(binary_thdr, selem)/255 >> >> 3 >> >> ... > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dzungng89 at gmail.com Mon Mar 30 10:29:12 2015 From: dzungng89 at gmail.com (Dzung Nguyen) Date: Mon, 30 Mar 2015 07:29:12 -0700 (PDT) Subject: Steerable pyramid In-Reply-To: <842195c1-5efe-412a-9bcf-47806e017012@googlegroups.com> References: <1c82516d-08a5-46d4-b8fb-4dc4dcb8328d@googlegroups.com> <842195c1-5efe-412a-9bcf-47806e017012@googlegroups.com> Message-ID: <07f90d0a-8d79-48c4-abee-ac23522ac0c1@googlegroups.com> Hi, Will this PR be merged? Do I need to clarify anything? On Friday, March 13, 2015 at 8:24:25 PM UTC-5, Dzung Nguyen wrote: > > I created a PR here: > https://github.com/scikit-image/scikit-image/pull/1425 > > On Thursday, March 12, 2015 at 7:48:35 PM UTC-5, Josh Warner wrote: >> >> We have Gabor filters implemented in `skimage.filters`, but IMO I'd be >> open to adding alternative perceptual filters. >> >> Looks like nice clean work! >> >> >> On Thursday, March 12, 2015 at 7:29:18 PM UTC-5, Dzung Nguyen wrote: >>> >>> Hi all, >>> >>> I implemented a steerable pyramid (similar to the Gabor transform). Would >>> the skimage community be interested in this? I am thinking of adding an API for >>> image transforms, covering all the popular transforms out there (orthogonal, >>> Gabor, steerable, etc.) >>> >>> https://github.com/andreydung/Steerable-filter >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From google at terre-adelie.org Mon Mar 30 01:30:29 2015 From: google at terre-adelie.org (Jérôme Kieffer) Date: Mon, 30 Mar 2015 07:30:29 +0200 Subject: GSoC: Rewriting scipy.ndimage in Cython In-Reply-To: References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com> <3e5a8631-5918-4602-a341-1c835bbf5299@googlegroups.com> Message-ID: <20150330073029.e4924c897d1324c1b569084f@terre-adelie.org> On Fri, 27 Mar 2015 07:04:10 -0700 Jaime Fernández del Río wrote: > Similarly, you suggest using Cython's prange to parallelize computations. 
I > haven't seen OpenMP used anywhere in NumPy or SciPy, and have the feeling > that parallel implementations are left out on purpose. Am I right, or would > parallelizing where possible be OK? OpenMP is tricky under Mac OS X: 10.7-10.9 had no support at all (they use clang <3.6). Since 10.10, the support is incomplete: at least, much of the code I tested fails with OpenMP (it runs under Linux and Windows); I noticed wrong results, not only failures to compile! Of course one can install gcc or icc, but this is not in Python's philosophy. -- Jérôme Kieffer From stefanv at berkeley.edu Mon Mar 30 22:53:28 2015 From: stefanv at berkeley.edu (Stéfan van der Walt) Date: Mon, 30 Mar 2015 19:53:28 -0700 Subject: Steerable pyramid In-Reply-To: <07f90d0a-8d79-48c4-abee-ac23522ac0c1@googlegroups.com> References: <1c82516d-08a5-46d4-b8fb-4dc4dcb8328d@googlegroups.com> <842195c1-5efe-412a-9bcf-47806e017012@googlegroups.com> <07f90d0a-8d79-48c4-abee-ac23522ac0c1@googlegroups.com> Message-ID: Hi Dzung On Mon, Mar 30, 2015 at 7:29 AM, Dzung Nguyen wrote: > Will this PR be merged? Do I need to clarify anything? I've commented on your PR--thanks for your contribution! Stéfan