From tsyu80 at gmail.com Sun Dec 1 09:50:23 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Sun, 1 Dec 2013 08:50:23 -0600 Subject: scikit-image paper In-Reply-To: References: Message-ID: On Thu, Nov 21, 2013 at 12:30 AM, Stéfan van der Walt wrote: > On Thu, Nov 21, 2013 at 6:39 AM, Tony Yu wrote: > > Unfortunately, it's a no-go on mentioning the specific companies we've > > worked with, but it's OK to say something like "Enthought, Inc uses > > scikit-image extensively in their consulting projects related to > geophysics > > and microscopy". > > That's great, thank you! Could I use [1] as a reference? > Oops, I forgot to reply to this. Yes, definitely. -Tony > Stéfan > > [1] Personal communications with Tony S Yu, [or insert > other name here] > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaaagrawal at gmail.com Mon Dec 2 11:31:16 2013 From: aaaagrawal at gmail.com (Ankit Agrawal) Date: Mon, 2 Dec 2013 08:31:16 -0800 (PST) Subject: Scipy India 2013, 13-15th Dec, IIT Bombay Message-ID: <14cd8559-d046-4506-af54-1210e3ba8552@googlegroups.com> Hi everyone, I wanted to know if someone is interested in attending the Scipy India Conference 2013 which is going to be held during 13-15th December at IIT Bombay (my university). The last date for registration is 5th Dec. I am myself not sure whether I will be at my hometown or at the university during this period, but if someone is interested, I would very much like to attend. Hopefully, Chintak would be done with his endterms by then and can possibly meet up too. If anyone is interested and is registering for it, please let me know. Cheers, Ankit. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From deshpande.jaidev at gmail.com Mon Dec 2 11:37:34 2013 From: deshpande.jaidev at gmail.com (Jaidev Deshpande) Date: Mon, 2 Dec 2013 22:07:34 +0530 Subject: Scipy India 2013, 13-15th Dec, IIT Bombay In-Reply-To: <14cd8559-d046-4506-af54-1210e3ba8552@googlegroups.com> References: <14cd8559-d046-4506-af54-1210e3ba8552@googlegroups.com> Message-ID: On Mon, Dec 2, 2013 at 10:01 PM, Ankit Agrawal wrote: > Hi everyone, > > I wanted to know if someone is interested in attending the Scipy > India Conference 2013 which is going to be held during 13-15th December at > IIT Bombay (my university). The last date for registration is 5th Dec. I am > myself not sure whether I will be at my hometown or at the university during > this period, but if someone is interested, I would very much like to attend. > Hopefully, Chintak would be done with his endterms by then and can possibly > meet up too. > > If anyone is interested and is registering for it, please let me > know. > > > Cheers, > Ankit. > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. Hi, I'll be going there, and there will be a few people from the Enthought office here in Mumbai. -- JD From aaaagrawal at gmail.com Mon Dec 2 11:55:28 2013 From: aaaagrawal at gmail.com (Ankit Agrawal) Date: Mon, 2 Dec 2013 22:25:28 +0530 Subject: Scipy India 2013, 13-15th Dec, IIT Bombay In-Reply-To: References: <14cd8559-d046-4506-af54-1210e3ba8552@googlegroups.com> Message-ID: Hi Jaidev, > I'll be going there, and there will be a few people from the Enthought > office here in Mumbai. That's great. I also see that you have submitted a proposal on Compressed Sensing, which I would be interested in attending. 
I will definitely catch up with you guys if I attend the conference. Thanks. Cheers, Ankit Agrawal, Communications and Signal Processing, IIT Bombay. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chintaksheth at gmail.com Mon Dec 2 12:17:18 2013 From: chintaksheth at gmail.com (Chintak Sheth) Date: Mon, 2 Dec 2013 22:47:18 +0530 Subject: Scipy India 2013, 13-15th Dec, IIT Bombay In-Reply-To: <14cd8559-d046-4506-af54-1210e3ba8552@googlegroups.com> References: <14cd8559-d046-4506-af54-1210e3ba8552@googlegroups.com> Message-ID: Hi Ankit Aah, my university exams. The last one is on the 13th; I'll be reaching on the 14th evening. It should be an awesome experience! Either way, if you are still around then we'll surely meet up! :) Chintak On Dec 2, 2013 10:01 PM, "Ankit Agrawal" wrote: > Hi everyone, > > I wanted to know if someone is interested in attending the Scipy > India Conference 2013 which is going to be held > during 13-15th December at IIT Bombay (my university). The last date for > registration is 5th Dec. I am myself not sure whether I will be at my > hometown or at the university during this period, but if someone is > interested, I would very much like to attend. Hopefully, Chintak would be > done with his endterms by then and can possibly meet up too. > > If anyone is interested and is registering for it, please let me > know. > > > Cheers, > Ankit. > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tsyu80 at gmail.com Mon Dec 2 23:52:43 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Mon, 2 Dec 2013 22:52:43 -0600 Subject: Segfault with phase unwrap code Message-ID: I'm getting a segfault when running the phase unwrap example and tests. I assume this issue is system dependent since Travis isn't raising a fuss. Can someone with a similar setup reproduce the issue? $ gcc -v Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/c++/4.2.1 Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn) Target: x86_64-apple-darwin13.0.0 Thread model: posix Thanks, -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Tue Dec 3 02:22:16 2013 From: jsch at demuc.de (Johannes Schönberger) Date: Tue, 3 Dec 2013 08:22:16 +0100 Subject: Segfault with phase unwrap code In-Reply-To: References: Message-ID: <742D1AA5-2606-4C04-A8B8-EBBCD65B35ED@demuc.de> Tony, I'm experiencing the same problems on OS X. See https://github.com/scikit-image/scikit-image/issues/835 Johannes On 03.12.2013 at 05:52, Tony Yu wrote: > I'm getting a segfault when running the phase unwrap example and tests. I assume this issue is system dependent since Travis isn't raising a fuss. > > Can someone with a similar setup reproduce the issue? > > $ gcc -v > Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/c++/4.2.1 > Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn) > Target: x86_64-apple-darwin13.0.0 > Thread model: posix > > Thanks, > -Tony > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. 
> To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. From tsyu80 at gmail.com Tue Dec 3 20:32:24 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Tue, 3 Dec 2013 19:32:24 -0600 Subject: Segfault with phase unwrap code In-Reply-To: <742D1AA5-2606-4C04-A8B8-EBBCD65B35ED@demuc.de> References: <742D1AA5-2606-4C04-A8B8-EBBCD65B35ED@demuc.de> Message-ID: On Tue, Dec 3, 2013 at 1:22 AM, Johannes Schönberger wrote: > Tony > > I'm experiencing the same problems on OS X. See > https://github.com/scikit-image/scikit-image/issues/835 Thanks, I knew this issue sounded familiar. My gmail-search-fu just isn't that great, I suppose. -T > > Johannes > > On 03.12.2013 at 05:52, Tony Yu wrote: > > > I'm getting a segfault when running the phase unwrap example and tests. > I assume this issue is system dependent since Travis isn't raising a fuss. > > > > Can someone with a similar setup reproduce the issue? > > > > $ gcc -v > > Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr > --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/c++/4.2.1 > > Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn) > > Target: x86_64-apple-darwin13.0.0 > > Thread model: posix > > > > Thanks, > > -Tony > > > > -- > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image+unsubscribe at googlegroups.com. > > For more options, visit https://groups.google.com/groups/opt_out. > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. 
> To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean.kossaifi at gmail.com Wed Dec 4 07:16:45 2013 From: jean.kossaifi at gmail.com (Jean K) Date: Wed, 4 Dec 2013 04:16:45 -0800 (PST) Subject: Problem with Transform.SimilarityTransform Message-ID: Hi everyone, I'm currently trying to use skimage.transform.SimilarityTransform to remove scaling, translation and rotation from one set of points to align it to the other. However, if I centre the sets around the origin first, there frequently seems to be a problem (which doesn't occur if the points are all positive), the output being NaN. I tried to write a small reproducible example: In [77]: # I fixed the seed here for reproducibility but this happens often np.random.seed(4) # Two random sets of points a = np.random.randn(10, 2) b = np.random.randn(10, 2) # Center the points around the origin a -= np.mean(a, axis=0)[np.newaxis, :] b -= np.mean(b, axis=0)[np.newaxis, :] tform = SimilarityTransform() tform.estimate(a, b) tform(a) Out[77]: array([[ nan, nan], [ nan, nan], [ nan, nan], [ nan, nan], [ nan, nan], [ nan, nan], [ nan, nan], [ nan, nan], [ nan, nan], [ nan, nan]]) Note that if I don't centre the points there is no problem: In [89]: # I fixed the seed here for reproducibility but this happens often np.random.seed(4) # Two random sets of points a = np.random.randn(10, 2) b = np.random.randn(10, 2) # Center the points around the origin #a -= np.mean(a, axis=0)[np.newaxis, :] #b -= np.mean(b, axis=0)[np.newaxis, :] tform = SimilarityTransform() tform.estimate(a, b) tform(a) Out[89]: array([[ 3.76870886, -0.35152078], [ 1.83453334, -1.25080725], [ 5.42428044, -4.30088121], [ 2.51364241, -1.00154154], [ 6.14244682, -2.71511189], [ 5.37956586, -0.65190768], [ 4.5752074 , -0.19039746], [ 1.96968262, -1.99729896], [ 1.47865106, 0.59493455], [ 5.39473376, -0.31125435]]) Sometimes it also tells me that the *SVD doesn't converge*. Any idea what is going on? Thanks, Jean -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.classen at gmail.com Wed Dec 4 15:16:10 2013 From: scott.classen at gmail.com (Scott Classen) Date: Wed, 4 Dec 2013 12:16:10 -0800 (PST) Subject: img_as_float is confounding me Message-ID: I have four 2D numpy arrays: 100x100, 200x200, 300x300, 400x400. I want to use pyramid_reduce on the 200x200, 300x300, and 400x400 arrays to downsample them to 100x100 arrays, then add them all together. The arrays are in a list and I'm looping through the list (looping code left out for clarity) calculating binned images for each one: g.binned = img_as_float(g.cropped) # no reducing here, just converting to a scikit-compatible float, I hope. g.binned = pyramid_reduce(img_as_float(g.cropped), downscale=2) g.binned = pyramid_reduce(img_as_float(g.cropped), downscale=3) g.binned = pyramid_reduce(img_as_float(g.cropped), downscale=4) The g.cropped input arrays originated from numpy arrays with dtype=float64 I then print some useful information about my 4 arrays: array data type: float64 shape: (100, 100) nanmax: 777.0 array data type: float64 shape: (100, 100) nanmax: 1.0 array data type: float64 shape: (100, 100) nanmax: 1.0 array data type: float64 shape: (100, 100) nanmax: 1.0 I'm curious why the g.binned array from g.binned = img_as_float(g.cropped) has not been scaled to 0,1? The others which have been through the pyramid_reduce function have apparently been scaled, but maybe not by the img_as_float routine, but by pyramid_reduce? 
I then add the arrays tmp_image = list[0].binned + list[1].binned + list[2].binned + list[3].binned However, because the first array has not been properly scaled to 0,1 I get an error when I run the tmp_image through img_as_uint so I can write out my binary image file: Traceback (most recent call last): File "./test.py", line 318, in image_to_write = img_as_uint(tmp_image) File "/sw/lib/python2.7/site-packages/skimage/util/dtype.py", line 310, in img_as_uint return convert(image, np.uint16, force_copy) File "/sw/lib/python2.7/site-packages/skimage/util/dtype.py", line 191, in convert raise ValueError("Images of type float must be between -1 and 1.") ValueError: Images of type float must be between -1 and 1. Any advice would be most appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Wed Dec 4 15:38:56 2013 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Wed, 4 Dec 2013 12:38:56 -0800 (PST) Subject: img_as_float is confounding me In-Reply-To: References: Message-ID: Briefly, `img_as_float` assumes all inputs were properly scaled images of their reported dtype. If `img_as_float` is handed an image with the datatype `np.float64`, it is caught and the input image is returned without modification... no scaling is applied or attempted. Your input here appears to be a floating point array on the range [0.0, 777.0]. You will need to manually scale to the range [0, 1] - or, ideally, set the actual dtype (`np.int16` or `np.uint16`, in this case?) when you load your data. Then everything should work well. On Wednesday, December 4, 2013 2:16:10 PM UTC-6, Scott Classen wrote: > > I have four 2D numpy arrays. 100x100, 200x200, 300x300, 400x400 > > I want to use pyramid_reduce on the 200x200, 300x300, and 400x400 arrays > to downsample them to 100x100 arrays, then add them all together. 
> > The arrays are in a list and I'm looping through the list (looping code > left out for clarity) calculating binned images for each one: > > g.binned = img_as_float(g.cropped) # no reducing here, just converting to > a scikit-compatible float, I hope. > g.binned = pyramid_reduce(img_as_float(g.cropped), downscale=2) > g.binned = pyramid_reduce(img_as_float(g.cropped), downscale=3) > g.binned = pyramid_reduce(img_as_float(g.cropped), downscale=4) > > The g.cropped input arrays originated from numpy arrays with dtype=float64 > > I then print some useful information about my 4 arrays: > > array data type: float64 shape: (100, 100) nanmax: 777.0 > array data type: float64 shape: (100, 100) nanmax: 1.0 > array data type: float64 shape: (100, 100) nanmax: 1.0 > array data type: float64 shape: (100, 100) nanmax: 1.0 > > > I'm curious why the g.binned array from g.binned = img_as_float(g.cropped) has > not been scaled to 0,1? The others which have been through the > pyramid_reduce function have apparently been scaled, but maybe not by the > img_as_float routine, but by pyramid_reduce? > > I then add the arrays > > tmp_image = list[0].binned + list[1].binned + list[2].binned + > list[3].binned > > However, because the first array has not been properly scaled to 0,1 I get > an error when I run the tmp_image through img_as_uint so I can write out my > binary image file: > > Traceback (most recent call last): > File "./test.py", line 318, in > image_to_write = img_as_uint(tmp_image) > File "/sw/lib/python2.7/site-packages/skimage/util/dtype.py", line 310, > in img_as_uint > return convert(image, np.uint16, force_copy) > File "/sw/lib/python2.7/site-packages/skimage/util/dtype.py", line 191, > in convert > raise ValueError("Images of type float must be between -1 and 1.") > ValueError: Images of type float must be between -1 and 1. > > > Any advice would be most appreciated. > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
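The manual rescale Josh suggests above can be sketched in a few lines of pure NumPy. The array values here are hypothetical, mirroring the [0.0, 777.0] range reported in the thread; the point is that a float64 image outside [0, 1] must be rescaled by hand before conversion routines like `img_as_uint` will accept it:

```python
import numpy as np

# A float64 image whose values run up to 777.0 -- img_as_float would
# return it unchanged, so linearly rescale it to [0, 1] by hand.
img = np.linspace(0.0, 777.0, 10000).reshape(100, 100)

lo, hi = img.min(), img.max()
scaled = (img - lo) / (hi - lo)  # linear rescale to [0, 1]

assert scaled.min() == 0.0
assert scaled.max() == 1.0
```

Alternatively, casting the loaded data to its true integer dtype (e.g. `np.uint16`) up front lets the skimage conversion functions do this scaling for you.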
URL: From jsch at demuc.de Wed Dec 4 10:54:39 2013 From: jsch at demuc.de (Johannes Schönberger) Date: Wed, 4 Dec 2013 16:54:39 +0100 Subject: Problem with Transform.SimilarityTransform In-Reply-To: <1021830409.31273.1386164910127.open-xchange@app03> References: <1021830409.31273.1386164910127.open-xchange@app03> Message-ID: <1834013121.42070.1386172481914.open-xchange@app03.ox.hosteurope.de> The implementation of the null space solver and especially the normalization is indeed not suitable for this special case. I'll quickly fix it in the coming days... > On 04.12.2013 at 13:16, "Jean K" wrote: > > Hi everyone, > > I'm currently trying to use skimage.transform.SimilarityTransform to remove scaling, translation and rotation from one set of points to align it to the other. > However, if I centre the sets around the origin first, there frequently seems to be a problem (which doesn't occur if the points are all positive), the output being NaN. > > I tried to write a small reproducible example: > In [77]: > > # I fixed the seed here for reproducibility but this happens often > np.random.seed(4) > > # Two random sets of points > a = np.random.randn(10, 2) > b = np.random.randn(10, 2) > > # Center the points around the origin > a -= np.mean(a, axis=0)[np.newaxis, :] > b -= np.mean(b, axis=0)[np.newaxis, :] > > tform = SimilarityTransform() > tform.estimate(a, b) > tform(a) > Out[77]: > array([[ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan]]) > > Note that if I don't centre the points there is no problem: > In [89]: > > # I fixed the seed here for reproducibility but this happens often > np.random.seed(4) > > # Two random sets of points > a = np.random.randn(10, 2) > b = np.random.randn(10, 2) > > # Center the points around the origin > #a -= np.mean(a, axis=0)[np.newaxis, :] > #b -= np.mean(b, axis=0)[np.newaxis, :] > > tform = SimilarityTransform() > 
tform.estimate(a, b) > tform(a) > Out[89]: > array([[ 3.76870886, -0.35152078], > [ 1.83453334, -1.25080725], > [ 5.42428044, -4.30088121], > [ 2.51364241, -1.00154154], > [ 6.14244682, -2.71511189], > [ 5.37956586, -0.65190768], > [ 4.5752074 , -0.19039746], > [ 1.96968262, -1.99729896], > [ 1.47865106, 0.59493455], > [ 5.39473376, -0.31125435]]) > > Sometime it also tells me that the SVD doesn't converge. > Any idea what is going on? > > Thanks, > > Jean > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jturner at gemini.edu Wed Dec 4 18:50:14 2013 From: jturner at gemini.edu (James Turner) Date: Wed, 4 Dec 2013 20:50:14 -0300 Subject: Fwd: Ureka 1.0 is now available In-Reply-To: <529FBF66.4090105@gemini.edu> References: <529FBF66.4090105@gemini.edu> Message-ID: <529FBFB6.3050200@gemini.edu> Hi everyone, I thought you might be interested to know that Scikit-image 0.9.3 is now included in Ureka, our distribution of Python and IRAF software for astronomy, which has just come out of beta today. Ureka is rather astronomy-orientated (including some "legacy" stuff) so won't be everyone's installation method of choice, but the rest of you might at least like to know where your work is ending up :-). While I'm at it, you might be interested to keep an eye on AstroPy (http://www.astropy.org/), which has been developing rapidly and recently came out with a v0.3 release; it certainly has some room for overlap with Scikit-image. Cheers, James. 
-------- Original Message -------- Subject: [AstroPy] Ureka 1.0 is now available Date: Wed, 4 Dec 2013 18:13:33 +0000 From: Christine Slocum To: AstroPy at scipy.org STScI and Gemini are announcing the release of Ureka 1.0. Improvements (with respect to beta 6): - test for completeness of OS library dependencies - many installation improvements - expanded documentation - support for Ubuntu 13.10, Fedora 19, and OS X Mavericks (10.9) - menu shortcuts on Linux - optional support for MySQL and PostgreSQL - bundled Fortran run-time library is always used to avoid conflict with OS version(s) Updated packages: - Gemini IRAF 1.12 - virtualenv (1.10.1) - SciPy (0.13.0) - wheel (0.22.0) - scikit-image (0.9.3) - Pillow (2.2.1) - CFITSIO (3350) - scikit-learn (0.14.1) - Tornado (3.1.1) - setuptools (1.3) - NumPy (1.8.0) - stsci.distutils (0.3.2) - astropy (0.3) - d2to1 (0.2.11) - PyParsing (2.0.1) New packages: - PyKE (2.4.0) - Six (1.4.1) - PyTZ (2013.8) - kapteyn (2.2.1b17) - BeautifulSoup (3.2.1) - mechanize (0.2.5) - jsonschema (2.1.0) - numexpr (2.2.2) - Dateutil (2.2) - mock (1.0.1) Ureka is a binary packaging installer for common astronomical software (primarily for the UV/Optical/IR community). The goals of the Ureka installer are to: 1) minimize the number of actions needed to install all the different software components. We are seeking a "one button install" (it's not quite one button, but not far from it). 2) permit installation without requiring system privileges. 3) make installs as problem-free as possible for the great majority of users. 4) allow users to install their own software (particularly Python-based) within this framework, or update versions of software within the framework. 5) permit different Ureka installations to coexist and to easily switch between them. 6) enable installing different versions of the same software package under a particular Ureka installation. 7) support Macs and most popular Linux variants. 
Ureka does not use LD_LIBRARY_PATH (or its Mac equivalent), nor require PYTHONPATH, minimizing the possibility of affecting existing software after installation or use. Should conflicts nevertheless arise, Ureka can easily be disabled temporarily or enabled only in specific terminal windows. Keep in mind that no installation system is completely foolproof (that's very nearly impossible to achieve). In particular, when users update or add software to the Ureka framework, they increase the risk of breaking something, but we feel that is an option that users should have as long as they understand the possible risks. This release includes IRAF 2.16 and associated packages for IRAF, DS9, and a fairly full suite of Python scientific software packages (e.g., numpy, scipy, and matplotlib). The full listing of included software can be found at this link: http://ssb.stsci.edu/ureka/1.0/docs/components.html Ureka can be downloaded from: http://ssb.stsci.edu/ureka/ (choose "1.0"). Installation and usage instructions can be found on the same web page. Please send questions or feedback to help at stsci.edu. _______________________________________________ AstroPy mailing list AstroPy at scipy.org http://mail.scipy.org/mailman/listinfo/astropy From tsyu80 at gmail.com Wed Dec 4 23:11:34 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Wed, 4 Dec 2013 22:11:34 -0600 Subject: img_as_float is confounding me In-Reply-To: References: Message-ID: On Wed, Dec 4, 2013 at 2:38 PM, Josh Warner wrote: > Briefly, `img_as_float` assumes all inputs were properly scaled images of > their reported dtype. > > If `img_as_float` is handed an image with the datatype `np.float64`, it is > caught and the input image is returned without modification... no scaling > is applied or attempted. > > Your input here appears to be a floating point array on the range [0.0, > 777.0]. You will need to manually scale to the range [0, 1] - or, ideally, > set the actual dtype (`np.int16` or `np.uint16`, in this case?) 
when you > load your data. Then everything should work well. > You might find `rescale_intensity` helpful if you have data that should automatically be linearly rescaled to the dtype limits. For example: import numpy as np from skimage.exposure import rescale_intensity rescale_intensity(np.arange(1000, dtype=float)) Note, however, that it's best practice to pass in an input range to `rescale_intensity` (by default, it uses the min and max of the input data). If you're processing a series of images, you can't really tell signal from noise if you're always stretching to the min/max of each image. Best, -Tony > > > On Wednesday, December 4, 2013 2:16:10 PM UTC-6, Scott Classen wrote: >> >> I have four 2D numpy arrays. 100x100, 200x200, 300x300, 400x400 >> >> I want to use pyramid_reduce on the 200x200, 300x300, and 400x400 arrays >> to downsample them to 100x100 arrays, then add them all together. >> >> The arrays are in a list and I'm looping through the list (looping code >> left out for clarity) calculating binned images for each one: >> >> g.binned = img_as_float(g.cropped) # no reducing here, just converting to >> a scikit-compatible float, I hope. >> g.binned = pyramid_reduce(img_as_float(g.cropped), downscale=2) >> g.binned = pyramid_reduce(img_as_float(g.cropped), downscale=3) >> g.binned = pyramid_reduce(img_as_float(g.cropped), downscale=4) >> >> The g.cropped input arrays originated from numpy arrays with dtype=float64 >> >> I then print some useful information about my 4 arrays: >> >> array data type: float64 shape: (100, 100) nanmax: 777.0 >> array data type: float64 shape: (100, 100) nanmax: 1.0 >> array data type: float64 shape: (100, 100) nanmax: 1.0 >> array data type: float64 shape: (100, 100) nanmax: 1.0 >> >> >> I'm curious why the g.binned array from g.binned = >> img_as_float(g.cropped) has not been scaled to 0,1? 
The others which have >> been through the pyramid_reduce function have apparently been scaled, but >> maybe not by the img_as_float routine, but by pyramid_reduce? >> >> I then add the arrays >> >> tmp_image = list[0].binned + list[1].binned + list[2].binned + >> list[3].binned >> >> However, because the first array has not been properly scaled to 0,1 I >> get an error when I run the tmp_image through img_as_uint so I can write >> out my binary image file: >> >> Traceback (most recent call last): >> File "./test.py", line 318, in >> image_to_write = img_as_uint(tmp_image) >> File "/sw/lib/python2.7/site-packages/skimage/util/dtype.py", line >> 310, in img_as_uint >> return convert(image, np.uint16, force_copy) >> File "/sw/lib/python2.7/site-packages/skimage/util/dtype.py", line >> 191, in convert >> raise ValueError("Images of type float must be between -1 and 1.") >> ValueError: Images of type float must be between -1 and 1. >> >> >> Any advice would be most appreciated. >> >> >> >> >> >> >> > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean.kossaifi at gmail.com Fri Dec 6 06:36:35 2013 From: jean.kossaifi at gmail.com (Jean K) Date: Fri, 6 Dec 2013 03:36:35 -0800 (PST) Subject: Problem with Transform.SimilarityTransform In-Reply-To: <1834013121.42070.1386172481914.open-xchange@app03.ox.hosteurope.de> References: <1021830409.31273.1386164910127.open-xchange@app03> <1834013121.42070.1386172481914.open-xchange@app03.ox.hosteurope.de> Message-ID: <74ce41f0-47be-4d30-afd1-68eb86f7a543@googlegroups.com> Great, thanks! 
On Wednesday, 4 December 2013 15:54:39 UTC, Johannes Schönberger wrote: > > The implementation of the null space solver and especially the > normalization is indeed not suitable for this special case. I'll quickly > fix it in the coming days... > > On 04.12.2013 at 13:16, "Jean K" > > wrote: > > Hi everyone, > > I'm currently trying to use skimage.transform.SimilarityTransform to > remove scaling, translation and rotation from one set of points to align it > to the other. > However, if I centre the sets around the origin first, there frequently seems to be > a problem (which doesn't occur if the points are all positive), > the output being NaN. > > I tried to write a small reproducible example: > In [77]: > > # I fixed the seed here for reproducibility but this happens often > > np.random.seed(4) > > > > # Two random sets of points > > a = np.random.randn(10, 2) > > b = np.random.randn(10, 2) > > > > # Center the points around the origin > > a -= np.mean(a, axis=0)[np.newaxis, :] > > b -= np.mean(b, axis=0)[np.newaxis, :] > > > > tform = SimilarityTransform() > > tform.estimate(a, b) > > tform(a) > > Out[77]: > > array([[ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan], > [ nan, nan]]) > > > Note that if I don't centre the points there is no problem: > > In [89]: > > # I fixed the seed here for reproducibility but this happens often > > np.random.seed(4) > > > > # Two random sets of points > > a = np.random.randn(10, 2) > > b = np.random.randn(10, 2) > > > > # Center the points around the origin > > #a -= np.mean(a, axis=0)[np.newaxis, :] > > #b -= np.mean(b, axis=0)[np.newaxis, :] > > > > tform = SimilarityTransform() > > tform.estimate(a, b) > > tform(a) > > Out[89]: > > array([[ 3.76870886, -0.35152078], > [ 1.83453334, -1.25080725], > [ 5.42428044, -4.30088121], > [ 2.51364241, -1.00154154], > [ 6.14244682, -2.71511189], > [ 5.37956586, -0.65190768], > [ 4.5752074 , -0.19039746], > [ 
1.96968262, -1.99729896], > [ 1.47865106, 0.59493455], > [ 5.39473376, -0.31125435]]) > > > Sometimes it also tells me that the *SVD doesn't converge*. > > Any idea what is going on? > > > Thanks, > > > Jean > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image... at googlegroups.com . > For more options, visit https://groups.google.com/groups/opt_out. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hughesadam87 at gmail.com Fri Dec 6 19:07:57 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Fri, 6 Dec 2013 19:07:57 -0500 Subject: Pattern drawing library ideas Message-ID: Hi everyone, My colleague Evelyn and I have been using scikit-image's draw utilities to generate test data for modeling particle distributions on optical fibers (i.e. the SEM images I sent a few weeks ago). We are doing this to test some new models for nanoparticle cluster fitting that we've developed. In working on this, we believe we've come up with a general framework for drawing 2D particle ensembles in skimage. One could envision using it to do something like: Step 0: Generate a blank canvas of resolution 1024 x 768 Step 1: Add circles of average radius 5 pixels until 30% of the image is covered. - Arrange these randomly vs. equally spaced on a grid etc... Step 2: Add a "cluster" of particles to this image. Paint only the clusters red. Step 3: Run an optimization that maximizes the inter-particle spacing. Step 4: Make all the particles smaller than a certain area green. Step 5: Try watershedding etc... By generating the sample images in skimage, we wouldn't have to break out of the python workflow to make our test data. We believe that we've come up with some abstractions that would allow for such a toolset. 
This includes defining an abstract class for different particle shapes, and
then wrapping skimage.draw() functions as class methods. Before we get
invested in this, I had a few questions:

1. Is this something that, if executed well, would be of interest to
incorporate into scikit-image? If so, I will start working on it as a
branch; otherwise, we'll just use skimage as a dependency. I'd imagine it
would either be a submodule of skimage.draw(), something like
skimage.draw.ensemble() or draw.pseudodata()...

2. Is anyone aware of a pre-existing toolset/library (preferably in Python)
that's built for this? And if so, is that library compatible with skimage?

3. When a user runs skimage.draw.circle(), it returns *rr* and *cc* -- what
are these? Is cc the "chain code"?

One caveat: our design plan is object oriented. We thought that the best
way to have an image with several particles would be a *Particle* class
that adds enough metadata to the returns of skimage.draw() so that
particles could individually be tracked, isolated and manipulated. The
ensemble would be created on a *Canvas* class (better name? *TestImage*?),
which is responsible for storing an ensemble of Particles, as well as all
the drawing and organization of the ensemble. For example, a circle would
have attributes X, Y, R, which are then passed to a draw() method that
calls skimage.draw.circle(). In this way, one could track particle
positions, manipulate and redraw(). Would something like this clash with
skimage's basic design paradigms? If so, maybe it would be best to keep
this toolkit out of skimage.

PS, is anyone else working on this, or interested?

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tsyu80 at gmail.com Sat Dec 7 02:02:45 2013
From: tsyu80 at gmail.com (Tony Yu)
Date: Sat, 7 Dec 2013 01:02:45 -0600
Subject: match_template confidence > 3 ?
In-Reply-To: <8580c00d-9bdf-429b-9921-ff14cdf83207@googlegroups.com> References: <8580c00d-9bdf-429b-9921-ff14cdf83207@googlegroups.com> Message-ID: Sorry it took so long for me to reply to your email. Unfortunately, I'm not sure what the problem is here. You're correct in thinking that values shouldn't be outside of [-1, 1], but as your code demonstrates, there are apparently inputs that break the current implementation. I've played around with isolating a small test image, but I haven't had much luck. -Tony On Wed, Nov 20, 2013 at 12:38 PM, Jon Schull wrote: > Greetings, we're using SciKit template_matching and getting confidence > values > 3. Does that make sense? > > A simple example with test images is attached. > > Thanks for the fine work! > > import numpy, sys > from skimage import data > from skimage import io > from skimage.feature import match_template > > def containsLikelyhood(needleLoc, haystackLoc): > haystack = data.imread(haystackLoc, as_grey=True) > needle = data.imread(needleLoc, as_grey=True) > greyLoc = match_template(haystack, needle) > > ij = numpy.unravel_index(numpy.argmax(greyLoc), greyLoc.shape) > x, y = ij[::-1] > > height, width = needle.shape > return greyLoc[y][x], (x, y, width, height) > > > if __name__ == "__main__": > print containsLikelyhood("locatedStar.PNG", "screenshot.PNG") > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... 
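[Editor's note: Tony's point above -- that match_template scores are correlation coefficients and so should lie in [-1, 1] -- can be sanity-checked against a naive direct computation. The sketch below (a hypothetical `ncc_naive` helper, not scikit-image's code, which uses a much faster FFT/integral-image formulation) shows why each score is a Pearson correlation between the template and an image patch, hence bounded by 1 in magnitude.]

```python
import numpy as np

def ncc_naive(image, template):
    """Naive normalized cross-correlation, 'valid' mode.

    Each output value is the Pearson correlation between the
    (mean-subtracted) template and the image patch under it, so by
    Cauchy-Schwarz every score must lie in [-1, 1].
    """
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    out_h = image.shape[0] - th + 1
    out_w = image.shape[1] - tw + 1
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = tnorm * np.sqrt((p ** 2).sum())
            out[r, c] = (t * p).sum() / denom if denom > 0 else 0.0
    return out

# A template cut from a known offset is recovered at a score of ~1.0.
rng = np.random.RandomState(0)
image = rng.rand(20, 20)
template = image[5:10, 7:12].copy()
scores = ncc_naive(image, template)
peak = np.unravel_index(np.argmax(scores), scores.shape)
```

Comparing a suspect `match_template` output against this slow reference on a small crop is one way to isolate a minimal failing input.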
URL: From silvertrumpet999 at gmail.com Sat Dec 7 20:39:57 2013 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Sat, 7 Dec 2013 17:39:57 -0800 (PST) Subject: Pattern drawing library ideas In-Reply-To: References: Message-ID: <2c6a0164-befe-49f5-83c5-6ca8f0ca1bd0@googlegroups.com> Hi Adam, I'll briefly answer question #3: `rr` and `cc` from `skimage.draw.circle()` correspond to indices of image pixels within the circle. They are meant to be used to do something to all values in that circle, e.g. set all values in that circle to one with `im[rr, cc] = 1`. I don't know of anyone else doing what you propose. We do strive for functional interfaces *where possible*, but what you describe certainly appears suited to an object-oriented framework. Given my above answer I think you want to track and monitor your *Particle*s *prior* to calling `skimage.draw`. The tweak I would make is not to draw anything at all until an output is requested, then batch up all of the circles/objects and have them drawn at that time. Before that point every *Particle* would exist in a much more compact state, probably just center coordinates and radius, and this will be both better and more precise to deal with when repositioning or otherwise tweaking things. Regards, Josh On Friday, December 6, 2013 6:07:57 PM UTC-6, Adam Hughes wrote: > > Hi everyone, > > My colleague Evelyn and I have been using scikit image's draw utilities to > generate test data for modeling particle distributions on optical fibers > (ie the SEM images I sent a few weeks ago). We are doing this to test some > new models for nanoparticle cluster fitting that we've developed. > > In working on this, we believe we've come up with a general framework for > drawing 2D particle ensembles in skimage. One could envision using it to > do something like: > > Step 0: Generate a blank canvas of resolution 1024 x 768 > Step 1: Add circles of average radius 5 pixels until 30% of the image > is covered. 
> - Arrange these randomly vs. equally spaced on a grid > etc... > Step 2: Add an "cluster" of particles to this image. Paint only the > clusters red. > Step 3: Run an optimization that maximizes the inter-particle spacing. > Step 4: Make all the particles smaller than a certain area green. > Step 5: Try watershedding > etc... > > By generating the sample images in skimage, we wouldn't have to break out > of the python workflow to make our test data. > > We believe that we've come up with some abstractions that would allow for > such an toolset. This includes defining a abstract class for different > particle shapes, and then wrapping skimage.draw() functions as a class > method. Before we get invested in this, I had a few questions: > > 1. Is this something that, if executed well, would be of interest to > incorporate into scikit image? If so, I will start working on it as a > branch; otherwise, we'll just use skimage as a dependency. I'd image it > would either be a submodule of skimage.draw(), something like > skimage.draw.ensemble() or draw.psuedodata()... > > 2. Is anyone aware of a pre-existing tooset/library (preferrably in > Python) that's built for this? And if so, is that library compatible to > skimage? > > 3. When a user runs skimage.draw.circle(), it returns *rr* and *cc*, > what are these? Is cc the "chain code"? > > One caveat: our design plan is object oriented. We thought that the best > way to have an image with several particles would require a *Particle*class to add enough metadata to the returns of scikit.draw() so that > particles could indivually be tracked, isolated and manipulated. The > ensemble would be created on a *Canvas* class (better name? *TestImage*?), > which is responsible for storing an ensemble of Particles, as well as all > the drawing and organization of the ensemble. For example, a circle would > have attributes X, Y, R, which are then passed to a draw() method that > called skimage.circle(). 
In this way, one could track particle positions, > manipulate and redraw(). Would something like this clash with skimage's > basic design paradigms? If so we, maybe it would best to keep this toolkit > out of skimage. > > PS, is anyone else working on this, or interested? > > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From svfilhol at alaska.edu Sat Dec 7 23:12:26 2013 From: svfilhol at alaska.edu (Simon Filhol) Date: Sat, 7 Dec 2013 20:12:26 -0800 (PST) Subject: exposure.equalize_adapthist() problem Message-ID: <507e4f65-0ccc-40c5-bb98-03e8b2864e51@googlegroups.com> Hi, after reinstalling, updating the skimage package version 0.9.x , I cannot get the function equalize_adapthist() to work like the example on the scikit-image webpage. from skimage import exposure, data img = data.moon() img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03) the following error is returned in the console: >>> img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03) Traceback (most recent call last): File "", line 1, in AttributeError: 'module' object has no attribute 'equalize_adapthist' Would you have any insight into which problem I am running? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Sat Dec 7 21:08:10 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 8 Dec 2013 04:08:10 +0200 Subject: Pattern drawing library ideas In-Reply-To: References: Message-ID: Hi, Adam On Sat, Dec 7, 2013 at 2:07 AM, Adam Hughes wrote: > 1. Is this something that, if executed well, would be of interest to > incorporate into scikit image? If so, I will start working on it as a > branch; otherwise, we'll just use skimage as a dependency. I'd image it > would either be a submodule of skimage.draw(), something like > skimage.draw.ensemble() or draw.psuedodata()... > > 2. 
Is anyone aware of a pre-existing toolset/library (preferably in Python)
> that's built for this? And if so, is that library compatible with skimage?

What you are building reminds me a bit of a scenegraph. You are probably
better off using a library like Matplotlib for rendering anything remotely
involved. In `skimage` we provide a very basic drawing API with the
explicit goal of manipulating images (arrays) directly, but it is not meant
as an advanced canvas.

I reckon that you will develop some low-level primitives as you go along,
and those would fit well into `skimage.draw`.

Regards
Stéfan

From stefan at sun.ac.za Sat Dec 7 21:22:41 2013
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Sun, 8 Dec 2013 04:22:41 +0200
Subject: match_template confidence > 3 ?
In-Reply-To: 
References: <8580c00d-9bdf-429b-9921-ff14cdf83207@googlegroups.com>
Message-ID: 

On Sat, Dec 7, 2013 at 9:02 AM, Tony Yu wrote:
> Unfortunately, I'm not sure what the problem is here. You're correct in
> thinking that values shouldn't be outside of [-1, 1], but as your code
> demonstrates, there are apparently inputs that break the current
> implementation. I've played around with isolating a small test image, but
> I haven't had much luck.

I've filed an issue with a minimal code snippet here:

https://github.com/scikit-image/scikit-image/issues/845

Stéfan

From steven.silvester at gmail.com Sun Dec 8 21:51:53 2013
From: steven.silvester at gmail.com (Steven Silvester)
Date: Sun, 8 Dec 2013 18:51:53 -0800 (PST)
Subject: exposure.equalize_adapthist() problem
In-Reply-To: <507e4f65-0ccc-40c5-bb98-03e8b2864e51@googlegroups.com>
References: <507e4f65-0ccc-40c5-bb98-03e8b2864e51@googlegroups.com>
Message-ID: <1282beda-c122-46d9-9ce6-25934455ef01@googlegroups.com>

Hi Simon,

How did you install it? What OS are you using? I just tried it using v0.9.3
from Anaconda on Windows 7, and it works just fine.
Regards, Steve Silvester -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Sun Dec 8 19:02:39 2013 From: jsch at demuc.de (=?iso-8859-1?Q?Johannes_Sch=F6nberger?=) Date: Mon, 9 Dec 2013 01:02:39 +0100 Subject: match_template confidence > 3 ? In-Reply-To: References: <8580c00d-9bdf-429b-9921-ff14cdf83207@googlegroups.com> Message-ID: <942D69F3-DD98-4382-99CE-AD36DE850502@demuc.de> See https://github.com/scikit-image/scikit-image/pull/847. Can you please test if this fixes your issues? Am 08.12.2013 um 03:22 schrieb St?fan van der Walt : > On Sat, Dec 7, 2013 at 9:02 AM, Tony Yu wrote: >> Unfortunately, I'm not sure what the problem is here. You're correct in >> thinking that values shouldn't be outside of [-1, 1], but as your code >> demonstrates, there are apparently inputs that break the current >> implementation. I've played around with isolating a small test image, but I >> haven't had much luck. > > I've filed an issue with a minimal code snippet here: > > https://github.com/scikit-image/scikit-image/issues/845 > > St?fan > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. From hughesadam87 at gmail.com Mon Dec 9 12:41:02 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Mon, 9 Dec 2013 09:41:02 -0800 (PST) Subject: Pattern drawing library ideas In-Reply-To: References: Message-ID: <1b5ecdff-47d9-4881-b6f4-a816c1c07e0e@googlegroups.com> Thanks for the advice guys! Josh, that makes sense, thanks. I will implement that suggestion, and thanks for explaining rr and cc. Stefan, thanks! I will still share some example notebooks down the road, but keep the library separate from skimage. 
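[Editor's note: the rr/cc convention Josh explained earlier in this thread can be illustrated without skimage itself. `circle_coords` below is a hypothetical stand-in for `skimage.draw.circle` (renamed `skimage.draw.disk` in later releases): a draw function returns two index arrays, the row and column coordinates of every pixel inside the shape, which are then used for fancy indexing.]

```python
import numpy as np

def circle_coords(r0, c0, radius, shape):
    """Return (rr, cc): row/column indices of pixels inside a circle.

    Stand-in for skimage.draw.circle / disk, built from a boolean mask;
    the real function computes the coordinate arrays directly.
    """
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    mask = (rows - r0) ** 2 + (cols - c0) ** 2 < radius ** 2
    return np.nonzero(mask)

im = np.zeros((32, 32))
rr, cc = circle_coords(16, 16, 5, im.shape)
im[rr, cc] = 1   # "do something to all values in that circle"
```

This is also why Josh suggests keeping each Particle as just (center, radius) until render time: the coordinate arrays can be regenerated cheaply whenever the ensemble changes.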
If there are any new particle types that might make sense to be added to skimage.draw, then i'd be happy to share at that time. On Friday, December 6, 2013 7:07:57 PM UTC-5, Adam Hughes wrote: > > Hi everyone, > > My colleague Evelyn and I have been using scikit image's draw utilities to > generate test data for modeling particle distributions on optical fibers > (ie the SEM images I sent a few weeks ago). We are doing this to test some > new models for nanoparticle cluster fitting that we've developed. > > In working on this, we believe we've come up with a general framework for > drawing 2D particle ensembles in skimage. One could envision using it to > do something like: > > Step 0: Generate a blank canvas of resolution 1024 x 768 > Step 1: Add circles of average radius 5 pixels until 30% of the image > is covered. > - Arrange these randomly vs. equally spaced on a grid > etc... > Step 2: Add an "cluster" of particles to this image. Paint only the > clusters red. > Step 3: Run an optimization that maximizes the inter-particle spacing. > Step 4: Make all the particles smaller than a certain area green. > Step 5: Try watershedding > etc... > > By generating the sample images in skimage, we wouldn't have to break out > of the python workflow to make our test data. > > We believe that we've come up with some abstractions that would allow for > such an toolset. This includes defining a abstract class for different > particle shapes, and then wrapping skimage.draw() functions as a class > method. Before we get invested in this, I had a few questions: > > 1. Is this something that, if executed well, would be of interest to > incorporate into scikit image? If so, I will start working on it as a > branch; otherwise, we'll just use skimage as a dependency. I'd image it > would either be a submodule of skimage.draw(), something like > skimage.draw.ensemble() or draw.psuedodata()... > > 2. 
Is anyone aware of a pre-existing tooset/library (preferrably in > Python) that's built for this? And if so, is that library compatible to > skimage? > > 3. When a user runs skimage.draw.circle(), it returns *rr* and *cc*, > what are these? Is cc the "chain code"? > > One caveat: our design plan is object oriented. We thought that the best > way to have an image with several particles would require a *Particle*class to add enough metadata to the returns of scikit.draw() so that > particles could indivually be tracked, isolated and manipulated. The > ensemble would be created on a *Canvas* class (better name? *TestImage*?), > which is responsible for storing an ensemble of Particles, as well as all > the drawing and organization of the ensemble. For example, a circle would > have attributes X, Y, R, which are then passed to a draw() method that > called skimage.circle(). In this way, one could track particle positions, > manipulate and redraw(). Would something like this clash with skimage's > basic design paradigms? If so we, maybe it would best to keep this toolkit > out of skimage. > > PS, is anyone else working on this, or interested? > > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dfarmernv at gmail.com Fri Dec 13 15:03:30 2013 From: dfarmernv at gmail.com (Dan Farmer) Date: Fri, 13 Dec 2013 12:03:30 -0800 Subject: Better results with Canny/Hough for circular particles In-Reply-To: References: Message-ID: Hi Adam, This can be the worst part of image processing, but I'm curious how much you played with the parameters to Canny? You probably know this, but canny already tries to close gaps (hysteresis thresholding). What you want to do is try to lower the low_threshold parameter (values above the high threshold value get initially labeled as edges, then it looks for pixels that are connected to edge pixels and whose value is > low_threshold to link the edges). 
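[Editor's note: the hysteresis step Dan describes can be sketched in a few lines. This is a toy implementation for illustration only -- skimage's canny() additionally does Gaussian smoothing and non-maximum suppression, and this version uses np.roll, so it assumes no edges touch the image border.]

```python
import numpy as np

def hysteresis(mag, low, high):
    """Toy hysteresis thresholding (the edge-linking step of Canny).

    Pixels above `high` seed edges; pixels above `low` are kept only if
    connected (8-neighbourhood) to a seed, which is why lowering
    low_threshold links more broken edge fragments together.
    """
    strong = mag > high
    weak = mag > low
    edges = strong.copy()
    while True:
        grown = edges.copy()
        # Grow edges by one pixel in every 8-neighbour direction.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                grown |= np.roll(np.roll(edges, dr, axis=0), dc, axis=1)
        grown &= weak   # ...but only into above-low pixels
        if np.array_equal(grown, edges):
            return edges
        edges = grown

# One strong pixel links a connected weak trail; an isolated weak
# pixel elsewhere is discarded.
mag = np.zeros((5, 7))
mag[2, 1] = 0.9      # strong seed
mag[2, 2:5] = 0.4    # weak trail connected to the seed
mag[0, 6] = 0.4      # isolated weak pixel
edges = hysteresis(mag, low=0.3, high=0.8)
```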
An easy/basic way to get rid of small fragments would be to start with morphological erosion. -Dan On Fri, Dec 13, 2013 at 11:47 AM, Adam Hughes wrote: > Hi, > > I have several images of circular particles (see attached for an example) > and I've been experimenting with automatic routines to find edges. > > I've found that with Canny, I can get really nice edges, but the edges are > not always connected. Thus, when I do fill-binary, many of my particles are > not painted in due to slight breaks in the border returned by canny. Is > there an ideal way to fix this, either by connecting "almost" connected > canny edges? Additionally, what is the best way to filter out small > fragments and/or non-circular edges? > > I've attached an image of the canny outlines; you can see that I obviously > want to get rid some of the regions that aren't associated with any > particles. PS, the coloring of the outlines are based on the brightness of > the image at that point underneath it, which has been hidden. (Would be > happy to share the function if anyone wants it). > > Lastly, I tried adapting the circular hough transform example: > > http://scikit-image.org/docs/dev/auto_examples/plot_circular_elliptical_hough_transform.html > > But struggled with setting it up, due to a naive understanding of the > algorithm. Given that my image has thousands of particles, but I know > roughly the size distribution, would the circular hough transform be useful > to me? > > Thanks > > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. 
From hughesadam87 at gmail.com Fri Dec 13 15:25:53 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Fri, 13 Dec 2013 12:25:53 -0800 (PST) Subject: Better results with Canny/Hough for circular particles In-Reply-To: References: Message-ID: Hi Dan, Thanks for the quick reply. I think that I can get better results if I tweak the parameters. The threshold parameter intuitively makes sense, but I'll have to read a bit to get familiar with sigma and the algorithm in general. Thanks for the explanation; it really helped. I will try out the erosion as well. PS, do you have any feelings towards the applicability of circular hough to my image? On Friday, December 13, 2013 3:03:30 PM UTC-5, Dan Farmer wrote: > > Hi Adam, > > This can be the worst part of image processing, but I'm curious how > much you played with the parameters to Canny? You probably know this, > but canny already tries to close gaps (hysteresis thresholding). What > you want to do is try to lower the low_threshold parameter (values > above the high threshold value get initially labeled as edges, then it > looks for pixels that are connected to edge pixels and whose value is > > low_threshold to link the edges). > > An easy/basic way to get rid of small fragments would be to start with > morphological erosion. > > -Dan > > On Fri, Dec 13, 2013 at 11:47 AM, Adam Hughes > > wrote: > > Hi, > > > > I have several images of circular particles (see attached for an > example) > > and I've been experimenting with automatic routines to find edges. > > > > I've found that with Canny, I can get really nice edges, but the edges > are > > not always connected. Thus, when I do fill-binary, many of my particles > are > > not painted in due to slight breaks in the border returned by canny. Is > > there an ideal way to fix this, either by connecting "almost" connected > > canny edges? Additionally, what is the best way to filter out small > > fragments and/or non-circular edges? 
> > > > I've attached an image of the canny outlines; you can see that I > obviously > > want to get rid some of the regions that aren't associated with any > > particles. PS, the coloring of the outlines are based on the brightness > of > > the image at that point underneath it, which has been hidden. (Would be > > happy to share the function if anyone wants it). > > > > Lastly, I tried adapting the circular hough transform example: > > > > > http://scikit-image.org/docs/dev/auto_examples/plot_circular_elliptical_hough_transform.html > > > > But struggled with setting it up, due to a naive understanding of the > > algorithm. Given that my image has thousands of particles, but I know > > roughly the size distribution, would the circular hough transform be > useful > > to me? > > > > Thanks > > > > > > -- > > You received this message because you are subscribed to the Google > Groups > > "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an > > email to scikit-image... at googlegroups.com . > > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hughesadam87 at gmail.com Fri Dec 13 14:47:44 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Fri, 13 Dec 2013 14:47:44 -0500 Subject: Better results with Canny/Hough for circular particles Message-ID: Hi, I have several images of circular particles (see attached for an example) and I've been experimenting with automatic routines to find edges. I've found that with Canny, I can get really nice edges, but the edges are not always connected. Thus, when I do fill-binary, many of my particles are not painted in due to slight breaks in the border returned by canny. Is there an ideal way to fix this, either by connecting "almost" connected canny edges? Additionally, what is the best way to filter out small fragments and/or non-circular edges? 
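[Editor's note: one common answer to the question above -- bridging small breaks in edges, filling, then dropping small fragments -- is a close/fill/size-filter pipeline. The sketch below uses scipy.ndimage on a toy edge map (a rectangle outline standing in for a broken canny contour); skimage.morphology offers equivalents such as binary_closing and remove_small_objects. It is illustrative only: closing can only bridge gaps up to roughly the structuring-element size.]

```python
import numpy as np
from scipy import ndimage as ndi

# Toy edge map: a particle outline with a one-pixel break, plus a
# small stray fragment (stand-ins for a canny gap and noise).
edges = np.zeros((20, 20), dtype=bool)
edges[5, 5:15] = True
edges[14, 5:15] = True
edges[5:15, 5] = True
edges[5:15, 14] = True
edges[5, 9] = False          # break in the outline
edges[2, 17] = True          # stray fragment

# 1) bridge small gaps, 2) fill closed outlines, 3) drop regions
# below a size threshold (non-particle fragments).
closed = ndi.binary_closing(edges, structure=np.ones((3, 3)))
filled = ndi.binary_fill_holes(closed)
labels, n = ndi.label(filled)
sizes = ndi.sum(filled, labels, range(1, n + 1))
keep = np.isin(labels, 1 + np.flatnonzero(sizes > 20))
```

Without the closing step, the background "leaks" through the one-pixel gap and binary_fill_holes leaves the particle unfilled, which is exactly the fill-binary failure described above.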
I've attached an image of the canny outlines; you can see that I obviously
want to get rid of some of the regions that aren't associated with any
particles. PS, the coloring of the outlines is based on the brightness of
the image at the point underneath, which has been hidden. (Would be happy
to share the function if anyone wants it.)

Lastly, I tried adapting the circular hough transform example:

http://scikit-image.org/docs/dev/auto_examples/plot_circular_elliptical_hough_transform.html

But I struggled with setting it up, due to a naive understanding of the
algorithm. Given that my image has thousands of particles, but I know
roughly the size distribution, would the circular hough transform be useful
to me?

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: aunps.png
Type: image/png
Size: 303252 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: canny.png
Type: image/png
Size: 58065 bytes
Desc: not available
URL: 

From hughesadam87 at gmail.com Fri Dec 13 18:09:02 2013
From: hughesadam87 at gmail.com (Adam Hughes)
Date: Fri, 13 Dec 2013 18:09:02 -0500
Subject: Better results with Canny/Hough for circular particles
In-Reply-To: <4506DDA5-883C-4CED-8CB3-56F2E96A94B1@demuc.de>
References: <4506DDA5-883C-4CED-8CB3-56F2E96A94B1@demuc.de>
Message-ID: 

I'm trying multiple methods for isolating nanoparticles for a paper whose
purpose is to contextualize the available image processing workflows best
suited for sizing, separating and counting nanoparticles. The idea is that
imaging nanoparticles is becoming a standard necessity, so I want to
present an overview of the available methods and workflows for processing
images, as well as best practices for obtaining high quality images in the
lab via SEM/AFM.
I've already covered approaches via auto thresholding, optionally applying
watershedding to find particle boundaries. IMO, Canny, Hough, Sobel, etc.
give another approach, whereby one finds and deals directly with the
particle boundaries/outlines. The advantage of this approach is that it
lends finer control over the particle boundaries, which are often hazy in
SEM images. I think that there are some use cases for this approach. For
example, I've observed that canny edge detection gives a great fit to
particles that have a halo effect, so I'm pretty confident that, if we
could fill all of the edges, we'd be sizing the particles very nicely. It's
easier to assess the "goodness of fit" looking at just the edges, but
obviously it introduces more work. I had at least hoped to present a bit of
this edge -> filter -> fill approach in the paper to complement the
threshold approaches to segmentation.

I tried the morphological operators (open, close, dilation, edge), but they
are a bit restrictive. I'd need something with a variable structure
parameter, like what Dan mentioned is sort of built into canny (i.e., it's
smart about how it closes the regions). I'll probably just leave it at "the
input parameters are crucial for getting closed regions" instead of
focusing on how to connect the regions after the fact.

PS, is anyone familiar with a function in scikit or ndimage to give the
perimeter/outline of a filled region?

Thanks

On Fri, Dec 13, 2013 at 4:32 PM, Johannes Schönberger wrote:

> Hi,
>
> do you really need the edges or what is the actual purpose? Maybe there
> are better methods than canny to achieve it...
>
> Am 13.12.2013 um 21:25 schrieb Adam Hughes:
>
> > Hi Dan,
> >
> > Thanks for the quick reply.
> >
> > I think that I can get better results if I tweak the parameters. The
> > threshold parameter intuitively makes sense, but I'll have to read a bit
> > to get familiar with sigma and the algorithm in general.
Thanks for the > explanation; it really helped. I will try out the erosion as well. > > > > PS, do you have any feelings towards the applicability of circular hough > to my image? > > > > On Friday, December 13, 2013 3:03:30 PM UTC-5, Dan Farmer wrote: > > Hi Adam, > > > > This can be the worst part of image processing, but I'm curious how > > much you played with the parameters to Canny? You probably know this, > > but canny already tries to close gaps (hysteresis thresholding). What > > you want to do is try to lower the low_threshold parameter (values > > above the high threshold value get initially labeled as edges, then it > > looks for pixels that are connected to edge pixels and whose value is > > > low_threshold to link the edges). > > > > An easy/basic way to get rid of small fragments would be to start with > > morphological erosion. > > > > -Dan > > > > On Fri, Dec 13, 2013 at 11:47 AM, Adam Hughes > wrote: > > > Hi, > > > > > > I have several images of circular particles (see attached for an > example) > > > and I've been experimenting with automatic routines to find edges. > > > > > > I've found that with Canny, I can get really nice edges, but the edges > are > > > not always connected. Thus, when I do fill-binary, many of my > particles are > > > not painted in due to slight breaks in the border returned by canny. > Is > > > there an ideal way to fix this, either by connecting "almost" connected > > > canny edges? Additionally, what is the best way to filter out small > > > fragments and/or non-circular edges? > > > > > > I've attached an image of the canny outlines; you can see that I > obviously > > > want to get rid some of the regions that aren't associated with any > > > particles. PS, the coloring of the outlines are based on the > brightness of > > > the image at that point underneath it, which has been hidden. (Would > be > > > happy to share the function if anyone wants it). 
> > > > > > Lastly, I tried adapting the circular hough transform example: > > > > > > > http://scikit-image.org/docs/dev/auto_examples/plot_circular_elliptical_hough_transform.html > > > > > > But struggled with setting it up, due to a naive understanding of the > > > algorithm. Given that my image has thousands of particles, but I know > > > roughly the size distribution, would the circular hough transform be > useful > > > to me? > > > > > > Thanks > > > > > > > > > -- > > > You received this message because you are subscribed to the Google > Groups > > > "scikit-image" group. > > > To unsubscribe from this group and stop receiving emails from it, send > an > > > email to scikit-image... at googlegroups.com. > > > For more options, visit https://groups.google.com/groups/opt_out. > > > > -- > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image+unsubscribe at googlegroups.com. > > For more options, visit https://groups.google.com/groups/opt_out. > > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/9U46IbLV90A/unsubscribe. > To unsubscribe from this group and all of its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hughesadam87 at gmail.com Fri Dec 13 19:42:58 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Fri, 13 Dec 2013 19:42:58 -0500 Subject: New examples Message-ID: Hey guys, The new examples in the gallery look great, and are quite helpful! 
I just watched a tutorial from Stefan today (PyData 2012), which was really great: http://vimeo.com/53065496 Perhaps put any videos like this in the gallery as well? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Fri Dec 13 16:32:40 2013 From: jsch at demuc.de (=?iso-8859-1?Q?Johannes_Sch=F6nberger?=) Date: Fri, 13 Dec 2013 22:32:40 +0100 Subject: Better results with Canny/Hough for circular particles In-Reply-To: References: Message-ID: <4506DDA5-883C-4CED-8CB3-56F2E96A94B1@demuc.de> Hi, do you really need the edges or what is the actual purpose? Maybe there are better methods other than canny to achieve it... Am 13.12.2013 um 21:25 schrieb Adam Hughes : > Hi Dan, > > Thanks for the quick reply. > > I think that I can get better results if I tweak the parameters. The threshold parameter intuitively makes sense, but I'll have to read a bit to get familiar with sigma and the algorithm in general. Thanks for the explanation; it really helped. I will try out the erosion as well. > > PS, do you have any feelings towards the applicability of circular hough to my image? > > On Friday, December 13, 2013 3:03:30 PM UTC-5, Dan Farmer wrote: > Hi Adam, > > This can be the worst part of image processing, but I'm curious how > much you played with the parameters to Canny? You probably know this, > but canny already tries to close gaps (hysteresis thresholding). What > you want to do is try to lower the low_threshold parameter (values > above the high threshold value get initially labeled as edges, then it > looks for pixels that are connected to edge pixels and whose value is > > low_threshold to link the edges). > > An easy/basic way to get rid of small fragments would be to start with > morphological erosion. 
> > -Dan > > On Fri, Dec 13, 2013 at 11:47 AM, Adam Hughes wrote: > > Hi, > > > > I have several images of circular particles (see attached for an example) > > and I've been experimenting with automatic routines to find edges. > > > > I've found that with Canny, I can get really nice edges, but the edges are > > not always connected. Thus, when I do fill-binary, many of my particles are > > not painted in due to slight breaks in the border returned by canny. Is > > there an ideal way to fix this, either by connecting "almost" connected > > canny edges? Additionally, what is the best way to filter out small > > fragments and/or non-circular edges? > > > > I've attached an image of the canny outlines; you can see that I obviously > > want to get rid some of the regions that aren't associated with any > > particles. PS, the coloring of the outlines are based on the brightness of > > the image at that point underneath it, which has been hidden. (Would be > > happy to share the function if anyone wants it). > > > > Lastly, I tried adapting the circular hough transform example: > > > > http://scikit-image.org/docs/dev/auto_examples/plot_circular_elliptical_hough_transform.html > > > > But struggled with setting it up, due to a naive understanding of the > > algorithm. Given that my image has thousands of particles, but I know > > roughly the size distribution, would the circular hough transform be useful > > to me? > > > > Thanks > > > > > > -- > > You received this message because you are subscribed to the Google Groups > > "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send an > > email to scikit-image... at googlegroups.com. > > For more options, visit https://groups.google.com/groups/opt_out. > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. 
> To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. From hughesadam87 at gmail.com Sat Dec 14 04:01:34 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Sat, 14 Dec 2013 04:01:34 -0500 Subject: Better results with Canny/Hough for circular particles In-Reply-To: References: <4506DDA5-883C-4CED-8CB3-56F2E96A94B1@demuc.de> Message-ID: Thanks for the help! On Dec 14, 2013 3:44 AM, "Johannes Schönberger" wrote: > > Am 14.12.2013 um 00:09 schrieb Adam Hughes : > > > I'm trying multiple methods for isolating nanoparticles for a paper in > which the purpose will be to contextualize a lot of available image > processing workflows best suited for sizing, separating and counting > nanoparticles. The idea is that imaging nanoparticles is becoming a > standard necessity, and so I want to present an overview of the available > methods and workflows for best processing images, as well as best practices > for obtaining high quality images in the lab via SEM/AFM. > > > > I've already covered approaches via auto thresholding, with optionally > applying watershedding to find particle boundaries. > > OK, segmentation would have been my first suggestion. > > > IMO, Canny, Hough, Sobel etc. give another approach, whereby one > finds and deals directly with the particle boundaries/outlines. The > advantage of this approach is that it lends finer control over the particle > boundaries, which are often hazy in SEM images. I think that there are > some use cases for this approach. For example, I've observed that canny > edge detection gives a great fit to particles that have a halo effect, and > so I'm pretty confident that, if we could fill all of the edges, we'd > be sizing the particles very nicely. It's easier to assess the "goodness > of fit" looking at just the edges, but obviously it introduces more work. 
> I at least had hoped to present a bit of this edge --> filter ---> fill > approach in the paper to complement the threshold approaches to > segmentation. > > > > I tried the morphological operators (open, close, dilation, edge), but > they are a bit restrictive. I'd need something that has a variable > structure parameter, as Dan mentioned sort of is builtin to canny (ie, it's > smart about how it closes the regions). I'll probably just leave it at > "the input parameters are crucial for getting closed regions" instead of > focusing on how to connect the regions after the fact. > > > > PS, is anyone familiar with a function in scikit or ndimage to give the > perimeter/outline of a filled region? > > skimage.measure.regionprops, skimage.measure.perimeter > > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/9U46IbLV90A/unsubscribe. > To unsubscribe from this group and all of its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Fri Dec 13 21:34:38 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sat, 14 Dec 2013 04:34:38 +0200 Subject: New examples In-Reply-To: References: Message-ID: Hey, Adam On Sat, Dec 14, 2013 at 2:42 AM, Adam Hughes wrote: > Perhaps put any videos like this in the gallery as well? We could also add links to StackOverflow posts. Would you mind being our curator, and send a pull request either on the website repo or on the docs to make this happen? Thank you! 
Stéfan From stefan at sun.ac.za Fri Dec 13 22:07:58 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sat, 14 Dec 2013 05:07:58 +0200 Subject: Restoration module Message-ID: Hi everyone, Just a heads-up on a new scikit-image submodule: ``skimage.restoration``. Thanks to François Orieux, we are fortunate to have the following deconvolution algorithms implemented: [1] `wiener` [2] `unsupervised_wiener` [3] `richardson_lucy` I'd pay specific attention to François' own algorithm [2] -- no more painful parameter tweaking to get a good Wiener-Hunt deconvolution! Thanks also to everyone on the team who worked hard to review and shape this PR. Stéfan From jsch at demuc.de Sat Dec 14 03:24:07 2013 From: jsch at demuc.de (=?iso-8859-1?Q?Johannes_Sch=F6nberger?=) Date: Sat, 14 Dec 2013 09:24:07 +0100 Subject: Restoration module In-Reply-To: References: Message-ID: <9FF82A0A-072D-4A1F-9A05-2F1D1EF17692@demuc.de> Great contribution! Are there any other "filters" that should go into this new sub-package? Johannes Am 14.12.2013 um 04:07 schrieb Stéfan van der Walt : > Hi everyone, > > Just a heads-up on a new scikit-image submodule: ``skimage.restoration``. > > Thanks to François Orieux, we are fortunate to have the following > deconvolution algorithms implemented: > > [1] `wiener` > [2] `unsupervised_wiener` > [3] `richardson_lucy` > > I'd pay specific attention to François' own algorithm [2] -- no more > painful parameter tweaking to get a good Wiener-Hunt deconvolution! > > Thanks also to everyone on the team who worked hard to review and shape this PR. > > Stéfan > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. 
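[Editor's note] For readers curious what Wiener deconvolution actually does, its core can be sketched in a few lines of NumPy FFT code. This toy 1-D version assumes periodic boundaries and a scalar `balance` regularizer; the real `skimage.restoration.wiener` handles boundaries and regularization operators far more carefully:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, balance=0.01):
    """Naive periodic Wiener deconvolution: F = conj(H) * Y / (|H|^2 + balance)."""
    H = np.fft.fft(psf, n=blurred.size)
    Y = np.fft.fft(blurred)
    F = np.conj(H) * Y / (np.abs(H) ** 2 + balance)
    return np.real(np.fft.ifft(F))

signal = np.zeros(64)
signal[20:30] = 1.0                    # a box pulse
psf = np.ones(5) / 5.0                 # moving-average blur
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf, 64)))
restored = wiener_deconvolve(blurred, psf, balance=1e-3)

err_blur = np.abs(blurred - signal).max()
err_rest = np.abs(restored - signal).max()
# With a small balance, the restoration error falls well below the blur error.
```

With `balance=0` this degenerates to naive inverse filtering, which blows up wherever the transfer function is near zero; the regularizer trades a little bias for stability, and it is this kind of tuning parameter that `unsupervised_wiener` estimates for you.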
From jsch at demuc.de Sat Dec 14 03:43:41 2013 From: jsch at demuc.de (=?iso-8859-1?Q?Johannes_Sch=F6nberger?=) Date: Sat, 14 Dec 2013 09:43:41 +0100 Subject: Better results with Canny/Hough for circular particles In-Reply-To: References: <4506DDA5-883C-4CED-8CB3-56F2E96A94B1@demuc.de> Message-ID: Am 14.12.2013 um 00:09 schrieb Adam Hughes : > I'm trying multiple methods for isolating nanoparticles for a paper in which the purpose will be to contextualize a lot of available image processing workflows best suited for sizing, separating and counting nanoparticles. The idea is that imaging nanoparticles is becoming a standard necessity, and so I want to present an overview of the available methods and workflows for best processing images, as well as best practices for obtaining high quality images in the lab via SEM/AFM. > > I've already covered approaches via auto thresholding, with optionally applying watershedding to find particle boundaries. OK, segmentation would have been my first suggestion. > IMO, Canny, Hough, Sobel etc. give another approach, whereby one finds and deals directly with the particle boundaries/outlines. The advantage of this approach is that it lends finer control over the particle boundaries, which are often hazy in SEM images. I think that there are some use cases for this approach. For example, I've observed that canny edge detection gives a great fit to particles that have a halo effect, and so I'm pretty confident that, if we could fill all of the edges, we'd be sizing the particles very nicely. It's easier to assess the "goodness of fit" looking at just the edges, but obviously it introduces more work. I at least had hoped to present a bit of this edge --> filter --> fill approach in the paper to complement the threshold approaches to segmentation. > > I tried the morphological operators (open, close, dilation, edge), but they are a bit restrictive. 
I'd need something that has a variable structure parameter, as Dan mentioned sort of is builtin to canny (ie, it's smart about how it closes the regions). I'll probably just leave it at "the input parameters are crucial for getting closed regions" instead of focusing on how to connect the regions after the fact. > > PS, is anyone familiar with a function in scikit or ndimage to give the perimeter/outline of a filled region? skimage.measure.regionprops, skimage.measure.perimeter From stefan at sun.ac.za Sat Dec 14 14:04:32 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sat, 14 Dec 2013 21:04:32 +0200 Subject: Restoration module In-Reply-To: <9FF82A0A-072D-4A1F-9A05-2F1D1EF17692@demuc.de> References: <9FF82A0A-072D-4A1F-9A05-2F1D1EF17692@demuc.de> Message-ID: On Sat, Dec 14, 2013 at 10:24 AM, Johannes Sch?nberger wrote: > Are there any other "filters" that should go into this new sub-package? Thanks for opening the following issue where this can be tracked: https://github.com/scikit-image/scikit-image/issues/855 Thanks St?fan From hughesadam87 at gmail.com Sun Dec 15 15:16:51 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Sun, 15 Dec 2013 12:16:51 -0800 (PST) Subject: New examples In-Reply-To: References: Message-ID: Hey Stefan, I would be happy to try :) I'm confident I can gather the videos and stack overflow posts, but do you have a sense of where you'd like this information to fit into the website? In particular, I was thinking it could have either its own category (let's just call it "Outreach" at the moment) at this level: http://scikit-image.org/docs/0.9.x/ Or to lump it into the User Guide section, something like: User Guide: -section 1 -subsection - ... Community: - Video tutorials - Stack overflow posts Of course, it could also be bumped down directly into the User Guide: User Guide: -section1 -subsection -... 
- Outreach -video tutorials -stack overflow -mailing list discussions -more user examples I'm thinking user examples would be special links to repos or .ipynb files that aren't on the front page, but are often passed around on the mailing list. Any feelings about the organization Stefan? On Friday, December 13, 2013 9:34:38 PM UTC-5, Stefan van der Walt wrote: > > Hey, Adam > > On Sat, Dec 14, 2013 at 2:42 AM, Adam Hughes > > wrote: > > Perhaps put any videos like this in the gallery as well? > > We could also add links to StackOverflow posts. > > Would you mind being our curator, and send a pull request either on > the website repo or on the docs to make this happen? > > Thank you! > St?fan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hughesadam87 at gmail.com Sun Dec 15 15:18:30 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Sun, 15 Dec 2013 15:18:30 -0500 Subject: New examples In-Reply-To: References: Message-ID: Actually, it looks like the user guide already has a nice section that might be the right place for this: http://scikit-image.org/docs/0.9.x/user_guide/getting_help.html On Sun, Dec 15, 2013 at 3:16 PM, Adam Hughes wrote: > Hey Stefan, > > I would be happy to try :) > > I'm confident I can gather the videos and stack overflow posts, but do you > have a sense of where you'd like this information to fit into the website? > > In particular, I was thinking it could have either its own category (let's > just call it "Outreach" at the moment) at this level: > http://scikit-image.org/docs/0.9.x/ > > Or to lump it into the User Guide section, something like: > > User Guide: > -section 1 > -subsection > - ... > > Community: > - Video tutorials > - Stack overflow posts > > Of course, it could also be bumped down directly into the User Guide: > > User Guide: > -section1 > -subsection > -... 
> - Outreach > -video tutorials > -stack overflow > -mailing list discussions > -more user examples > > I'm thinking user examples would be special links to repos or .ipynb files > that aren't on the front page, but are often passed around on the mailing > list. > > Any feelings about the organization Stefan? > > On Friday, December 13, 2013 9:34:38 PM UTC-5, Stefan van der Walt wrote: >> >> Hey, Adam >> >> On Sat, Dec 14, 2013 at 2:42 AM, Adam Hughes >> wrote: >> > Perhaps put any videos like this in the gallery as well? >> >> We could also add links to StackOverflow posts. >> >> Would you mind being our curator, and send a pull request either on >> the website repo or on the docs to make this happen? >> >> Thank you! >> St?fan >> > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/85YYPimmlhs/unsubscribe. > To unsubscribe from this group and all of its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fboulogne at sciunto.org Sun Dec 15 16:25:23 2013 From: fboulogne at sciunto.org (=?ISO-8859-1?Q?Fran=E7ois_Boulogne?=) Date: Sun, 15 Dec 2013 16:25:23 -0500 Subject: New examples In-Reply-To: References: Message-ID: <52AE1E43.4030309@sciunto.org> Hi ! Nice idea. It's also something I thought about. I have three videos in my notes: * * * Cheers, -- Fran?ois Boulogne. 
http://www.sciunto.org GPG fingerprint: 25F6 C971 4875 A6C1 EDD1 75C8 1AA7 216E 32D5 F22F From hughesadam87 at gmail.com Mon Dec 16 14:12:52 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Mon, 16 Dec 2013 11:12:52 -0800 (PST) Subject: New examples In-Reply-To: <52AE1E43.4030309@sciunto.org> References: <52AE1E43.4030309@sciunto.org> Message-ID: <47fb6427-ea32-40e7-9b55-fa7d85c34bde@googlegroups.com> Thanks François! On Sunday, December 15, 2013 4:25:23 PM UTC-5, François Boulogne wrote: > > Hi ! > > Nice idea. It's also something I thought about. I have three videos in > my notes: > * > < > http://marakana.com/s/post/1101/image_processing_in_python_with_scikits-image_pydata_video> > > * > * > > > Cheers, > > -- > François Boulogne. > http://www.sciunto.org > GPG fingerprint: 25F6 C971 4875 A6C1 EDD1 75C8 1AA7 216E 32D5 F22F > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hughesadam87 at gmail.com Mon Dec 16 14:08:56 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Mon, 16 Dec 2013 14:08:56 -0500 Subject: Interactive selection of markers/seeds for random walk Message-ID: Hi, I was wondering if anyone had any examples or tools that would allow one to interactively select regions for seeds/markers in an image, which we could subsequently run the random walker algorithm over: http://scikit-image.org/docs/dev/auto_examples/plot_random_walker_segmentation.html I really enjoyed the ilastik tool that scikit-image lists under its related links on the homepage, and wondered if there were any demos for selection that might be easily adaptable to fit this purpose. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Mon Dec 16 21:34:03 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 17 Dec 2013 04:34:03 +0200 Subject: New examples In-Reply-To: References: Message-ID: On Sun, Dec 15, 2013 at 10:18 PM, Adam Hughes wrote: > Actually, it looks like the user guide already has a nice section that might > be the right place for this: > > http://scikit-image.org/docs/0.9.x/user_guide/getting_help.html Another option would be to have a "side scrolling" bar on the top of the gallery. I.e., we take about 300 pixels at the top to show the videos, the rest of the page loads the examples. If that's too complicated, simply linking to the place you mention would be ok. St?fan From kevin.keraudren at googlemail.com Tue Dec 17 04:11:08 2013 From: kevin.keraudren at googlemail.com (Kevin Keraudren) Date: Tue, 17 Dec 2013 09:11:08 +0000 Subject: Interactive selection of markers/seeds for random walk In-Reply-To: References: Message-ID: Hi, What about skimage.viewer.plugins.labelplugin.LabelPainter, there is an example with the watershed algorithm there: https://github.com/scikit-image/scikit-image/blob/master/viewer_examples/plugins/watershed_demo.py Otherwise, with OpenCV, you could look at the GrabCut example: https://github.com/Itseez/opencv/blob/master/samples/python2/grabcut.py Kind regards, Kevin On Mon, Dec 16, 2013 at 7:08 PM, Adam Hughes wrote: > Hi, > > I was wondering if anyone had any examples or tools that would allow one > to interactively select regions for seeds/markers in an image, which we > could subsequently run the randomwalk algorithm over: > > > http://scikit-image.org/docs/dev/auto_examples/plot_random_walker_segmentation.html > > I really enjoyed the ilastik tool that scikit image lists under its > related links on the homepage, and wondered if there were any demos for > selection that might be easily adaptable to fit this purpose. > > Thanks! 
> > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.richardson at visionsystemsinc.com Wed Dec 18 09:06:28 2013 From: scott.richardson at visionsystemsinc.com (scott.richardson at visionsystemsinc.com) Date: Wed, 18 Dec 2013 06:06:28 -0800 (PST) Subject: slic segmentation Message-ID: I recently upgraded skimage from release 0.8.2.0 to 0.9.3 and noticed that I am getting back a different segmentation than I used to. I see that @jni and @ahojnnes have made quite a few edits to skimage/segmentation/slic_superpixels.py and _slic.pyx, so I suspect that is expected, but I wanted to make sure it wasn't a regression. thanks Scott -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Wed Dec 18 09:09:22 2013 From: jsch at demuc.de (=?UTF-8?Q?Johannes_Sch=C3=B6nberger?=) Date: Wed, 18 Dec 2013 06:09:22 -0800 (PST) Subject: slic segmentation In-Reply-To: References: Message-ID: Can you share how the results changed? Am Mittwoch, 18. Dezember 2013 15:06:28 UTC+1 schrieb scott.ri... at visionsystemsinc.com: > > I recently upgraded skimage from release 0.8.2.0 to 0.9.3 and noticed that > I am getting back a different segmentation than I used to. > > I see that @jni and @ahojnnes have made quite a few edits to > skimage/segmentation/slic_superpixels.py and _slic.pyx, so I suspect that > is expected, but I wanted to make sure it wasn't a regression. > > thanks > Scott > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jni.soma at gmail.com Wed Dec 18 09:11:06 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Wed, 18 Dec 2013 06:11:06 -0800 (PST) Subject: slic segmentation In-Reply-To: References: Message-ID: <1387375863076.f806f7d3@Nodemailer> The segmentation did change, as a result of this PR: https://github.com/scikit-image/scikit-image/pull/666 In summary, the handling of the ratio/compactness parameter in scikit-image's SLIC was different from the reference implementation provided by the authors, and we fixed that between 0.8 and 0.9. If you fiddle with the compactness parameter, you should be able to get something close to your original segmentation. I hope this helps! Juan. ? Sent from Mailbox for iPhone On Thu, Dec 19, 2013 at 1:08 AM, null wrote: > I recently upgraded skimage from release 0.8.2.0 to 0.9.3 and noticed that > I am getting back a different segmentation than I used to. > I see that @jni and @ahojnnes have made quite a few edits to > skimage/segmentation/slic_superpixels.py and _slic.pyx, so I suspect that > is expected, but I wanted to make sure it wasn't a regression. > thanks > Scott > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Wed Dec 18 10:19:43 2013 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Wed, 18 Dec 2013 07:19:43 -0800 (PST) Subject: slic segmentation In-Reply-To: <1387375863076.f806f7d3@Nodemailer> References: <1387375863076.f806f7d3@Nodemailer> Message-ID: <2ea03c3a-0b28-4e05-92b7-ebd92a19088d@googlegroups.com> For completeness, also note earlier SLIC versions automatically forced a small Gaussian blur to the image prior to segmentation. 
This was controlled via the sigma parameter, and it defaulted to 1. Now the default is sigma=0; i.e. SLIC just performs SLIC by default. This is more intuitive and offers compatibility with workflows including their own custom pre-processing blurs. So, in addition to what Juan noted above, if you want result parity with 0.8.x you also must set sigma=1. On Wednesday, December 18, 2013 8:11:06 AM UTC-6, Juan Nunez-Iglesias wrote: The segmentation did change, as a result of this PR: > > https://github.com/scikit-image/scikit-image/pull/666 > > > In summary, the handling of the ratio/compactness parameter in > scikit-image's SLIC was different from the reference implementation > provided by the authors, and we fixed that between 0.8 and 0.9. If you > fiddle with the compactness parameter, you should be able to get something > close to your original segmentation. > > > I hope this helps! > > > Juan. > ? > Sent from Mailbox for iPhone > > > On Thu, Dec 19, 2013 at 1:08 AM, scott.ri... at visionsystemsinc.com > > wrote: > >> I recently upgraded skimage from release 0.8.2.0 to 0.9.3 and noticed >> that I am getting back a different segmentation than I used to. >> >> I see that @jni and @ahojnnes have made quite a few edits to >> skimage/segmentation/slic_superpixels.py and _slic.pyx, so I suspect >> that is expected, but I wanted to make sure it wasn't a regression. >> >> thanks >> Scott >> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image... at googlegroups.com . >> For more options, visit https://groups.google.com/groups/opt_out. >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From scott.richardson at visionsystemsinc.com Wed Dec 18 10:53:32 2013 From: scott.richardson at visionsystemsinc.com (scott.richardson at visionsystemsinc.com) Date: Wed, 18 Dec 2013 07:53:32 -0800 (PST) Subject: slic segmentation In-Reply-To: <2ea03c3a-0b28-4e05-92b7-ebd92a19088d@googlegroups.com> References: <1387375863076.f806f7d3@Nodemailer> <2ea03c3a-0b28-4e05-92b7-ebd92a19088d@googlegroups.com> Message-ID: <7dcb9813-175b-4297-87eb-7bd565556660@googlegroups.com> Ok. That makes sense. Thanks for the explanation. On Wednesday, December 18, 2013 10:19:43 AM UTC-5, Josh Warner wrote: > > For completeness, also note earlier SLIC versions automatically forced a > small Gaussian blur to the image prior to segmentation. This was controlled > via the sigma parameter, and it defaulted to 1. > > Now the default is sigma=0; i.e. SLIC just performs SLIC by default. This > is more intuitive and offers compatibility with workflows including their > own custom pre-processing blurs. So, in addition to what Juan noted above, > if you want result parity with 0.8.x you also must set sigma=1. > > On Wednesday, December 18, 2013 8:11:06 AM UTC-6, Juan Nunez-Iglesias > wrote: > > The segmentation did change, as a result of this PR: >> >> https://github.com/scikit-image/scikit-image/pull/666 >> >> >> In summary, the handling of the ratio/compactness parameter in >> scikit-image's SLIC was different from the reference implementation >> provided by the authors, and we fixed that between 0.8 and 0.9. If you >> fiddle with the compactness parameter, you should be able to get something >> close to your original segmentation. >> >> >> I hope this helps! >> >> >> Juan. >> ? >> Sent from Mailbox for iPhone >> >> >> On Thu, Dec 19, 2013 at 1:08 AM, scott.ri... at visionsystemsinc.com < >> scott.ri... at visionsystemsinc.com> wrote: >> >>> I recently upgraded skimage from release 0.8.2.0 to 0.9.3 and noticed >>> that I am getting back a different segmentation than I used to. 
>>> >>> I see that @jni and @ahojnnes have made quite a few edits to >>> skimage/segmentation/slic_superpixels.py and _slic.pyx, so I suspect >>> that is expected, but I wanted to make sure it wasn't a regression. >>> >>> thanks >>> Scott >>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "scikit-image" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to scikit-image... at googlegroups.com. >>> For more options, visit https://groups.google.com/groups/opt_out. >>> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadavh.horesh at gmail.com Wed Dec 18 03:13:02 2013 From: nadavh.horesh at gmail.com (Nadav Horesh) Date: Wed, 18 Dec 2013 10:13:02 +0200 Subject: Linking events in skimage.viewer Message-ID: I started to build an interactive image exploration utility based on matplotlib. Recently, following a link on this list, I encountered skimage.viewer, and found that the plugins architecture matches my needs. I could not find how to link keyboard and mouse events (and maybe buttons) to plugins. Any suggestions? I am using version 0.9.3 on Linux (I can install the pre-0.10 version, if needed) Thanks, Nadav From nadavh.horesh at gmail.com Wed Dec 18 06:36:53 2013 From: nadavh.horesh at gmail.com (Nadav Horesh) Date: Wed, 18 Dec 2013 13:36:53 +0200 Subject: Linking events in skimage.viewer In-Reply-To: References: Message-ID: Thank you Juan. 
It looks like the events caught by CanvasToolBase are not what I am looking for (mouse click and keyboard events). Thank you again, Nadav 2013/12/18 Juan Nunez-Iglesias : > Hi Nadav, > > I don't have much experience with interactive tools, but I think this is the > right place to start for examples: > https://github.com/scikit-image/scikit-image/blob/master/skimage/viewer/canvastools/linetool.py > > You'll see that the LineProfile plugin uses the LineTool canvas tool: > https://github.com/scikit-image/scikit-image/blob/master/skimage/viewer/plugins/lineprofile.py > > Hopefully this helps, otherwise @tonysyu might be able to step in and offer > a bit more guidance. > > Juan. > > > > > On Wed, Dec 18, 2013 at 7:13 PM, Nadav Horesh > wrote: >> >> I started to build an interactive image exploration utility based on >> matplotlib. Recently, following a link on this list, I encountered >> skimage.viewer, and found that the plugins architecture matches my needs. >> I could not find how to link keyboard and mouse events (and maybe buttons) >> to plugins. Any suggestions? >> >> I am using version 0.9.3 on Linux (I can install the pre-0.10 version, if needed) >> >> Thanks, >> >> Nadav >> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/groups/opt_out. > > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. 
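[Editor's note] Since `skimage.viewer` draws on a Matplotlib canvas, the event plumbing that the canvas tools wrap is ultimately Matplotlib's `mpl_connect` mechanism. As a rough, viewer-independent sketch (not the plugin API itself), hooking keyboard and mouse events looks like this:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

events = []

def on_key(event):
    # event.key is the key name, e.g. 'a' or 'ctrl+z'
    events.append(("key", event.key))

def on_click(event):
    # event.xdata / event.ydata are data coordinates (None outside the axes)
    events.append(("click", event.xdata, event.ydata))

fig, ax = plt.subplots()
ax.imshow(np.random.rand(8, 8), cmap="gray")
cid_key = fig.canvas.mpl_connect("key_press_event", on_key)
cid_click = fig.canvas.mpl_connect("button_press_event", on_click)
# With an interactive backend, plt.show() would now dispatch events to the
# callbacks; disconnect later with fig.canvas.mpl_disconnect(cid_key).
```

Writing handlers against bare Matplotlib first makes it easier to port them into a viewer plugin later, since the same canvas events are what the tools receive.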
From jni.soma at gmail.com Wed Dec 18 03:26:42 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Wed, 18 Dec 2013 19:26:42 +1100 Subject: Linking events in skimage.viewer In-Reply-To: References: Message-ID: Hi Nadav, I don't have much experience with interactive tools, but I think this is the right place to start for examples: https://github.com/scikit-image/scikit-image/blob/master/skimage/viewer/canvastools/linetool.py You'll see that the LineProfile plugin uses the LineTool canvas tool: https://github.com/scikit-image/scikit-image/blob/master/skimage/viewer/plugins/lineprofile.py Hopefully this helps, otherwise @tonysyu might be able to step in and offer a bit more guidance. Juan. On Wed, Dec 18, 2013 at 7:13 PM, Nadav Horesh wrote: > I > I started to build an interactive image exploration utility based on > matplotlib. Recently, following a link on this list, I encountered > skimage.viewer, and found that the plugins architecture matches my needs. > I could not find how to link keyboard and mouse events (and maybe buttons) > to plugins. Any suggestions? > > I am using version 0.93 on linux (I can install the pre 0.10, if needed) > > Thanks, > > Nadav > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amca01 at gmail.com Thu Dec 19 04:59:56 2013 From: amca01 at gmail.com (Alasdair McAndrew) Date: Thu, 19 Dec 2013 01:59:56 -0800 (PST) Subject: Close an image window? Message-ID: This is a trivial question, but I still don't know how to do it. I've started up "ipython --pylab" and then: import skimage.io as io c = io.imread('cameraman.jpg') io.imshow(c,'qt') and this gives me an image in a window labelled "skimage". 
(I'm using Ubuntu 12.04). But how do I close this window from the console? I've tried: plt.close('skimage') plt.close() plt.close('all') plt.close(1) none of which have any effect. I can of course close them with the mouse, but that seems a little inelegant. Is the method to "close" the window in another package entirely? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Thu Dec 19 13:03:18 2013 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Thu, 19 Dec 2013 10:03:18 -0800 (PST) Subject: Close an image window? In-Reply-To: References: Message-ID: I believe `io.imshow` is provided via a plugin from Matplotlib. Matplotlib's `plt.show()` is a blocking function, which means that until that window is closed nothing else will be executed from the Python shell. So I don't think there's an elegant way to do what you ask, as the design decisions made around that package mean by definition any such `plt.close()` command would not work! It's possible someone has hacked this in, but if so I couldn't find it on short notice. Also, asking the Matplotlib guys about this might yield additional insight. Regards, Josh On Thursday, December 19, 2013 3:59:56 AM UTC-6, Alasdair McAndrew wrote: > > This is a trivial question, but I still don't know how to do it. I've > started up "ipython --pylab" and then: > > import skimage.io as io > > c = io.imread('cameraman.jpg') > > io.imshow(c,'qt') > > and this gives me an image in a window labelled "skimage". (I'm using > Ubuntu 12.04). But how do I close this window from the console? I've tried: > > > plt.close('skimage') > > plt.close() > > plt.close('all') > > plt.close(1) > > > none of which have any effect. I can of course close them with the mouse, > but that seems a little inelegant. Is the method to "close" the window in > another package entirely? > > > Thanks! > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amca01 at gmail.com Thu Dec 19 15:46:35 2013 From: amca01 at gmail.com (Alasdair McAndrew) Date: Thu, 19 Dec 2013 12:46:35 -0800 (PST) Subject: Close an image window? In-Reply-To: References: Message-ID: <7b8815f5-1089-40a4-8968-502cff2808dc@googlegroups.com>
I can use matplotlib's imshow:

plt.imshow(c, cmap=plt.cm.gray)

and this gives me a figure I can close with "plt.close()", but the trouble with this method is that images aren't displayed at "truesize"; that is, one image pixel for one screen pixel. And there isn't a matplotlib parameter which allows me to do this.
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From guillem.palou at gmail.com Fri Dec 20 04:57:05 2013 From: guillem.palou at gmail.com (Guillem Palou) Date: Fri, 20 Dec 2013 01:57:05 -0800 (PST) Subject: Enforce SLIC connectivity Message-ID: <2d1f20b1-a03d-41f8-8fde-0c6abeb55aa1@googlegroups.com>
Hello all,

I have implemented the post-processing step of the SLIC superpixels to enforce superpixel connectivity. The algorithm is essentially the same as in the original paper: "R. Achanta et al., SLIC Superpixels Compared to State-of-the-art Superpixel Methods, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, num. 11, p. 2274 - 2282, May 2012.":

   1. Loop over the whole image
   2. Get the size of each connected component (adjacent pixels with the same label)
   3. If the size is less than a threshold, merge it into an adjacent cluster

At the end, the generated superpixels are 4-connected (or 6-connected in 3 dimensions), so each label corresponds to a single connected component. This differs from the current implementation, where a label may have multiple disconnected components. Notably, the processing time does not suffer. See the corresponding pull request.
-------------- next part -------------- An HTML attachment was scrubbed...
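[The three-step post-processing described above can be sketched with plain NumPy/SciPy. This is an illustrative reimplementation for 2D, not the code from the pull request; `enforce_connectivity`, `min_size`, and the tiny test image are invented here:

```python
import numpy as np
from scipy import ndimage as ndi

FOUR_CONNECTED = np.array([[0, 1, 0],
                           [1, 1, 1],
                           [0, 1, 0]])

def enforce_connectivity(segments, min_size):
    """Relabel a 2D segmentation so every output label is a single
    4-connected region; regions smaller than `min_size` are merged
    into an adjacent region instead of keeping their own label."""
    out = np.full(segments.shape, -1, dtype=np.intp)
    next_label = 0
    for value in np.unique(segments):
        comps, n = ndi.label(segments == value, FOUR_CONNECTED)
        for comp in range(1, n + 1):
            mask = comps == comp
            if mask.sum() >= min_size:
                out[mask] = next_label       # big enough: fresh label
                next_label += 1
                continue
            # small region: adopt the label of a touching neighbour
            ring = ndi.binary_dilation(mask, FOUR_CONNECTED) & ~mask
            labelled = out[ring][out[ring] >= 0]
            if labelled.size:
                out[mask] = labelled[0]
            else:                            # nothing labelled around it yet
                out[mask] = next_label
                next_label += 1
    return out

seg = np.zeros((4, 6), dtype=int)
seg[:, 3:] = 1   # right half belongs to superpixel 1 ...
seg[0, 0] = 1    # ... plus one stray, disconnected pixel of label 1
out = enforce_connectivity(seg, min_size=3)
# the stray pixel adopts the surrounding region's label: out[0, 0] == 0
```

After the pass, every label in `out` is one 4-connected component, which is the invariant the message above describes.]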
URL: From jni.soma at gmail.com Fri Dec 20 06:20:13 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Fri, 20 Dec 2013 22:20:13 +1100 Subject: Enforce SLIC connectivity In-Reply-To: <2d1f20b1-a03d-41f8-8fde-0c6abeb55aa1@googlegroups.com> References: <2d1f20b1-a03d-41f8-8fde-0c6abeb55aa1@googlegroups.com> Message-ID: Awesome, Guillem, thanks! Proceeding to review on github... =) Juan. On Fri, Dec 20, 2013 at 8:57 PM, Guillem Palou wrote: > Hello all, > > I have implemented the post-processing step of the SLIc superpixels to > enforce superpixel connectivity. The algorithm is essentially the same of > the original paper: "R.Achanta et al. SLIC Superpixels Compared to > State-of-the-art Superpixel Methods, IEEE Transactions on Pattern Analysis > and Machine Intelligence, vol. 34, num. 11, p. 2274 - 2282, May 2012.": > > 1. Loop through all the image > 2. Get the size of a connected component (adjacent pixels with same > label) > 3. If the size is less than a threshold, merge it to an adjacent > cluster > > At the end, the generated superpixels are 4-connected (or 6 in 3 > dimensions), so each label corresponds to a single connected component. > This differs from the actual implementation, where a label may have > multiple disconnected components. > > To be said, the processing time does not suffer. See the corresponding > pull request > > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tsyu80 at gmail.com Sun Dec 22 09:20:44 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Sun, 22 Dec 2013 08:20:44 -0600 Subject: Linking events in skimage.viewer In-Reply-To: References: Message-ID:
On Wed, Dec 18, 2013 at 5:36 AM, Nadav Horesh wrote: > Thank you Juan. It looks like the events caught by the CanvasToolBase > are not what I am looking for (mouse click and keyboard events) >
Hi Nadav,

I'm not sure I follow. The canvas tools use matplotlib events because the image canvas is drawn with matplotlib. The `CanvasToolBase` class shows an example of using the tool's `connect_event` to connect to "key_press_event" (keyboard events). The `LineTool` class shows an example connecting to mouse events through "button_press_event".

Note that this event system is only for the image canvas. If another widget (e.g. a slider) has focus, then the events are handled by Qt's infrastructure.

Cheers, -Tony

> > Thank you again, > > Nadav > > 2013/12/18 Juan Nunez-Iglesias : > > Hi Nadav, > > > > I don't have much experience with interactive tools, but I think this is > the > > right place to start for examples: > > > https://github.com/scikit-image/scikit-image/blob/master/skimage/viewer/canvastools/linetool.py > > > > You'll see that the LineProfile plugin uses the LineTool canvas tool: > > > https://github.com/scikit-image/scikit-image/blob/master/skimage/viewer/plugins/lineprofile.py > > > > Hopefully this helps, otherwise @tonysyu might be able to step in and > offer > > a bit more guidance. > > > > Juan. > > > > > > > > > > On Wed, Dec 18, 2013 at 7:13 PM, Nadav Horesh > > wrote: > >> > >> I > >> I started to build an interactive image exploration utility based on > >> matplotlib. Recently, following a link on this list, I encountered > >> skimage.viewer, and found that the plugins architecture matches my > needs. > >> I could not find how to link keyboard and mouse events (and maybe > buttons) > >> to plugins. Any suggestions?
> >> > >> I am using version 0.93 on linux (I can install the pre 0.10, if needed) > >> > >> Thanks, > >> > >> Nadav > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From steven.silvester at gmail.com Sun Dec 22 18:10:32 2013 From: steven.silvester at gmail.com (Steven Silvester) Date: Sun, 22 Dec 2013 15:10:32 -0800 (PST) Subject: Linking events in skimage.viewer In-Reply-To: References: Message-ID: <49058bb3-23ac-47c1-95da-34e019515d55@googlegroups.com>
Nadav,

The canvas itself is a matplotlib.backends.backend_qt4agg.FigureCanvasQTAgg. If you wanted, you could subclass that class and provide your own mousePressEvent and friends. Something like:

from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg
import matplotlib.pyplot as plt
from skimage import data


class MyCanvas(FigureCanvasQtAgg):

    def mousePressEvent(self, event):
        pass

def main():
    image = data.camera()
    f, ax = plt.subplots()
    f.canvas = MyCanvas()
    ax.imshow(image, interpolation='nearest')
    h, w = image.shape
    plt.show()

On Wednesday, December 18, 2013 2:13:02 AM UTC-6, Nadav Horesh wrote: I > I started to build an interactive image exploration utility based on > matplotlib. Recently, following a link on this list, I encountered > skimage.viewer, and found that the plugins architecture matches my needs. > I could not find how to link keyboard and mouse events (and maybe buttons) > to plugins. Any suggestions? > > I am using version 0.93 on linux (I can install the pre 0.10, if needed) > > Thanks, > > Nadav > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From steven.silvester at gmail.com Mon Dec 23 10:50:29 2013 From: steven.silvester at gmail.com (Steven Silvester) Date: Mon, 23 Dec 2013 09:50:29 -0600 Subject: Linking events in skimage.viewer In-Reply-To: References: <49058bb3-23ac-47c1-95da-34e019515d55@googlegroups.com> Message-ID:
Nadav,

You want the slider to update the value as it is moving?
If so, pass update_on="move" to Slider(). I tried your example, and my view does update on key presses. I am using skimage 0.9.3 on Windows 7.

import matplotlib.pyplot as plt
from skimage import data
from skimage import viewer
from skimage.viewer.plugins.overlayplugin import OverlayPlugin
from skimage.viewer.widgets import Slider

class Cimage(object):

    def __init__(self):
        self.recname = 'howdy'
        self.rgb_image = data.camera()
        self.gamma = 1

    def key_press(self, event):
        self.view.image = self.view.image[::-1]

    def image_update(self, image, **kwargs):
        return image[::-1] - int(kwargs['gamma']) * 5

    def create_window(self):
        self.view = viewer.ImageViewer(self.rgb_image)

        plugin = OverlayPlugin(image_filter=self.image_update)
        plugin += Slider('gamma', 1.0, 3.0, value=self.gamma,
                         orientation='vertical', update_on='move')
        plugin += Slider('mat mix', 0.0, 1.0, value=0.0,
                         update_on='move', orientation='vertical')
        plugin += Slider('wb mix', 0.0, 1.0, value=0.0,
                         update_on='move', orientation='horizontal')

        self.view += plugin
        self.view.connect_event('key_press_event', self.key_press)
        self.view.setWindowTitle(self.recname)
        self.view.show()

def main():
    a = Cimage()
    a.create_window()

if __name__ == '__main__':
    main()

About my earlier example, you would need to call f.canvas.show instead of plt.show (I ran the code this time). But, if the MPLCanvas methods work, I'd stick with those.

Cheers,
Steve

On Mon, Dec 23, 2013 at 8:10 AM, Nadav Horesh wrote: > Tony, > I found the .connect_event method of the ImageViewer class just before > I got the reply from you, and it roughly works. I say roughly because > in the application there are some sliders that I added (not a part of > the built-in plugins), and what I see that I see an image update only > after I touch the sliders. In the linetool, however, the response is > immediate. > > Code snippet: > > class Cimage: > . > . > .
> def create_window(self): > self.view = viewer.ImageViewer(self.rgb_image) > > plugin = OverlayPlugin(image_filter=self.image_update) > plugin += Slider('gamma', 1.0, 3.0, value=self.gamma, > orientation='vertical') > plugin += Slider('mat mix', 0.0, 1.0, value=0.0, > update_on='move', orientation='vertical') > plugin += Slider('wb mix', 0.0, 1.0, value=0.0, > update_on='move', orientation='horizontal') > > self.view += plugin > self.view.connect_event('key_press_event', self.key_press) > self.view.setWindowTitle(self.recname) > self.view.show() > . > . > . > in the key_press method execution chain I have > self.view.image = new_image > > so the image is updated, but it does not trigger a display update. > > > > Steve, > Here is your code after some typos correction: > > #! /usr/bin/python > > from __future__ import print_function > > from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg > import matplotlib.pyplot as plt > from skimage import data > > > class MyCanvas(FigureCanvasQTAgg): > > def mousePressEvent(self, event): > #pass > print('press') > self.image = self.image[::-1] > > def main(): > image = data.camera() > f, ax = plt.subplots() > f.canvas = MyCanvas(f) > ax.imshow(image, interpolation='nearest') > h, w = image.shape > plt.show() > > > if __name__ == '__main__': > main() > > Running it I see that the method mousePressEvent is not being called > > I'll be happy to know if there is something to follow and build an > application based on the viewer module. > > Thank you both very much > > Nadav > > > 2013/12/23 Steven Silvester : > > Nadav, > > > > The canvas itself is a > matplotlib.backends.backend_qt4agg.FigureCanvasQTAgg. > > If you wanted, you could subclass that class and provide your own > > mousePressEvent and friends. 
Something like: > > > > from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg > > import matplotlib.pyplot as plt > > from skimage import data > > > > > > class MyCanvas(FigureCanvasQtAgg): > > > > def mousePressEvent(self, event): > > pass > > > > def main(): > > image = data.camera() > > f, ax = plt.subplots() > > f.canvas = MyCanvas() > > ax.imshow(image, interpolation='nearest') > > h, w = image.shape > > plt.show() > > > > On Wednesday, December 18, 2013 2:13:02 AM UTC-6, Nadav Horesh wrote: > >> > >> I > >> I started to build an interactive image exploration utility based on > >> matplotlib. Recently, following a link on this list, I encountered > >> skimage.viewer, and found that the plugins architecture matches my > needs. > >> I could not find how to link keyboard and mouse events (and maybe > buttons) > >> to plugins. Any suggestions? > >> > >> I am using version 0.93 on linux (I can install the pre 0.10, if needed) > >> > >> Thanks, > >> > >> Nadav > > > > -- > > You received this message because you are subscribed to the Google Groups > > "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send an > > email to scikit-image+unsubscribe at googlegroups.com. > > For more options, visit https://groups.google.com/groups/opt_out. > > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/pRZYHjAW78U/unsubscribe. > To unsubscribe from this group and all of its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nadavh.horesh at gmail.com Mon Dec 23 09:10:59 2013 From: nadavh.horesh at gmail.com (Nadav Horesh) Date: Mon, 23 Dec 2013 16:10:59 +0200 Subject: Linking events in skimage.viewer In-Reply-To: <49058bb3-23ac-47c1-95da-34e019515d55@googlegroups.com> References: <49058bb3-23ac-47c1-95da-34e019515d55@googlegroups.com> Message-ID: Tony, I found the .connect_event method of the ImageViewer class just before I got the reply from you, and it roughly works. I say roughly because in the application there are some sliders that I added (not a part of the built-in plugins), and what I see that I see an image update only after I touch the sliders. In the linetool, however, the response is immediate. Code snippet: class Cimage: . . . def create_window(self): self.view = viewer.ImageViewer(self.rgb_image) plugin = OverlayPlugin(image_filter=self.image_update) plugin += Slider('gamma', 1.0, 3.0, value=self.gamma, orientation='vertical') plugin += Slider('mat mix', 0.0, 1.0, value=0.0, update_on='move', orientation='vertical') plugin += Slider('wb mix', 0.0, 1.0, value=0.0, update_on='move', orientation='horizontal') self.view += plugin self.view.connect_event('key_press_event', self.key_press) self.view.setWindowTitle(self.recname) self.view.show() . . . in the key_press method execution chain I have self.view.image = new_image so the image is updated, but it does not trigger a display update. Steve, Here is your code after some typos correction: #! 
/usr/bin/python from __future__ import print_function from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg import matplotlib.pyplot as plt from skimage import data class MyCanvas(FigureCanvasQTAgg): def mousePressEvent(self, event): #pass print('press') self.image = self.image[::-1] def main(): image = data.camera() f, ax = plt.subplots() f.canvas = MyCanvas(f) ax.imshow(image, interpolation='nearest') h, w = image.shape plt.show() if __name__ == '__main__': main() Running it I see that the method mousePressEvent is not being called I'll be happy to know if there is something to follow and build an application based on the viewer module. Thank you both very much Nadav 2013/12/23 Steven Silvester : > Nadav, > > The canvas itself is a matplotlib.backends.backend_qt4agg.FigureCanvasQTAgg. > If you wanted, you could subclass that class and provide your own > mousePressEvent and friends. Something like: > > from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg > import matplotlib.pyplot as plt > from skimage import data > > > class MyCanvas(FigureCanvasQtAgg): > > def mousePressEvent(self, event): > pass > > def main(): > image = data.camera() > f, ax = plt.subplots() > f.canvas = MyCanvas() > ax.imshow(image, interpolation='nearest') > h, w = image.shape > plt.show() > > On Wednesday, December 18, 2013 2:13:02 AM UTC-6, Nadav Horesh wrote: >> >> I >> I started to build an interactive image exploration utility based on >> matplotlib. Recently, following a link on this list, I encountered >> skimage.viewer, and found that the plugins architecture matches my needs. >> I could not find how to link keyboard and mouse events (and maybe buttons) >> to plugins. Any suggestions? >> >> I am using version 0.93 on linux (I can install the pre 0.10, if needed) >> >> Thanks, >> >> Nadav > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. 
> To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out.
From nadavh.horesh at gmail.com Thu Dec 26 09:12:15 2013 From: nadavh.horesh at gmail.com (Nadav Horesh) Date: Thu, 26 Dec 2013 16:12:15 +0200 Subject: Linking events in skimage.viewer In-Reply-To: References: <49058bb3-23ac-47c1-95da-34e019515d55@googlegroups.com> Message-ID:
Thanks Steve. It also works on my system (Linux). I'll look into why I had a problem in my original application.

Sorry for the late response,

Nadav

2013/12/23 Steven Silvester : > Nadav, > > You want the slider to update the value as it is moving? If so, pass > update_on="move" to Slider(). I tried your example, and my view does update > on key presses. I am using skimage 0.9.3 on Windows 7. > > import matplotlib.pyplot as plt > from skimage import data > from skimage import viewer > from skimage.viewer.plugins.overlayplugin import OverlayPlugin > from skimage.viewer.widgets import Slider > > class Cimage(object): > > def __init__(self): > self.recname = 'howdy' > self.rgb_image = data.camera() > self.gamma = 1 > > def key_press(self, event): > self.view.image = self.view.image[::-1] > > def image_update(self, image, **kwargs): > return image[::-1] - int(kwargs['gamma']) * 5 > > def create_window(self): > self.view = viewer.ImageViewer(self.rgb_image) > > plugin = OverlayPlugin(image_filter=self.image_update) > plugin += Slider('gamma', 1.0, 3.0, value=self.gamma, > orientation='vertical', update_on='move') > plugin += Slider('mat mix', 0.0, 1.0, value=0.0, > update_on='move', orientation='vertical') > plugin += Slider('wb mix', 0.0, 1.0, value=0.0, > update_on='move', orientation='horizontal') > > self.view += plugin > self.view.connect_event('key_press_event', self.key_press) > self.view.setWindowTitle(self.recname) > self.view.show() > > def main(): > a = Cimage() > a.create_window() > >
if __name__ == '__main__': > main() > > About my earlier example, you would need to call f.canvas.show instead of > plt.show (I ran the code this time). But, if the MPLCanvas methods work, I?d > stick with those. > > Cheers, > Steve > > > > On Mon, Dec 23, 2013 at 8:10 AM, Nadav Horesh > wrote: >> >> Tony, >> I found the .connect_event method of the ImageViewer class just before >> I got the reply from you, and it roughly works. I say roughly because >> in the application there are some sliders that I added (not a part of >> the built-in plugins), and what I see that I see an image update only >> after I touch the sliders. In the linetool, however, the response is >> immediate. >> >> Code snippet: >> >> class Cimage: >> . >> . >> . >> def create_window(self): >> self.view = viewer.ImageViewer(self.rgb_image) >> >> plugin = OverlayPlugin(image_filter=self.image_update) >> plugin += Slider('gamma', 1.0, 3.0, value=self.gamma, >> orientation='vertical') >> plugin += Slider('mat mix', 0.0, 1.0, value=0.0, >> update_on='move', orientation='vertical') >> plugin += Slider('wb mix', 0.0, 1.0, value=0.0, >> update_on='move', orientation='horizontal') >> >> self.view += plugin >> self.view.connect_event('key_press_event', self.key_press) >> self.view.setWindowTitle(self.recname) >> self.view.show() >> . >> . >> . >> in the key_press method execution chain I have >> self.view.image = new_image >> >> so the image is updated, but it does not trigger a display update. >> >> >> >> Steve, >> Here is your code after some typos correction: >> >> #! 
/usr/bin/python >> >> from __future__ import print_function >> >> from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg >> import matplotlib.pyplot as plt >> from skimage import data >> >> >> class MyCanvas(FigureCanvasQTAgg): >> >> def mousePressEvent(self, event): >> #pass >> print('press') >> self.image = self.image[::-1] >> >> def main(): >> image = data.camera() >> f, ax = plt.subplots() >> f.canvas = MyCanvas(f) >> ax.imshow(image, interpolation='nearest') >> h, w = image.shape >> plt.show() >> >> >> if __name__ == '__main__': >> main() >> >> Running it I see that the method mousePressEvent is not being called >> >> I'll be happy to know if there is something to follow and build an >> application based on the viewer module. >> >> Thank you both very much >> >> Nadav >> >> >> 2013/12/23 Steven Silvester : >> > Nadav, >> > >> > The canvas itself is a >> > matplotlib.backends.backend_qt4agg.FigureCanvasQTAgg. >> > If you wanted, you could subclass that class and provide your own >> > mousePressEvent and friends. Something like: >> > >> > from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg >> > import matplotlib.pyplot as plt >> > from skimage import data >> > >> > >> > class MyCanvas(FigureCanvasQtAgg): >> > >> > def mousePressEvent(self, event): >> > pass >> > >> > def main(): >> > image = data.camera() >> > f, ax = plt.subplots() >> > f.canvas = MyCanvas() >> > ax.imshow(image, interpolation='nearest') >> > h, w = image.shape >> > plt.show() >> > >> > On Wednesday, December 18, 2013 2:13:02 AM UTC-6, Nadav Horesh wrote: >> >> >> >> I >> >> I started to build an interactive image exploration utility based on >> >> matplotlib. Recently, following a link on this list, I encountered >> >> skimage.viewer, and found that the plugins architecture matches my >> >> needs. >> >> I could not find how to link keyboard and mouse events (and maybe >> >> buttons) >> >> to plugins. Any suggestions? 
>> >> >> >> I am using version 0.93 on linux (I can install the pre 0.10, if >> >> needed) >> >> >> >> Thanks, >> >> >> >> Nadav >> > >> > -- >> > You received this message because you are subscribed to the Google >> > Groups >> > "scikit-image" group. >> > To unsubscribe from this group and stop receiving emails from it, send >> > an >> > email to scikit-image+unsubscribe at googlegroups.com. >> > For more options, visit https://groups.google.com/groups/opt_out. >> >> -- >> You received this message because you are subscribed to a topic in the >> Google Groups "scikit-image" group. >> To unsubscribe from this topic, visit >> https://groups.google.com/d/topic/scikit-image/pRZYHjAW78U/unsubscribe. >> To unsubscribe from this group and all of its topics, send an email to >> scikit-image+unsubscribe at googlegroups.com. >> >> For more options, visit https://groups.google.com/groups/opt_out. > > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. From jni.soma at gmail.com Fri Dec 27 20:14:16 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Sat, 28 Dec 2013 12:14:16 +1100 Subject: Images and audio Message-ID: Hi all, I found this article fascinating for its application of image processing techniques, thought you might like it also: http://blog.longnow.org/02013/12/26/reviving-and-restoring-lost-sounds/ I haven't investigated yet whether the raw data are available. If so, a notebook using skimage and some Python audio library to achieve the same thing would be utterly excellent. =) Juan. -------------- next part -------------- An HTML attachment was scrubbed... 
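[The event names used throughout this thread ("key_press_event", "button_press_event") are ordinary Matplotlib canvas events; outside the skimage viewer they can be hooked directly with `mpl_connect`. A minimal sketch, with handlers and figure invented for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.imshow(np.zeros((8, 8)), cmap=plt.cm.gray)

def on_key(event):
    # event.key is e.g. 'a', 'left', 'ctrl+c'
    print("key pressed:", event.key)

def on_click(event):
    # event.button is 1/2/3; xdata/ydata are axes-data coordinates
    print("clicked:", event.button, event.xdata, event.ydata)

# mpl_connect returns an integer connection id; keep it around if you
# ever want to detach the handler again with mpl_disconnect
cid_key = fig.canvas.mpl_connect("key_press_event", on_key)
cid_btn = fig.canvas.mpl_connect("button_press_event", on_click)

fig.canvas.mpl_disconnect(cid_btn)  # detach the mouse handler again
```

The viewer's `connect_event` discussed above wraps this same mechanism for the canvas inside the Qt window.]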
URL: From hughesadam87 at gmail.com Mon Dec 30 21:45:41 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Mon, 30 Dec 2013 18:45:41 -0800 (PST) Subject: Interactive selection of markers/seeds for random walk In-Reply-To: References: Message-ID: <8ecb072d-66c8-49fe-82b2-967890ec95bb@googlegroups.com>
Thanks, sorry I had not seen this earlier!

On Monday, December 16, 2013 2:08:56 PM UTC-5, Adam Hughes wrote: > > Hi, > > I was wondering if anyone had any examples or tools that would allow one > to interactively select regions for seeds/markers in an image, which we > could subsequently run the random walker algorithm over: > > > http://scikit-image.org/docs/dev/auto_examples/plot_random_walker_segmentation.html > > I really enjoyed the ilastik tool that scikit-image lists under its > related links on the homepage, and wondered if there were any demos for > selection that might be easily adaptable to fit this purpose. > > Thanks! > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hughesadam87 at gmail.com Tue Dec 31 01:37:43 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Mon, 30 Dec 2013 22:37:43 -0800 (PST) Subject: What RGB color is this? (quick Q) Message-ID: <891ff210-9e0e-445d-b9a4-dab180b4308f@googlegroups.com>
I noticed recently that matplotlib.colors limits RGB values to the range (0 - 1), while in scikit-image, RGB values can be much larger. For example:

test = np.zeros( (500,500,3) )
test[:,:,0]=50
test[:,:,1]=19
test[:,:,2]=25

imshow(test);

Produces a teal background. I was curious how the color teal is derived from this? I tried normalizing to 255 and to 50, but neither seemed to produce the same teal color.

Thanks.
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From hughesadam87 at gmail.com Tue Dec 31 01:54:17 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Mon, 30 Dec 2013 22:54:17 -0800 (PST) Subject: What RGB color is this?
(quick Q) In-Reply-To: <891ff210-9e0e-445d-b9a4-dab180b4308f@googlegroups.com> References: <891ff210-9e0e-445d-b9a4-dab180b4308f@googlegroups.com> Message-ID: <21f8863a-e4f4-4072-9b4f-f91f6910de7d@googlegroups.com> It's late, so it didn't occur to me that the imshow source-code is probably a good way to figure this one out... If anyone knows it offhand, that would save me some trouble. Otherwise, I'll hunt it down tomorrow :) On Tuesday, December 31, 2013 1:37:43 AM UTC-5, Adam Hughes wrote: > > I noticed recently that matplotlib.colors limits RGB values to a range (0 > - 1), while in scikit image, RGB values can be much larger. For example: > > *test = np.zeros( (500,500,3) )* > > *test[:,:,0]=50* > *test[:,:,1]=19* > *test[:,:,2]=25* > > *imshow(test); * > > Produces a teal background. I was curious how the color teal is derived > from this? I tried normalizing to 255 and and 50 but neither seemed to > produce the same teal color. > > Thanks. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue Dec 31 06:15:04 2013 From: stefan at sun.ac.za (=?iso-8859-1?Q?St=E9fan?= van der Walt) Date: Tue, 31 Dec 2013 13:15:04 +0200 Subject: What RGB color is this? (quick Q) In-Reply-To: <891ff210-9e0e-445d-b9a4-dab180b4308f@googlegroups.com> References: <891ff210-9e0e-445d-b9a4-dab180b4308f@googlegroups.com> Message-ID: <20131231111504.GB27865@shinobi> Hi Adam On Mon, 30 Dec 2013 22:37:43 -0800, Adam Hughes wrote: > I noticed recently that matplotlib.colors limits RGB values to a range (0 - > 1), while in scikit image, RGB values can be much larger. For example: > > *test = np.zeros( (500,500,3) )* > > *test[:,:,0]=50* > *test[:,:,1]=19* > *test[:,:,2]=25* > > *imshow(test); * > > Produces a teal background. I was curious how the color teal is derived > from this? I tried normalizing to 255 and and 50 but neither seemed to > produce the same teal color. 
Here's a write-up of the data-type and range representation that scikit-image uses:

http://scikit-image.org/docs/0.9.x/user_guide/data_types.html

When visualizing data with Matplotlib, note that data is normalized by default, so you have to specify "vmin" and "vmax" to correctly display your generated background.

Regards
Stéfan

From hughesadam87 at gmail.com Tue Dec 31 14:01:46 2013 From: hughesadam87 at gmail.com (Adam Hughes) Date: Tue, 31 Dec 2013 14:01:46 -0500 Subject: What RGB color is this? (quick Q) In-Reply-To: <20131231111504.GB27865@shinobi> References: <891ff210-9e0e-445d-b9a4-dab180b4308f@googlegroups.com> <20131231111504.GB27865@shinobi> Message-ID:
Thanks Stefan. That helps clarify some of the dtypes to me; however I still have a few confusions in regard to color data. I should have specified this more in my OP.

I am trying to create a program where all color data is stored as RGB. This requires a validator that does flexible to_rgb() conversion. I want the users to have flexibility, so it should accept names like "aqua" as well as RGB tuples. I realize now that imshow() will do its own conversions, but I still don't quite understand exactly what constraints I need to impose on users for all the various use cases. For example, if a user enters a single integer (say 239), is there a de-facto way to rgb-convert this? I've tried to exhaust the scenarios below; any case with question marks is still unclear to me.

INPUT TYPE             INPUT EXAMPLE    HANDLER                    DESIRED OUTPUT
---------------------------------------------------------------------------------
hex string             '#0FF000'        ColorConverter.to_rgb()    (.2, .4, .5)
name string            'purple'         ColorConverter.to_rgb()    (.1, .8, .3)
< 1 float tuple        (.5, .2, .4)     PASS                       (.5, .2, .4)
> 1 float/int tuple    (30, 28, 90)     ????                       ????
int                    140              (Digital channel?)         (140, 140, 140)???
float                  39.5             (Error??)                  ???
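[One possible shape for such a validator, built on matplotlib.colors. Treating a bare int/float as a grey level, and a >1 tuple as 8-bit channels, is purely this sketch's own convention, not an established one:

```python
from matplotlib.colors import ColorConverter

_converter = ColorConverter()

def to_rgb(color):
    """Coerce `color` to an (r, g, b) tuple of floats in [0, 1].

    Bare ints/floats are treated as grey levels by assumption.
    """
    if isinstance(color, str):
        # hex strings ('#0FF000') and names ('purple') alike
        return _converter.to_rgb(color)
    if isinstance(color, (int, float)):
        grey = color / 255.0 if color > 1 else float(color)
        return (grey, grey, grey)
    r, g, b = color
    if max(r, g, b) > 1:       # assume 8-bit digital channels
        return (r / 255.0, g / 255.0, b / 255.0)
    return (float(r), float(g), float(b))

to_rgb("#ff0000")      # (1.0, 0.0, 0.0)
to_rgb((30, 28, 90))   # each channel scaled by 1/255
to_rgb(140)            # grey: (140/255,) * 3
```

Anything that falls outside these cases can simply raise, which matches the "(Error??)" row in the table.]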
I read on wiki that a RGB tuple with elements > 1 can be interpreted as a "Digital Channel", so perhaps just leave these as is. The tough cases for me are really when a user enters a single Int or Float. Of course, I could just raise an exception if there's no de-facto way to handle this... On Tue, Dec 31, 2013 at 6:15 AM, St?fan van der Walt wrote: > Hi Adam > > On Mon, 30 Dec 2013 22:37:43 -0800, Adam Hughes wrote: > > I noticed recently that matplotlib.colors limits RGB values to a range > (0 - > > 1), while in scikit image, RGB values can be much larger. For example: > > > > *test = np.zeros( (500,500,3) )* > > > > *test[:,:,0]=50* > > *test[:,:,1]=19* > > *test[:,:,2]=25* > > > > *imshow(test); * > > > > Produces a teal background. I was curious how the color teal is derived > > from this? I tried normalizing to 255 and and 50 but neither seemed to > > produce the same teal color. > > Here's a write-up of the data-type and range representation that > scikit-image > uses: > > http://scikit-image.org/docs/0.9.x/user_guide/data_types.html > > When visualizing data with Matplotlib, note that data is normalized by > default, so you have to specify "vmin" and "vmax" to correctly display your > generated background. > > Regards > St?fan > > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/a54ehbd1fLk/unsubscribe. > To unsubscribe from this group and all of its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tcaswell at uchicago.edu Tue Dec 31 19:59:38 2013 From: tcaswell at uchicago.edu (Thomas A Caswell) Date: Tue, 31 Dec 2013 18:59:38 -0600 Subject: What RGB color is this? 
(quick Q) In-Reply-To: References: <891ff210-9e0e-445d-b9a4-dab180b4308f@googlegroups.com> <20131231111504.GB27865@shinobi> Message-ID: There is no canonical mapping between scalar values (1d) and RGB (3d) which is why matplotlib has so many color maps. If you pass in to imshow a NxMx3 or NxMx4 array it is interpreted as RGB or RGBA values respectively (see http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.imshow) and not color mapped. If the arrays are float they are assumed to be in the range [0-1], if they are integers they should be uint8. There was some discussion recently on github abut tweaking the validation a bit (issues 2499 and 2632). Tom On Dec 31, 2013 1:01 PM, "Adam Hughes" wrote: > Thanks Stefan. That helps clarify some of the dtypes to me; however I > still have a few confusions in regard to color data. I should have > specified this more in my OP. > > I am trying to create a program where all color data is stored as RGB. > This requires a validator that does flexible *to_rgb()* conversion. I > want the users to have flexibility, so it should accept names like "aqua" > as well as RGB tuples. I realize now that imshow() will do its own > conversions, but still don't quite understand exactly what constraints I > need to impose on users for all the various use cases. For example, if a > user enters a single integer (say 239), is there a de-facto way to > rgb-convert this? I've tried to exhause the scenarious below; any case > with question marks is still unclear to me. > > INPUT TYPE INPUT EXAMPLE HANDLER DESIRED OUTPUT > > ----------------------------------------------------------------------------------------------------- > > hex string '#0FF000' ColorConverter.to_rgb() (.2, .4, .5) > name string 'purple ' ColorConverter.to_rgb() (.1, .8, .3) > < 1 float tuple ' (.5, .2, .4) PASS (.5, > .2, .4) > > 1 float/int tuple (30, 28, 90) ???? ???? > int 140 (Digital channel?) > (140, 140, 140)??? > float 39.5 (Error??) > ??? 
>
> I read on wiki that an RGB tuple with elements > 1 can be interpreted as
> a "Digital Channel", so perhaps just leave these as is. The tough cases
> for me are really when a user enters a single int or float. Of course, I
> could just raise an exception if there's no de-facto way to handle
> this...

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hughesadam87 at gmail.com  Tue Dec 31 20:15:54 2013
From: hughesadam87 at gmail.com (Adam Hughes)
Date: Tue, 31 Dec 2013 20:15:54 -0500
Subject: What RGB color is this? (quick Q)
In-Reply-To:
References: <891ff210-9e0e-445d-b9a4-dab180b4308f@googlegroups.com>
	<20131231111504.GB27865@shinobi>
Message-ID:

Thanks Thomas. Yes, I'm starting to see all the headaches that come with
flexibility in dtypes and colors. Thanks for linking; that clears some
stuff up for me.

On Tue, Dec 31, 2013 at 7:59 PM, Thomas A Caswell wrote:

> There is no canonical mapping between scalar values (1d) and RGB (3d),
> which is why matplotlib has so many color maps.
>
> If you pass imshow an NxMx3 or NxMx4 array, it is interpreted as RGB or
> RGBA values respectively (see
> http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.imshow) and
> is not color mapped. If the arrays are float, they are assumed to be in
> the range [0, 1]; if they are integers, they should be uint8. There was
> some discussion recently on GitHub about tweaking the validation a bit
> (issues 2499 and 2632).
>
> Tom
>
> On Dec 31, 2013 1:01 PM, "Adam Hughes" wrote:
>
>> Thanks Stefan. That helps clarify some of the dtypes to me; however I
>> still have a few confusions in regard to color data. I should have
>> specified this more in my OP.
>>
>> I am trying to create a program where all color data is stored as RGB.
>> This requires a validator that does flexible to_rgb() conversion. I
>> want the users to have flexibility, so it should accept names like
>> "aqua" as well as RGB tuples. I realize now that imshow() will do its
>> own conversions, but still don't quite understand exactly what
>> constraints I need to impose on users for all the various use cases.
>> For example, if a user enters a single integer (say 239), is there a
>> de-facto way to rgb-convert this?
>> I've tried to exhaust the scenarios below; any case with question marks
>> is still unclear to me.
>>
>> INPUT TYPE           INPUT EXAMPLE   HANDLER                   DESIRED OUTPUT
>> ----------------------------------------------------------------------------
>> hex string           '#0FF000'       ColorConverter.to_rgb()   (.2, .4, .5)
>> name string          'purple'        ColorConverter.to_rgb()   (.1, .8, .3)
>> < 1 float tuple      (.5, .2, .4)    PASS                      (.5, .2, .4)
>> > 1 float/int tuple  (30, 28, 90)    ????                      ????
>> int                  140             (Digital channel?)        (140, 140, 140)???
>> float                39.5            (Error??)                 ???
>>
>> I read on wiki that an RGB tuple with elements > 1 can be interpreted
>> as a "Digital Channel", so perhaps just leave these as is. The tough
>> cases for me are really when a user enters a single int or float. Of
>> course, I could just raise an exception if there's no de-facto way to
>> handle this...

-------------- next part --------------
An HTML attachment was scrubbed...
URL: