From ciaran.robb at googlemail.com Sun Feb 1 18:45:44 2015
From: ciaran.robb at googlemail.com (ciaran.robb at googlemail.com)
Date: Sun, 1 Feb 2015 15:45:44 -0800 (PST)
Subject: regionprops - displaying region properties
Message-ID:

Hello everyone,

I have recently been attempting to modify some existing skimage code to display regionprops for a labeled image (e.g. area or eccentricity). I initially tried to translate a vectorized bit of old Matlab code I had, but gave up on that and decided to alter the existing label2rgb skimage function. I am attempting to change each label value to its area property value, similar to the label2rgb "avg" function. So I have:

labels = a labeled image
out = np.zeros_like(labels)     # a blank array
labels2 = np.unique(labels)     # a vector of label vals
out = np.zeros_like(labels)
Props = regionprops(labels, ['Area'])
bg_label = 0
bg = (labels2 == bg_label)
if bg.any():
    labels2 = labels2[labels2 != bg_label]
    out[bg] = 0
for label in labels2:
    mask = (labels == label).nonzero()
    color = Props[label].area
    out[mask] = color

but the "out" props image does not correspond to the correct area values. Can anyone help me with this? It also throws the following error: "list index out of range".

It would certainly be useful to have a way to view the spatial distribution of label properties in this way - perhaps in a future skimage version?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marcel.gutsche at gmail.com Mon Feb 2 04:03:15 2015
From: marcel.gutsche at gmail.com (Marcel Gutsche)
Date: Mon, 2 Feb 2015 01:03:15 -0800 (PST)
Subject: Image viewer plugin, refreshing low and high values of the slider widget
Message-ID:

Hi folks,

another image viewer plugin related question: is it possible to update the low and high values of the slider widget based on the return values of a plugin function? E.g. something like this?
custom_plugin = Plugin(image_filter=calc_and_show)
custom_plugin += Slider('parameter1',
                        low=calc_and_show.lower_bound(),
                        high=calc_and_show.upper_bound())

Thanks for the great support so far!

Marcel
-------------- next part --------------
An HTML attachment was scrubbed...

From steven.silvester at gmail.com Mon Feb 2 20:38:29 2015
From: steven.silvester at gmail.com (Steven Silvester)
Date: Mon, 2 Feb 2015 17:38:29 -0800 (PST)
Subject: Image viewer plugin, refreshing low and high values of the slider widget
In-Reply-To:
References:
Message-ID: <02442e28-e532-48e1-aedb-e9f83dc915ab@googlegroups.com>

Marcel,

If you mean dynamically updating the range of the slider, then no, this is not possible with our framework. You might see https://github.com/FelixHartmann/traitsui-tutorial-qt if you're looking to create something more complex than what we offer. Also, note that we are moving away from our current Qt-based Viewer toward IPython widgets (hopefully in time for the 0.12 release later this year).

Regards,

Steve

On Monday, February 2, 2015 at 3:03:15 AM UTC-6, Marcel Gutsche wrote:
> Hi folks,
>
> another image viewer plugin related question. Is it possible to update the
> low and high values of the slider widget based on the return values of
> plugin function?
>
> E.g something like this?
>
> custom_plugin = Plugin(image_filter = calc_and_show)
> custom_plugin += Slider('parameter1', low = calc_and_show.lower_bound(),
> high = calc_and_show.upper_bound() )
>
> Thanks for the great support so far!
>
> Marcel
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From hughesadam87 at gmail.com Tue Feb 3 19:51:09 2015 From: hughesadam87 at gmail.com (Adam Hughes) Date: Tue, 3 Feb 2015 16:51:09 -0800 (PST) Subject: Image viewer plugin, refreshing low and high values of the slider widget In-Reply-To: References: Message-ID: <9e6dd9d7-68fe-4801-a49a-71d49ca54fa6@googlegroups.com> Oddly enough, I figured out how to do this yesterday for traitsui. Check out my answer to this question on stack overflow: http://stackoverflow.com/questions/9956167/change-property-parameter-from-within-class-constructor-python-traits/28286878#28286878 I adapted the DynamicRange trait from an old example floating around from Jonathan March. PS, Steven, our group has spent considerable time getting complex GUI's with IPython widgets together. I put a video of one online a few months ago: http://hugadams.github.io/scikit-spectra/ I can link you to the source code if you're interested, as we found that the IPython widget framework had a fair learning curve in regard to sophisticated apps. Having some far-along examples really helps, so let me know if you'd like to see some of them. On Monday, February 2, 2015 at 4:03:15 AM UTC-5, Marcel Gutsche wrote: > > Hi folks, > > another image viewer plugin related question. Is it possible to update the > low and high values of the slider widget based on the return values of > plugin function? > > E.g something like this? > > custom_plugin = Plugin(image_filter = calc_and_show) > custom_plugin += Slider('parameter1', low = calc_and_show.lower_bound(), > high = calc_and_show.upper_bound() ) > > Thanks for the great support so far! > > Marcel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From hughesadam87 at gmail.com Tue Feb 3 19:57:24 2015
From: hughesadam87 at gmail.com (Adam Hughes)
Date: Tue, 3 Feb 2015 16:57:24 -0800 (PST)
Subject: Interactive Mona Lisa Demo
Message-ID: <04340af5-1cf6-4b72-ac41-285a9d2d9489@googlegroups.com>

Hey everyone,

I made an interactive demo for a class I'm TAing where the user loads an image (the default is the Mona Lisa Prado) and can dynamically change HSV or RGB values with sliders; the altered image is updated in realtime. The point of this exercise is to try to mimic the fading of the Mona Lisa Prado colors into the tinged yellow version of the original Mona Lisa on display today.

This program uses TraitsUI and Chaco for its interactivity, so it probably isn't of direct use for the image viewer; however, in the spirit of interactive examples, I thought it would be cool to share. It was much easier to get Chaco to play nicely with image data than I thought it would be.

/home/glue/Desktop/monolisa.tar.gz

*Note: this will be very slow if you use the high-res version of the Mona Lisa Prado, so please start by trying the lowres files*.

Requirements:

scikit-image
numpy
chaco
traits
traitsui
enable
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: monolisa.tar.gz
Type: application/octet-stream
Size: 5175290 bytes
Desc: not available
URL:

From steven.silvester at gmail.com Tue Feb 3 21:58:08 2015
From: steven.silvester at gmail.com (Steven Silvester)
Date: Tue, 3 Feb 2015 18:58:08 -0800 (PST)
Subject: Image viewer plugin, refreshing low and high values of the slider widget
In-Reply-To: <9e6dd9d7-68fe-4801-a49a-71d49ca54fa6@googlegroups.com>
References: <9e6dd9d7-68fe-4801-a49a-71d49ca54fa6@googlegroups.com>
Message-ID: <84ef6757-41b9-4f20-8c31-2853bcefc91f@googlegroups.com>

Adam,

I saw your demo a few weeks ago and was inspired: https://github.com/scikit-image/scikit-image/issues/1311. Very cool work.
Regards, Steve On Tuesday, February 3, 2015 at 6:51:09 PM UTC-6, Adam Hughes wrote: > > Oddly enough, I figured out how to do this yesterday for traitsui. Check > out my answer to this question on stack overflow: > > > http://stackoverflow.com/questions/9956167/change-property-parameter-from-within-class-constructor-python-traits/28286878#28286878 > > I adapted the DynamicRange trait from an old example floating around from > Jonathan March. > > PS, Steven, our group has spent considerable time getting complex GUI's > with IPython widgets together. I put a video of one online a few months > ago: > > http://hugadams.github.io/scikit-spectra/ > > I can link you to the source code if you're interested, as we found that > the IPython widget framework had a fair learning curve in regard to > sophisticated apps. Having some far-along examples really helps, so let me > know if you'd like to see some of them. > > On Monday, February 2, 2015 at 4:03:15 AM UTC-5, Marcel Gutsche wrote: >> >> Hi folks, >> >> another image viewer plugin related question. Is it possible to update >> the low and high values of the slider widget based on the return values of >> plugin function? >> >> E.g something like this? >> >> custom_plugin = Plugin(image_filter = calc_and_show) >> custom_plugin += Slider('parameter1', low = calc_and_show.lower_bound(), >> high = calc_and_show.upper_bound() ) >> >> Thanks for the great support so far! >> >> Marcel >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hughesadam87 at gmail.com Tue Feb 3 20:14:49 2015 From: hughesadam87 at gmail.com (Adam Hughes) Date: Tue, 3 Feb 2015 20:14:49 -0500 Subject: Interactive Mona Lisa Demo In-Reply-To: <04340af5-1cf6-4b72-ac41-285a9d2d9489@googlegroups.com> References: <04340af5-1cf6-4b72-ac41-285a9d2d9489@googlegroups.com> Message-ID: Sorry, I attached the wrong compressed file. Use this version. 
On Tue, Feb 3, 2015 at 7:57 PM, Adam Hughes wrote: > Hey everyone, > > I made an interactive demo for a class I'm TAing where the user loads an > image (default is mona lisa prado) and they can dynamically change HSV or > RGB values with sliders, and the altered image is updated in realtime. The > point of this excercise is to try to mimic the fading of the mona lisa > prado colors into the tinged yellow version of the original Mona Lisa on > display today. > > This program uses TraitsUI and Chaco for its interactivity, so probably > isn't of direct use for the image viewer; however, in the spirit of > interactive examples, I thought it would be cool to share. It was much > easier to get Chaco and play nicely with image data than I thought it would > be. > /home/glue/Desktop/monolisa.tar.gz > *Note: this will be very slow if you use the high-res version of the mona > lisa prado, so please start by trying the lowres files*. > > Requirements: > > scikit-image > numpy > chaco > traits > traitsui > enable > > > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/0DGu77zgTGo/unsubscribe. > To unsubscribe from this group and all its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: monolisa.zip
Type: application/zip
Size: 5176385 bytes
Desc: not available
URL:

From marcel.gutsche at gmail.com Wed Feb 4 10:22:44 2015
From: marcel.gutsche at gmail.com (Marcel Gutsche)
Date: Wed, 4 Feb 2015 07:22:44 -0800 (PST)
Subject: Transparent output
Message-ID: <9ac1c679-e7b6-4193-bb64-696349449750@googlegroups.com>

Hi all,

I'm not sure if it is a bug, or whether I've just overlooked something obvious, but the internet did not offer much regarding this issue. I try to get slices from an image cube which consists of several images s = 1,...,n with the same dimensions. My new slice should have the width of the original images and the height of the number of images. Here is the code to do this:

from skimage.io import ImageCollection, imsave
from os.path import join
import numpy as np

def main(dir):
    ic = ImageCollection(join(dir, '*.png'))
    row = 0
    img = np.empty((len(ic), ic[0].shape[1], ic[0].shape[2]))
    for s in range(len(ic)):
        img[s, ...] = ic[s][row, ...]
    # fname = 'new_{0:03d}.jpg'.format(v)  # -> wrong colors
    fname = 'new_{0:03d}.png'.format(v)  # -> output image is transparent
    imsave(fname, img)

The problem is that the output images are all transparent. My input files are .png images with an alpha channel. I have also checked the values of the alpha channel of the output, which are all set to 255, which, at least to my knowledge, should set the opacity to 100%.

Regards,

Marcel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From almar.klein at gmail.com Wed Feb 4 09:45:41 2015
From: almar.klein at gmail.com (Almar Klein)
Date: Wed, 04 Feb 2015 15:45:41 +0100
Subject: ANN: imageio v1.0
In-Reply-To:
References: <54651C9C.2020108@gmail.com>
Message-ID: <54D23095.8010002@gmail.com>

FYI: imageio v1.1 is available now.
Release notes: http://imageio.readthedocs.org/en/latest/releasenotes.html

- Almar

On 19-11-14 13:01, Steven Silvester wrote:
> Yes, all imports would have to be relative.
I've chimed in on #42, but > posting my thoughts here: > > "I'd vote to have a source-only version, and wheels for 64 bit Linux, > Windows, and OSX with just the freeimage support. > Rather that auto-downloading, it would be nice to present the user with > the option of whether to download an external lib, perhaps as a simple > Tk dialog. > I think partial functionality is fine in this case given the nature of > the library." > > > Regards, > > Steve > > On Thursday, November 13, 2014 3:03:27 PM UTC-6, Almar Klein wrote: > > Hi all, > > I'm pleased to announce version 1.0 of imageio - a library for reading > and writing images. This library started as a spin-off of the freeimage > plugin in skimage, and is now a fully-fledged library with unit tests > and all. > > Imageio provides an easy interface to read and write a wide range of > image data, including animated images, volumetric data, and scientific > formats. It is cross-platform, runs on Python 2.x and 3.x, and is easy > to install. > > Imageio is plugin-based, making it easy to extend. It could probably > use > more scientific formats. I welcome anyone who's interested to > contribute! > > install: pip install imageio > website: http://imageio.github.io > release notes: > http://imageio.readthedocs.org/en/latest/releasenotes.html > > > Regards, > Almar > > -- > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image+unsubscribe at googlegroups.com > . > For more options, visit https://groups.google.com/d/optout. 
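[Editor's note: the imageio announcement above is light on usage detail. Below is a minimal, hypothetical round-trip sketch of its top-level read/write interface; the file name and array values are illustrative, and `imwrite` is the later name for the `imsave` helper shipped in the 1.x releases discussed here.]

```python
import os
import tempfile

import numpy as np
import imageio

# A small RGB test image; uint8 in [0, 255] is what PNG stores natively.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 255  # solid red

# Write and read back through imageio's top-level helpers
# (imwrite was called imsave in the 1.x releases announced here).
path = os.path.join(tempfile.mkdtemp(), 'red.png')
imageio.imwrite(path, img)
back = imageio.imread(path)
```

PNG is lossless, so the array should survive the round trip bit-for-bit; with a lossy format such as JPEG the comparison would only be approximate.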
From steven.silvester at gmail.com Wed Feb 4 21:33:41 2015 From: steven.silvester at gmail.com (Steven Silvester) Date: Wed, 4 Feb 2015 18:33:41 -0800 (PST) Subject: Transparent output In-Reply-To: <9ac1c679-e7b6-4193-bb64-696349449750@googlegroups.com> References: <9ac1c679-e7b6-4193-bb64-696349449750@googlegroups.com> Message-ID: <3f5c3556-faaf-41e4-8f72-3b324e11f99b@googlegroups.com> Marcel, Would you be able to attach one of your images for me to test against? Regards, Steve On Wednesday, February 4, 2015 at 9:22:44 AM UTC-6, Marcel Gutsche wrote: > > Hi all, > > > I'm not sure if it is a bug, or whether I've just overlooked something > obvious, but the internet did not offer much regarding this issue. I try to > get slices from an image cube which consists of several images s = 1,...,n > with the same dimensions. My new slice should have the width of the > original images and the height of the number of images. Here is the code to > do this: > > > from skimage.io import ImageCollection, imsave > > from os.path import join > > import numpy as np > > > def main(dir): > > ic = ImageCollection( join(dir, '*.png' ) ) > > row = 0 > > img = np.empty((len(ic), ic[0].shape[1], ic[0].shape[2] ) ) > > for s in range(len(ic)): > > img[s,...] = ic[s][row,...] > > # fname = 'new_{0:03d}.jpg'.format(v) # -> wrong colors > > fname = 'new_{0:03d}.png'.format(v) # -> output image is transparent > imsave(fname, img) > > > > The problem is that the output images are all transparent. My input files > are .png images with an alpha channel. I have also checked the values of > the alpha channel of the output which are all set to 255, which, at least > to my knowledge, should set the opacity to 100%. > > > Regards, > > Marcel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jni.soma at gmail.com Wed Feb 4 21:50:52 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Wed, 04 Feb 2015 18:50:52 -0800 (PST) Subject: Transparent output In-Reply-To: <9ac1c679-e7b6-4193-bb64-696349449750@googlegroups.com> References: <9ac1c679-e7b6-4193-bb64-696349449750@googlegroups.com> Message-ID: <1423104651911.835559b9@Nodemailer> Marcel, What about slicing out the alpha channel? Also, depending on memory considerations, you might try ic.concatenate(), which was my very first contribution to scikit-image! =D Juan. On Thu, Feb 5, 2015 at 2:22 AM, Marcel Gutsche wrote: > Hi all, > I'm not sure if it is a bug, or whether I've just overlooked something > obvious, but the internet did not offer much regarding this issue. I try to > get slices from an image cube which consists of several images s = 1,...,n > with the same dimensions. My new slice should have the width of the > original images and the height of the number of images. Here is the code to > do this: > from skimage.io import ImageCollection, imsave > from os.path import join > import numpy as np > def main(dir): > ic = ImageCollection( join(dir, '*.png' ) ) > row = 0 > img = np.empty((len(ic), ic[0].shape[1], ic[0].shape[2] ) ) > for s in range(len(ic)): > img[s,...] = ic[s][row,...] > # fname = 'new_{0:03d}.jpg'.format(v) # -> wrong colors > fname = 'new_{0:03d}.png'.format(v) # -> output image is transparent > imsave(fname, img) > The problem is that the output images are all transparent. My input files > are .png images with an alpha channel. I have also checked the values of > the alpha channel of the output which are all set to 255, which, at least > to my knowledge, should set the opacity to 100%. > Regards, > Marcel > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. 
> For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From steven.silvester at gmail.com Wed Feb 4 22:08:40 2015 From: steven.silvester at gmail.com (Steven Silvester) Date: Wed, 4 Feb 2015 19:08:40 -0800 (PST) Subject: Interactive Mona Lisa Demo In-Reply-To: <04340af5-1cf6-4b72-ac41-285a9d2d9489@googlegroups.com> References: <04340af5-1cf6-4b72-ac41-285a9d2d9489@googlegroups.com> Message-ID: <33e9c994-a8b8-44b0-bf88-d41e9c7c49f9@googlegroups.com> Nice Demo Adam! Perfect teaching point. Two nits: you have the absolute path to the image hard coded in the file, and of course its only Python 2 since it relies on TraitsUI :(. (Also Chaco is so very hard to work with, see https://github.com/FelixHartmann/traitsui-tutorial-qt for a way to use Matplotlib and TraitsUI nicely together in Qt). Regards, Steve On Tuesday, February 3, 2015 at 6:57:24 PM UTC-6, Adam Hughes wrote: > > Hey everyone, > > I made an interactive demo for a class I'm TAing where the user loads an > image (default is mona lisa prado) and they can dynamically change HSV or > RGB values with sliders, and the altered image is updated in realtime. The > point of this excercise is to try to mimic the fading of the mona lisa > prado colors into the tinged yellow version of the original Mona Lisa on > display today. > > This program uses TraitsUI and Chaco for its interactivity, so probably > isn't of direct use for the image viewer; however, in the spirit of > interactive examples, I thought it would be cool to share. It was much > easier to get Chaco and play nicely with image data than I thought it would > be. > /home/glue/Desktop/monolisa.tar.gz > *Note: this will be very slow if you use the high-res version of the mona > lisa prado, so please start by trying the lowres files*. 
> > Requirements:
> >
> > scikit-image
> > numpy
> > chaco
> > traits
> > traitsui
> > enable
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefanv at berkeley.edu Wed Feb 4 22:31:35 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Wed, 04 Feb 2015 19:31:35 -0800
Subject: Interactive Mona Lisa Demo
In-Reply-To: <04340af5-1cf6-4b72-ac41-285a9d2d9489@googlegroups.com>
References: <04340af5-1cf6-4b72-ac41-285a9d2d9489@googlegroups.com>
Message-ID: <87egq52fvs.fsf@berkeley.edu>

Hi Adam

On 2015-02-03 17:14:49, Adam Hughes wrote:
>> I made an interactive demo for a class I'm TAing where the user
>> loads an image (default is mona lisa prado) and they can
>> dynamically change HSV or RGB values with sliders, and the
>> altered image is updated in realtime. The point of this
>> excercise is to try to mimic the fading of the mona lisa prado
>> colors into the tinged yellow version of the original Mona Lisa
>> on display today.

I'm afraid I couldn't get this running (trouble installing Chaco + Traits, and thereafter segfaults when trying it). Could you do an skimage.viewer example for us? Then we can include it in the scikit-image demos repo!

Stéfan

From stefanv at berkeley.edu Wed Feb 4 22:32:21 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Wed, 04 Feb 2015 19:32:21 -0800
Subject: Labels on GitHub issues
Message-ID: <87d25p2fui.fsf@berkeley.edu>

Hi all

Our current set of GitHub labels are not utilized, probably because they don't make much sense i.t.o. our workflow. If you've seen any good usage patterns for labels out there, i.e. ways we can use them to improve our development process, please let me know.
Thanks
Stéfan

From tsyu80 at gmail.com Wed Feb 4 23:28:09 2015
From: tsyu80 at gmail.com (Tony Yu)
Date: Wed, 4 Feb 2015 22:28:09 -0600
Subject: Labels on GitHub issues
In-Reply-To: <87d25p2fui.fsf@berkeley.edu>
References: <87d25p2fui.fsf@berkeley.edu>
Message-ID:

One approach that I'd like to try out is to use a 2x2 set of labels for priority and effort. For example:

- priority-high (red)
- priority-low (gray)
- effort-high (gray)
- effort-low (red)

The idea is that any issue would be assigned both a priority and an effort. Items that are high priority and low effort should be tackled first (hence, two loud, red labels). Items that are low priority and high effort can probably be deferred (hence, two muted, gray labels).

Other labels could be more ad-hoc, but it'd be nice to use colors to group labels. For example, a single color might be used to group all labels that point to a specific skimage package.

That said, I've always been terrible at labeling issues. (It's right up there with forgetting to add to change-logs.)

-Tony

On Wed, Feb 4, 2015 at 9:32 PM, Stefan van der Walt wrote:
> Hi all
>
> Our current set of GitHub labels are not utilized, probably
> because they don't make much sense i.t.o. our workflow. If you've
> seen any good usage patterns for labels out there, i.e. ways we
> can use them to improve our development process, please let me
> know.
>
> Thanks
> Stéfan
>
> --
> You received this message because you are subscribed to the Google Groups
> "scikit-image" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to scikit-image+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From hughesadam87 at gmail.com Wed Feb 4 23:40:43 2015 From: hughesadam87 at gmail.com (Adam Hughes) Date: Wed, 4 Feb 2015 23:40:43 -0500 Subject: Interactive Mona Lisa Demo In-Reply-To: <87egq52fvs.fsf@berkeley.edu> References: <04340af5-1cf6-4b72-ac41-285a9d2d9489@googlegroups.com> <87egq52fvs.fsf@berkeley.edu> Message-ID: Thanks for the inputs Steven, Stefan, I'll update it soon and resend. On Wed, Feb 4, 2015 at 10:31 PM, Stefan van der Walt wrote: > Hi Adam On 2015-02-03 17:14:49, Adam Hughes > wrote: > >> I made an interactive demo for a class I'm TAing where the user loads an >>> image (default is mona lisa prado) and they can dynamically change HSV or >>> RGB values with sliders, and the altered image is updated in realtime. The >>> point of this excercise is to try to mimic the fading of the mona lisa >>> prado colors into the tinged yellow version of the original Mona Lisa on >>> display today. >>> >> > I'm afraid I couldn't get this running (trouble installing Chaco + Traits, > and thereafter segfaults when trying it). Could you do an skimage.viewer > example for us? Then we can include it in the scikit-image demos repo! > St?fan > > > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit https://groups.google.com/d/ > topic/scikit-image/0DGu77zgTGo/unsubscribe. > To unsubscribe from this group and all its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marcel.gutsche at gmail.com Thu Feb 5 04:20:36 2015 From: marcel.gutsche at gmail.com (Marcel Gutsche) Date: Thu, 5 Feb 2015 01:20:36 -0800 (PST) Subject: Transparent output In-Reply-To: <1423104651911.835559b9@Nodemailer> References: <9ac1c679-e7b6-4193-bb64-696349449750@googlegroups.com> <1423104651911.835559b9@Nodemailer> Message-ID: > > Hi folks, thanks for the quick reply. Removing the alpha channel worked. I also added one of my images as a reference for testing, why it does not work when alpha channels are provided. Also concatenate_images works well, thanks for the hint Juan. I haven't tested it for speed, but it is way more readable. One quick question to google-groups: Is it possible to answer using my email client? Marcel. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 001_Set0_Cam_027.png Type: image/png Size: 420967 bytes Desc: not available URL: From stefanv at berkeley.edu Thu Feb 5 13:11:10 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Thu, 5 Feb 2015 10:11:10 -0800 Subject: Transparent output In-Reply-To: References: <9ac1c679-e7b6-4193-bb64-696349449750@googlegroups.com> <1423104651911.835559b9@Nodemailer> Message-ID: On Thu, Feb 5, 2015 at 1:20 AM, Marcel Gutsche wrote: > One quick question to google-groups: Is it possible to answer using my email > client? Yes, that is the way most of us do it, I think. 
You can adjust your mail delivery settings at https://groups.google.com/group/scikit-image St?fan From steven.silvester at gmail.com Thu Feb 5 21:19:26 2015 From: steven.silvester at gmail.com (Steven Silvester) Date: Thu, 5 Feb 2015 18:19:26 -0800 (PST) Subject: Transparent output In-Reply-To: <9ac1c679-e7b6-4193-bb64-696349449750@googlegroups.com> References: <9ac1c679-e7b6-4193-bb64-696349449750@googlegroups.com> Message-ID: <2f9f349e-96fe-46ff-8e67-acc85df41925@googlegroups.com> Marcel, When you say the output image was transparent, you mean it was all white? I believe what happened is your call to `np.empty` creates a floating point array, which skimage expects to be in the range [0, 1], but your data is in the range [0, 255]. If you used `dtype=np.uint8` in your `np.empty` call, I suspect it would work. Regards, Steve On Wednesday, February 4, 2015 at 9:22:44 AM UTC-6, Marcel Gutsche wrote: > > Hi all, > > > I'm not sure if it is a bug, or whether I've just overlooked something > obvious, but the internet did not offer much regarding this issue. I try to > get slices from an image cube which consists of several images s = 1,...,n > with the same dimensions. My new slice should have the width of the > original images and the height of the number of images. Here is the code to > do this: > > > from skimage.io import ImageCollection, imsave > > from os.path import join > > import numpy as np > > > def main(dir): > > ic = ImageCollection( join(dir, '*.png' ) ) > > row = 0 > > img = np.empty((len(ic), ic[0].shape[1], ic[0].shape[2] ) ) > > for s in range(len(ic)): > > img[s,...] = ic[s][row,...] > > # fname = 'new_{0:03d}.jpg'.format(v) # -> wrong colors > > fname = 'new_{0:03d}.png'.format(v) # -> output image is transparent > imsave(fname, img) > > > > The problem is that the output images are all transparent. My input files > are .png images with an alpha channel. 
I have also checked the values of > the alpha channel of the output which are all set to 255, which, at least > to my knowledge, should set the opacity to 100%. > > > Regards, > > Marcel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Thu Feb 5 21:21:05 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 05 Feb 2015 18:21:05 -0800 (PST) Subject: Transparent output In-Reply-To: <2f9f349e-96fe-46ff-8e67-acc85df41925@googlegroups.com> References: <2f9f349e-96fe-46ff-8e67-acc85df41925@googlegroups.com> Message-ID: <1423189264396.bf393fd1@Nodemailer> Steve, that is a *great* catch!!! =D On Fri, Feb 6, 2015 at 1:19 PM, Steven Silvester wrote: > Marcel, > When you say the output image was transparent, you mean it was all white? > I believe what happened is your call to `np.empty` creates a floating point > array, which skimage expects to be in the range [0, 1], but your data is in > the range [0, 255]. If you used `dtype=np.uint8` in your `np.empty` call, > I suspect it would work. > Regards, > Steve > On Wednesday, February 4, 2015 at 9:22:44 AM UTC-6, Marcel Gutsche wrote: >> >> Hi all, >> >> >> I'm not sure if it is a bug, or whether I've just overlooked something >> obvious, but the internet did not offer much regarding this issue. I try to >> get slices from an image cube which consists of several images s = 1,...,n >> with the same dimensions. My new slice should have the width of the >> original images and the height of the number of images. Here is the code to >> do this: >> >> >> from skimage.io import ImageCollection, imsave >> >> from os.path import join >> >> import numpy as np >> >> >> def main(dir): >> >> ic = ImageCollection( join(dir, '*.png' ) ) >> >> row = 0 >> >> img = np.empty((len(ic), ic[0].shape[1], ic[0].shape[2] ) ) >> >> for s in range(len(ic)): >> >> img[s,...] = ic[s][row,...] 
>> >> # fname = 'new_{0:03d}.jpg'.format(v) # -> wrong colors >> >> fname = 'new_{0:03d}.png'.format(v) # -> output image is transparent >> imsave(fname, img) >> >> >> >> The problem is that the output images are all transparent. My input files >> are .png images with an alpha channel. I have also checked the values of >> the alpha channel of the output which are all set to 255, which, at least >> to my knowledge, should set the opacity to 100%. >> >> >> Regards, >> >> Marcel >> > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcel.gutsche at gmail.com Fri Feb 6 03:12:41 2015 From: marcel.gutsche at gmail.com (Marcel Gutsche) Date: Fri, 6 Feb 2015 09:12:41 +0100 Subject: Transparent output In-Reply-To: <1423189264396.bf393fd1@Nodemailer> References: <2f9f349e-96fe-46ff-8e67-acc85df41925@googlegroups.com> <1423189264396.bf393fd1@Nodemailer> Message-ID: Steve, the output image was really transparent, i.e. my image viewer showed a grey white checker pattern to indicate transparency. My actual implementation based on the Juan's hint regarding skimage.io.concatenate_images circumvents this issues by not creating an empty array in the first place, and by discarding the alpha channel. Regards, Marcel 2015-02-06 3:21 GMT+01:00 Juan Nunez-Iglesias : > Steve, that is a *great* catch!!! =D > > > > > On Fri, Feb 6, 2015 at 1:19 PM, Steven Silvester < > steven.silvester at gmail.com> wrote: > >> Marcel, >> >> When you say the output image was transparent, you mean it was all >> white? 
I believe what happened is your call to `np.empty` creates a >> floating point array, which skimage expects to be in the range [0, 1], but >> your data is in the range [0, 255]. If you used `dtype=np.uint8` in your >> `np.empty` call, I suspect it would work. >> >> >> Regards, >> >> Steve >> >> >> On Wednesday, February 4, 2015 at 9:22:44 AM UTC-6, Marcel Gutsche wrote: >>> >>> Hi all, >>> >>> >>> I'm not sure if it is a bug, or whether I've just overlooked something >>> obvious, but the internet did not offer much regarding this issue. I try to >>> get slices from an image cube which consists of several images s = 1,...,n >>> with the same dimensions. My new slice should have the width of the >>> original images and the height of the number of images. Here is the code to >>> do this: >>> >>> >>> from skimage.io import ImageCollection, imsave >>> >>> from os.path import join >>> >>> import numpy as np >>> >>> >>> def main(dir): >>> >>> ic = ImageCollection( join(dir, '*.png' ) ) >>> >>> row = 0 >>> >>> img = np.empty((len(ic), ic[0].shape[1], ic[0].shape[2] ) ) >>> >>> for s in range(len(ic)): >>> >>> img[s,...] = ic[s][row,...] >>> >>> # fname = 'new_{0:03d}.jpg'.format(v) # -> wrong colors >>> >>> fname = 'new_{0:03d}.png'.format(v) # -> output image is transparent >>> imsave(fname, img) >>> >>> >>> >>> The problem is that the output images are all transparent. My input >>> files are .png images with an alpha channel. I have also checked the values >>> of the alpha channel of the output which are all set to 255, which, at >>> least to my knowledge, should set the opacity to 100%. >>> >>> >>> Regards, >>> >>> Marcel >>> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. 
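[Editor's note] Steve's diagnosis can be checked in a few lines of NumPy (a minimal sketch; the array shapes below are placeholders, and note the quoted loop also formats `fname` with an undefined `v` where `s` appears intended):

```python
import numpy as np

# np.empty with no dtype argument allocates float64; skimage treats float
# images as spanning [0, 1], so uint8-style values in [0, 255] are out of range.
img = np.empty((3, 4, 4))
assert img.dtype == np.float64

# Fix 1: allocate the image cube as uint8 up front, as Steve suggests.
img_u8 = np.zeros((3, 4, 4), dtype=np.uint8)
assert img_u8.dtype == np.uint8

# Fix 2: keep floats, but rescale into [0, 1] before calling imsave.
img_float = img_u8.astype(np.float64) / 255.0
assert img_float.min() >= 0.0 and img_float.max() <= 1.0
```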
>> > > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/_gmROuMT9uU/unsubscribe. > To unsubscribe from this group and all its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Sat Feb 7 16:19:52 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Sat, 07 Feb 2015 13:19:52 -0800 Subject: Improving HoG In-Reply-To: References: Message-ID: <87a90p4dxj.fsf@berkeley.edu> Dear Martin On Tue, Jan 27, 2015 at 11:18 AM, Martin Savc wrote: > Most of these would increase complexity, giving the implementation a > complicated look, with little gain. I've also been looking into some > practical improvements - integral histogram, separating the cell-block > histogram feature to use it with other dense feature transforms such as LBP, > a HoG visualization function that would render the visualization at higher > resolutions that the original image. Personally, I have never used our HoG implementation, but I am very glad that you are doing a thorough review of it! If you can make any improvements (such as the fixes you've already submitted), those are most welcome. Regards St?fan From stefanv at berkeley.edu Sat Feb 7 16:24:23 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Sat, 07 Feb 2015 13:24:23 -0800 Subject: Google Summer of Code 2015 Message-ID: <877fvt4dq0.fsf@berkeley.edu> Hi everyone It's almost time to submit project outlines for Google Summer of Code projects, so please make suggestions here or update the wiki at https://github.com/scikit-image/scikit-image/wiki/GSoC-2015 Thanks! 
St?fan From jni.soma at gmail.com Sun Feb 8 01:51:46 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Sat, 07 Feb 2015 22:51:46 -0800 (PST) Subject: Google Summer of Code 2015 In-Reply-To: <877fvt4dq0.fsf@berkeley.edu> References: <877fvt4dq0.fsf@berkeley.edu> Message-ID: <1423378305840.c069c69f@Nodemailer> I'd take at most a secondary role in this year's GSoC, due to (ahem) other projects going on... =D At any rate the Cythonizing of ndimage is the most worthy goal imho, though it is very ambitious. A few days ago I tried making a seemingly simple change (optionally don't cast the value of generic_filter to float), and was quickly overwhelmed by macros and pointers named things like ii and bb. Juan. On Sun, Feb 8, 2015 at 8:24 AM, Stefan van der Walt wrote: > Hi everyone It's almost time to submit project outlines for > Google Summer of Code projects, so please make suggestions here or > update the wiki at > https://github.com/scikit-image/scikit-image/wiki/GSoC-2015 > Thanks! St?fan > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsch at demuc.de Sun Feb 8 08:20:33 2015 From: jsch at demuc.de (=?utf-8?Q?Johannes_Sch=C3=B6nberger?=) Date: Sun, 8 Feb 2015 14:20:33 +0100 (CET) Subject: Google Summer of Code 2015 In-Reply-To: <1423378305840.c069c69f@Nodemailer> References: <877fvt4dq0.fsf@berkeley.edu> <1423378305840.c069c69f@Nodemailer> Message-ID: <6199A6F8-5F79-4E9C-BB7C-517F2D34D5B5@demuc.de> I can also only play a secondary role in this. I'll be working on other stuff full time this summer. ndimage / interpolation / full code coverage / fixing existing issues on github sounds most important to me. 
Best, Johannes > On Feb 8, 2015, at 1:51 AM, Juan Nunez-Iglesias wrote: > > I'd take at most a secondary role in this year's GSoC, due to (ahem) other projects going on... =D > > At any rate the Cythonizing of ndimage is the most worthy goal imho, though it is very ambitious. A few days ago I tried making a seemingly simple change (optionally don't cast the value of generic_filter to float), and was quickly overwhelmed by macros and pointers named things like ii and bb. > > Juan. > > > > >> On Sun, Feb 8, 2015 at 8:24 AM, Stefan van der Walt wrote: >> Hi everyone It's almost time to submit project outlines for >> Google Summer of Code projects, so please make suggestions here or >> update the wiki at >> https://github.com/scikit-image/scikit-image/wiki/GSoC-2015 >> Thanks! St?fan >> >> -- >> You received this message because you are subscribed to the Google Groups "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... 
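[Editor's note] The `generic_filter` behaviour Juan mentions — window values being cast to float before they reach the callback — can be observed directly (a small sketch using SciPy's public API):

```python
import numpy as np
from scipy.ndimage import generic_filter

seen = []

def record_max(window):
    # generic_filter hands the callback a 1-D buffer of the window values;
    # it arrives as float64 even though the input image is uint8.
    seen.append(window.dtype)
    return window.max()

img = np.arange(9, dtype=np.uint8).reshape(3, 3)
out = generic_filter(img, record_max, size=3)

assert all(dt == np.float64 for dt in seen)
assert out.dtype == img.dtype  # the *output* is cast back to the input dtype
```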
URL: From james.a.f.jackson.2 at googlemail.com Sat Feb 14 09:35:31 2015 From: james.a.f.jackson.2 at googlemail.com (james.a.f.jackson.2 at googlemail.com) Date: Sat, 14 Feb 2015 06:35:31 -0800 (PST) Subject: Install difficulties - No module named _hough_transform Message-ID: <0ac1bd34-ca40-4778-8674-7d02b1c5ceb7@googlegroups.com> Hi, I'm trying to install skimage, and having installed the dependencies (from requirements.txt in the source release), and then using pip to install skimage itself, I am having problems importing the transform library: Python 2.7.2 (default, Oct 11 2012, 20:14:37) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import skimage.data >>> import skimage.transform Traceback (most recent call last): File "", line 1, in File "skimage/transform/__init__.py", line 1, in from ._hough_transform import (hough_ellipse, hough_line, ImportError: No module named _hough_transform This is on Mac OSX 10.8.4 with Python 2.7.2. Is this an external library that needs installing from somewhere else? I have already had to install tifffile separately to get skimage.data working (perhaps should be added to requirements.txt?). I've tried a variety of approaches, but haven't had any success. Any and all advice welcome! Yours, James. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From james.a.f.jackson.2 at googlemail.com Sat Feb 14 09:40:00 2015 From: james.a.f.jackson.2 at googlemail.com (james.a.f.jackson.2 at googlemail.com) Date: Sat, 14 Feb 2015 06:40:00 -0800 (PST) Subject: Install difficulties - No module named _hough_transform In-Reply-To: <0ac1bd34-ca40-4778-8674-7d02b1c5ceb7@googlegroups.com> References: <0ac1bd34-ca40-4778-8674-7d02b1c5ceb7@googlegroups.com> Message-ID: <63b1ab40-c36e-4e9c-a2c2-e4df07605ae4@googlegroups.com> Further, it appears more things are missing from the pip install: >>> import skimage >>> print skimage.__version__ 0.11dev >>> from skimage.feature import blob_dog, blob_log, blob_doh Traceback (most recent call last): File "", line 1, in File "skimage/feature/__init__.py", line 2, in from ._daisy import daisy File "skimage/feature/_daisy.py", line 4, in from .. import img_as_float, draw File "skimage/draw/__init__.py", line 1, in from .draw import circle, ellipse, set_color File "skimage/draw/draw.py", line 3, in from ._draw import _coords_inside_image ImportError: No module named _draw This isn't particularly neat; it appears the dependencies aren't being fulfilled, or that the binaries available from pip are incomplete (the version 0.11dev raises suspicion). 
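[Editor's note] Symptoms like these — compiled `.so` files present on disk yet not importable — often come down to `pip` and `python` resolving to different interpreters, which is what James later confirms. A generic diagnostic sketch (not specific to this report; the `python3` name is an assumption for illustration):

```shell
# Which interpreter does `python3` resolve to, and where does it import from?
python3 -c "import sys; print(sys.executable)"
python3 -c "import sysconfig; print(sysconfig.get_paths()['purelib'])"

# Running pip as `python3 -m pip install ...` (instead of a bare `pip`)
# guarantees packages land in the interpreter inspected above.
```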
If I attempt to download the source and build, I get the following error: building 'skimage._shared.geometry' extension compiling C sources C compiler: clang -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch i386 -arch x86_64 -pipe compile options: '-I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c' clang: geometry.c clang: error: no such file or directory: 'geometry.c' Which again implies something is missing, either from the dependencies or the source distribution. I can't find mention of a 'geometry' library that would fix this... Yours, James. On Saturday, February 14, 2015 at 3:35:31 PM UTC+1, james.a.f... at googlemail.com wrote: > > Hi, > > I'm trying to install skimage, and having installed the dependencies (from > requirements.txt in the source release), and then using pip to install > skimage itself, I am having problems importing the transform library: > > Python 2.7.2 (default, Oct 11 2012, 20:14:37) > [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on > darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> import skimage.data > >>> import skimage.transform > Traceback (most recent call last): > File "", line 1, in > File "skimage/transform/__init__.py", line 1, in > from ._hough_transform import (hough_ellipse, hough_line, > ImportError: No module named _hough_transform > > This is on Mac OSX 10.8.4 with Python 2.7.2. Is this an external library > that needs installing from somewhere else? I have already had to install > tifffile separately to get skimage.data working (perhaps should be added to > requirements.txt?). 
I've tried a variety of approaches, but haven't had > any success. > > Any and all advice welcome! > > Yours, > James. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Sat Feb 14 11:51:18 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Sat, 14 Feb 2015 08:51:18 -0800 (PST) Subject: Install difficulties - No module named _hough_transform In-Reply-To: <0ac1bd34-ca40-4778-8674-7d02b1c5ceb7@googlegroups.com> References: <0ac1bd34-ca40-4778-8674-7d02b1c5ceb7@googlegroups.com> Message-ID: <18318cbf-92a3-429e-83ca-38df7643394b@googlegroups.com> Hi James, These are Cython modules that aren't being found or are not properly compiling. Please let us know what version of Cython you are running. Rest assured the modules are there. These are not external dependencies. I suspect an old version of Cython is to blame. Regards, On Saturday, February 14, 2015 at 7:35:31 AM UTC-7, james.a.f... at googlemail.com wrote: > Hi, > > > I'm trying to install skimage, and having installed the dependencies (from requirements.txt in the source release), and then using pip to install skimage itself, I am having problems importing the transform library: > > > > Python 2.7.2 (default, Oct 11 2012, 20:14:37)? > [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> import skimage.data > >>> import skimage.transform > Traceback (most recent call last): > ? File "", line 1, in > ? File "skimage/transform/__init__.py", line 1, in > ? ? from ._hough_transform import (hough_ellipse, hough_line, > ImportError: No module named _hough_transform > > > This is on Mac OSX 10.8.4 with Python 2.7.2. ?Is this an external library that needs installing from somewhere else? ?I have already had to install tifffile separately to get skimage.data working (perhaps should be added to requirements.txt?). 
?I've tried a variety of approaches, but haven't had any success. > > > Any and all advice welcome! > > > Yours, > James. From jsch at demuc.de Sat Feb 14 12:32:50 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Sat, 14 Feb 2015 12:32:50 -0500 Subject: Install difficulties - No module named _hough_transform In-Reply-To: References: <0ac1bd34-ca40-4778-8674-7d02b1c5ceb7@googlegroups.com> <18318cbf-92a3-429e-83ca-38df7643394b@googlegroups.com> Message-ID: <60890905-B8F4-45A1-A3BD-CD4E5EBA40C6@demuc.de> Just to make sure: Do you try to import skimage from within the scikit-image source directory? > On Feb 14, 2015, at 12:21 PM, James Jackson wrote: > > Just tinkering, I wondered if this was related to having multiple versions of Python installed. When auto-config / build scripts are being run I always have a niggling feeling of unease in a multiple-version environment. Looking at the multi-version support in OS X, I came across this nugget to set a global default version to execute: > > sudo defaults write /Library/Preferences/com.apple.versioner.python Version 2.7 > > It looks like the modules are now importing properly. Perhaps worth adding to the install page as a Mac OS X note? > > Yours, > James. > > On Sat, Feb 14, 2015 at 6:17 PM, James Jackson wrote: > Josh, > > Thanks for the reply - I've just installed cython using the requirements file so should be up-to-date. Looking back at my install log, it seems I've got the latest (0.22). 
> > Collecting cython>=0.21 (from -r requirements.txt (line 1)) > Downloading Cython-0.22-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (3.7MB) > > Looking in the site-packages directory I do see a _hough_transform library: > > James-Jacksons-MacBook-Air:transform jamesjackson$ pwd > /Library/Python/2.7/site-packages/skimage/transform > James-Jacksons-MacBook-Air:transform jamesjackson$ ll | grep hough > -rwxr-xr-x 1 root wheel 659580 14 Feb 09:41 _hough_transform.so > -rw-r--r-- 1 root wheel 5566 14 Feb 09:41 hough_transform.py > -rw-r--r-- 1 root wheel 5044 14 Feb 09:41 hough_transform.pyc > > So something funny is clearly going on here with it not being picked up... > > Yours, > James. > > On Sat, Feb 14, 2015 at 5:51 PM, Josh Warner wrote: > Hi James, > > These are Cython modules that aren't being found or are not properly compiling. Please let us know what version of Cython you are running. > > Rest assured the modules are there. These are not external dependencies. I suspect an old version of Cython is to blame. > > > Regards, > > > > On Saturday, February 14, 2015 at 7:35:31 AM UTC-7, james.a.f... at googlemail.com wrote: > > Hi, > > > > > > I'm trying to install skimage, and having installed the dependencies (from requirements.txt in the source release), and then using pip to install skimage itself, I am having problems importing the transform library: > > > > > > > > Python 2.7.2 (default, Oct 11 2012, 20:14:37) > > [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin > > Type "help", "copyright", "credits" or "license" for more information. 
> > >>> import skimage.data > > >>> import skimage.transform > > Traceback (most recent call last): > > File "", line 1, in > > File "skimage/transform/__init__.py", line 1, in > > from ._hough_transform import (hough_ellipse, hough_line, > > ImportError: No module named _hough_transform > > > > > > This is on Mac OSX 10.8.4 with Python 2.7.2. Is this an external library that needs installing from somewhere else? I have already had to install tifffile separately to get skimage.data working (perhaps should be added to requirements.txt?). I've tried a variety of approaches, but haven't had any success. > > > > > > Any and all advice welcome! > > > > > > Yours, > > James. > > -- > You received this message because you are subscribed to a topic in the Google Groups "scikit-image" group. > To unsubscribe from this topic, visit https://groups.google.com/d/topic/scikit-image/anOjoI4jW-w/unsubscribe. > To unsubscribe from this group and all its topics, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From james.a.f.jackson.2 at googlemail.com Sat Feb 14 12:17:55 2015 From: james.a.f.jackson.2 at googlemail.com (James Jackson) Date: Sat, 14 Feb 2015 18:17:55 +0100 Subject: Install difficulties - No module named _hough_transform In-Reply-To: <18318cbf-92a3-429e-83ca-38df7643394b@googlegroups.com> References: <0ac1bd34-ca40-4778-8674-7d02b1c5ceb7@googlegroups.com> <18318cbf-92a3-429e-83ca-38df7643394b@googlegroups.com> Message-ID: Josh, Thanks for the reply - I've just installed cython using the requirements file so should be up-to-date. Looking back at my install log, it seems I've got the latest (0.22). 
Collecting cython>=0.21 (from -r requirements.txt (line 1)) Downloading Cython-0.22-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (3.7MB) Looking in the site-packages directory I do see a _hough_transform library: James-Jacksons-MacBook-Air:transform jamesjackson$ pwd /Library/Python/2.7/site-packages/skimage/transform James-Jacksons-MacBook-Air:transform jamesjackson$ ll | grep hough -rwxr-xr-x 1 root wheel 659580 14 Feb 09:41 _hough_transform.so -rw-r--r-- 1 root wheel 5566 14 Feb 09:41 hough_transform.py -rw-r--r-- 1 root wheel 5044 14 Feb 09:41 hough_transform.pyc So something funny is clearly going on here with it not being picked up... Yours, James. On Sat, Feb 14, 2015 at 5:51 PM, Josh Warner wrote: > Hi James, > > These are Cython modules that aren't being found or are not properly > compiling. Please let us know what version of Cython you are running. > > Rest assured the modules are there. These are not external dependencies. I > suspect an old version of Cython is to blame. > > > Regards, > > > > On Saturday, February 14, 2015 at 7:35:31 AM UTC-7, > james.a.f... at googlemail.com wrote: > > Hi, > > > > > > I'm trying to install skimage, and having installed the dependencies > (from requirements.txt in the source release), and then using pip to > install skimage itself, I am having problems importing the transform > library: > > > > > > > > Python 2.7.2 (default, Oct 11 2012, 20:14:37) > > [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on > darwin > > Type "help", "copyright", "credits" or "license" for more information. > > >>> import skimage.data > > >>> import skimage.transform > > Traceback (most recent call last): > > File "", line 1, in > > File "skimage/transform/__init__.py", line 1, in > > from ._hough_transform import (hough_ellipse, hough_line, > > ImportError: No module named _hough_transform > > > > > > This is on Mac OSX 10.8.4 with Python 2.7.2. 
Is this an external > library that needs installing from somewhere else? I have already had to > install tifffile separately to get skimage.data working (perhaps should be > added to requirements.txt?). I've tried a variety of approaches, but > haven't had any success. > > > > > > Any and all advice welcome! > > > > > > Yours, > > James. > > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/anOjoI4jW-w/unsubscribe. > To unsubscribe from this group and all its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.a.f.jackson.2 at googlemail.com Sat Feb 14 12:21:07 2015 From: james.a.f.jackson.2 at googlemail.com (James Jackson) Date: Sat, 14 Feb 2015 18:21:07 +0100 Subject: Install difficulties - No module named _hough_transform In-Reply-To: References: <0ac1bd34-ca40-4778-8674-7d02b1c5ceb7@googlegroups.com> <18318cbf-92a3-429e-83ca-38df7643394b@googlegroups.com> Message-ID: Just tinkering, I wondered if this was related to having multiple versions of Python installed. When auto-config / build scripts are being run I always have a niggling feeling of unease in a multiple-version environment. Looking at the multi-version support in OS X, I came across this nugget to set a global default version to execute: sudo defaults write /Library/Preferences/com.apple.versioner.python Version 2.7 It looks like the modules are now importing properly. Perhaps worth adding to the install page as a Mac OS X note? Yours, James. On Sat, Feb 14, 2015 at 6:17 PM, James Jackson < james.a.f.jackson.2 at googlemail.com> wrote: > Josh, > > Thanks for the reply - I've just installed cython using the requirements > file so should be up-to-date. 
Looking back at my install log, it seems > I've got the latest (0.22). > > Collecting cython>=0.21 (from -r requirements.txt (line 1)) > Downloading > Cython-0.22-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > (3.7MB) > > Looking in the site-packages directory I do see a _hough_transform library: > > James-Jacksons-MacBook-Air:transform jamesjackson$ pwd > /Library/Python/2.7/site-packages/skimage/transform > James-Jacksons-MacBook-Air:transform jamesjackson$ ll | grep hough > -rwxr-xr-x 1 root wheel 659580 14 Feb 09:41 _hough_transform.so > -rw-r--r-- 1 root wheel 5566 14 Feb 09:41 hough_transform.py > -rw-r--r-- 1 root wheel 5044 14 Feb 09:41 hough_transform.pyc > > So something funny is clearly going on here with it not being picked up... > > Yours, > James. > > On Sat, Feb 14, 2015 at 5:51 PM, Josh Warner > wrote: > >> Hi James, >> >> These are Cython modules that aren't being found or are not properly >> compiling. Please let us know what version of Cython you are running. >> >> Rest assured the modules are there. These are not external dependencies. >> I suspect an old version of Cython is to blame. >> >> >> Regards, >> >> >> >> On Saturday, February 14, 2015 at 7:35:31 AM UTC-7, >> james.a.f... at googlemail.com wrote: >> > Hi, >> > >> > >> > I'm trying to install skimage, and having installed the dependencies >> (from requirements.txt in the source release), and then using pip to >> install skimage itself, I am having problems importing the transform >> library: >> > >> > >> > >> > Python 2.7.2 (default, Oct 11 2012, 20:14:37) >> > [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on >> darwin >> > Type "help", "copyright", "credits" or "license" for more information. 
>> > >>> import skimage.data >> > >>> import skimage.transform >> > Traceback (most recent call last): >> > File "", line 1, in >> > File "skimage/transform/__init__.py", line 1, in >> > from ._hough_transform import (hough_ellipse, hough_line, >> > ImportError: No module named _hough_transform >> > >> > >> > This is on Mac OSX 10.8.4 with Python 2.7.2. Is this an external >> library that needs installing from somewhere else? I have already had to >> install tifffile separately to get skimage.data working (perhaps should be >> added to requirements.txt?). I've tried a variety of approaches, but >> haven't had any success. >> > >> > >> > Any and all advice welcome! >> > >> > >> > Yours, >> > James. >> >> -- >> You received this message because you are subscribed to a topic in the >> Google Groups "scikit-image" group. >> To unsubscribe from this topic, visit >> https://groups.google.com/d/topic/scikit-image/anOjoI4jW-w/unsubscribe. >> To unsubscribe from this group and all its topics, send an email to >> scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.a.f.jackson.2 at googlemail.com Sat Feb 14 13:10:58 2015 From: james.a.f.jackson.2 at googlemail.com (James Jackson) Date: Sat, 14 Feb 2015 19:10:58 +0100 Subject: Install difficulties - No module named _hough_transform In-Reply-To: <60890905-B8F4-45A1-A3BD-CD4E5EBA40C6@demuc.de> References: <0ac1bd34-ca40-4778-8674-7d02b1c5ceb7@googlegroups.com> <18318cbf-92a3-429e-83ca-38df7643394b@googlegroups.com> <60890905-B8F4-45A1-A3BD-CD4E5EBA40C6@demuc.de> Message-ID: Johannes, No, having performed a global install I am sitting in another unrelated directory. Yours, James. On Sat, Feb 14, 2015 at 6:32 PM, Johannes Schoenberger wrote: > Just to make sure: Do you try to import skimage from within the > scikit-image source directory? 
> > > On Feb 14, 2015, at 12:21 PM, James Jackson < > james.a.f.jackson.2 at googlemail.com> wrote: > > > > Just tinkering, I wondered if this was related to having multiple > versions of Python installed. When auto-config / build scripts are being > run I always have a niggling feeling of unease in a multiple-version > environment. Looking at the multi-version support in OS X, I came across > this nugget to set a global default version to execute: > > > > sudo defaults write /Library/Preferences/com.apple.versioner.python > Version 2.7 > > > > It looks like the modules are now importing properly. Perhaps worth > adding to the install page as a Mac OS X note? > > > > Yours, > > James. > > > > On Sat, Feb 14, 2015 at 6:17 PM, James Jackson < > james.a.f.jackson.2 at googlemail.com> wrote: > > Josh, > > > > Thanks for the reply - I've just installed cython using the requirements > file so should be up-to-date. Looking back at my install log, it seems > I've got the latest (0.22). > > > > Collecting cython>=0.21 (from -r requirements.txt (line 1)) > > Downloading > Cython-0.22-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > (3.7MB) > > > > Looking in the site-packages directory I do see a _hough_transform > library: > > > > James-Jacksons-MacBook-Air:transform jamesjackson$ pwd > > /Library/Python/2.7/site-packages/skimage/transform > > James-Jacksons-MacBook-Air:transform jamesjackson$ ll | grep hough > > -rwxr-xr-x 1 root wheel 659580 14 Feb 09:41 _hough_transform.so > > -rw-r--r-- 1 root wheel 5566 14 Feb 09:41 hough_transform.py > > -rw-r--r-- 1 root wheel 5044 14 Feb 09:41 hough_transform.pyc > > > > So something funny is clearly going on here with it not being picked > up... > > > > Yours, > > James. > > > > On Sat, Feb 14, 2015 at 5:51 PM, Josh Warner > wrote: > > Hi James, > > > > These are Cython modules that aren't being found or are not properly > compiling. 
Please let us know what version of Cython you are running. > > > > Rest assured the modules are there. These are not external dependencies. > I suspect an old version of Cython is to blame. > > > > > > Regards, > > > > > > > > On Saturday, February 14, 2015 at 7:35:31 AM UTC-7, > james.a.f... at googlemail.com wrote: > > > Hi, > > > > > > > > > I'm trying to install skimage, and having installed the dependencies > (from requirements.txt in the source release), and then using pip to > install skimage itself, I am having problems importing the transform > library: > > > > > > > > > > > > Python 2.7.2 (default, Oct 11 2012, 20:14:37) > > > [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on > darwin > > > Type "help", "copyright", "credits" or "license" for more information. > > > >>> import skimage.data > > > >>> import skimage.transform > > > Traceback (most recent call last): > > > File "", line 1, in > > > File "skimage/transform/__init__.py", line 1, in > > > from ._hough_transform import (hough_ellipse, hough_line, > > > ImportError: No module named _hough_transform > > > > > > > > > This is on Mac OSX 10.8.4 with Python 2.7.2. Is this an external > library that needs installing from somewhere else? I have already had to > install tifffile separately to get skimage.data working (perhaps should be > added to requirements.txt?). I've tried a variety of approaches, but > haven't had any success. > > > > > > > > > Any and all advice welcome! > > > > > > > > > Yours, > > > James. > > > > -- > > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/anOjoI4jW-w/unsubscribe. > > To unsubscribe from this group and all its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > > For more options, visit https://groups.google.com/d/optout. 
> > > > > > > > -- > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image+unsubscribe at googlegroups.com. > > For more options, visit https://groups.google.com/d/optout. > > -- > You received this message because you are subscribed to a topic in the > Google Groups "scikit-image" group. > To unsubscribe from this topic, visit > https://groups.google.com/d/topic/scikit-image/anOjoI4jW-w/unsubscribe. > To unsubscribe from this group and all its topics, send an email to > scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Mon Feb 16 20:16:53 2015 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Mon, 16 Feb 2015 17:16:53 -0800 Subject: Last blockers for 0.11 release Message-ID: Hi everyone We're overdue for the 0.11 release, and there are very few issues left to close. I'd appreciate your feedback on this one: https://github.com/scikit-image/scikit-image/pull/1248 (peak detectors) And help in fixing: https://github.com/scikit-image/scikit-image/issues/1048 (doctest failures on Mac OSX) I think this Sunday is a reasonable target for 0.11. Please get any additional documentation updates in before then. St?fan From stefanv at berkeley.edu Mon Feb 16 23:23:41 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Mon, 16 Feb 2015 20:23:41 -0800 Subject: PeerJ Staff Picks 2015 Message-ID: <874mql5foy.fsf@berkeley.edu> Hi everyone Juan just informed me that our scikit-image paper appears on the 2015 PeerJ Staff Picks: https://peerj.com/collections/13-peerjpicks2015/ Congratulations team! 
Stéfan

From tsyu80 at gmail.com Mon Feb 16 23:31:14 2015 From: tsyu80 at gmail.com (Tony Yu) Date: Mon, 16 Feb 2015 22:31:14 -0600 Subject: PeerJ Staff Picks 2015 In-Reply-To: <874mql5foy.fsf@berkeley.edu> References: <874mql5foy.fsf@berkeley.edu> Message-ID:

On Mon, Feb 16, 2015 at 10:23 PM, Stefan van der Walt wrote: > Hi everyone > > Juan just informed me that our scikit-image paper appears on the > 2015 PeerJ Staff Picks: > > https://peerj.com/collections/13-peerjpicks2015/ > > Congratulations team! > > Stéfan > Sweet! -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jsch at demuc.de Tue Feb 17 00:21:49 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Tue, 17 Feb 2015 00:21:49 -0500 Subject: PeerJ Staff Picks 2015 In-Reply-To: References: <874mql5foy.fsf@berkeley.edu> Message-ID:

Awesome news! > On Feb 16, 2015, at 11:31 PM, Tony Yu wrote: > > > On Mon, Feb 16, 2015 at 10:23 PM, Stefan van der Walt wrote: > Hi everyone > > Juan just informed me that our scikit-image paper appears on the > 2015 PeerJ Staff Picks: > > https://peerj.com/collections/13-peerjpicks2015/ > > Congratulations team! > > Stéfan > > > Sweet!

From jsch at demuc.de Tue Feb 17 00:23:09 2015 From: jsch at demuc.de (Johannes Schoenberger) Date: Tue, 17 Feb 2015 00:23:09 -0500 Subject: Last blockers for 0.11 release In-Reply-To: References: Message-ID:

Excellent work @everyone! This will be a huge release. > On Feb 16, 2015, at 8:16 PM, Stéfan van der Walt wrote: > > Hi everyone > > We're overdue for the 0.11 release, and there are very few issues left > to close.
I'd appreciate your feedback on this one: > > https://github.com/scikit-image/scikit-image/pull/1248 (peak detectors) > > And help in fixing: > > https://github.com/scikit-image/scikit-image/issues/1048 (doctest > failures on Mac OSX) > > I think this Sunday is a reasonable target for 0.11. Please get any > additional documentation updates in before then. > > Stéfan

From jni.soma at gmail.com Wed Feb 18 21:27:13 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Wed, 18 Feb 2015 18:27:13 -0800 (PST) Subject: Equivalent of watershed for cutting connected components of an image of particles? In-Reply-To: References: Message-ID: <1424312832870.ad60ef5@Nodemailer>

Hey Adam, I'm *guessing* the IJ method is:
1. compute the thresholded binary image (i.e. foreground labeled "True")
2. compute the Euclidean distance transform (scipy.ndimage.distance_transform_edt)
3. compute the local maxima (peak_local_max) and set them as seeds
4. compute watershed, using the foreground as mask.

All of those functions are available in scipy/scikit-image. If you get good results, a gallery example of this would certainly be appreciated! =) However, my experience with such methods is that they only work well for reasonably sparse, perfectly spherical particles. As to removing particles on the edge, I would use a bool mask with only the edges selected, then np.unique(), then remove them manually in a for loop. I agree that it's a bit laborious... Perhaps a separate function to do this could be added to the API...

On Thu, Feb 19, 2015 at 11:04 AM, Adam Hughes wrote: > Hi, > In ImageJ, one can select watershedding to break up connected regions of > particles.
Are there any examples of using watershed in this capacity in > scikit image? All of the examples I see seem to use watershedding to do > segmentation, not to break connected particles in an already-segmented > black and white image. > Also, is there a straightforward way to remove particles on the edge of > an image? Sorry, googling is failing me, but I know this is possible. > Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL:

From hughesadam87 at gmail.com Wed Feb 18 19:04:09 2015 From: hughesadam87 at gmail.com (Adam Hughes) Date: Wed, 18 Feb 2015 19:04:09 -0500 Subject: Equivalent of watershed for cutting connected components of an image of particles? Message-ID:

Hi, In ImageJ, one can select watershedding to break up connected regions of particles. Are there any examples of using watershed in this capacity in scikit image? All of the examples I see seem to use watershedding to do segmentation, not to break connected particles in an already-segmented black and white image. Also, is there a straightforward way to remove particles on the edge of an image? Sorry, googling is failing me, but I know this is possible. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL:

From georgeshattab at gmail.com Thu Feb 19 03:31:12 2015 From: georgeshattab at gmail.com (Georges H) Date: Thu, 19 Feb 2015 00:31:12 -0800 (PST) Subject: Equivalent of watershed for cutting connected components of an image of particles? In-Reply-To: References: Message-ID: <4abf2833-b59d-44da-a8f5-0ced1b60ac63@googlegroups.com>

I second the post of Juan regarding the watershed for non sparse data.
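[Editor's note] Juan's four numbered steps translate almost line-for-line into scipy/scikit-image calls. The sketch below runs them on a synthetic image of two touching disks; it uses the function locations in current scikit-image (`watershed` and `clear_border` in `skimage.segmentation`, `peak_local_max` in `skimage.feature`), so in the 0.11-era code discussed here `watershed` would instead come from `skimage.morphology`. Treat the parameters (e.g. `min_distance`) as illustrative, not canonical:

```python
# Sketch of the four-step recipe above, on two overlapping disks.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed, clear_border

# 1. thresholded binary image (foreground True): two touching disks
y, x = np.indices((80, 80))
binary = (((x - 28) ** 2 + (y - 40) ** 2 < 15 ** 2) |
          ((x - 52) ** 2 + (y - 40) ** 2 < 15 ** 2))

# 2. Euclidean distance transform of the foreground
distance = ndi.distance_transform_edt(binary)

# 3. local maxima of the distance map become the watershed seeds
coords = peak_local_max(distance, min_distance=10)
seeds = np.zeros(distance.shape, dtype=bool)
seeds[tuple(coords.T)] = True
markers, _ = ndi.label(seeds)

# 4. watershed on the inverted distance map, masked to the foreground,
#    cuts the touching disks apart
labels = watershed(-distance, markers, mask=binary)

# Adam's second question: drop any particle touching the image border
labels = clear_border(labels)
print(labels.max())  # number of separated particles: 2
```

The final `clear_border` call is the ready-made version of the border-mask-plus-`np.unique` loop Juan describes, as Georges points out below.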
As for clearing image borders you have a function from the segmentation module here : http://scikit-image.org/docs/dev/api/skimage.segmentation.html#clear-border On Thursday, 19 February 2015 01:04:10 UTC+1, Adam Hughes wrote: > > Hi, > > In ImageJ, one can select watershedding to break up connected regions of > particles. Are there any examples of using watershed in this capacity in > scikit image? All of the examples I see seem to use watershedding to do > segmentation, not to break connected particles in an already-segmented > black and white image. > > Also, is there a straightforward way to remove particles on a the edge of > an image? Sorry, googling is failing me, but I know this is possible. > > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Thu Feb 19 05:12:24 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 19 Feb 2015 02:12:24 -0800 (PST) Subject: Equivalent of watershed for cutting connected components of an image of particles? In-Reply-To: <4abf2833-b59d-44da-a8f5-0ced1b60ac63@googlegroups.com> References: <4abf2833-b59d-44da-a8f5-0ced1b60ac63@googlegroups.com> Message-ID: <1424340744346.9b43462c@Nodemailer> Ha! I'd never noticed that function! Thanks for pointing it out, Georges! =) On Thu, Feb 19, 2015 at 7:31 PM, Georges H wrote: > I second the post of Juan regarding the watershed for non sparse data. > As for clearing image borders you have a function from the segmentation > module here : > http://scikit-image.org/docs/dev/api/skimage.segmentation.html#clear-border > On Thursday, 19 February 2015 01:04:10 UTC+1, Adam Hughes wrote: >> >> Hi, >> >> In ImageJ, one can select watershedding to break up connected regions of >> particles. Are there any examples of using watershed in this capacity in >> scikit image? All of the examples I see seem to use watershedding to do >> segmentation, not to break connected particles in an already-segmented >> black and white image. 
>> >> Also, is there a straightforward way to remove particles on the edge of >> an image? Sorry, googling is failing me, but I know this is possible. >> >> Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL:

From georgeshattab at gmail.com Thu Feb 19 12:08:47 2015 From: georgeshattab at gmail.com (Georges H) Date: Thu, 19 Feb 2015 09:08:47 -0800 (PST) Subject: Equivalent of watershed for cutting connected components of an image of particles? In-Reply-To: <1424340744346.9b43462c@Nodemailer> References: <4abf2833-b59d-44da-a8f5-0ced1b60ac63@googlegroups.com> <1424340744346.9b43462c@Nodemailer> Message-ID: <11112867-4854-4530-badd-11c61d1dc511@googlegroups.com>

Sure thing! I am actually using it in my own registration workflow ;)) Follow up on the watershed segmentation, maybe you could upload an example image so we could make some suggestions? On Thursday, 19 February 2015 11:12:26 UTC+1, Juan Nunez-Iglesias wrote: > > Ha! I'd never noticed that function! Thanks for pointing it out, Georges! > =) > > On Thu, Feb 19, 2015 at 7:31 PM, Georges H > wrote: > >> I second the post of Juan regarding the watershed for non sparse data. >> >> As for clearing image borders you have a function from the segmentation >> module here : >> >> http://scikit-image.org/docs/dev/api/skimage.segmentation.html#clear-border >> >> On Thursday, 19 February 2015 01:04:10 UTC+1, Adam Hughes wrote: >>> >>> Hi, >>> >>> In ImageJ, one can select watershedding to break up connected regions of >>> particles. Are there any examples of using watershed in this capacity in >>> scikit image?
All of the examples I see seem to use watershedding to do >>> segmentation, not to break connected particles in an already-segmented >>> black and white image. >>> >>> Also, is there a straightforward way to remove particles on the edge >>> of an image? Sorry, googling is failing me, but I know this is possible. >>> >>> Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL:

From raniere at ime.unicamp.br Thu Feb 19 07:13:35 2015 From: raniere at ime.unicamp.br (Raniere Silva) Date: Thu, 19 Feb 2015 10:13:35 -0200 Subject: Google Summer of Code and NumFOCUS Message-ID: <20150219121335.GK16143@pupunha>

Hi, NumFOCUS promotes and supports the ongoing research and development of open-source computing tools, including scikit-image. This year NumFOCUS wants to try to be a Google Summer of Code "umbrella" mentoring organization:

Umbrella organizations are mentoring organizations accepted into the Google Summer of Code program that have other open source organizations working "under" them. Sometimes organizations that work very closely or have very similar goals or communities may get put together under an "umbrella." Google still expects all organizations under the umbrella, whether accepted into the program under their title or not, to adhere to all the rules and regulations of the program. From https://www.google-melange.com/gsoc/document/show/gsoc_program/google/gsoc2015/help_page#umbrella_organization

To help promote and support scikit-image, we encourage scikit-image to apply to Google Summer of Code under your own title, and we will be very happy if you also apply with us.
If you are interested, please check https://github.com/swcarpentry/gsoc2015 and https://github.com/swcarpentry/gsoc2015/blob/master/CONTRIBUTING.md. If you have any questions, please email me directly. Thanks in advance, Raniere

-------------- next part -------------- A non-text attachment was scrubbed... Type: application/pgp-signature

From raniere at ime.unicamp.br Fri Feb 20 15:37:07 2015 From: raniere at ime.unicamp.br (Raniere Silva) Date: Fri, 20 Feb 2015 18:37:07 -0200 Subject: ANN: SciPy Latin América 2015 - Call for Proposals Message-ID: <20150220203707.GW12853@pupunha>

*Call for Proposals* *SciPy Latin América 2015*, the third annual Scientific Computing with Python Conference, will be held this *May 20-22* in *Posadas, Misiones, Argentina*.
SciPy is a community dedicated to the advancement of scientific computing through open source Python software for mathematics, science, and engineering. The annual SciPy Conferences allow participants from academic, commercial, and governmental organizations to showcase their latest projects, learn from skilled users and developers, and collaborate on code development.

*Proposals are now being accepted for SciPy Latin América 2015*. Presentation content can be at a novice, intermediate or advanced level. Talks will run 30-40 min and hands-on tutorials will run 100-120 min. We also accept proposals for posters. For more information about the different types of proposals, see the "*Different types of Communication*" section below.

*How to Submit?*
1. Register for an account on http://conf.scipyla.org/user/register
2. Submit your proposal at http://conf.scipyla.org/activity/propose

*Important Dates*
- *April 6th*: Talks, poster, tutorial submission deadline.
- *April 20th*: Notification of accepted Talks / Posters / Tutorials.
- *May 20th-22nd*: SciPy Latin América 2015.

*Different types of Communication*

*Talks*: These are the traditional talk sessions given during the main conference days. They're mostly 30 minutes long with 5 min for questions. If you think you have a topic but aren't sure how to propose it, contact our program committee and we'll work with you. We'd love to help you come up with a great proposal.

*Tutorials*: We are looking for tutorials that can grow this community at any level. We aim for tutorials that will advance Scientific Python, advance this community, and shape the future. They are 100-120 minutes long, but if you think you need more than one slot, you can split the content and submit two self-contained proposals.

*Posters*: The poster session provides a more interactive, attendee-driven presentation than the speaker-driven conference talks.
Poster presentations have fostered extensive discussions on the topics, with many that have gone on much longer than the actual "session" called for. The idea is to present your topic on poster board and as attendees mingle through the rows, they find your topic, read through what you've written, then strike up a discussion on it. It's as simple as that. You could be doing Q&A in the first minute of the session with a group of 10 people.

*Lightning Talks*: Want to give a talk, but do not have enough material for a full talk? These talks are, at max, 5-minute talks done in quick succession in the main hall. No need to fill the whole slot, though!

-- *The SciPy LA 2015 Program Committee*

-------------- next part -------------- A non-text attachment was scrubbed... Type: application/pgp-signature

From jni.soma at
gmail.com (Juan Nunez-Iglesias) Date: Sat, 21 Feb 2015 15:53:47 -0800 (PST) Subject: Distinguishable shades of grey Message-ID: <1424562827186.ccc74f1c@Nodemailer>

From this paper: http://rsif.royalsocietypublishing.org/content/early/2012/09/22/rsif.2012.0601.short

The following quote (emphasis mine): Humans possess three cone visual pigments for conveying colour information that is said to allow humans to be able to detect approximately 10 million unique colours [8,9] but only distinguish about 30 shades of grey [10]. (Let's ignore recent movies of dubious merit in this discussion. =P)

In the new MPL imshow plugin (which I recently wrote), we switch from grayscale to cubehelix when the dynamic range is too low to be displayed on a common monitor (1/255): https://github.com/scikit-image/scikit-image/blob/master/skimage/io/_plugins/matplotlib_plugin.py#L51 Maybe the threshold should be when the difference is imperceptible to most humans (1/30)? Juan. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From stefanv at berkeley.edu Sun Feb 22 03:31:58 2015 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Sun, 22 Feb 2015 00:31:58 -0800 Subject: Distinguishable shades of grey In-Reply-To: <1424562827186.ccc74f1c@Nodemailer> References: <1424562827186.ccc74f1c@Nodemailer> Message-ID: <87twye2vpd.fsf@berkeley.edu>

On 2015-02-21 15:53:47, Juan Nunez-Iglesias wrote: > In the new MPL imshow plugin (which I recently wrote), we switch > from grayscale to cubehelix when the dynamic range is too low to > be displayed on a common monitor (1/255): Importantly, it should be clear and intuitive to users when the switch happens. Why do we not always use cubehelix? Apart from that it is a rather ugly colormap.
Stéfan

From jni.soma at gmail.com Sun Feb 22 08:30:58 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Sun, 22 Feb 2015 05:30:58 -0800 (PST) Subject: Distinguishable shades of grey In-Reply-To: <87twye2vpd.fsf@berkeley.edu> References: <87twye2vpd.fsf@berkeley.edu> Message-ID: <1424611857980.389fb7ac@Nodemailer>

Any time that cubehelix is used, a colorbar is shown. So I think it's clear to the user what is going on.

As to why we don't always use it, the fact is that when people load a grayscale (natural) image, they expect it to appear in grayscale. This is true of most of our examples, e.g., coins, camera. plt.imshow shows camera in jet by default, which is appalling *and* surprising to the user. Cubehelix would be merely surprising. =P

On the other hand, it's generally unlikely that an image that shows no perceptually distinguishable shades of grey is actually a natural image. In those cases, I think it's appropriate to show cubehelix. (Or another suitable colormap. But I didn't want to wait until mpl made their fancy new one.)

The key here is that imshow is a command made interactively to *explore one's data*. One wants to know at a glance what is going on. It's infuriating to load up an image and have it appear as a black rectangle because its range in 16-bit grayscale is really low. Then I have to go type a much longer command to get it to show the way I want. So I think in these cases a little magic is justified that will get people the result they want in one short command that will work as expected most of the time.

-- Sent from Mailbox

On Sun, Feb 22, 2015 at 7:32 PM, Stefan van der Walt wrote: > On 2015-02-21 15:53:47, Juan Nunez-Iglesias > wrote: >> In the new MPL imshow plugin (which I recently wrote), we switch >> from grayscale to cubehelix when the dynamic range is too low to >> be displayed on a common monitor (1/255): > Importantly, it should be clear and intuitive to users when the > switch happens. Why do we not always use cubehelix?
Apart from > that it is a rather ugly colormap. > Stéfan -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jni.soma at gmail.com Tue Feb 24 01:16:14 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Mon, 23 Feb 2015 22:16:14 -0800 (PST) Subject: Distinguishable shades of grey In-Reply-To: <87twye2vpd.fsf@berkeley.edu> References: <87twye2vpd.fsf@berkeley.edu> Message-ID: <1424758574040.f6ed2892@Nodemailer>

@tacaswell, I'm aware, hence, > But I didn't want to wait until mpl made their fancy new one =) How are those discussions going? Is there still going to be a custom fancy new cmap? Or are people thinking of settling into an existing map? Even Matlab is done with jet now, thankfully. (Not a huge fan of parula, though. But it's an improvement.)

On Sun, Feb 22, 2015 at 7:32 PM, Stefan van der Walt wrote: > On 2015-02-21 15:53:47, Juan Nunez-Iglesias > wrote: >> In the new MPL imshow plugin (which I recently wrote), we switch >> from grayscale to cubehelix when the dynamic range is too low to >> be displayed on a common monitor (1/255): > Importantly, it should be clear and intuitive to users when the > switch happens. Why do we not always use cubehelix? Apart from > that it is a rather ugly colormap. > Stéfan -------------- next part -------------- An HTML attachment was scrubbed...
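[Editor's note] The switch Juan describes is only a few lines of logic. Below is an illustrative reimplementation, not the plugin's actual source (which lives at the matplotlib_plugin.py link quoted above): grayscale by default, cubehelix plus a colorbar when the dynamic range is below the monitor-based 1/255 threshold. Swapping `1.0 / 255` for `1.0 / 30` would implement Juan's perception-based proposal.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

def smart_imshow(image, low_range=1.0 / 255):
    """Show `image` in grayscale unless its dynamic range (on a 0-1
    scale) is below `low_range`; then switch to cubehelix and add a
    colorbar, so it is obvious that the display was rescaled.
    Illustrative sketch only; `smart_imshow` is a hypothetical name."""
    image = np.asarray(image, dtype=float)
    low_contrast = image.max() - image.min() < low_range
    cmap = "cubehelix" if low_contrast else "gray"
    axim = plt.imshow(image, cmap=cmap, vmin=image.min(), vmax=image.max())
    if low_contrast:
        plt.colorbar(axim)
    return cmap

grad = np.linspace(0, 1, 16).reshape(4, 4)  # full dynamic range
flat = np.full((4, 4), 0.5)
flat[0, 0] = 0.5005                         # range 0.0005, far below 1/255
print(smart_imshow(grad))  # gray
print(smart_imshow(flat))  # cubehelix
```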
URL:

From tcaswell at gmail.com Tue Feb 24 00:14:01 2015 From: tcaswell at gmail.com (Thomas Caswell) Date: Tue, 24 Feb 2015 05:14:01 +0000 Subject: Distinguishable shades of grey References: <87twye2vpd.fsf@berkeley.edu> <1424611857980.389fb7ac@Nodemailer> Message-ID:

As a side note, default jet is going away 'soon'. Tom

On Sun Feb 22 2015 at 8:31:01 AM Juan Nunez-Iglesias wrote: > Any time that cubehelix is used, a colorbar is shown. So I think it's > clear to the user what is going on. > > As to why we don't always use it, the fact is that when people load a > grayscale (natural) image, they expect it to appear in grayscale. This is > true of most of our examples, e.g., coins, camera. plt.imshow shows camera > in jet by default, which is appalling *and* surprising to the user. > Cubehelix would be merely surprising. =P > > On the other hand, it's generally unlikely that an image that shows no > perceptually distinguishable shades of grey is actually a natural image. In > those cases, I think it's appropriate to show cubehelix. (Or another > suitable colormap. But I didn't want to wait until mpl made their fancy new > one.) > > The key here is that imshow is a command made interactively to *explore > one's data*. One wants to know at a glance what is going on. It's > infuriating to load up an image and have it appear as a black rectangle > because its range in 16-bit grayscale is really low. Then I have to go type > a much longer command to get it to show the way I want. So I think in these > cases a little magic is justified that will get people the result they want > in one short command that will work as expected most of the time. > > --
> Sent from Mailbox > > On Sun, Feb 22, 2015 at 7:32 PM, Stefan van der Walt > wrote: >> On 2015-02-21 15:53:47, Juan Nunez-Iglesias >> wrote: >> > In the new MPL imshow plugin (which I recently wrote), we switch >> > from grayscale to cubehelix when the dynamic range is too low to >> > be displayed on a common monitor (1/255): >> >> Importantly, it should be clear and intuitive to users when the >> switch happens. Why do we not always use cubehelix? Apart from >> that it is a rather ugly colormap. >> >> Stéfan -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jni.soma at gmail.com Tue Feb 24 08:51:51 2015 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Tue, 24 Feb 2015 05:51:51 -0800 (PST) Subject: Distinguishable shades of grey In-Reply-To: References: Message-ID: <1424785911383.a0a69b5a@Nodemailer>

> I reckon it would be similar to the middle colormap here: http://earthobservatory.nasa.gov/blogs/elegantfigures/files/2013/08/three_perceptual_palettes_618.png (from the elegant figures blog series linked above), which I've always found quite attractive.

<3! It's gorgeous! I don't like the idea floating around later in the thread to compress the lightness range. At least for images, that's a terrible idea, limiting your dynamic range for no good reason.
For scatterplots and other things with light backgrounds, I understand the logic, but perhaps a different range for different uses is warranted... Will comment something more complete to the mpl list directly in the morning... Thanks for the update and good night!

On Tue, Feb 24, 2015 at 11:48 PM, Thomas Caswell wrote: > Sorry, that is what I get for skimming emails too late at night :) > They are coming along, probably with a new color map rather than an > existing one. See > http://matplotlib.1069221.n5.nabble.com/release-strategy-and-the-color-revolution-td44929.html > Any feedback or suggestions on test patterns would be great. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jeanpatrick.pommier at gmail.com Tue Feb 24 11:21:10 2015 From: jeanpatrick.pommier at gmail.com (Jean-Patrick Pommier) Date: Tue, 24 Feb 2015 08:21:10 -0800 (PST) Subject: Finding pairs of images (homologous chromosomes) Message-ID: <907817f6-d601-4932-ba98-e09241022e68@googlegroups.com>

Dear All, I am trying to make pairs of images from the following set of images (chromosomes sorted by size after rotation). The idea is to make a feature vector for unsupervised classification (kmeans with 19 clusters).

From each chromosome an integral image was calculated:

plt.figure(figsize=(15, 15))
gs1 = gridspec.GridSpec(6, 8)
gs1.update(wspace=0.0, hspace=0.0)  # set the spacing between axes.
for i in range(38):
    # i = i + 1  # grid spec indexes from 0
    ax1 = plt.subplot(gs1[i])
    plt.axis('off')
    ax1.set_xticklabels([])
    ax1.set_yticklabels([])
    ax1.set_aspect('equal')
    image = sk.transform.integral_image(reallysorted[i][:, :, 2])
    imshow(image, interpolation='nearest')

Then each integral image was flattened and combined with the others:

Features = []
for i in range(38):
    Feat = np.ndarray.flatten(sk.transform.integral_image(reallysorted[i][:, :, 2]))
    Features.append(Feat)
X = np.asarray(Features)
print X.shape

The X array contains *38* lines and 9718 features, which is not good. However, I tried to submit these raw features to kmeans classification with sklearn using a direct example:

from sklearn.neighbors import NearestNeighbors
nbrs = NearestNeighbors(n_neighbors=19, algorithm='ball_tree').fit(X)
distances, indices = nbrs.kneighbors(X)
connection = nbrs.kneighbors_graph(X).toarray()

Plotting the connection graph shows that a chromosome is similar to more than one...

- Do you think that integral images can be used to discriminate the chromosome pairs?
- If so, how to reduce the number of features to 10~20? (to get a better discrimination)

Thanks for your advice.

Jean-Patrick

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pigDAPI_bySize.png Type: image/png Size: 80585 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: IntegralImages.png Type: image/png Size: 66034 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Connection_graph.png Type: image/png Size: 6846 bytes Desc: not available URL:

From tcaswell at gmail.com Tue Feb 24 07:48:30 2015 From: tcaswell at gmail.com (Thomas Caswell) Date: Tue, 24 Feb 2015 12:48:30 +0000 Subject: Distinguishable shades of grey Message-ID:

Sorry, that is what I get for skimming emails too late at night :) They are coming along, probably with a new color map rather than an existing one. See http://matplotlib.1069221.n5.nabble.com/release-strategy-and-the-color-revolution-td44929.html Any feedback or suggestions on test patterns would be great. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From silvertrumpet999 at gmail.com Tue Feb 24 18:30:41 2015 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Tue, 24 Feb 2015 15:30:41 -0800 (PST) Subject: Finding pairs of images (homologous chromosomes) In-Reply-To: <907817f6-d601-4932-ba98-e09241022e68@googlegroups.com> References: <907817f6-d601-4932-ba98-e09241022e68@googlegroups.com> Message-ID:

Neat problem! For feature extraction, `skimage.feature` is probably your friend. Nothing against integral images, but I'm not sure they are going to give you an ideal feature set for discrimination (you can see that visually). Also, attempting to normalize your input data might be worth looking into at some point as it appears exposure is not uniform. As a first pass, you could feed raw grayscale values straight into e.g. a Bernoulli Restricted Boltzmann machine, or check out scikit-learn's excellent tutorial on digit recognition. Though for both of those options, the performance is going to be strongly dependent on the quality and - especially - quantity of the training set. Beyond that, thresholding and a skeletonization with `skimage.morphology.skeletonize` might give you informative morphology data to feed into a classifier.
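[Editor's note] On the specific question of getting from 9718 features down to 10~20: one standard option, not spelled out in the replies, is to project the 38 flattened vectors onto their first principal components before the neighbour search. A sketch with random stand-in data in place of the real integral images (38 samples × 9718 features, as in the post):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
X = rng.rand(38, 9718)  # stand-in for the 38 flattened integral images

# With only 38 samples, PCA yields at most 37 meaningful components;
# n_components=15 lands in the 10-20 range asked for.
pca = PCA(n_components=15)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)  # (38, 15)

# The reduced matrix drops straight into the NearestNeighbors snippet
# from the original post.
nbrs = NearestNeighbors(n_neighbors=19, algorithm='ball_tree').fit(X_reduced)
distances, indices = nbrs.kneighbors(X_reduced)
print(indices.shape)    # (38, 19)
```

Whether 15 components discriminate the chromosome pairs depends on the features themselves, which is Josh's point above about integral images versus `skimage.feature` descriptors.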
Best of luck,
Josh

On Tuesday, February 24, 2015 at 10:21:10 AM UTC-6, Jean-Patrick Pommier wrote:
> [full quote of the original message snipped]

-------------- next part --------------
An HTML attachment was scrubbed...
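To illustrate the skeletonization idea Josh suggests, here is a minimal sketch on a synthetic binary mask (the rectangular mask below is a made-up stand-in; a real one would come from thresholding a chromosome image):

```python
import numpy as np
from skimage.morphology import skeletonize

# Toy binary "chromosome": a thick vertical band, standing in for a
# thresholded chromosome mask.
mask = np.zeros((30, 30), dtype=bool)
mask[5:25, 12:18] = True

# Reduce the mask to a 1-pixel-wide medial line.
skeleton = skeletonize(mask)

# Simple shape descriptors one could feed to a classifier:
skeleton_length = skeleton.sum()  # number of skeleton pixels
area = mask.sum()                 # mask area in pixels
print(skeleton_length, area)
```

The skeleton length relative to the area gives a crude elongation measure, which is the kind of morphology feature the suggestion above is aiming at.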
URL: 

From jeanpatrick.pommier at gmail.com Wed Feb 25 08:09:39 2015
From: jeanpatrick.pommier at gmail.com (Jean-Patrick Pommier)
Date: Wed, 25 Feb 2015 05:09:39 -0800 (PST)
Subject: Finding pairs of images (homologous chromosomes)
In-Reply-To: <907817f6-d601-4932-ba98-e09241022e68@googlegroups.com>
References: <907817f6-d601-4932-ba98-e09241022e68@googlegroups.com>
Message-ID: 

Thank you for the links.

Regarding the RBM classifier in the example you linked: at first sight I don't understand what the Y array is (the X array seems to be the set of images).

Jean-Patrick

On Tuesday, 24 February 2015 at 17:21:10 UTC+1, Jean-Patrick Pommier wrote:
> [full quote of the original message snipped]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From silvertrumpet999 at gmail.com Wed Feb 25 17:55:10 2015
From: silvertrumpet999 at gmail.com (Josh Warner)
Date: Wed, 25 Feb 2015 14:55:10 -0800 (PST)
Subject: Finding pairs of images (homologous chromosomes)
In-Reply-To: 
References: <907817f6-d601-4932-ba98-e09241022e68@googlegroups.com>
Message-ID: <699716f2-45e3-4db2-a87c-4128abcc8184@googlegroups.com>

Hi Jean-Patrick,

Y is the known corresponding digit identity. The function to "jitter" the digit images around a bit just takes digits.target as Y and concatenates it with itself five times, so the expanded dataset has known identities to compare against.

Regards,
Josh

On Wednesday, February 25, 2015 at 7:09:39 AM UTC-6, Jean-Patrick Pommier wrote:
> [quoted reply and nested original message snipped]

-------------- next part --------------
An HTML attachment was scrubbed...
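Josh's description of the Y array can be sketched as follows. This is an illustrative reconstruction of the "jitter" idea, not the exact code from the scikit-learn example (here `scipy.ndimage.shift` stands in for whatever shifting the example actually uses):

```python
import numpy as np
from sklearn.datasets import load_digits
from scipy.ndimage import shift

digits = load_digits()
X = digits.images   # (1797, 8, 8) digit images
Y = digits.target   # (1797,) known digit identities -- the "Y array"

# "Jitter" each image by one pixel in four directions and stack the
# copies, replicating the labels so every shifted image keeps its
# known identity (five copies of the data in total).
shifts = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
X_big = np.concatenate([
    np.array([shift(img, s, mode='constant') for img in X]) for s in shifts
])
Y_big = np.concatenate([Y] * len(shifts))
print(X_big.shape, Y_big.shape)  # (8985, 8, 8) (8985,)
```

The point is simply that Y grows in lockstep with X, so the classifier can still be scored against known identities after the expansion.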
URL: 

From ciaran.robb at googlemail.com Thu Feb 26 11:51:18 2015
From: ciaran.robb at googlemail.com (ciaran.robb at googlemail.com)
Date: Thu, 26 Feb 2015 08:51:18 -0800 (PST)
Subject: regionprops - displaying region properties
In-Reply-To: 
References: 
Message-ID: <46469c78-2cfb-4c8c-913a-a639745c4ab9@googlegroups.com>

Hi,

Adding to my own post, but hey... I have since written my own code which allows visualising region properties (e.g. area, eccentricity, etc.) via a colormap. If anyone is interested, let me know!

Ciaran

On Sunday, February 1, 2015 at 11:45:44 PM UTC, ciara... at googlemail.com wrote:
> [full quote of the original message snipped]

-------------- next part --------------
An HTML attachment was scrubbed...
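One way to build the label-to-property image this thread describes, sketched on a toy labeled array. The "list index out of range" in the original post most likely comes from indexing the regionprops result list by label value: labels start at 1 but the list is 0-based, so looking up each region by its `.label` attribute avoids it:

```python
import numpy as np
from skimage.measure import label, regionprops

# Toy labeled image (a stand-in for the real `labels` array).
img = np.zeros((10, 10), dtype=int)
img[1:4, 1:4] = 1    # 3x3 region, area 9
img[6:9, 5:10] = 1   # 3x5 region, area 15
labels = label(img)

# Paint each region with its own area; background stays 0.
# regionprops returns a 0-indexed list while labels start at 1,
# so match regions via prop.label instead of Props[label].
out = np.zeros(labels.shape, dtype=float)
for prop in regionprops(labels):
    out[labels == prop.label] = prop.area

print(out.max())  # area of the largest region
```

Using a float `out` array also avoids silently truncating non-integer properties (e.g. eccentricity) the way `np.zeros_like(labels)` would.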
URL: 

From benjamin.cichy at gmail.com Fri Feb 27 17:11:34 2015
From: benjamin.cichy at gmail.com (Benjamin Cichy)
Date: Fri, 27 Feb 2015 14:11:34 -0800 (PST)
Subject: Building 0.11dev - issues
Message-ID: <9b878619-0b6f-4d39-af1a-b97f384940e1@googlegroups.com>

Hi all,

I am attempting this install on Windows with `>pip install .` as per the instructions. The last compiler I have access to is Visual Studio 10, so according to scikit-learn, and digging through the compiler script, it should be the last one recognized for .dll compilation before Python 3.5. I have the following errors right at the end of the build. This is under a fresh Anaconda install, with all the packages updated.

creating build\temp.win-amd64-3.4\Release\skimage\_shared
C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC:\Anaconda3\lib\core\include -IC:\Anaconda3\include -IC:\Anaconda3\include /Tcskimage\_shared\geometry.c /Fobuild\temp.win-amd64-3.4\Release\skimagej
Found executable C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\cl.exe
C:\Anaconda3\lib\site-packages\setuptools-12.2-py3.4.egg\setuptools\dist.py:282: UserWarning: Normalizing '0.11dev' to '0.11.dev
error: Command "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IC-packages\numpy\core\include -IC:\Anaconda3\include -IC:\Anaconda3\include /Tcskimage\_shared\geometry.c /Fobuild\temp.win-amd64-3.4ared\geometry.obj" failed with exit status 2
geometry.c
C:\Anaconda3\include\pyconfig.h(68) : fatal error C1083: Cannot open include file: 'io.h': No such file or directory
----------------------------------------
Rolling back uninstall of scikit-image
Command "C:\Anaconda3\python.exe -c "import setuptools, tokenize;__file__='C:\\cygwin64\\tmp\\pip-jkks4ooy-build\\setup.py';execenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record C:\cygwin64\tmp\pip-xqr2l87y-recor --single-version-externally-managed --compile" failed with error code 1 in C:\cygwin64\tmp\pip-jkks4ooy-build

Any suggestions?

-Ben
-------------- next part --------------
An HTML attachment was scrubbed...
URL: