From jeanpatrick.pommier at gmail.com Thu Nov 8 11:59:54 2012 From: jeanpatrick.pommier at gmail.com (jip) Date: Thu, 8 Nov 2012 08:59:54 -0800 (PST) Subject: graph.route_through_array, a route is "outside" Message-ID: <7bd848de-d408-4e42-9498-4acfacc4e0cd@googlegroups.com> Hello, A contour is extracted from a greyscale image ("walls") where the background is set to 255. Several points near the contour are used to compute a path with graph.route_through_array: mincost = graph.route_through_array(walls, p1, p2, fully_connected=True) One found path is not completely inside the array. When fully_connected=False is used all the paths are inside the chromosomes (i.e. they do not cross regions where grey level=255). How to get a minimal cost route "inside" a particle (<>255)? Thank you Jean-Patrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Thu Nov 8 12:57:30 2012 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Thu, 8 Nov 2012 09:57:30 -0800 Subject: graph.route_through_array, a route is "outside" In-Reply-To: <7bd848de-d408-4e42-9498-4acfacc4e0cd@googlegroups.com> References: <7bd848de-d408-4e42-9498-4acfacc4e0cd@googlegroups.com> Message-ID: Hi JP On Thu, Nov 8, 2012 at 8:59 AM, jip wrote: > How to get a minimal cost route "inside" a particle (<>255)? Could you upload a minimal snippet to illustrate this behavior to gist.github.com? Thanks Stéfan From jeanpatrick.pommier at gmail.com Thu Nov 8 14:44:16 2012 From: jeanpatrick.pommier at gmail.com (Jean-Patrick Pommier) Date: Thu, 8 Nov 2012 11:44:16 -0800 (PST) Subject: graph.route_through_array, a route is "outside" In-Reply-To: References: <7bd848de-d408-4e42-9498-4acfacc4e0cd@googlegroups.com> Message-ID: I could simplify my script, but Zachary's message made me understand that I can't directly carry over an idea from binary images to greyscale images.
Thanks JP On Thursday, November 8, 2012 at 18:57:51 UTC+1, Stefan van der Walt wrote: > > Hi JP > > On Thu, Nov 8, 2012 at 8:59 AM, jip > > wrote: > > How to get a minimal cost route "inside" a particle (<>255)? > > Could you upload a minimal snippet to illustrate this behavior to > gist.github.com? > > Thanks > Stéfan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeanpatrick.pommier at gmail.com Thu Nov 8 15:18:41 2012 From: jeanpatrick.pommier at gmail.com (Jean-Patrick Pommier) Date: Thu, 8 Nov 2012 12:18:41 -0800 (PST) Subject: graph.route_through_array, a route is "outside" In-Reply-To: <8F5B8984-8F20-40D9-B945-A3E01094E6A4@yale.edu> References: <7bd848de-d408-4e42-9498-4acfacc4e0cd@googlegroups.com> <8F5B8984-8F20-40D9-B945-A3E01094E6A4@yale.edu> Message-ID: On Thursday, November 8, 2012 at 19:56:15 UTC+1, Zachary Pincus wrote: > > In addition to a snippet as Stéfan suggests, could you include a bit more > information about the expected results vs. what you get? > > Also, what do you mean by this: > > One found path is not completely inside the array. > > Are the coordinates of the output path outside the shape of the array? > (Less than zero or larger than the size along a given axis?) Or do you just > mean that the path is not completely within the non-255 "foreground" region > of the array? > I mean the path is not completely within the non-255 "foreground" > > If the latter, then this is not a bug, and you need to understand better > how the path-finding code works. The documentation for route_through_array > states: > > Simple example of how to use the MCP and MCP_Geometric classes. > > See the MCP and MCP_Geometric class documentation for explanation of the > > path-finding algorithm. > > So read up on what the MCP class is actually doing. There you will find > that it finds the minimum-cost path between the start and end points, > weighted according to the value of the pixel.
(There are important > considerations with the geometric vs. non-geometric weighting that you'll > also need to read about.) > > Given this, clearly a two-pixel long path through pixels with value 255 > will be lower-cost than a 300-pixel path through pixels with value 1. Thus, > 255-valued pixels don't pose an impenetrable barrier or anything, as you > may perhaps be expecting for some reason. Because of a naïve idea based on binary images ... > (Also note that this means zero-valued pixels may not behave as you > imagine either.) The way to get true impenetrable barriers is to use > floating-point images and set the "don't-go-here" pixels to numpy.inf. > > Even so, your example image seems a bit odd -- the "background" is dark on > the image, but you say it's 255-valued? This is an error; I overlaid the path on the original image instead of the one with the background set to 255. > And are you just using the chromosomal staining intensity as the image to > do the pathfinding on (after setting the background to 255)? This would > mean the paths will try to avoid regions of high staining intensity, That's true: the staining is lower where the chromosomes are touching than on the chromatids or on the bright centromeric regions. > which seems odd. > So perhaps you're using an inverted image? This is a fluorescent staining with DAPI; the background is dark. This is why you need to provide much more detail when asking questions. > > Zach > Thanks a lot > JPat > > > On Nov 8, 2012, at 11:59 AM, jip wrote: > > > Hello, > > > > A contour is extracted from a greyscale image ("walls") where the > background is set to 255. Several points near the contour are used to > compute a path with graph.route_through_array: > > mincost = graph.route_through_array(walls ,p1,p2,fully_connected=True) > > > > One found path is not completely inside the array. When > fully_connected=False is used all the paths are inside the chromosomes > (i.e.
they do not cross regions where grey level=255). > > > > How to get a minimal cost route "inside" a particle (<>255)? > > > > Thank you > > > > Jean-Patrick > > > > > > > > -- > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Thu Nov 8 12:38:14 2012 From: tsyu80 at gmail.com (Tony Yu) Date: Thu, 8 Nov 2012 12:38:14 -0500 Subject: Produce overlay between original image and labelled objects In-Reply-To: <8c7bbbae-6f7f-4c11-8f0a-b6c9c6fd4cfc@googlegroups.com> References: <8c7bbbae-6f7f-4c11-8f0a-b6c9c6fd4cfc@googlegroups.com> Message-ID: On Thu, Nov 8, 2012 at 4:30 AM, Frank wrote: > Hi, > > I want to use Python and scikit image for detection, counting and > measuring cells from digital pictures. So far most of the things I want to > do work fine: I just use a global threshold which separates my cells well > from the background. After labelling the thresholded objects the > regionprops function provides many of the features I am interested in > (area, centroid *etc*). > To be completely satisfied, I would like to produce an overlay between the > original image and the identified objects after thresholding for error > checking. I searched a while to find the proper function to do so and then > encountered the mark_boundaries function. However, that one is not working, > because the function is not found after importing the skimage.segmentation > module (ImportError: cannot import name mark_boundaries). Do you have any > suggestions, why it is not working? Or maybe you know a better way to > achieve my goal? > > Many thanks, > > Frank Hi Frank, Are you using the latest release (0.7) or the development version on github? It's a bit unfortunate, but the documentation link on the website goes directly to the dev docs instead of the latest release. 
I believe `mark_boundaries` is only in the development version of scikit-image, but really that was just a slight modification of `visualize_boundaries`, which should be available in 0.7. Note: `visualize_boundaries` doesn't work with grayscale images, so you may need to call `skimage.color.gray2rgb`. Hope that helps. -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeanpatrick.pommier at gmail.com Thu Nov 8 15:45:07 2012 From: jeanpatrick.pommier at gmail.com (Jean-Patrick Pommier) Date: Thu, 8 Nov 2012 12:45:07 -0800 (PST) Subject: graph.route_through_array, a route is "outside" In-Reply-To: <50573856-1990-4D59-9EF5-033F4C3DBD2A@yale.edu> References: <7bd848de-d408-4e42-9498-4acfacc4e0cd@googlegroups.com> <50573856-1990-4D59-9EF5-033F4C3DBD2A@yale.edu> Message-ID: On Thursday, November 8, 2012 at 20:53:55 UTC+1, Zachary Pincus wrote: > > If you want a path that trends toward the "center" of the binary blobs, I > have had luck using the MCP algorithm on suitably-transformed distance maps > from the blobs. (You need to negate the distances and then add an offset to > get the minimum cost to 1. Also, I've found that sometimes taking the log > of the distances gives the path-finding algorithm a little more freedom, or > you can exponentiate them to keep the path more centered.) > This is good to know, I'll keep that in mind. jp > > Zach > > > On Nov 8, 2012, at 2:44 PM, Jean-Patrick Pommier wrote: > > > I could simplify my script but Zachary's message make me understand I > can't directly use an idea from binary image to greyscale image. > > > > Thanks > > > > JP > > > > On Thursday, November 8, 2012 at 18:57:51 UTC+1, Stefan van der Walt wrote: > > Hi JP > > > > On Thu, Nov 8, 2012 at 8:59 AM, jip wrote: > > > How to get a minimal cost route "inside" a particle (<>255)? > > > > Could you upload a minimal snippet to illustrate this behavior to > > gist.github.com?
> > > > Thanks > > Stéfan > > > > -- > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Thu Nov 8 13:56:15 2012 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 8 Nov 2012 13:56:15 -0500 Subject: graph.route_through_array, a route is "outside" In-Reply-To: <7bd848de-d408-4e42-9498-4acfacc4e0cd@googlegroups.com> References: <7bd848de-d408-4e42-9498-4acfacc4e0cd@googlegroups.com> Message-ID: <8F5B8984-8F20-40D9-B945-A3E01094E6A4@yale.edu> In addition to a snippet as Stéfan suggests, could you include a bit more information about the expected results vs. what you get? Also, what do you mean by this: > One found path is not completely inside the array. Are the coordinates of the output path outside the shape of the array? (Less than zero or larger than the size along a given axis?) Or do you just mean that the path is not completely within the non-255 "foreground" region of the array? If the latter, then this is not a bug, and you need to understand better how the path-finding code works. The documentation for route_through_array states: > Simple example of how to use the MCP and MCP_Geometric classes. > See the MCP and MCP_Geometric class documentation for explanation of the > path-finding algorithm. So read up on what the MCP class is actually doing. There you will find that it finds the minimum-cost path between the start and end points, weighted according to the value of the pixel. (There are important considerations with the geometric vs. non-geometric weighting that you'll also need to read about.) Given this, clearly a two-pixel long path through pixels with value 255 will be lower-cost than a 300-pixel path through pixels with value 1. Thus, 255-valued pixels don't pose an impenetrable barrier or anything, as you may perhaps be expecting for some reason. (Also note that this means zero-valued pixels may not behave as you imagine either.)
The way to get true impenetrable barriers is to use floating-point images and set the "don't-go-here" pixels to numpy.inf. Even so, your example image seems a bit odd -- the "background" is dark on the image, but you say it's 255-valued? And are you just using the chromosomal staining intensity as the image to do the pathfinding on (after setting the background to 255)? This would mean the paths will try to avoid regions of high staining intensity, which seems odd. So perhaps you're using an inverted image? This is why you need to provide much more detail when asking questions. Zach On Nov 8, 2012, at 11:59 AM, jip wrote: > Hello, > > A contour is extracted from a greyscale image ("walls") where the background is set to 255. Several points near the contour are used to compute a path with graph.route_through_array: > mincost = graph.route_through_array(walls ,p1,p2,fully_connected=True) > > One found path is not completely inside the array. When fully_connected=False is used all the paths are inside the chromosomes (i.e. they do not cross regions where grey level=255). > > How to get a minimal cost route "inside" a particle (<>255)? > > Thank you > > Jean-Patrick > > > > -- > > From zachary.pincus at yale.edu Thu Nov 8 13:58:01 2012 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 8 Nov 2012 13:58:01 -0500 Subject: graph.route_through_array, a route is "outside" In-Reply-To: <8F5B8984-8F20-40D9-B945-A3E01094E6A4@yale.edu> References: <7bd848de-d408-4e42-9498-4acfacc4e0cd@googlegroups.com> <8F5B8984-8F20-40D9-B945-A3E01094E6A4@yale.edu> Message-ID: On Nov 8, 2012, at 1:56 PM, Zachary Pincus wrote: > Given this, clearly a two-pixel long path through pixels with value 255 will be lower-cost than a 300-pixel path through pixels with value 1. Err, sorry, this is clearly in error. A two-pixel long path through pixels with value 255 will be lower-cost than a *600-pixel* path through pixels with value 1. 
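The numpy.inf barrier technique Zach describes can be illustrated with a small self-contained sketch. This is not code from the thread: the toy cost array, the wall position, and the start/end points are invented for the example; only `skimage.graph.route_through_array` itself comes from the discussion.

```python
import numpy as np
from skimage import graph

# Toy floating-point cost image: cheap (1.0) pixels everywhere, plus a
# vertical wall of impassable (inf) pixels, open only at the top row.
costs = np.ones((5, 5), dtype=float)
costs[1:, 2] = np.inf  # the "don't-go-here" pixels

start, end = (4, 0), (4, 4)
path, cost = graph.route_through_array(costs, start, end,
                                       fully_connected=True)

# Even though a straight run across row 4 would be far shorter, the
# returned route detours through the opening at row 0 and never steps
# on an inf-valued pixel, so its total cost stays finite.
print(path[0], path[-1], cost)
```

Any route touching an inf-valued pixel has infinite total cost, so the minimum-cost search always prefers a finite alternative; this is what makes inf an absolute barrier where 255 is merely expensive.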
From zachary.pincus at yale.edu Thu Nov 8 14:53:55 2012 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 8 Nov 2012 14:53:55 -0500 Subject: graph.route_through_array, a route is "outside" In-Reply-To: References: <7bd848de-d408-4e42-9498-4acfacc4e0cd@googlegroups.com> Message-ID: <50573856-1990-4D59-9EF5-033F4C3DBD2A@yale.edu> If you want a path that trends toward the "center" of the binary blobs, I have had luck using the MCP algorithm on suitably-transformed distance maps from the blobs. (You need to negate the distances and then add an offset to get the minimum cost to 1. Also, I've found that sometimes taking the log of the distances gives the path-finding algorithm a little more freedom, or you can exponentiate them to keep the path more centered.) Zach On Nov 8, 2012, at 2:44 PM, Jean-Patrick Pommier wrote: > I could simplify my script but Zachary's message make me understand I can't directly use an idea from binary image to greyscale image. > > Thanks > > JP > > On Thursday, November 8, 2012 at 18:57:51 UTC+1, Stefan van der Walt wrote: > Hi JP > > On Thu, Nov 8, 2012 at 8:59 AM, jip wrote: > > How to get a minimal cost route "inside" a particle (<>255)? > > Could you upload a minimal snippet to illustrate this behavior to > gist.github.com? > > Thanks > Stéfan > > -- > > From hannesschoenberger at gmail.com Thu Nov 8 13:15:22 2012 From: hannesschoenberger at gmail.com (Schönberger Johannes) Date: Thu, 8 Nov 2012 19:15:22 +0100 Subject: Produce overlay between original image and labelled objects In-Reply-To: <8c7bbbae-6f7f-4c11-8f0a-b6c9c6fd4cfc@googlegroups.com> References: <8c7bbbae-6f7f-4c11-8f0a-b6c9c6fd4cfc@googlegroups.com> Message-ID: Hi, you could also have a look at the following example: http://scikit-image.org/docs/dev/auto_examples/plot_label.html Johannes Schönberger On 08.11.2012 at 10:30, Frank wrote: > Hi, > > I want to use Python and scikit image for detection, counting and measuring cells from digital pictures.
So far most of the things I want to do work fine: I just use a global threshold which separates my cells well from the background. After labelling the thresholded objects the regionprops function provides many of the features I am interested in (area, centroid etc). > To be completely satisfied, I would like to produce an overlay between the original image and the identified objects after thresholding for error checking. I searched a while to find the proper function to do so and then encountered the mark_boundaries function. However, that one is not working, because the function is not found after importing the skimage.segmentation module (ImportError: cannot import name mark_boundaries). Do you have any suggestions, why it is not working? Or maybe you know a better way to achieve my goal? > > Many thanks, > > Frank > > -- > > From pennekampster at googlemail.com Fri Nov 9 03:34:16 2012 From: pennekampster at googlemail.com (Frank) Date: Fri, 9 Nov 2012 00:34:16 -0800 (PST) Subject: Produce overlay between original image and labelled objects In-Reply-To: References: <8c7bbbae-6f7f-4c11-8f0a-b6c9c6fd4cfc@googlegroups.com> Message-ID: <68a5c0aa-f589-4075-a9c3-857c33c6f63b@googlegroups.com> Hi Tony and Johannes, in fact visualize_boundaries works fine and fulfills all I want, so I will stick with it. The bounding box approach seems fine too! Thanks to both of you for the quick response and the helpful hints! Cheers, Frank On Thursday, November 8, 2012 6:38:56 PM UTC+1, Tony S Yu wrote: > > > > On Thu, Nov 8, 2012 at 4:30 AM, Frank > > wrote: > >> Hi, >> >> I want to use Python and scikit image for detection, counting and >> measuring cells from digital pictures. So far most of the things I want to >> do work fine: I just use a global threshold which separates my cells well >> from the background. After labelling the thresholded objects the >> regionprops function provides many of the features I am interested in >> (area, centroid *etc*). 
>> To be completely satisfied, I would like to produce an overlay between >> the original image and the identified objects after thresholding for error >> checking. I searched a while to find the proper function to do so and then >> encountered the mark_boundaries function. However, that one is not working, >> because the function is not found after importing the skimage.segmentation >> module (ImportError: cannot import name mark_boundaries). Do you have any >> suggestions, why it is not working? Or maybe you know a better way to >> achieve my goal? >> >> Many thanks, >> >> Frank > > > Hi Frank, > > Are you using the latest release (0.7) or the development version on > github? It's a bit unfortunate, but the documentation link on the website > goes directly to the dev docs instead of the latest release. > > I believe `mark_boundaries` is only in the development version of > scikit-image, but really that was just a slight modification of > `visualize_boundaries`, which should be available in 0.7. Note: > `visualize_boundaries` doesn't work with grayscale images, so you may need > to call `skimage.color.gray2rgb`. > > Hope that helps. > > -Tony > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pennekampster at googlemail.com Mon Nov 12 07:43:21 2012 From: pennekampster at googlemail.com (Frank) Date: Mon, 12 Nov 2012 04:43:21 -0800 (PST) Subject: Oversplitting by watershed Message-ID: Dear group, I have some issues with the watershed algorithm implemented in scikits image. I use a global threshold to segment cells from background, but some cells touch and I want them to be split. Watershed seems the appropriate way to deal with my problem, however my particles are split in too many pieces. Is there a way to adjust the sensitivity of the watershed method? Many thanks for any suggestion! The code that I use looks like below. 
An example image that I want to process can be downloaded here: https://dl.dropbox.com/u/10373933/test.jpg

# packages needed to perform image processing and analysis
import numpy as np
import scipy as scp
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import scipy.ndimage as nd
import skimage
from skimage import io
from skimage.morphology import watershed, is_local_maximum
from skimage.segmentation import find_boundaries, visualize_boundaries
from skimage.color import gray2rgb

# read jpeg file
image = mpimg.imread('c:\\test.jpg')
image_thresh = image > 140
labels = nd.label(image_thresh)[0]
distance = nd.distance_transform_edt(image_thresh)
local_maxi = is_local_maximum(distance, labels=labels, footprint=np.ones((9, 9)))
markers = nd.label(local_maxi)[0]
labelled_image = watershed(-distance, markers, mask=image_thresh)

# find outline of objects for plotting
boundaries = find_boundaries(labelled_image)
img_rgb = gray2rgb(image)
overlay = np.flipud(visualize_boundaries(img_rgb, boundaries))
plt.imshow(overlay)

-------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Mon Nov 12 18:57:19 2012 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Mon, 12 Nov 2012 15:57:19 -0800 (PST) Subject: Oversplitting by watershed In-Reply-To: References: Message-ID: I've looked at these two algorithms, and the biggest difference seems to be in the output. `is_local_maximum` returns a Boolean array, while `peak_local_max` returns the indices of points corresponding to maxima (a la `np.sort` vs. `np.argsort`, though you cannot just pass the output of `peak_local_max` as indices). The other differences are more subtle, but significant. The API for `peak_local_max` could use some cleanup - the first threshold kwarg is set to 'deprecated'(!) and IMHO should be removed - but this algorithm allows finer control over thresholding and peak searching.
This would be good to avoid finding phantom peaks in noise if large dark regions were present. One significant drawback of this algorithm is that the min_distance kwarg defaults to 10, which is rather arbitrary, and ANY input (even the minimum value of 1) excludes both neighboring pixels AND the border. See example below. In contrast, `is_local_maximum` has a much simpler API. It doesn't have the finer thresholding / peak searching controls, but has a unique ability to search for peaks ONLY within arbitrary, connected, labeled regions. This has some interesting potential for masking etc, though I believe within each label only one peak will be found. This algorithm also has the ability to search arbitrary local regions for peaks using something akin to a morphological structuring element, through the `footprint=` kwarg. The documentation for this could probably be clarified. The way `peak_local_max` excludes borders concerns me for general use, as does its default `min_distance=10`, and personally I would prefer to work around the limitations in `is_local_maximum`. A best-of-both-worlds combination could probably be created without overly much effort...
Snippet showing border-excluding behavior of `peak_local_max`, which will only get worse with higher values of `min_distance`:

import numpy as np
import matplotlib.pyplot as plt
from skimage.feature import peak_local_max
from skimage.morphology import is_local_maximum

# Generate standardized random data
np.random.seed(seed=1234)
testim = np.random.randint(0, 255, size=(20, 20))

# Find peaks using both methods
ismax = is_local_maximum(testim)                  # Boolean image returned
peakmax = peak_local_max(testim, min_distance=1)  # (M, 2) indices returned

# `peakmax` not plottable - placing values in 2d array
Ipeakmax = np.zeros(testim.shape)
Ipeakmax[peakmax[:, 0], peakmax[:, 1]] = 1

# Show the results
fig, ax = plt.subplots(ncols=2, nrows=1)
ax[0].imshow(ismax, cmap='gray')
ax[0].set_title('Peaks found by `is_local_maximum`')
ax[1].imshow(Ipeakmax, cmap='gray')
ax[1].set_title('Peaks found by `peak_local_max`')
plt.show()

On Monday, November 12, 2012 3:57:28 PM UTC-6, Tony S Yu wrote: > > > > On Mon, Nov 12, 2012 at 7:43 AM, Frank wrote: > >> Dear group, >> >> I have some issues with the watershed algorithm implemented in scikits >> image. I use a global threshold to segment cells from background, but some >> cells touch and I want them to be split. Watershed seems the appropriate >> way to deal with my problem, however my particles are split in too many >> pieces. Is there a way to adjust the sensitivity of the watershed method? >> >> Many thanks for any suggestion! >> >> The code that I use looks like below.
An example image that I want to >> process can be downloaded here: >> https://dl.dropbox.com/u/10373933/test.jpg >> >> # packages needed to perform image processing and analysis >> import numpy as np >> import scipy as scp >> import matplotlib.pyplot as plt >> import matplotlib.image as mpimg >> import scipy.ndimage as nd >> import skimage >> from skimage import io >> from skimage.morphology import watershed, is_local_maximum >> from skimage.segmentation import find_boundaries, visualize_boundaries >> from skimage.color import gray2rgb >> >> #read files jpeg file >> image = mpimg.imread('c:\\test.jpg') >> image_thresh = image > 140 >> labels = nd.label(image_thresh)[0] >> distance = nd.distance_transform_edt(image_thresh) >> local_maxi = is_local_maximum(distance, labels=labels, >> footprint=np.ones((9, 9))) >> markers = nd.label(local_maxi)[0] >> labelled_image = watershed(-distance, markers, mask=image_thresh) >> >> #find outline of objects for plotting >> boundaries = find_boundaries(labelled_image) >> img_rgb = gray2rgb(image) >> overlay = np.flipud(visualize_boundaries(img_rgb,boundaries)) >> imshow(overlay) > > > Hi Frank, > > Actually, I don't think the issue is in the watershed segmentation. > Instead, I think the problem is in the marker specification: Using local > maxima creates too many marker points when a blob deviates greatly from a > circle. (BTW, does anyone know if there are any differences between > `is_local_maximum` and `peak_local_max`? Maybe the former should be > deprecated.) > > Using the centroids of blobs gives cleaner results. See slightly-modified > example below. 
> > Best, > -Tony > > # packages needed to perform image processing and analysis > import numpy as np > import matplotlib.pyplot as plt > import scipy.ndimage as nd > > from skimage import io > from skimage import measure > from skimage.morphology import watershed > from skimage.segmentation import find_boundaries, visualize_boundaries > from skimage.color import gray2rgb > > #read files jpeg file > image = io.imread('test.jpg') > > image_thresh = image > 140 > labels = nd.label(image_thresh)[0] > distance = nd.distance_transform_edt(image_thresh) > > props = measure.regionprops(labels, ['Centroid']) > coords = np.array([np.round(p['Centroid']) for p in props], dtype=int) > # Create marker image where blob centroids are marked True > markers = np.zeros(image.shape, dtype=bool) > markers[tuple(np.transpose(coords))] = True > > labelled_image = watershed(-distance, markers, mask=image_thresh) > > #find outline of objects for plotting > boundaries = find_boundaries(labelled_image) > img_rgb = gray2rgb(image) > overlay = visualize_boundaries(img_rgb, boundaries, color=(1, 0, 0)) > > plt.imshow(overlay) > plt.show() > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Mon Nov 12 16:56:47 2012 From: tsyu80 at gmail.com (Tony Yu) Date: Mon, 12 Nov 2012 16:56:47 -0500 Subject: Oversplitting by watershed In-Reply-To: References: Message-ID: On Mon, Nov 12, 2012 at 7:43 AM, Frank wrote: > Dear group, > > I have some issues with the watershed algorithm implemented in scikits > image. I use a global threshold to segment cells from background, but some > cells touch and I want them to be split. Watershed seems the appropriate > way to deal with my problem, however my particles are split in too many > pieces. Is there a way to adjust the sensitivity of the watershed method? > > Many thanks for any suggestion! > > The code that I use looks like below. 
An example image that I want to > process can be downloaded here: https://dl.dropbox.com/u/10373933/test.jpg > > # packages needed to perform image processing and analysis > import numpy as np > import scipy as scp > import matplotlib.pyplot as plt > import matplotlib.image as mpimg > import scipy.ndimage as nd > import skimage > from skimage import io > from skimage.morphology import watershed, is_local_maximum > from skimage.segmentation import find_boundaries, visualize_boundaries > from skimage.color import gray2rgb > > #read files jpeg file > image = mpimg.imread('c:\\test.jpg') > image_thresh = image > 140 > labels = nd.label(image_thresh)[0] > distance = nd.distance_transform_edt(image_thresh) > local_maxi = is_local_maximum(distance, labels=labels, > footprint=np.ones((9, 9))) > markers = nd.label(local_maxi)[0] > labelled_image = watershed(-distance, markers, mask=image_thresh) > > #find outline of objects for plotting > boundaries = find_boundaries(labelled_image) > img_rgb = gray2rgb(image) > overlay = np.flipud(visualize_boundaries(img_rgb,boundaries)) > imshow(overlay) Hi Frank, Actually, I don't think the issue is in the watershed segmentation. Instead, I think the problem is in the marker specification: Using local maxima creates too many marker points when a blob deviates greatly from a circle. (BTW, does anyone know if there are any differences between `is_local_maximum` and `peak_local_max`? Maybe the former should be deprecated.) Using the centroids of blobs gives cleaner results. See slightly-modified example below. 
Best, -Tony

# packages needed to perform image processing and analysis
import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage as nd

from skimage import io
from skimage import measure
from skimage.morphology import watershed
from skimage.segmentation import find_boundaries, visualize_boundaries
from skimage.color import gray2rgb

# read jpeg file
image = io.imread('test.jpg')

image_thresh = image > 140
labels = nd.label(image_thresh)[0]
distance = nd.distance_transform_edt(image_thresh)

props = measure.regionprops(labels, ['Centroid'])
coords = np.array([np.round(p['Centroid']) for p in props], dtype=int)
# Create marker image where blob centroids are marked True
markers = np.zeros(image.shape, dtype=bool)
markers[tuple(np.transpose(coords))] = True

labelled_image = watershed(-distance, markers, mask=image_thresh)

# find outline of objects for plotting
boundaries = find_boundaries(labelled_image)
img_rgb = gray2rgb(image)
overlay = visualize_boundaries(img_rgb, boundaries, color=(1, 0, 0))

plt.imshow(overlay)
plt.show()

-------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Tue Nov 13 00:35:08 2012 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Mon, 12 Nov 2012 21:35:08 -0800 (PST) Subject: Oversplitting by watershed In-Reply-To: References: Message-ID: I can probably put something together. What should the goal be? Expand the featureset of one algorithm, such that the other can be collapsed into a wrapper function with no loss of backwards compatibility, or expand the featureset of one and eliminate the other (carefully changing all internal references to the old function)? The latter might be the best/ideal-world solution, but even if all of the internal references were changed appropriately it could break 3rd-party code. I would lean toward the former option, moving in the direction of `is_local_maximum`, though this does appear to be the slower algorithm at present.
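The first option Josh mentions, expanding one function's featureset so the other collapses into a backwards-compatible wrapper, can be sketched in a generic form. Everything below is a hypothetical illustration: `peak_indices` and `is_peak_mask` are invented stand-ins with a deliberately naive peak search, not skimage's actual `peak_local_max`/`is_local_maximum` implementations.

```python
import warnings
import numpy as np

def peak_indices(image):
    """Hypothetical indices-returning peak finder (the kept implementation).

    Returns an (M, 2) array of (row, col) coordinates of strict local
    maxima over the 8-connected neighborhood. Borders are NOT excluded,
    because out-of-bounds neighbors are padded with -inf.
    """
    padded = np.pad(image.astype(float), 1, mode='constant',
                    constant_values=-np.inf)
    h, w = image.shape
    # Stack the 8 shifted neighbor views and take their pointwise maximum.
    neighbors = np.stack([padded[r:r + h, c:c + w]
                          for r in range(3) for c in range(3)
                          if (r, c) != (1, 1)])
    return np.argwhere(image > neighbors.max(axis=0))

def is_peak_mask(image):
    """Boolean-image API kept alive as a thin deprecated wrapper."""
    warnings.warn("is_peak_mask is deprecated; use peak_indices instead",
                  DeprecationWarning, stacklevel=2)
    mask = np.zeros(image.shape, dtype=bool)
    coords = peak_indices(image)
    mask[tuple(coords.T)] = True
    return mask
```

Old callers keep getting a Boolean array (plus a deprecation warning) while all new behavior lives in a single implementation; that is the "no loss of backwards compatibility" route, at the cost of carrying the old name for a few releases.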
On Monday, November 12, 2012 9:43:27 PM UTC-6, Tony S Yu wrote: > > Thanks for the detailed response! > > On Mon, Nov 12, 2012 at 6:57 PM, Josh Warner > > wrote: > >> I've looked at these two algorithms, and the biggest difference seems to >> be in the output. `is_local_maximum` returns a Boolean array, while >> `peak_local_max` returns the indices of points corresponding to maxima (a >> la `np.sort` vs. `np.argsort`, though you cannot just pass the output of >> `peak_local_max` as indices). >> >> The other differences are more subtle, but significant. >> >> The API for `peak_local_max` could use some cleanup - the first threshold >> kwarg is set to 'deprecated'(!) and IMHO should be removed - but this >> algorithm allows finer control over thresholding and peak searching. >> > > > I think it's marked 'deprecated' so that it can be removed gracefully > (i.e. it gives people time to change their code). That said, it was marked > 'deprecated' a few releases back, so it's probably ripe for removal. > > > >> This would be good to avoid finding phantom peaks in noise if large dark >> regions were present. One significant drawback of this algorithm is the >> min_distance kwarg is set by default to 10, rather arbitrary, and ANY input >> (even the minimum value of 1) excludes both neighboring pixels AND the >> border. See example below. >> >> In contrast, `is_local_maximum` has a much simpler API. It doesn't have >> the finer thresholding / peak searching controls, but has a unique ability >> to search for peaks ONLY within arbitrary, connected, labeled regions. >> This has some interesting potentials for masking etc, though I believe >> within each label only one peak will be found. This algorithm also has the >> ability to search arbitrary local regions for peaks using something akin to >> a morphological structuring element, through the `footprint=` kwarg. The >> documentation for this could probably be clarified. 
>> >> The way `peak_local_max` excludes borders concerns me for general use, as >> does its default `min_distance=10`, and personally I would prefer to wok >> around the limitations in `is_local_maximum`. >> >> A best-of-both-worlds combination could probably be created without >> overly much effort... >> > > > This is a great summary of the API differences. I agree that excluding the > border region is a bit of a wart. (My guess is that this could be fixed by > changing the boundary condition on the maximum filter used in > `peak_local_max`.) Although I agree that defaulting to `min_distance=10` is > arbitrary, I'm not sure there's an obvious choice. I would normally assume > that peaks separated by 1 pixel are just noise. > > The footprint and mask parameter would definitely be a nice addition to > `peak_local_max`. > > If you have time, a pull request to address some or all of these issues > would be great. If not, maybe you could this as an issue on github? > > Thanks, > -Tony > > > >> >> >> Snippet showing border-excluding behavior of `peak_local_max`, which will >> only get worse with higher values of `min_distance`: >> >> import numpy as np >> import matplotlib.pyplot as plt >> from skimage.feature import peak_local_max >> from skimage.morphology import is_local_maximum >> >> # Generate standardized random data >> np.random.seed(seed=1234) >> testim = np.random.randint(0, 255, size=(20, 20)) >> >> # Find peaks using both methods >> ismax = is_local_maximum(testim) # Boolean image >> returned >> peakmax = peak_local_max(testim, min_distance=1) # (M, 2) indices >> returned >> >> # `peakmax` not plottable - placing values in 2d array >> Ipeakmax = np.zeros(testim.shape) >> Ipeakmax[peakmax[:, 0], peakmax[:, 1]] = 1 >> >> # Show the results >> fig, ax = plt.subplots(ncols=2, nrows=1) >> ax[0].imshow(ismax, cmap='gray') >> ax[0].set_title('Peaks found by `is_local_maximum`') >> ax[1].imshow(Ipeakmax, cmap='gray') >> ax[1].set_title('Peaks found by 
`peak_local_max`') >> >> plt.show() >> >> >> >> On Monday, November 12, 2012 3:57:28 PM UTC-6, Tony S Yu wrote: >> >>> >>> >>> On Mon, Nov 12, 2012 at 7:43 AM, Frank wrote: >>> >>>> Dear group, >>>> >>>> I have some issues with the watershed algorithm implemented in scikits >>>> image. I use a global threshold to segment cells from background, but some >>>> cells touch and I want them to be split. Watershed seems the appropriate >>>> way to deal with my problem, however my particles are split in too many >>>> pieces. Is there a way to adjust the sensitivity of the watershed method? >>>> >>>> Many thanks for any suggestion! >>>> >>>> The code that I use looks like below. An example image that I want to >>>> process can be downloaded here: https://dl.dropbox.com/u/** >>>> 10373933/test.jpg >>>> >>>> # packages needed to perform image processing and analysis >>>> import numpy as np >>>> import scipy as scp >>>> import matplotlib.pyplot as plt >>>> import matplotlib.image as mpimg >>>> import scipy.ndimage as nd >>>> import skimage >>>> from skimage import io >>>> from skimage.morphology import watershed, is_local_maximum >>>> from skimage.segmentation import find_boundaries, visualize_boundaries >>>> from skimage.color import gray2rgb >>>> >>>> #read files jpeg file >>>> image = mpimg.imread('c:\\test.jpg') >>>> image_thresh = image > 140 >>>> labels = nd.label(image_thresh)[0] >>>> distance = nd.distance_transform_edt(**image_thresh) >>>> local_maxi = is_local_maximum(distance, labels=labels, >>>> footprint=np.ones((9, 9))) >>>> markers = nd.label(local_maxi)[0] >>>> labelled_image = watershed(-distance, markers, mask=image_thresh) >>>> >>>> #find outline of objects for plotting >>>> boundaries = find_boundaries(labelled_**image) >>>> img_rgb = gray2rgb(image) >>>> overlay = np.flipud(visualize_**boundaries(img_rgb,boundaries)**) >>>> imshow(overlay) >>> >>> >>> Hi Frank, >>> >>> Actually, I don't think the issue is in the watershed segmentation. 
>>> Instead, I think the problem is in the marker specification: Using local >>> maxima creates too many marker points when a blob deviates greatly from a >>> circle. (BTW, does anyone know if there are any differences between >>> `is_local_maximum` and `peak_local_max`? Maybe the former should be >>> deprecated.) >>> >>> Using the centroids of blobs gives cleaner results. See >>> slightly-modified example below. >>> >>> Best, >>> -Tony >>> >>> # packages needed to perform image processing and analysis >>> import numpy as np >>> import matplotlib.pyplot as plt >>> import scipy.ndimage as nd >>> >>> from skimage import io >>> from skimage import measure >>> from skimage.morphology import watershed >>> from skimage.segmentation import find_boundaries, visualize_boundaries >>> from skimage.color import gray2rgb >>> >>> #read files jpeg file >>> image = io.imread('test.jpg') >>> >>> image_thresh = image > 140 >>> labels = nd.label(image_thresh)[0] >>> distance = nd.distance_transform_edt(**image_thresh) >>> >>> props = measure.regionprops(labels, ['Centroid']) >>> coords = np.array([np.round(p['**Centroid']) for p in props], dtype=int) >>> # Create marker image where blob centroids are marked True >>> markers = np.zeros(image.shape, dtype=bool) >>> markers[tuple(np.transpose(**coords))] = True >>> >>> labelled_image = watershed(-distance, markers, mask=image_thresh) >>> >>> #find outline of objects for plotting >>> boundaries = find_boundaries(labelled_**image) >>> img_rgb = gray2rgb(image) >>> overlay = visualize_boundaries(img_rgb, boundaries, color=(1, 0, 0)) >>> >>> plt.imshow(overlay) >>> plt.show() >>> >> -- >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Mon Nov 12 22:42:44 2012 From: tsyu80 at gmail.com (Tony Yu) Date: Mon, 12 Nov 2012 22:42:44 -0500 Subject: Oversplitting by watershed In-Reply-To: References: Message-ID: Thanks for the detailed response! 
On Mon, Nov 12, 2012 at 6:57 PM, Josh Warner wrote: > I've looked at these two algorithms, and the biggest difference seems to > be in the output. `is_local_maximum` returns a Boolean array, while > `peak_local_max` returns the indices of points corresponding to maxima (a > la `np.sort` vs. `np.argsort`, though you cannot just pass the output of > `peak_local_max` as indices). > > The other differences are more subtle, but significant. > > The API for `peak_local_max` could use some cleanup - the first threshold > kwarg is set to 'deprecated'(!) and IMHO should be removed - but this > algorithm allows finer control over thresholding and peak searching. > I think it's marked 'deprecated' so that it can be removed gracefully (i.e. it gives people time to change their code). That said, it was marked 'deprecated' a few releases back, so it's probably ripe for removal. > This would be good to avoid finding phantom peaks in noise if large dark > regions were present. One significant drawback of this algorithm is the > min_distance kwarg is set by default to 10, rather arbitrary, and ANY input > (even the minimum value of 1) excludes both neighboring pixels AND the > border. See example below. > > In contrast, `is_local_maximum` has a much simpler API. It doesn't have > the finer thresholding / peak searching controls, but has a unique ability > to search for peaks ONLY within arbitrary, connected, labeled regions. > This has some interesting potentials for masking etc, though I believe > within each label only one peak will be found. This algorithm also has the > ability to search arbitrary local regions for peaks using something akin to > a morphological structuring element, through the `footprint=` kwarg. The > documentation for this could probably be clarified. > > The way `peak_local_max` excludes borders concerns me for general use, as > does its default `min_distance=10`, and personally I would prefer to wok > around the limitations in `is_local_maximum`. 
> > A best-of-both-worlds combination could probably be created without overly > much effort... > This is a great summary of the API differences. I agree that excluding the border region is a bit of a wart. (My guess is that this could be fixed by changing the boundary condition on the maximum filter used in `peak_local_max`.) Although I agree that defaulting to `min_distance=10` is arbitrary, I'm not sure there's an obvious choice. I would normally assume that peaks separated by 1 pixel are just noise. The footprint and mask parameter would definitely be a nice addition to `peak_local_max`. If you have time, a pull request to address some or all of these issues would be great. If not, maybe you could this as an issue on github? Thanks, -Tony > > > Snippet showing border-excluding behavior of `peak_local_max`, which will > only get worse with higher values of `min_distance`: > > import numpy as np > import matplotlib.pyplot as plt > from skimage.feature import peak_local_max > from skimage.morphology import is_local_maximum > > # Generate standardized random data > np.random.seed(seed=1234) > testim = np.random.randint(0, 255, size=(20, 20)) > > # Find peaks using both methods > ismax = is_local_maximum(testim) # Boolean image > returned > peakmax = peak_local_max(testim, min_distance=1) # (M, 2) indices > returned > > # `peakmax` not plottable - placing values in 2d array > Ipeakmax = np.zeros(testim.shape) > Ipeakmax[peakmax[:, 0], peakmax[:, 1]] = 1 > > # Show the results > fig, ax = plt.subplots(ncols=2, nrows=1) > ax[0].imshow(ismax, cmap='gray') > ax[0].set_title('Peaks found by `is_local_maximum`') > ax[1].imshow(Ipeakmax, cmap='gray') > ax[1].set_title('Peaks found by `peak_local_max`') > > plt.show() > > > > On Monday, November 12, 2012 3:57:28 PM UTC-6, Tony S Yu wrote: > >> >> >> On Mon, Nov 12, 2012 at 7:43 AM, Frank wrote: >> >>> Dear group, >>> >>> I have some issues with the watershed algorithm implemented in scikits >>> image. 
I use a global threshold to segment cells from background, but some >>> cells touch and I want them to be split. Watershed seems the appropriate >>> way to deal with my problem, however my particles are split in too many >>> pieces. Is there a way to adjust the sensitivity of the watershed method? >>> >>> Many thanks for any suggestion! >>> >>> The code that I use looks like below. An example image that I want to >>> process can be downloaded here: https://dl.dropbox.com/u/** >>> 10373933/test.jpg >>> >>> # packages needed to perform image processing and analysis >>> import numpy as np >>> import scipy as scp >>> import matplotlib.pyplot as plt >>> import matplotlib.image as mpimg >>> import scipy.ndimage as nd >>> import skimage >>> from skimage import io >>> from skimage.morphology import watershed, is_local_maximum >>> from skimage.segmentation import find_boundaries, visualize_boundaries >>> from skimage.color import gray2rgb >>> >>> #read files jpeg file >>> image = mpimg.imread('c:\\test.jpg') >>> image_thresh = image > 140 >>> labels = nd.label(image_thresh)[0] >>> distance = nd.distance_transform_edt(**image_thresh) >>> local_maxi = is_local_maximum(distance, labels=labels, >>> footprint=np.ones((9, 9))) >>> markers = nd.label(local_maxi)[0] >>> labelled_image = watershed(-distance, markers, mask=image_thresh) >>> >>> #find outline of objects for plotting >>> boundaries = find_boundaries(labelled_**image) >>> img_rgb = gray2rgb(image) >>> overlay = np.flipud(visualize_**boundaries(img_rgb,boundaries)**) >>> imshow(overlay) >> >> >> Hi Frank, >> >> Actually, I don't think the issue is in the watershed segmentation. >> Instead, I think the problem is in the marker specification: Using local >> maxima creates too many marker points when a blob deviates greatly from a >> circle. (BTW, does anyone know if there are any differences between >> `is_local_maximum` and `peak_local_max`? Maybe the former should be >> deprecated.) 
>> >> Using the centroids of blobs gives cleaner results. See slightly-modified >> example below. >> >> Best, >> -Tony >> >> # packages needed to perform image processing and analysis >> import numpy as np >> import matplotlib.pyplot as plt >> import scipy.ndimage as nd >> >> from skimage import io >> from skimage import measure >> from skimage.morphology import watershed >> from skimage.segmentation import find_boundaries, visualize_boundaries >> from skimage.color import gray2rgb >> >> #read files jpeg file >> image = io.imread('test.jpg') >> >> image_thresh = image > 140 >> labels = nd.label(image_thresh)[0] >> distance = nd.distance_transform_edt(**image_thresh) >> >> props = measure.regionprops(labels, ['Centroid']) >> coords = np.array([np.round(p['**Centroid']) for p in props], dtype=int) >> # Create marker image where blob centroids are marked True >> markers = np.zeros(image.shape, dtype=bool) >> markers[tuple(np.transpose(**coords))] = True >> >> labelled_image = watershed(-distance, markers, mask=image_thresh) >> >> #find outline of objects for plotting >> boundaries = find_boundaries(labelled_**image) >> img_rgb = gray2rgb(image) >> overlay = visualize_boundaries(img_rgb, boundaries, color=(1, 0, 0)) >> >> plt.imshow(overlay) >> plt.show() >> > -- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pennekampster at googlemail.com Tue Nov 13 04:54:15 2012 From: pennekampster at googlemail.com (Frank Pennekamp) Date: Tue, 13 Nov 2012 10:54:15 +0100 Subject: Oversplitting by watershed In-Reply-To: References: Message-ID: Hi Tony, thanks for helping me out on this again. Your solution produces a nice segmentation of the image, but the particles that need to be split remain touching (the diving cell left of the big blob in the middle; the two cells in the lower left quarter that touch on their tips). I think it is the same result as just using the global threshold. 
I agree with you that the problem seems to be the markers. I have about
three times more markers than actual objects, so that's not corresponding
to the actual number of objects at all. On the other extreme, replacing the
regional maxima with the centroids of the thresholded blobs does not split
the touching objects, because there is only one centroid per object.

I had some success in splitting objects with the watershed algorithm
implemented in ImageJ; maybe there is a way of translating their approach
into Python. Their description is the following:

*Watershed segmentation* of the Euclidean distance map (EDM) is a way of
automatically separating or cutting apart particles that touch (Watershed
separation of a grayscale image is available via the Find Maxima... command).
The Watershed command requires a binary image containing black particles
on a white background. It first calculates the Euclidean distance map and
finds the ultimate eroded points (UEPs). It then dilates each of the UEPs
(the peaks or local maxima of the EDM) as far as possible - either until
the edge of the particle is reached, or the edge of the region of another
(growing) UEP. Watershed segmentation works best for smooth convex objects
that don't overlap too much.

[image: watershed example]

*Ultimate points:* generates the ultimate eroded points (UEPs) of the EDM.
Requires a binary image as input. The UEPs represent the centers of
particles that would be separated by segmentation. The UEP's gray value is
equal to the radius of the inscribed circle of the corresponding particle.
Use Process>Binary>Options to set the background color (black or white)
and the output type.

How could I get the ultimate eroded points in scikit-image? There seems to
be no such function at the moment, but maybe you have a suggestion for how
to tackle this problem?

Many thanks in any case for your help already!
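The ImageJ procedure above boils down to one observation: the UEPs are the
local maxima of the Euclidean distance map, so they can be approximated with
scipy.ndimage alone. A rough sketch, not ImageJ's actual implementation (the
toy blob, variable names, and the 3x3 neighborhood are illustrative
assumptions):

```python
# Sketch: approximate ImageJ's "ultimate eroded points" (UEPs) as the
# local maxima of the Euclidean distance map of a binary image.
import numpy as np
import scipy.ndimage as nd

binary = np.zeros((11, 11), dtype=bool)
binary[2:9, 2:9] = True  # toy square blob

distance = nd.distance_transform_edt(binary)
# A pixel is a UEP candidate if it equals the local maximum of `distance`
# in its 3x3 neighborhood and lies inside the foreground.  (On plateaus,
# every plateau pixel qualifies; a real implementation would merge them.)
local_max = (distance == nd.maximum_filter(distance, size=3)) & binary
ueps = np.transpose(np.nonzero(local_max))
```

For the toy square this yields a single UEP at the blob center, whose
distance value is the radius of the inscribed circle, matching the ImageJ
description above.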
Best, Frank On Mon, Nov 12, 2012 at 10:56 PM, Tony Yu wrote: > > > On Mon, Nov 12, 2012 at 7:43 AM, Frank wrote: > >> Dear group, >> >> I have some issues with the watershed algorithm implemented in scikits >> image. I use a global threshold to segment cells from background, but some >> cells touch and I want them to be split. Watershed seems the appropriate >> way to deal with my problem, however my particles are split in too many >> pieces. Is there a way to adjust the sensitivity of the watershed method? >> >> Many thanks for any suggestion! >> >> The code that I use looks like below. An example image that I want to >> process can be downloaded here: >> https://dl.dropbox.com/u/10373933/test.jpg >> >> # packages needed to perform image processing and analysis >> import numpy as np >> import scipy as scp >> import matplotlib.pyplot as plt >> import matplotlib.image as mpimg >> import scipy.ndimage as nd >> import skimage >> from skimage import io >> from skimage.morphology import watershed, is_local_maximum >> from skimage.segmentation import find_boundaries, visualize_boundaries >> from skimage.color import gray2rgb >> >> #read files jpeg file >> image = mpimg.imread('c:\\test.jpg') >> image_thresh = image > 140 >> labels = nd.label(image_thresh)[0] >> distance = nd.distance_transform_edt(image_thresh) >> local_maxi = is_local_maximum(distance, labels=labels, >> footprint=np.ones((9, 9))) >> markers = nd.label(local_maxi)[0] >> labelled_image = watershed(-distance, markers, mask=image_thresh) >> >> #find outline of objects for plotting >> boundaries = find_boundaries(labelled_image) >> img_rgb = gray2rgb(image) >> overlay = np.flipud(visualize_boundaries(img_rgb,boundaries)) >> imshow(overlay) > > > Hi Frank, > > Actually, I don't think the issue is in the watershed segmentation. > Instead, I think the problem is in the marker specification: Using local > maxima creates too many marker points when a blob deviates greatly from a > circle. 
(BTW, does anyone know if there are any differences between
> `is_local_maximum` and `peak_local_max`? Maybe the former should be
> deprecated.)
>
> Using the centroids of blobs gives cleaner results. See slightly-modified
> example below.
>
> Best,
> -Tony
>
> # packages needed to perform image processing and analysis
> import numpy as np
> import matplotlib.pyplot as plt
> import scipy.ndimage as nd
>
> from skimage import io
> from skimage import measure
> from skimage.morphology import watershed
> from skimage.segmentation import find_boundaries, visualize_boundaries
> from skimage.color import gray2rgb
>
> # read jpeg file
> image = io.imread('test.jpg')
>
> image_thresh = image > 140
> labels = nd.label(image_thresh)[0]
> distance = nd.distance_transform_edt(image_thresh)
>
> props = measure.regionprops(labels, ['Centroid'])
> coords = np.array([np.round(p['Centroid']) for p in props], dtype=int)
> # Create marker image where blob centroids are marked True
> markers = np.zeros(image.shape, dtype=bool)
> markers[tuple(np.transpose(coords))] = True
>
> labelled_image = watershed(-distance, markers, mask=image_thresh)
>
> # find outline of objects for plotting
> boundaries = find_boundaries(labelled_image)
> img_rgb = gray2rgb(image)
> overlay = visualize_boundaries(img_rgb, boundaries, color=(1, 0, 0))
>
> plt.imshow(overlay)
> plt.show()
>
> --
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pennekampster at googlemail.com  Tue Nov 13 06:57:33 2012
From: pennekampster at googlemail.com (Frank Pennekamp)
Date: Tue, 13 Nov 2012 12:57:33 +0100
Subject: Oversplitting by watershed
In-Reply-To: 
References: 
Message-ID: 

Hi, just found a way to get my desired result: applying a Gaussian filter
to the distance map allows me to adjust the number of local maxima found
and thereby control the sensitivity of the following watershed. Maybe not
the best option, but it serves the purpose.
Here is the code if you are interested in checking it yourself.

# packages needed to perform image processing and analysis
import numpy as np
import scipy as scp
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import scipy.ndimage as nd
import skimage
from skimage import io
from skimage.morphology import watershed, is_local_maximum
from skimage.segmentation import find_boundaries, visualize_boundaries
from skimage.color import gray2rgb

# read jpeg file
image = mpimg.imread('c:\\test.jpg')
image_thresh = image > 140
labels = nd.label(image_thresh)[0]
distance = nd.distance_transform_edt(image_thresh)
# apply Gaussian filter to the distance map to merge several local maxima into one
distance = nd.gaussian_filter(distance, 3)
local_maxi = is_local_maximum(distance, labels=labels,
                              footprint=np.ones((9, 9)))
markers = nd.label(local_maxi)[0]
labelled_image = watershed(-distance, markers, mask=image_thresh)

# find outline of objects for plotting
boundaries = find_boundaries(labelled_image)
img_rgb = gray2rgb(image)
overlay = np.flipud(visualize_boundaries(img_rgb, boundaries))
plt.imshow(overlay)
plt.show()

Cheers,

Frank

On Tue, Nov 13, 2012 at 10:54 AM, Frank Pennekamp <
pennekampster at googlemail.com> wrote:

> Hi Tony,
>
> thanks for helping me out on this again. Your solution produces a nice
> segmentation of the image, but the particles that need to be split remain
> touching (the dividing cell left of the big blob in the middle; the two
> cells in the lower left quarter that touch on their tips). I think it is
> the same result as just using the global threshold.
>
> I agree with you that the problem seems to be the markers. I have about
> three times more markers than actual objects, so that's not corresponding
> to the actual number of objects at all. On the other extreme, replacing the
> regional maxima with the centroid of the thresholded blobs is not splitting
> the touching objects, because there is only one centroid per object.
> > I had some success in splitting objects with the watershed algorithm > implemented in ImageJ, maybe there is a way of translating their approach > into Python. Their description is the follwing: > > *Watershed segmentation* of the Euclidian distance map (EDM) is a way of > automatically separating or cutting apart particles that touch (Watershed > separation of a grayscale image is available via the Find Maxima...command). The Watershed command requires a binary image containing black > particles on a white background. It first calculates the Euclidian distance > map and finds the ultimate eroded points (UEPs). It then dilates each of > the UEPs (the peaks or local maxima of the EDM) as far as possible - either > until the edge of the particle is reached, or the edge of the region of > another (growing) UEP. Watershed segmentation works best for smooth convex > objects that don't overlap too much. > > [image: watershed example] > *Ultimate points:* generates the ultimate eroded points (UEPs) of the > EDM. Requires a binary image as input. The UEPs represent the centers of > particles that would be separated by segmentation. The UEP's gray value is > equal to the radius of the inscribed circle of the corresponding particle. > Use Process>Binary>Optionsto set the background color (black or white) and the output type. > > How could i get the ultimate eroded points in scikit image? There seems no > function to do so for the moment, but may you have a suggestion how to > tackle this problem? > > Many thanks in any case for your help already! > > Best, > > Frank > > > On Mon, Nov 12, 2012 at 10:56 PM, Tony Yu wrote: > >> >> >> On Mon, Nov 12, 2012 at 7:43 AM, Frank wrote: >> >>> Dear group, >>> >>> I have some issues with the watershed algorithm implemented in scikits >>> image. I use a global threshold to segment cells from background, but some >>> cells touch and I want them to be split. 
Watershed seems the appropriate >>> way to deal with my problem, however my particles are split in too many >>> pieces. Is there a way to adjust the sensitivity of the watershed method? >>> >>> Many thanks for any suggestion! >>> >>> The code that I use looks like below. An example image that I want to >>> process can be downloaded here: >>> https://dl.dropbox.com/u/10373933/test.jpg >>> >>> # packages needed to perform image processing and analysis >>> import numpy as np >>> import scipy as scp >>> import matplotlib.pyplot as plt >>> import matplotlib.image as mpimg >>> import scipy.ndimage as nd >>> import skimage >>> from skimage import io >>> from skimage.morphology import watershed, is_local_maximum >>> from skimage.segmentation import find_boundaries, visualize_boundaries >>> from skimage.color import gray2rgb >>> >>> #read files jpeg file >>> image = mpimg.imread('c:\\test.jpg') >>> image_thresh = image > 140 >>> labels = nd.label(image_thresh)[0] >>> distance = nd.distance_transform_edt(image_thresh) >>> local_maxi = is_local_maximum(distance, labels=labels, >>> footprint=np.ones((9, 9))) >>> markers = nd.label(local_maxi)[0] >>> labelled_image = watershed(-distance, markers, mask=image_thresh) >>> >>> #find outline of objects for plotting >>> boundaries = find_boundaries(labelled_image) >>> img_rgb = gray2rgb(image) >>> overlay = np.flipud(visualize_boundaries(img_rgb,boundaries)) >>> imshow(overlay) >> >> >> Hi Frank, >> >> Actually, I don't think the issue is in the watershed segmentation. >> Instead, I think the problem is in the marker specification: Using local >> maxima creates too many marker points when a blob deviates greatly from a >> circle. (BTW, does anyone know if there are any differences between >> `is_local_maximum` and `peak_local_max`? Maybe the former should be >> deprecated.) >> >> Using the centroids of blobs gives cleaner results. See slightly-modified >> example below. 
>> >> Best, >> -Tony >> >> # packages needed to perform image processing and analysis >> import numpy as np >> import matplotlib.pyplot as plt >> import scipy.ndimage as nd >> >> from skimage import io >> from skimage import measure >> from skimage.morphology import watershed >> from skimage.segmentation import find_boundaries, visualize_boundaries >> from skimage.color import gray2rgb >> >> #read files jpeg file >> image = io.imread('test.jpg') >> >> image_thresh = image > 140 >> labels = nd.label(image_thresh)[0] >> distance = nd.distance_transform_edt(image_thresh) >> >> props = measure.regionprops(labels, ['Centroid']) >> coords = np.array([np.round(p['Centroid']) for p in props], dtype=int) >> # Create marker image where blob centroids are marked True >> markers = np.zeros(image.shape, dtype=bool) >> markers[tuple(np.transpose(coords))] = True >> >> labelled_image = watershed(-distance, markers, mask=image_thresh) >> >> #find outline of objects for plotting >> boundaries = find_boundaries(labelled_image) >> img_rgb = gray2rgb(image) >> overlay = visualize_boundaries(img_rgb, boundaries, color=(1, 0, 0)) >> >> plt.imshow(overlay) >> plt.show() >> >> -- >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue Nov 13 15:58:16 2012 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 13 Nov 2012 12:58:16 -0800 Subject: Oversplitting by watershed In-Reply-To: References: Message-ID: On Mon, Nov 12, 2012 at 3:57 PM, Josh Warner wrote: > In contrast, `is_local_maximum` has a much simpler API. It doesn't have the Just for the record, `is_local_maximum` is mentioned in: http://shop.oreilly.com/product/0636920020219.do So, if we could write a unified backend but still expose this function, that'd be good! 
Stéfan

From tsyu80 at gmail.com  Tue Nov 13 20:26:16 2012
From: tsyu80 at gmail.com (Tony Yu)
Date: Tue, 13 Nov 2012 20:26:16 -0500
Subject: Oversplitting by watershed
In-Reply-To: 
References: 
Message-ID: 

On Tue, Nov 13, 2012 at 3:58 PM, Stéfan van der Walt wrote:

> On Mon, Nov 12, 2012 at 3:57 PM, Josh Warner wrote:
> > In contrast, `is_local_maximum` has a much simpler API. It doesn't have
> the
>
> Just for the record, `is_local_maximum` is mentioned in:
>
> http://shop.oreilly.com/product/0636920020219.do
>
> So, if we could write a unified backend but still expose this
> function, that'd be good!
>
> Stéfan

I'm also in favor of keeping `is_local_maximum` and `peak_local_max` as
separate functions, primarily because they have different return values
(both of which have valid use cases). But... I'd be in favor of deprecating
the current `is_local_maximum` in the `morphology` subpackage and renaming
it to `is_local_max` in the `feature` subpackage. Unfortunately, removing
the current function would break code (although the transition could be
smooth if it's removed after a couple of releases with a deprecation
warning). What do you think?

On Tue, Nov 13, 2012 at 12:35 AM, Josh Warner wrote:

> I can probably put something together. What should the goal be? Expand
> the featureset of one algorithm, such that the other can be collapsed into
> a wrapper function with no loss of backwards compatibility, or expand the
> featureset of one and eliminate the other (carefully changing all internal
> references to the old function)?
>
> The latter might be the best/ideal-world solution, but even if all of the
> internal references were changed appropriately it could break 3rd-party
> code. I would lean toward the former option, moving in the direction of
> `is_local_maximum`, though this does appear to be the slower algorithm at
> present.
> As mentioned above, I think it'd be best to have a single core function, but keep the other function (though possibly renamed and relocated) as a wrapper of the core function. If `is_local_maximum` is slower, I think it would be good to start with `peak_local_max` as the base. As a first pass, it'd be great to fix the border issue with `peak_local_max`. (It might be nice to make this optional, since a user may want to require that the peak must be `min_distance` away from the image border---I could go either way on whether or not this is optional). Adding masking and a footprint parameter would be great, and I assume it should be straightforward (note that `scipy.ndimage.maximum_filter` has a footprint parameter). Best, -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Wed Nov 14 00:20:02 2012 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Tue, 13 Nov 2012 21:20:02 -0800 (PST) Subject: Oversplitting by watershed In-Reply-To: References: Message-ID: Thanks for the input, Tony and Stefan. Unless someone else wants to tackle this, I'll see what I can put together over a week or so. Josh On Tuesday, November 13, 2012 7:26:58 PM UTC-6, Tony S Yu wrote: > > > > On Tue, Nov 13, 2012 at 3:58 PM, St?fan van der Walt > > wrote: > >> On Mon, Nov 12, 2012 at 3:57 PM, Josh Warner > >> wrote: >> > In contrast, `is_local_maximum` has a much simpler API. It doesn't >> have the >> >> Just for the record, `is_local_maximum` is mentioned in: >> >> http://shop.oreilly.com/product/0636920020219.do >> >> So, if we could write a unified backend but still expose this >> function, that'd be good! >> >> St?fan >> >> > > I'm also in favor of keeping `is_local_maximum` and `peak_local_max` as > separate functions, primarily because they have different return values > (both of which have valid use cases). But... 
I'd be in favor of deprecating > the current `is_local_maximum` in the `morphology` subpackage and renaming > it to `is_local_max` in the `feature` subpackage. Unfortunately emoving the > current function would break code (although the transition could be smooth > if it's removed after a couple of releases with a deprecation warning). > What do you think? > > > On Tue, Nov 13, 2012 at 12:35 AM, Josh Warner > > wrote: > >> I can probably put something together. What should the goal be? Expand >> the featureset of one algorithm, such that the other can be collapsed into >> a wrapper function with no loss of backwards compatibility, or expand the >> featureset of one and eliminate the other (carefully changing all internal >> references to the old function)? >> >> The latter might be the best/ideal world solution, but even if all of the >> internal references were changed appropriately it could break 3rd party >> code. I would lean toward the former option, moving in the direction of >> `is_local_maximum`, though this does appear to be the slower algorithm at >> present. >> > > As mentioned above, I think it'd be best to have a single core function, > but keep the other function (though possibly renamed and relocated) as a > wrapper of the core function. If `is_local_maximum` is slower, I think it > would be good to start with `peak_local_max` as the base. > > As a first pass, it'd be great to fix the border issue with > `peak_local_max`. (It might be nice to make this optional, since a user may > want to require that the peak must be `min_distance` away from the image > border---I could go either way on whether or not this is optional). Adding > masking and a footprint parameter would be great, and I assume it should be > straightforward (note that `scipy.ndimage.maximum_filter` has a footprint > parameter). > > Best, > -Tony > -------------- next part -------------- An HTML attachment was scrubbed... 
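A unified peak finder along the lines discussed in this thread - a maximum
filter with an arbitrary footprint, an optional threshold, and no implicit
border exclusion - could be sketched as follows. The function name, defaults,
and toy image are illustrative assumptions, not skimage's actual API:

```python
# Sketch: local-maximum mask via scipy.ndimage.maximum_filter with a
# footprint, returning a Boolean array (like `is_local_maximum`).
import numpy as np
import scipy.ndimage as nd

def local_max_mask(image, footprint=None, threshold=None):
    """Boolean mask of pixels equal to the local maximum of `image`."""
    if footprint is None:
        footprint = np.ones((3, 3), dtype=bool)
    # mode='nearest' avoids artificially excluding peaks on the border
    filtered = nd.maximum_filter(image, footprint=footprint, mode='nearest')
    mask = image == filtered
    if threshold is not None:
        mask &= image > threshold  # suppress phantom peaks in flat regions
    return mask

image = np.zeros((5, 5))
image[0, 0] = 2.0   # a peak sitting directly on the border
image[3, 3] = 1.0
mask = local_max_mask(image, threshold=0.5)
```

Here the border peak at (0, 0) is kept, unlike the current `peak_local_max`
behavior described above, and the threshold removes the flat background.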
URL:

From stefan at sun.ac.za Wed Nov 14 17:41:26 2012
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Wed, 14 Nov 2012 14:41:26 -0800
Subject: Rank filters
Message-ID:

Hi everyone,

We just merged an exciting new rank filtering module, implemented by Olivier Debeir and guided through review by Johannes Schönberger. These filters work on the principle that, as a sliding window moves over an image, its histogram can be efficiently updated by considering only new values introduced (and left behind) by the moving footprint.

Here are some examples from the gallery:

http://scikit-image.org/docs/dev/auto_examples/applications/plot_rank_filters.html

Enjoy!
Stéfan

From stefan at sun.ac.za Wed Nov 14 20:01:43 2012
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Wed, 14 Nov 2012 17:01:43 -0800
Subject: Rank filters
In-Reply-To: <87B4A180-5E77-493B-8AFB-226DE953C348@gmail.com>
References: <87B4A180-5E77-493B-8AFB-226DE953C348@gmail.com>
Message-ID:

On Wed, Nov 14, 2012 at 2:45 PM, Schönberger Johannes wrote:
> I just noticed, we have a problem with the API reference: `skimage.filter.rank` is not listed as a subpackage.

Yes, the API generator is set up to only explore one level below skimage. We should probably rather have it look at the bento config for the structure?

Stéfan

From hannesschoenberger at gmail.com Wed Nov 14 17:45:31 2012
From: hannesschoenberger at gmail.com (=?iso-8859-1?Q?Sch=F6nberger_Johannes?=)
Date: Wed, 14 Nov 2012 23:45:31 +0100
Subject: Rank filters
In-Reply-To:
References:
Message-ID: <87B4A180-5E77-493B-8AFB-226DE953C348@gmail.com>

I just noticed, we have a problem with the API reference: `skimage.filter.rank` is not listed as a subpackage.

Johannes Schönberger

On 14.11.2012 at 23:41, Stéfan van der Walt wrote:

> Hi everyone,
>
> We just merged an exciting new rank filtering module, implemented by
> Olivier Debeir and guided through review by Johannes Schönberger.
> These filters work on the principle that, as a sliding window moves
> over an image, its histogram can be efficiently updated by considering
> only new values introduced (and left behind) by the moving footprint.
>
> Here are some examples from the gallery:
>
> http://scikit-image.org/docs/dev/auto_examples/applications/plot_rank_filters.html
>
> Enjoy!
>
> Stéfan

From tsyu80 at gmail.com Thu Nov 15 00:09:27 2012
From: tsyu80 at gmail.com (Tony Yu)
Date: Thu, 15 Nov 2012 00:09:27 -0500
Subject: Rank filters
In-Reply-To:
References: <87B4A180-5E77-493B-8AFB-226DE953C348@gmail.com>
Message-ID:

These new filters look fantastic! Many thanks to Olivier for sticking with the long review process required for such a hefty contribution, and Johannes for his many contributions to the PR.

On Wed, Nov 14, 2012 at 8:01 PM, Stéfan van der Walt wrote:
> On Wed, Nov 14, 2012 at 2:45 PM, Schönberger Johannes wrote:
> > I just noticed, we have a problem with the API reference:
> > `skimage.filter.rank` is not listed as a subpackage.
>
> Yes, the API generator is set up to only explore one level below
> skimage. We should probably rather have it look at the bento config
> for the structure?

I'm not sure I follow: Are you two talking about the API reference in the documentation? When I build the docs on my system, I see a listing for filter.rank, and that links to a doc page properly documenting the subpackage.

-Tony

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dre25 at cam.ac.uk Wed Nov 14 20:25:48 2012
From: dre25 at cam.ac.uk (Damian Eads)
Date: Thu, 15 Nov 2012 01:25:48 +0000
Subject: Rank filters
In-Reply-To:
References:
Message-ID:

That's very cool. LIBCVD does something similar in that updates are only made to the histogram for pixels that enter or leave the structuring element as it's shifted across the image. Any plans to replace erode and dilate with rank.minimum and rank.maximum?
Damian

On Wed, Nov 14, 2012 at 10:41 PM, Stéfan van der Walt wrote:
> Hi everyone,
>
> We just merged an exciting new rank filtering module, implemented by
> Olivier Debeir and guided through review by Johannes Schönberger.
> These filters work on the principle that, as a sliding window moves
> over an image, its histogram can be efficiently updated by considering
> only new values introduced (and left behind) by the moving footprint.
>
> Here are some examples from the gallery:
>
> http://scikit-image.org/docs/dev/auto_examples/applications/plot_rank_filters.html
>
> Enjoy!
>
> Stéfan

--
Damian Eads, PhD
Research Associate, Machine Intelligence Laboratory
Engineering Department, University of Cambridge
Trumpington Street, Cambridge, CB2 1PZ, ENGLAND
Web: http://mi.eng.cam.ac.uk/~dre25

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hannesschoenberger at gmail.com Thu Nov 15 01:17:47 2012
From: hannesschoenberger at gmail.com (=?UTF-8?Q?Johannes_Sch=C3=B6nberger?=)
Date: Thu, 15 Nov 2012 07:17:47 +0100
Subject: Rank filters
In-Reply-To:
References:
Message-ID:

> That's very cool. LIBCVD does something similar in that updates are only
> made to the histogram for pixels that enter or leave the structuring element
> as it's shifted across the image. Any plans to replace erode and dilate with
> rank.minimum and rank.maximum?

I was planning to add another PR which addresses this. So, I would leave the current dilate and erode functions and only "redirect" images of certain dtypes to the rank filters, which are only implemented for uint8 and uint16.

From guillaume.calmettes at gmail.com Thu Nov 22 23:08:01 2012
From: guillaume.calmettes at gmail.com (Guillaume CALMETTES)
Date: Thu, 22 Nov 2012 20:08:01 -0800
Subject: Jet colormap to grayscale
Message-ID:

Hello,

I am looking to convert an image saved with the jet colormap to grayscale.
I tried to play a bit with the color module of skimage, but for the moment I couldn't manage to map the jet colormap to linearly go from dark to light shades of gray. The main problem when applying a basic rgb2gray conversion is that the min/max values (blue and red) converge to the same dark value in the grayscale image, while the mid-range values (yellow/green) become the brightest.

Does anyone have a trick to convert a "jet image" to grayscale?

Thanks a lot

Guillaume

PS: I have attached a picture as an example if you're willing to play with it ;)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: im2.tif
Type: image/tif
Size: 142232 bytes
Desc: not available
URL:

-------------- next part --------------
A non-text attachment was scrubbed...
Name: cmap.png
Type: image/png
Size: 42850 bytes
Desc: not available
URL:

From siggin at gmail.com Fri Nov 23 11:13:28 2012
From: siggin at gmail.com (Sigmund)
Date: Fri, 23 Nov 2012 08:13:28 -0800 (PST)
Subject: Help with find_boundaries or watershed
Message-ID:

Moin moin,

I'm using the find_boundaries function for separating labeled regions (produced by the watershed). Afterward I set the pixels returned as True to 0. This works quite well, but since my objects (labeled regions) can be small (a couple of pixels) and seldom stick together, I would like to limit find_boundaries to connected regions. Is there an easy way to find only the boundaries not connected to 0? Or to tell the watershed function to leave a space between separate objects?
Example:

    import numpy as np
    from scipy import ndimage as nd
    from skimage.segmentation.boundaries import find_boundaries

    a = np.zeros((15,15))
    a[4:9,4:9]=1
    a[6:10,7:12]=2
    a[1:3,1:3]=3
    print "a"
    print a.astype(int)
    a[find_boundaries(a)]=0
    print a.astype(int)

Output:

[[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 3 3 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 3 3 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 1 1 1 1 1 0 0 0 0 0 0]
 [0 0 0 0 1 1 1 1 1 0 0 0 0 0 0]
 [0 0 0 0 1 1 1 2 2 2 2 2 0 0 0]
 [0 0 0 0 1 1 1 2 2 2 2 2 0 0 0]
 [0 0 0 0 1 1 1 2 2 2 2 2 0 0 0]
 [0 0 0 0 0 0 0 2 2 2 2 2 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]

After find_boundaries:

[[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 3 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 1 1 1 1 0 0 0 0 0 0]
 [0 0 0 0 0 1 1 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 1 1 0 2 2 2 2 0 0 0]
 [0 0 0 0 0 1 1 0 2 2 2 2 0 0 0]
 [0 0 0 0 0 0 0 0 2 2 2 2 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]

As you can see, region 3 is almost gone.

Thanks
Siggi

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guillaume.calmettes at gmail.com Fri Nov 23 12:48:14 2012
From: guillaume.calmettes at gmail.com (Guillaume CALMETTES)
Date: Fri, 23 Nov 2012 09:48:14 -0800
Subject: Jet colormap to grayscale
In-Reply-To: <7DDA9E11-B1DF-4C08-BBE8-D61EF3A28924@gmail.com>
References: <7DDA9E11-B1DF-4C08-BBE8-D61EF3A28924@gmail.com>
Message-ID: <49C582BF-6877-41CB-8B40-3CB12A595983@gmail.com>

Hi Johannes,

Thanks, I'll try!

Guillaume

On Nov 23, 2012, at 9:39 AM, Schönberger Johannes wrote:

> Hi,
>
> this might be a way to achieve this:
>
> 1. create jet colormap
> 2. convert color map to HSV-space and sort according to hue
> 3. use sorted jet color map as lookup table / colormap for image
>
> I hope this helps.
>
> Johannes Schönberger
>
> On 23.11.2012 at 05:08, Guillaume CALMETTES wrote:
>
>> Hello,
>>
>> I am looking to convert an image saved with the jet colormap to grayscale.
>> I tried to play a bit with the color module of skimage, but for the moment
>> I couldn't manage to map the jet colormap to linearly go from dark to
>> light shades of gray.
>> The main problem when applying a basic rgb2gray conversion is that the
>> min/max values (blue and red) converge to the same dark value in the
>> grayscale image, while the mid-range values (yellow/green) become the
>> brightest.
>>
>> Does anyone have a trick to convert a "jet image" to grayscale?
>>
>> Thanks a lot
>>
>> Guillaume
>>
>> PS: I have attached a picture as an example if you're willing to play with it ;)

From hannesschoenberger at gmail.com Fri Nov 23 12:39:37 2012
From: hannesschoenberger at gmail.com (=?iso-8859-1?Q?Sch=F6nberger_Johannes?=)
Date: Fri, 23 Nov 2012 18:39:37 +0100
Subject: Jet colormap to grayscale
In-Reply-To:
References:
Message-ID: <7DDA9E11-B1DF-4C08-BBE8-D61EF3A28924@gmail.com>

Hi,

this might be a way to achieve this:

1. create jet colormap
2. convert color map to HSV-space and sort according to hue
3. use sorted jet color map as lookup table / colormap for image

I hope this helps.

Johannes Schönberger

On 23.11.2012 at 05:08, Guillaume CALMETTES wrote:

> Hello,
>
> I am looking to convert an image saved with the jet colormap to grayscale.
> I tried to play a bit with the color module of skimage, but for the moment
> I couldn't manage to map the jet colormap to linearly go from dark to
> light shades of gray.

From stefan at sun.ac.za Sun Nov 25 23:25:36 2012
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Sun, 25 Nov 2012 20:25:36 -0800
Subject: Jet colormap to grayscale
In-Reply-To:
References:
Message-ID:

Hi Guillaume

On Thu, Nov 22, 2012 at 8:08 PM, Guillaume CALMETTES wrote:
> Does anyone have a trick to convert a "jet image" to grayscale?

Here's a snippet that does it by brute force lookup:

https://gist.github.com/4146612

Hope that helps!
Stéfan

From guillaume.calmettes at gmail.com Sun Nov 25 23:37:54 2012
From: guillaume.calmettes at gmail.com (Guillaume CALMETTES)
Date: Sun, 25 Nov 2012 20:37:54 -0800
Subject: Jet colormap to grayscale
In-Reply-To:
References:
Message-ID:

Hi Stefan,

Works perfectly. However, I'm not sure why, but I had to initialize the jet colormap object to create the _lut array. I replaced:

    lut = plt.cm.jet._lut[..., :3]

by:

    jet = plt.cm.jet
    jet._init()
    lut = jet._lut[..., :3]

Thanks a lot.

Guillaume

On Nov 25, 2012, at 8:25 PM, Stéfan van der Walt wrote:

> Hi Guillaume
>
> On Thu, Nov 22, 2012 at 8:08 PM, Guillaume CALMETTES wrote:
>> Does anyone have a trick to convert a "jet image" to grayscale?
>
> Here's a snippet that does it by brute force lookup:
>
> https://gist.github.com/4146612
>
> Hope that helps!
> Stéfan

From jni.soma at gmail.com Tue Nov 27 01:40:20 2012
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Tue, 27 Nov 2012 17:40:20 +1100
Subject: Help with find_boundaries or watershed
In-Reply-To:
References:
Message-ID:

Hi Siggi,

How about replacing

    >>> a[find_boundaries(a)]=0

with

    >>> s = nd.generate_binary_structure(2, 1)
    >>> a[find_boundaries(a) * (nd.grey_erosion(a, footprint=s) != 0)] = 0

? That will leave you with big objects that are separated with respect to connectivity=1, and objects that aren't touching will not be "reduced". The output for your example is:

>>> print a.astype(int)
[[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 3 3 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 3 3 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 1 1 1 1 1 0 0 0 0 0 0]
 [0 0 0 0 1 1 1 1 1 0 0 0 0 0 0]
 [0 0 0 0 1 1 1 0 0 2 2 2 0 0 0]
 [0 0 0 0 1 1 1 0 2 2 2 2 0 0 0]
 [0 0 0 0 1 1 1 0 2 2 2 2 0 0 0]
 [0 0 0 0 0 0 0 2 2 2 2 2 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]

Juan.

On Sat, Nov 24, 2012 at 3:13 AM, Sigmund wrote:

> Moin moin,
>
> I'm using the find_boundaries function for separating labeled regions
> (produced by the watershed). Afterward I set the pixels returned as True
> to 0. This works quite well, but since my objects (labeled regions) can be
> small (a couple of pixels) and seldom stick together, I would like to limit
> find_boundaries to connected regions.
> Is there an easy way to find only the boundaries not connected to 0? Or to
> tell the watershed function to leave a space between separate objects?
> Example:
> import numpy as np
> from scipy import ndimage as nd
> from skimage.segmentation.boundaries import find_boundaries
> a = np.zeros((15,15))
> a[4:9,4:9]=1
> a[6:10,7:12]=2
> a[1:3,1:3]=3
> print "a"
> print a.astype(int)
> a[find_boundaries(a)]=0
> print a.astype(int)
>
> Output:
>
> [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 3 3 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 3 3 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 1 1 1 1 1 0 0 0 0 0 0]
> [0 0 0 0 1 1 1 1 1 0 0 0 0 0 0]
> [0 0 0 0 1 1 1 2 2 2 2 2 0 0 0]
> [0 0 0 0 1 1 1 2 2 2 2 2 0 0 0]
> [0 0 0 0 1 1 1 2 2 2 2 2 0 0 0]
> [0 0 0 0 0 0 0 2 2 2 2 2 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]
>
> After find_boundaries:
>
> [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 3 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 1 1 1 1 0 0 0 0 0 0]
> [0 0 0 0 0 1 1 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 1 1 0 2 2 2 2 0 0 0]
> [0 0 0 0 0 1 1 0 2 2 2 2 0 0 0]
> [0 0 0 0 0 0 0 0 2 2 2 2 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
> [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]
>
> As you can see, the region 3 is almost gone.
>
> Thanks
> Siggi

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefan at sun.ac.za Thu Nov 29 19:56:01 2012
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 29 Nov 2012 16:56:01 -0800
Subject: Error importing feature: Issue with match_template function?
In-Reply-To: <96aeaee8-9e00-4ebb-9127-9529157fa176@googlegroups.com>
References: <96aeaee8-9e00-4ebb-9127-9529157fa176@googlegroups.com>
Message-ID:

Hi Marianne

On Thu, Nov 29, 2012 at 9:01 AM, Marianne Corvellec wrote:
> I am having an ImportError when I try to import feature, namely:
>
> In [28]: from skimage import feature

Could you please try the following?

    $ git fetch origin
    $ git reset --hard origin/master

Rebuild and then try again? You may have to replace "origin" with "upstream" or whatever you called the official repo.

Stéfan

From emmanuelle.gouillart at nsup.org Fri Nov 30 03:15:38 2012
From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart)
Date: Fri, 30 Nov 2012 09:15:38 +0100
Subject: Making algorithms at least 3D, preferably nD
In-Reply-To:
References:
Message-ID: <20121130081538.GB30004@phare.normalesup.org>

Hi Juan,

+1 for having 3-D support whenever it's easy to implement. I'm processing mostly 3-D images (from X-ray tomography), so I have a strong interest in algorithms compatible with 3-D images. Adding a paragraph to the contribute section sounds like a good idea.

Cheers,
Emmanuelle

On Fri, Nov 30, 2012 at 03:45:49PM +1100, Juan Nunez-Iglesias wrote:
> Hey Guys,
> I mentioned this briefly at SciPy, but I would like to reiterate: a lot of
> data is 3D images these days, and more and more data is being generated
> that is multi-channel, 3D+t. Therefore, it would be awesome if
> scikit-image started making more of an effort to support these. In the
> best case, the dimension of the underlying array can be abstracted away --
> see here, for example, the functions juicy_center (which extracts the
> centre of an array, along all dimensions), surfaces (grabs the "border"
> arrays along each dimension), hollowed (zeroes-out the centre), and more.
> Otherwise, writing a 3D function that gracefully degrades to 2D when one
> of the dimensions is 1 is also possible.
> In general, the amount of additional effort to make code 3-, 4- or
> n-dimensional is relatively low when you write the algorithm initially,
> relative to refactoring a whole bunch of functions later. I'll try to
> fiddle with whichever code I need, but in the meantime, what do you think
> about adding a paragraph or a sentence about this issue in the
> scikit-image contribute section, so that people at least have this in
> mind when they are thinking of writing something new?
> Thanks,
> Juan.

From marianne.corvellec at ens-lyon.org Fri Nov 30 13:53:04 2012
From: marianne.corvellec at ens-lyon.org (Marianne Corvellec)
Date: Fri, 30 Nov 2012 10:53:04 -0800 (PST)
Subject: Error importing feature: Issue with match_template function?
In-Reply-To:
References: <96aeaee8-9e00-4ebb-9127-9529157fa176@googlegroups.com>
Message-ID:

Hi Stéfan,

Thanks for the reply. Okay. The official repo is upstream for me. So:

    $ git fetch upstream
    $ git reset --hard upstream/master
    HEAD is now at a77923f Merge pull request #373 from luispedro/imread_io_plugin

But I'm getting the same import error. Actually, I'm a little confused. When I first try to import feature, I get an error related to the import of filter:

>>> from skimage import feature
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/[...]/skimage/feature/__init__.py", line 5, in <module>
    from .template import match_template
  File "/home/[...]/skimage/feature/template.py", line 4, in <module>
    from . import _template
  File "_template.pyx", line 37, in init skimage.feature._template (skimage/feature/_template.c:4073)
  File "/home/[...]/skimage/transform/__init__.py", line 1, in <module>
    from .hough_transform import *
  File "/home/[...]/skimage/transform/hough_transform.py", line 8, in <module>
    from skimage import measure, morphology
  File "/home/[...]/skimage/measure/__init__.py", line 2, in <module>
    from ._regionprops import regionprops, perimeter
  File "/home/[...]/skimage/measure/_regionprops.py", line 6, in <module>
    from skimage.morphology import convex_hull_image
  File "/home/[...]/skimage/morphology/__init__.py", line 6, in <module>
    from .watershed import watershed, is_local_maximum
  File "/home/[...]/skimage/morphology/watershed.py", line 30, in <module>
    from ..filter import rank_order
  File "/home/[...]/skimage/filter/__init__.py", line 7, in <module>
    from ._denoise import denoise_bilateral
ImportError: No module named _denoise

And on the second try or later, the error is indeed the same as before the fetch and reset:

>>> from skimage import feature
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/[...]/skimage/feature/__init__.py", line 5, in <module>
    from .template import match_template
  File "/home/[...]/skimage/feature/template.py", line 4, in <module>
    from . import _template
ImportError: cannot import name _template

(I displayed the error messages from the Python interpreter just because they are shorter than those from IPython.)

...

Marianne

On Friday, November 30, 2012 1:56:01 AM UTC+1, Stefan van der Walt wrote:
>
> Hi Marianne
>
> On Thu, Nov 29, 2012 at 9:01 AM, Marianne Corvellec wrote:
> > I am having an ImportError when I try to import feature, namely:
> >
> > In [28]: from skimage import feature
>
> Could you please try the following?
>
> $ git fetch origin
> $ git reset --hard origin/master
>
> Rebuild and then try again? You may have to replace "origin" with
> "upstream" or whatever you called the official repo.
>
> Stéfan

-------------- next part --------------
An HTML attachment was scrubbed...
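[Editor's note: as the thread resolves below, errors like `ImportError: cannot import name _template` typically mean the compiled Cython extension modules were never (re)built after updating the sources. A small generic diagnostic along these lines can list which submodules fail to import; the helper and its name are a hypothetical sketch, and the module names in the comment are taken from the traceback above.]

```python
import importlib

def missing_modules(names):
    """Return the module names that fail to import (e.g. stale or unbuilt extensions)."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# For the report above one could check, e.g.:
#   missing_modules(["skimage.feature._template", "skimage.filter._denoise"])
# A non-empty result suggests the build step was skipped and a rebuild is needed.
```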
URL:

From guillaume at mitotic-machine.org Fri Nov 30 06:24:25 2012
From: guillaume at mitotic-machine.org (Guillaume Gay)
Date: Fri, 30 Nov 2012 12:24:25 +0100
Subject: Peak detection algorithm
In-Reply-To: <20121130081538.GB30004@phare.normalesup.org>
References: <20121130081538.GB30004@phare.normalesup.org>
Message-ID: <50B89769.7080606@mitotic-machine.org>

Hi all,

I have implemented the (2D) Gaussian peak detection method described in Segrè et al., Nature Methods *5*, 8 (2008). In short, it is a patch-based detection where a likelihood ratio (Gaussian peak vs background noise) is computed on a moving window over an image. The originality (and the interest for my kind of fluorescence microscopy problems) is that this is followed by a /deflation/ step, where detected peaks are subtracted from the original image and a new detection is performed on the so-called deflated image. It is also noise resistant, and there are not many knobs to adjust (only window size, sensitivity and peak radius).

Is there interest in including this in skimage? The code is available on github there. I imagine there's room for improvement and for more compatibility with skimage standards.

Cheers,

Guillaume

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefan at sun.ac.za Fri Nov 30 15:37:22 2012
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Fri, 30 Nov 2012 12:37:22 -0800
Subject: Error importing feature: Issue with match_template function?
In-Reply-To:
References: <96aeaee8-9e00-4ebb-9127-9529157fa176@googlegroups.com>
Message-ID:

Hi Marianne

It looks like your extension modules are not compiled. Did the build complete successfully?

Stéfan

On Nov 30, 2012 12:21 PM, "Marianne Corvellec" <marianne.corvellec at ens-lyon.org> wrote:

-------------- next part --------------
An HTML attachment was scrubbed...
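[Editor's note: the deflation step Guillaume describes above (subtract each detected peak from the image, then re-detect on the deflated image) can be illustrated with a toy sketch. This is not his implementation, and it omits the Segrè et al. likelihood-ratio test entirely, using a naive argmax as the "detector"; all names and parameters here are assumptions.]

```python
import numpy as np

def gaussian_patch(shape, center, sigma, amplitude):
    """A 2D Gaussian of the given amplitude centred at `center`."""
    yy, xx = np.indices(shape)
    r2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    return amplitude * np.exp(-r2 / (2.0 * sigma ** 2))

def detect_with_deflation(image, n_peaks, sigma):
    """Toy deflation loop: find the brightest pixel, subtract a Gaussian
    there, and repeat the detection on the deflated residual."""
    residual = image.astype(float).copy()
    peaks = []
    for _ in range(n_peaks):
        idx = np.unravel_index(np.argmax(residual), residual.shape)
        amplitude = residual[idx]
        peaks.append(idx)
        # "deflate": remove the detected peak before the next detection pass
        residual -= gaussian_patch(residual.shape, idx, sigma, amplitude)
    return peaks, residual
```

The point of deflation is visible even in this toy version: a dim peak hiding in the skirt of a bright one only becomes the global maximum after the bright peak has been subtracted.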
URL:

From jni.soma at gmail.com Thu Nov 29 21:43:11 2012
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Fri, 30 Nov 2012 13:43:11 +1100
Subject: "No definition found"
Message-ID:

I'm getting this message when trying to find the definition header for skimage.segmentation.slic:

In [12]: %pdef segmentation.slic
No definition header found for segmentation.slic

In [15]: %pdef segmentation.join_segmentations
 segmentation.join_segmentations(s1, s2)

Anyone know why this is? Is this a general feature of Cython functions?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marianne.corvellec at ens-lyon.org Fri Nov 30 16:48:11 2012
From: marianne.corvellec at ens-lyon.org (Marianne Corvellec)
Date: Fri, 30 Nov 2012 13:48:11 -0800 (PST)
Subject: Error importing feature: Issue with match_template function?
In-Reply-To:
References: <96aeaee8-9e00-4ebb-9127-9529157fa176@googlegroups.com>
Message-ID: <1b7e8ec1-acd2-4b5b-b14f-4ed2de6a43e6@googlegroups.com>

Hi again,

Oh, sorry! I skipped 'Rebuild and then try again?' which you wrote earlier... And I should have read the README more carefully. ;) I love READMEs... <3

It's all fine now. Thank you so much for your help!

Marianne

On Friday, November 30, 2012 9:37:22 PM UTC+1, Stefan van der Walt wrote:
>
> Hi Marianne
>
> It looks like your extension modules are not compiled. Did the build
> complete successfully?
>
> Stéfan
>
> On Nov 30, 2012 12:21 PM, "Marianne Corvellec" wrote:

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefan at sun.ac.za Fri Nov 30 18:26:41 2012
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Fri, 30 Nov 2012 15:26:41 -0800
Subject: Error importing feature: Issue with match_template function?
In-Reply-To: <1b7e8ec1-acd2-4b5b-b14f-4ed2de6a43e6@googlegroups.com>
References: <96aeaee8-9e00-4ebb-9127-9529157fa176@googlegroups.com> <1b7e8ec1-acd2-4b5b-b14f-4ed2de6a43e6@googlegroups.com>
Message-ID:

On Fri, Nov 30, 2012 at 1:48 PM, Marianne Corvellec wrote:
> It's all fine now.

Fantastic! Glad you can finally start coding.

Happy hacking,
Stéfan

From jni.soma at gmail.com Thu Nov 29 23:45:49 2012
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Fri, 30 Nov 2012 15:45:49 +1100
Subject: Making algorithms at least 3D, preferably nD
Message-ID:

Hey Guys,

I mentioned this briefly at SciPy, but I would like to reiterate: a lot of data is 3D images these days, and more and more data is being generated that is multi-channel, 3D+t. Therefore, it would be awesome if scikit-image started making more of an effort to support these. In the best case, the dimension of the underlying array can be abstracted away -- see here, for example, the functions juicy_center (which extracts the centre of an array, along all dimensions), surfaces (grabs the "border" arrays along each dimension), hollowed (zeroes-out the centre), and more. Otherwise, writing a 3D function that gracefully degrades to 2D when one of the dimensions is 1 is also possible.

In general, the amount of additional effort to make code 3-, 4- or n-dimensional is relatively low when you write the algorithm initially, relative to refactoring a whole bunch of functions later. I'll try to fiddle with whichever code I need, but in the meantime, what do you think about adding a paragraph or a sentence about this issue in the scikit-image contribute section, so that people at least have this in mind when they are thinking of writing something new?

Thanks,

Juan.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
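[Editor's note: the dimension-agnostic style Juan advocates is easy to demonstrate: by building slices from `arr.shape` instead of hard-coding two axes, the same few lines work unchanged for 2D, 3D, or 4D arrays. Below is a minimal sketch of the ideas behind the helpers he names; these are hypothetical re-implementations for illustration, not the originals from his repository.]

```python
import numpy as np

def juicy_center(arr, margin=1):
    """Extract the centre of an array along every dimension (nD)."""
    center = tuple(slice(margin, dim - margin) for dim in arr.shape)
    return arr[center]

def hollowed(arr, margin=1):
    """Zero out the centre, keeping a border of width `margin` on each axis."""
    out = arr.copy()
    out[tuple(slice(margin, dim - margin) for dim in arr.shape)] = 0
    return out
```

Nothing in either function mentions the number of dimensions, so they gracefully handle whatever rank the input has, which is exactly the property being asked of new scikit-image contributions.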