From stefanv at berkeley.edu Wed Apr 1 01:56:03 2015
From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=)
Date: Tue, 31 Mar 2015 22:56:03 -0700
Subject: Automatic formatting of Python code
In-Reply-To: 
References: 
Message-ID: 

On Tue, Mar 31, 2015 at 10:18 PM, Juan Nunez-Iglesias wrote:
> From Google, a tool to automatically format your Python code, even beyond
> what PEP8 prescribes:
>
> https://github.com/google/yapf
>
> I always thought Go's gofmt tool (and convention) were a great asset to that
> community. It'd be awesome to have the same for Python.

I think you should run the scikit-image code through YAPF, make pull
requests accordingly, and claim the glory for yourself.

I'd be happy to review :)

Stéfan

From jni.soma at gmail.com Wed Apr 1 01:18:33 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Wed, 01 Apr 2015 05:18:33 +0000
Subject: Automatic formatting of Python code
Message-ID: 

>From Google, a tool to automatically format your Python code, even beyond
what PEP8 prescribes:

https://github.com/google/yapf

I always thought Go's gofmt tool (and convention) were a great asset to that
community. It'd be awesome to have the same for Python.

YAPF is pre-alpha but interesting enough to share now. =)

Juan.

PS: This is apparently not an April Fool's joke. =P
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matteo.niccoli at gmail.com Thu Apr 2 09:22:03 2015
From: matteo.niccoli at gmail.com (Matteo)
Date: Thu, 2 Apr 2015 06:22:03 -0700 (PDT)
Subject: Issue with morphological filters
In-Reply-To: <1427688200136.b53bcefa@Nodemailer>
References: <1427688200136.b53bcefa@Nodemailer>
Message-ID: <33118697-5377-432a-a5fe-b55135df54a9@googlegroups.com>

OK Thanks so much for your efforts, Juan. I will take a look.
Matteo

On Sunday, March 29, 2015 at 10:03:23 PM UTC-6, Juan Nunez-Iglesias wrote:
> Hmm, I must say I don't know what's going on with either the
> reconstruction or the binary_fill_holes. (Originally I thought the image
> was inverted but you tried both polarities...) My advice would be to look
> at a few iterations of morphological reconstruction manually and see what's
> going on...
>
> Also, I would use the "grey" colormap, which is the most intuitive to look
> at (you used a reversed colormap for a couple of the images).
>
> Finally, it may be that you need to fill each "blob" independently. If so,
> have a look at skimage.measure.regionprops.filled_image.
> http://scikit-image.org/docs/dev/api/skimage.measure.html#regionprops
>
> Juan.
>
> On Sat, Mar 28, 2015 at 2:32 AM, Matteo
> wrote:
>
>> Hello Juan
>>
>> Here it is:
>>
>> http://nbviewer.ipython.org/urls/dl.dropbox.com/s/ancfxe2gx1fbyyp/morphology_test.ipynb?dl=0
>> I get the same odd results with both ndimage's binary_fill_holes and
>> reconstruction. Is it because of the structuring elements/masks?
>> Thanks for your help.
>> Matteo
>>
>> On Thursday, March 26, 2015 at 11:14:05 PM UTC-6, Juan Nunez-Iglesias
>> wrote:
>>
>>> Hi Matteo,
>>>
>>> Can you try putting this notebook up as a gist and pasting a link to the
>>> notebook? It's hard for me to follow all of the steps (and the polarity of
>>> the image) without the images inline. Is it just the inverse of what you
>>> want? And anyway why aren't you just using ndimage's binary_fill_holes?
>>>
>>>
>>> https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.morphology.binary_fill_holes.html
>>>
>>> Juan. 
>>>
>>>
>>> On Fri, Mar 27, 2015 at 9:09 AM, Matteo wrote:
>>>
>>> Hello Juan
>>>
>>> Thanks so much for your suggestions.
>>> Once I converted the image as you suggested:
>>> # import back image
>>> cfthdr=io.imread('filled_contour_THDR.png')
>>> cfthdr = color.rgb2gray(cfthdr) > 0.5
>>>
>>> I get good results with opening:
>>> # clean it up with opening
>>> selem17 = disk(17)
>>> opened_thdr = opening(cfthdr, selem17)/255
>>> # plot it
>>> fig = plt.figure(figsize=(5, 5))
>>> ax = fig.add_subplot(1, 1, 1)
>>> ax.set_xticks([])
>>> ax.set_yticks([])
>>> plt.imshow(opened_thdr,cmap='bone')
>>> plt.show()
>>> # not bad
>>>
>>>
>>> With remove_small_objects the advantage is that it does not join blobs
>>> in the original:
>>> cfthdr_inv = ~cfthdr
>>> test=remove_small_objects(cfthdr,10000)
>>> # plot it
>>> fig = plt.figure(figsize=(5, 5))
>>> ax = fig.add_subplot(1, 1, 1)
>>> ax.set_xticks([])
>>> ax.set_yticks([])
>>> plt.imshow(test,cmap='bone')
>>> plt.show()
>>>
>>>
>>> but with reconstruction done as this:
>>> # filling holes with morphological reconstruction
>>> seed = np.copy(cfthdr_inv)
>>> seed[1:-1, 1:-1] = cfthdr_inv.max()
>>> mask = cfthdr_inv
>>> filled = reconstruction(seed, mask, method='erosion')
>>> # plot it
>>> fig = plt.figure(figsize=(5, 5))
>>> ax = fig.add_subplot(1, 1, 1)
>>> ax.set_xticks([])
>>> ax.set_yticks([])
>>> plt.imshow(filled,cmap='bone',vmin=cfthdr_inv.min(), vmax=cfthdr_inv.max
>>> ())
>>> plt.show()
>>>
>>> I get a completely white image. Do you have any suggestions as to why?
>>>
>>> Thanks again. Cheers,
>>> Matteo
>>>
>>>
>>> On Wednesday, March 25, 2015 at 6:29:43 PM UTC-6, Juan Nunez-Iglesias
>>> wrote:
>>>
>>> Hi Matteo,
>>>
>>> My guess is that even though you are looking at a "black and white"
>>> image, the png is actually an RGB png. Just check the output of
>>> "print(cfthdr.shape)". Should be straightforward to make it a binary image:
>>>
>>> from skimage import color
>>> cfthdr = color.rgb2gray(cfthdr) > 0.5
>>>
>>> Then you should have a binary image. (And inverting should be as simple
>>> as "cfthdr_inv = ~cfthdr") We have morphology.binary_fill_holes to do what
>>> you want.
>>>
>>> btw, there's also morphology.remove_small_objects, which does exactly
>>> what you did but as a function call. Finally, it looks like you are not
>>> using the latest version of scikit-image (0.11), so you might want to
>>> upgrade.
>>>
>>> Hope that helps!
>>>
>>> Juan.
>>>
>>>
>>>
>>>
>>> On Thu, Mar 26, 2015 at 8:48 AM, Matteo wrote:
>>>
>>> *Issues with morphological filters when trying to remove white holes
>>> in black objects in a binary images. Using opening or filling holes on
>>> inverted (or complement) of the original binary.*
>>>
>>> Hi there
>>>
>>> I have a series of derivatives calculated on geophysical data.
>>>
>>> Many of these derivatives have nice continuous maxima, so I treat them
>>> as images on which I do some cleanup with morphological filters. 
>>>
>>> Here's one example of operations that I do routinely, and successfully:
>>>
>>> # threshold theta map using Otsu method
>>>
>>> thresh_th = threshold_otsu(theta)
>>>
>>> binary_th = theta > thresh_th
>>>
>>> # clean up small objects
>>>
>>> label_objects_th, nb_labels_th = sp.ndimage.label(binary_th)
>>>
>>> sizes_th = np.bincount(label_objects_th.ravel())
>>>
>>> mask_sizes_th = sizes_th > 175
>>>
>>> mask_sizes_th[0] = 0
>>>
>>> binary_cleaned_th = mask_sizes_th[label_objects_th]
>>>
>>> # further enhance with morphological closing (dilation followed by an
>>> erosion) to remove small dark spots and connect small bright cracks
>>>
>>> # followed by an extra erosion
>>>
>>> selem = disk(1)
>>>
>>> closed_th = closing(binary_cleaned_th, selem)/255
>>>
>>> eroded_th = erosion(closed_th, selem)/255
>>>
>>> # Finally, extract lineaments using skeletonization
>>>
>>> skeleton_th=skeletonize(binary_th)
>>>
>>> skeleton_cleaned_th=skeletonize(binary_cleaned_th)
>>>
>>> # plot to compare
>>>
>>> fig = plt.figure(figsize=(20, 7))
>>>
>>> ax = fig.add_subplot(1, 2, 1)
>>>
>>> imshow(skeleton_th, cmap='bone_r', interpolation='none')
>>>
>>> ax2 = fig.add_subplot(1, 3, 2)
>>>
>>> imshow(skeleton_cleaned_th, cmap='bone_r', interpolation='none')
>>>
>>> ax.set_xticks([])
>>>
>>> ax.set_yticks([])
>>>
>>> ax2.set_xticks([])
>>> ax2.set_yticks([])
>>>
>>> Unfortunately I cannot share the data as it is proprietary, but I will
>>> for the next example, which is the one that does not work.
>>>
>>> There's one derivative that shows lots of detail but not continuous
>>> maxima. As a workaround I created filled contours in Matplotlib and
>>> exported them as an image. The image is attached.
>>>
>>> Now I want to import back the image and plot it to test:
>>>
>>> # import back image
>>>
>>> cfthdr=io.imread('filled_contour.png')
>>>
>>> # threshold using using Otsu method
>>>
>>> thresh_thdr = threshold_otsu(cfthdr)
>>>
>>> binary_thdr = cfthdr > thresh_thdr
>>>
>>> # plot it
>>>
>>> fig = plt.figure(figsize=(5, 5))
>>>
>>> ax = fig.add_subplot(1, 1, 1)
>>>
>>> ax.set_xticks([])
>>>
>>> ax.set_yticks([])
>>>
>>> plt.imshow(binary_thdr, cmap='bone')
>>>
>>> plt.show()
>>>
>>> The above works without issues.
>>>
>>>
>>>
>>> Next I want to fill the white holes inside the black blobs. I thought of
>>> 2 strategies.
>>>
>>> The first would be to use opening; the second to invert the image, and
>>> then fill the holes as in here:
>>>
>>> http://scikit-image.org/docs/dev/auto_examples/plot_holes_and_peaks.html
>>>
>>> By the way, I found a similar example for opencv here
>>>
>>>
>>> http://stackoverflow.com/questions/10316057/filling-holes-inside-a-binary-object
>>>
>>> Let's start with opening. When I try:
>>>
>>> selem = disk(1)
>>>
>>> opened_thdr = opening(binary_thdr, selem)
>>>
>>> or:
>>>
>>> selem = disk(1)
>>>
>>> opened_thdr = opening(cfthdr, selem)
>>>
>>> I get an error message like this:
>>>
>>> ---------------------------------------------------------------------------
>>>
>>>
>>> ValueError                                Traceback (most recent call
>>> last)
>>>
>>> in ()
>>>
>>> 1 #binary_thdr=img_as_float(binary_thdr,force_copy=False)
>>>
>>> ----> 2 opened_thdr = opening(binary_thdr, selem)/255
>>>
>>> 3
>>>
>>> ...
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "scikit-image" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to scikit-image... at googlegroups.com .
>> For more options, visit https://groups.google.com/d/optout. 
>> > >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From silvertrumpet999 at gmail.com Thu Apr 2 22:34:37 2015
From: silvertrumpet999 at gmail.com (Josh Warner)
Date: Thu, 2 Apr 2015 19:34:37 -0700 (PDT)
Subject: Automatic formatting of Python code
In-Reply-To: 
References: 
Message-ID: <1333b674-421c-47af-874f-6576fc73ca77@googlegroups.com>

yapf works fairly well. It still needs some work to be truly PEP8
compliant, especially regarding math expressions, and this is an issue I
have with most PEP8 style checkers. This is the actual snippet provided by
PEP8 as an example of what not to do:

i=i+1
submitted +=1
x = x * 2 - 1
hypot2 = x * x + y * y
c = (a + b) * (a - b)

which, in a true PEP8 style checker/formatter, should change to

i = i + 1
submitted += 1
x = x*2 - 1
hypot2 = x*x + y*y
c = (a+b) * (a-b)

again, this isn't a contrived example, it's directly from PEP8! Yet all
style checkers I know of get the first two lines right, then fail on the
last three thinking the bad snippet is correct while throwing a fit over
the correct (and much more readable) one. yapf does the same.

Unfortunately the only real way to correct this in the community is to fix
the style checkers. So long as our automated tools throw a fit over the
ideal formatting, this behavior is going to get more entrenched. There is a
lot of inertia to overcome, but it might be worth bucking the trend. It
does essentially require us to teach the checker order of operations.

Another part of PEP8 which almost nobody obeys for similar reasons pertains
to complicated slicing operations. The following is considered correct, and
I know violations of these are littered all over the package.

ham[1:9], ham[1:9:3], ham[:9:3], ham[1::3], ham[1:9:]
ham[lower:upper], ham[lower:upper:], ham[lower::step]
ham[lower+offset : upper+offset]
ham[: upper_fn(x) : step_fn(x)], ham[:: step_fn(x)]
ham[lower + offset : upper + offset]

Food for thought!

Josh

On Wednesday, April 1, 2015 at 12:56:27 AM UTC-5, stefanv wrote:
> On Tue, Mar 31, 2015 at 10:18 PM, Juan Nunez-Iglesias wrote:
> > From Google, a tool to automatically format your Python code, even
> beyond
> > what PEP8 prescribes:
> >
> > https://github.com/google/yapf
> >
> > I always thought Go's gofmt tool (and convention) were a great asset to
> that
> > community. It'd be awesome to have the same for Python.
>
> I think you should run the scikit-image code through YAPF, make pull
> requests accordingly, and claim the glory for yourself.
>
> I'd be happy to review :)
>
> Stéfan
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yutaxsato at gmail.com Fri Apr 3 09:53:46 2015
From: yutaxsato at gmail.com (Yuta Sato)
Date: Fri, 3 Apr 2015 22:53:46 +0900
Subject: Apply segmentation to a large binary image
In-Reply-To: <1426382296175.071e7ea3@Nodemailer>
References: <1426382296175.071e7ea3@Nodemailer>
Message-ID: 

Dear Juan Nunez-Iglesias and Josh Warner:

Thanks for your kind responses.

Let's take a simpler case, e.g., binary_fill_holes. I want to apply it to
the WHOLE image at once, because if I apply it to parts of the image and
later combine them, the result differs.

Does putting the image into HDF5 and applying binary_fill_holes solve my
problem? Can I really apply binary_fill_holes to an HDF5 file?

Thanks for your help. 
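To make "the result differs" concrete, here is a minimal sketch (the toy
array is made up; only numpy and scipy.ndimage are assumed):

import numpy as np
from scipy.ndimage import binary_fill_holes

# a square ring whose interior is a hole only in the full image
img = np.zeros((8, 8), dtype=bool)
img[2:6, 2:6] = True
img[3:5, 3:5] = False

whole = binary_fill_holes(img)

# fill the two halves independently, then stitch them back together
left = binary_fill_holes(img[:, :4])
right = binary_fill_holes(img[:, 4:])
tiled = np.hstack([left, right])

print((whole != tiled).any())  # True: the results differ

The ring's interior counts as a hole only in the full image; in each half
it touches the tile border, so neither tile fills it.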
Yuta On Sun, Mar 15, 2015 at 10:18 AM, Juan Nunez-Iglesias wrote: > Josh, you might be thinking of morphology.remove_small_objects, but that > is O(image.size), rather than O(sum(image == label)), which is what you are > after. In fact we would need a flood-fill algorithm, which we don't have... > That would be a fantastic addition. > > > > > On Sun, Mar 15, 2015 at 10:24 AM, Josh Warner > wrote: > >> Would it be possible to generalize / refactor `clear_border` to a >> function which removes all points connected to a specific pixel/voxel? That >> would greatly simplify the work needed here. >> >> I thought we had some sort of `remove_object` functionality like this, >> but I don't see it. >> >> Josh >> >> On Friday, March 13, 2015 at 9:04:12 PM UTC-5, Juan Nunez-Iglesias wrote: >>> >>> Hey Yuta, >>> >>> You'll need to do some stitching out-of-core. That's a really tricky >>> problem and I don't have any ready-made solutions for you. The solution >>> will depend on the nature of your segments. The only thing I would >>> recommend is that you use a format such as HDF5 (you can use the excellent >>> h5py library) that allows random access into the underlying disk data. >>> >>> Other than that, as I said, to my knowledge you'll have to develop your >>> own stitching: segment *overlapping* tiles independently in memory, and >>> when it comes time to write to disk, load the tile and overlapping tiles >>> that have already been segmented, and resolve label mapping then... >>> >>> Generally, think of it this way: tile i has already been segmented and >>> written out. We now want to write out tile j, which overlaps tile i. Labels >>> from tile i that intersect labels from tile j in the overlap region should >>> be matched. labels in tile j that *don't* intersect tile i should be >>> relabelled to ensure they are unique with respect to tile i. >>> >>> Of course this gets a bit more complicated in 2D or 3D... >>> >>> Juan. >>> >>> >>> >>> >>> On Fri, Mar 13, 2015 at 7:20 PM, Yuta Sato wrote: >>> >>>> Dear SKIMAGE Developers and Users: >>>> >>>> I want to use the following algorithm in a large binary image that does >>>> not fit into my PC memory. So, I am thinking to split my large image into >>>> tiles and apply algorithm one by one. However, the original border >>>> definition change when I do it in parts. I need the result as applied in >>>> original full image. How can I do it efficiently? >>>> >>>> skimage.segmentation.clear_border(image, buffer_size=0, bgval=0) >>>> >>>> Thanks for your ideas. >>>> >>>> -- >>>> You received this message because you are subscribed to the Google >>>> Groups "scikit-image" group. >>>> To unsubscribe from this group and stop receiving emails from it, send >>>> an email to scikit-image+unsubscribe at googlegroups.com. >>>> For more options, visit https://groups.google.com/d/optout. >>>> >>> >>> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. >> > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From yutaxsato at gmail.com Fri Apr 3 09:56:07 2015
From: yutaxsato at gmail.com (Yuta Sato)
Date: Fri, 3 Apr 2015 22:56:07 +0900
Subject: Range of beta values in segmentation algorithm?
In-Reply-To: 
References: 
Message-ID: 

Dear skimage developers:
I would really appreciate hearing an answer to my question, if it is worth
answering.

Thanks

On Thu, Mar 12, 2015 at 4:04 PM, Yuta Sato wrote:
> In the following skimage.segmentation.random_walker algorithm:
> What is the range of 'beta' values that can be supplied?
> I am working with a single band 8bit unsigned image.
>
> Is it 0 to 255?
>
>
> skimage.segmentation.random_walker(data, labels, beta=130, mode='bf',
> tol=0.001, copy=True, multichannel=False, return_full_prob=False,
> spacing=None)
>
> beta : float [Penalization coefficient for the random walker motion (the
> greater beta, the more difficult the diffusion)]
>
> Thanks for your support.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at gmail.com Fri Apr 3 17:08:45 2015
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Fri, 3 Apr 2015 23:08:45 +0200
Subject: GSoC: Rewriting scipy.ndimage in Cython
In-Reply-To: 
References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com>
 <3e5a8631-5918-4602-a341-1c835bbf5299@googlegroups.com>
Message-ID: 

On Fri, Mar 27, 2015 at 3:04 PM, Jaime Fernández del Río <
jaime.frio at gmail.com> wrote:

> On Fri, Mar 27, 2015 at 2:27 AM, Ralf Gommers
> wrote:
>>
>>
>> On Thu, Mar 26, 2015 at 8:40 PM, AMAN singh
>> wrote:
>>
>>> Thank you everyone for your insightful comments.
>>> I have tried to incorporate your suggestion in the proposal. Kindly
>>> have a look at the new proposal here
>>>
>>> and suggest the improvements.
>>
>> Hi Aman, this looks quite good to me. For the timeline I think it will
>> take longer to get the iterators right and shorter to port the last
>> functions at the end - once you get the hang of it you'll be able to do the
>> last ones quickly I expect.
>
> That sounds about right. I think that breaking down the schedule to what
> function will be ported what week is little more than wishful thinking, and
> that keeping things at the file level would make more sense. But I think
> you are getting your proposal there.
>
> One idea that just crossed my mind: checking your implementation of the
> iterators and other stuff in support.c for correctness and performance is
> going to be an important part of the project. Perhaps it is a good idea to
> identify, either now or very early on the project, a few current ndimage
> top level functions that use each of those objects, if possible without
> interaction with the others, and build a sequence that could look something
> like (I am making this up in a hurry, so don't take the actual function
> names proposed too seriously, although they may actually make sense):
>
> Port NI_PointIterator -> Port NI_CenterOfMass, benchmark and test
> Port NI_LineBuffer -> Port NI_UniformFilter1D, benchmark and test
> ...
>
> This would very likely extend the time you will need to implement all the
> items in support.c. But by the time you were finished with that we would
> both have high confidence that things were going well, plus a "Rosetta
> Stone" that should make it a breeze to finish the job, both for you and
> anyone else. 
We would also have an intermediate milestone (everything in
> support ported plus a working example of each being used, with correctness
> and performance verified), that would be a worthy deliverable on its own:
> if we are terribly miscalculating task duration, and everything slips and
> is delayed, getting there could still be considered a success, since it
> would make finishing the job for others much, much simpler.

That sounds like an excellent idea to me.

> One little concern of mine, and the questions don't really go to Aman, but
> to the scipy devs: the Cython docs on fused types have a big fat warning at
> the top on support still being experimental. Also, this is going to bump
> the version requirements for Cython to a very recent one. Are we OK with
> this?

We're using fused types in more places in Scipy now. They've been around
for a while, and apart from the fact that you have to be careful with
multiple usages of a fused type in a single function (which explodes the
generated code and binary size), I don't remember many problems with it.
Maybe worth asking the Cython devs why they haven't removed that warning
yet?

> Similarly, you suggest using Cython's prange to parallelize computations.
> I haven't seen OpenMP used anywhere in NumPy or SciPy, and have the feeling
> that parallel implementations are left out on purpose. Am I right, or would
> parallelizing where possible be OK?

Yep, that has been on purpose so far. That could change of course, but it
would need significant discussion and an overall strategy first. OpenMP
proposals for individual functions have always been rejected before. So it
would be better to remove it from this GSoC proposal.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed... 
URL: 

From jni.soma at gmail.com Sat Apr 4 09:02:27 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Sat, 04 Apr 2015 06:02:27 -0700 (PDT)
Subject: Apply segmentation to a large binary image
In-Reply-To: 
References: 
Message-ID: <1428152546685.075ca424@Nodemailer>

Hey Yuta,

I'm not sure how much h5py mirrors the numpy array interface, but I
suspect not sufficiently to allow C/Cython functions to work. I think
you'll need to find a clever way to partition your image to get the right
result, because although you can use numpy.memmap [1] to use an on-disk
array, I think that might be far too slow (since binary fill holes uses
quite a few iterations)...

Juan.

[1] http://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html

On Sat, Apr 4, 2015 at 12:54 AM, Yuta Sato wrote:
> Dear Juan Nunez-Iglesias and Josh Warner:
>
> Thanks for your kind responses.
> Let's take a simpler case, e.g., binary_fill_holes.
> I want to apply it to the WHOLE image at once, because if I apply it to
> parts of the image and later combine them, the result differs.
>
> Does putting the image into HDF5 and applying binary_fill_holes solve
> my problem?
> Can I really apply binary_fill_holes to an HDF5 file?
>
> Thanks for your help.
>
> Yuta
>
> On Sun, Mar 15, 2015 at 10:18 AM, Juan Nunez-Iglesias
> wrote:
>> Josh, you might be thinking of morphology.remove_small_objects, but that
>> is O(image.size), rather than O(sum(image == label)), which is what you are
>> after. In fact we would need a flood-fill algorithm, which we don't have...
>> That would be a fantastic addition.
>>
>> On Sun, Mar 15, 2015 at 10:24 AM, Josh Warner
>> wrote:
>>
>>> Would it be possible to generalize / refactor `clear_border` to a
>>> function which removes all points connected to a specific pixel/voxel? That
>>> would greatly simplify the work needed here.
>>>
>>> I thought we had some sort of `remove_object` functionality like this,
>>> but I don't see it.
>>>
>>> Josh
>>>
>>> On Friday, March 13, 2015 at 9:04:12 PM UTC-5, Juan Nunez-Iglesias wrote:
>>>>
>>>> Hey Yuta,
>>>>
>>>> You'll need to do some stitching out-of-core. That's a really tricky
>>>> problem and I don't have any ready-made solutions for you. The solution
>>>> will depend on the nature of your segments. The only thing I would
>>>> recommend is that you use a format such as HDF5 (you can use the excellent
>>>> h5py library) that allows random access into the underlying disk data.
>>>>
>>>> Other than that, as I said, to my knowledge you'll have to develop your
>>>> own stitching: segment *overlapping* tiles independently in memory, and
>>>> when it comes time to write to disk, load the tile and overlapping tiles
>>>> that have already been segmented, and resolve label mapping then...
>>>>
>>>> Generally, think of it this way: tile i has already been segmented and
>>>> written out. We now want to write out tile j, which overlaps tile i. Labels
>>>> from tile i that intersect labels from tile j in the overlap region should
>>>> be matched. labels in tile j that *don't* intersect tile i should be
>>>> relabelled to ensure they are unique with respect to tile i.
>>>>
>>>> Of course this gets a bit more complicated in 2D or 3D...
>>>>
>>>> Juan.
>>>>
>>>> On Fri, Mar 13, 2015 at 7:20 PM, Yuta Sato wrote:
>>>>
>>>>> Dear SKIMAGE Developers and Users:
>>>>>
>>>>> I want to use the following algorithm in a large binary image that does
>>>>> not fit into my PC memory. 
So, I am thinking to split my large image into
>>>>> tiles and apply algorithm one by one. However, the original border
>>>>> definition change when I do it in parts. I need the result as applied in
>>>>> original full image. How can I do it efficiently?
>>>>>
>>>>> skimage.segmentation.clear_border(image, buffer_size=0, bgval=0)
>>>>>
>>>>> Thanks for your ideas.
>>>>>
>>>>> --
>>>>> You received this message because you are subscribed to the Google
>>>>> Groups "scikit-image" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>>> an email to scikit-image+unsubscribe at googlegroups.com.
>>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>> --
>>> You received this message because you are subscribed to the Google Groups
>>> "scikit-image" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an
>>> email to scikit-image+unsubscribe at googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "scikit-image" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to scikit-image+unsubscribe at googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ug201310004 at iitj.ac.in Sat Apr 4 14:06:28 2015
From: ug201310004 at iitj.ac.in (AMAN singh)
Date: Sat, 4 Apr 2015 11:06:28 -0700 (PDT)
Subject: GSoC: Rewriting scipy.ndimage in Cython
In-Reply-To: 
References: <761337ee-68ba-4114-8ec4-a9cf8182376b@googlegroups.com>
 <3e5a8631-5918-4602-a341-1c835bbf5299@googlegroups.com>
Message-ID: <7246250b-f2b8-488e-8102-0117587beab9@googlegroups.com>

Hi everyone

@Jaime Thanks for the suggestion. This is really a great idea. I will
follow this excellent strategy while rewriting the module.
@Stefanv I was not able to add Jaime's suggestions since my proposal was
locked. Can you please allow me to revise my proposal? I want to include
Jaime's suggestion in it.

Regards,
Aman Singh

> That sounds about right. I think that breaking down the schedule to what
> function will be ported what week is little more than wishful thinking, and
> that keeping things at the file level would make more sense. But I think
> you are getting your proposal there.
>
> One idea that just crossed my mind: checking your implementation of the
> iterators and other stuff in support.c for correctness and performance is
> going to be an important part of the project. Perhaps it is a good idea to
> identify, either now or very early on the project, a few current ndimage
> top level functions that use each of those objects, if possible without
> interaction with the others, and build a sequence that could look something
> like (I am making this up in a hurry, so don't take the actual function
> names proposed too seriously, although they may actually make sense):
>
> Port NI_PointIterator -> Port NI_CenterOfMass, benchmark and test
> Port NI_LineBuffer -> Port NI_UniformFilter1D, benchmark and test
> ... 
>
> This would very likely extend the time you will need to implement all the
> items in support.c. But by the time you were finished with that we would
> both have high confidence that things were going well, plus a "Rosetta
> Stone" that should make it a breeze to finish the job, both for you and
> anyone else. We would also have an intermediate milestone (everything in
> support ported plus a working example of each being used, with correctness
> and performance verified), that would be a worthy deliverable on its own:
> if we are terribly miscalculating task duration, and everything slips and
> is delayed, getting there could still be considered a success, since it
> would make finishing the job for others much, much simpler.
>
> One little concern of mine, and the questions don't really go to Aman, but
> to the scipy devs: the Cython docs on fused types have a big fat warning at
> the top on support still being experimental. Also, this is going to bump
> the version requirements for Cython to a very recent one. Are we OK with
> this?
>
> Similarly, you suggest using Cython's prange to parallelize computations.
> I haven't seen OpenMP used anywhere in NumPy or SciPy, and have the feeling
> that parallel implementations are left out on purpose. Am I right, or would
> parallelizing where possible be OK?
>
> Jaime
>
> --
> (\__/)
> ( O.o)
> ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes
> de dominación mundial.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emmanuelle.gouillart at normalesup.org Sat Apr 4 09:22:36 2015
From: emmanuelle.gouillart at normalesup.org (Emmanuelle Gouillart)
Date: Sat, 4 Apr 2015 15:22:36 +0200
Subject: Range of beta values in segmentation algorithm?
In-Reply-To: <1428152318555.6a208de2@Nodemailer>
References: <1428152318555.6a208de2@Nodemailer>
Message-ID: 

Hi Yuta,

beta has to take a positive value. In the algorithm, the weight on a graph
edge is given by exp(- beta * diff) where diff is the absolute value of
pixel differences on both sides of the edge. Furthermore, the value of
beta you give is normalized by ten times the standard deviation of the
image, so that you don't have to worry about the image range (I know this
sounds a bit weird, but that's how it's coded. I might even be responsible
for this hack :-).

Therefore, if you put a large value of beta there will be a very small
weight on edges for which pixels have different values, and diffusion will
be difficult. On the other hand, for small values diffusion will be easy
and regions will be "flooded" from markers, no matter the gradients. A
larger value of beta means that boundaries are more likely to lie on pixels
with a strong gradient. I would advise that you start with a small value of
beta (1 for example) and look at the result. If you feel like boundaries are
"leaky" it means that diffusion is too fast and you should increase beta.
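For instance, you could scan beta on a logarithmic scale like this (just a
sketch on a synthetic image; the noise level and marker positions are made
up, and with your data the 8-bit band would play the role of "data"):

import numpy as np
from skimage.segmentation import random_walker

rng = np.random.RandomState(0)
data = np.zeros((40, 40))
data[:, 20:] = 1.0
data += 0.3 * rng.randn(40, 40)      # noisy two-region image

markers = np.zeros(data.shape, dtype=int)
markers[5, 5] = 1                    # seed in the left region
markers[5, 35] = 2                   # seed in the right region

for beta in [1, 10, 100, 1000]:      # logarithmic scan
    labels = random_walker(data, markers, beta=beta)
    print(beta, (labels == 1).mean())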
Hope this helps
Emma

2015-04-04 14:58 GMT+02:00 Juan Nunez-Iglesias :

> Hi Yuta,
>
> Sorry, this slipped through the cracks. I haven't used random walker
> segmentation so I can't give you advice here... You might want to read the
> original publication [1], or, more practically, try out different betas on
> a logarithmic scale.
>
> Juan.
>
> [1] http://webdocs.cs.ualberta.ca/~nray1/CMPUT615/MRF/grady2006random.pdf
>
> On Sat, Apr 4, 2015 at 12:56 AM, Yuta Sato wrote:
>
>> Dear skimage developers:
>> I would really appreciate hearing an answer to my question, if it is
>> worth answering.
>>
>> Thanks
>>
>> On Thu, Mar 12, 2015 at 4:04 PM, Yuta Sato wrote:
>>
>>> In the following skimage.segmentation.random_walker algorithm:
>>> What is the range of 'beta' values that can be supplied?
>>> I am working with a single band 8bit unsigned image.
>>>
>>> Is it 0 to 255?
>>>
>>> skimage.segmentation.random_walker(data, labels, beta=130, mode='bf',
>>> tol=0.001, copy=True, multichannel=False, return_full_prob=False,
>>> spacing=None)
>>>
>>> beta : float [Penalization coefficient for the random walker motion (the
>>> greater beta, the more difficult the diffusion)]
>>>
>>> Thanks for your support.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "scikit-image" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to scikit-image+unsubscribe at googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "scikit-image" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to scikit-image+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yutaxsato at gmail.com Sat Apr 4 09:46:00 2015
From: yutaxsato at gmail.com (Yuta Sato)
Date: Sat, 4 Apr 2015 22:46:00 +0900
Subject: Apply segmentation to a large binary image
In-Reply-To: <1428152546685.075ca424@Nodemailer>
References: <1428152546685.075ca424@Nodemailer>
Message-ID: 

Thanks Juan Nunez-Iglesias for the information!

On Sat, Apr 4, 2015 at 10:02 PM, Juan Nunez-Iglesias wrote:
> Hey Yuta,
>
> I'm not sure how much h5py mirrors the numpy array interface, but I
> suspect not sufficiently to allow C/Cython functions to work. I think
> you'll need to find a clever way to partition your image to get the right
> result, because although you can use numpy.memmap [1] to use an on-disk
> array, I think that might be far too slow (since binary fill holes uses
> quite a few iterations)...
>
> Juan.
>
> [1] http://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html
>
> On Sat, Apr 4, 2015 at 12:54 AM, Yuta Sato wrote:
>
>> Dear Juan Nunez-Iglesias and Josh Warner:
>>
>> Thanks for your kind responses.
>> Let's take a simpler case, e.g., binary_fill_holes.
>> I want to apply it to the WHOLE image at once, because if I apply it to
>> parts of the image and later combine them, the result differs.
>>
>> Does putting the image into HDF5 and applying binary_fill_holes
>> solve my problem?
>> Can I really apply binary_fill_holes to an HDF5 file?
>>
>> Thanks for your help.
>>
>> Yuta
>>
>> On Sun, Mar 15, 2015 at 10:18 AM, Juan Nunez-Iglesias
>> wrote:
>>
>>> Josh, you might be thinking of morphology.remove_small_objects, but that
>>> is O(image.size), rather than O(sum(image == label)), which is what you are
>>> after. In fact we would need a flood-fill algorithm, which we don't have...
>>> That would be a fantastic addition.
>>>
>>> On Sun, Mar 15, 2015 at 10:24 AM, Josh Warner <
>>> silvertrumpet999 at gmail.com> wrote:
>>>
>>>> Would it be possible to generalize / refactor `clear_border` to a
>>>> function which removes all points connected to a specific pixel/voxel? That
>>>> would greatly simplify the work needed here.
>>>>
>>>> I thought we had some sort of `remove_object` functionality like this,
>>>> but I don't see it. 
>>>> >>>> Josh >>>> >>>> On Friday, March 13, 2015 at 9:04:12 PM UTC-5, Juan Nunez-Iglesias >>>> wrote: >>>>> >>>>> Hey Yuta, >>>>> >>>>> You'll need to do some stitching out-of-core. That's a really tricky >>>>> problem and I don't have any ready-made solutions for you. The solution >>>>> will depend on the nature of your segments. The only thing I would >>>>> recommend is that you use a format such as HDF5 (you can use the excellent >>>>> h5py library) that allows random access into the underlying disk data. >>>>> >>>>> Other than that, as I said, to my knowledge you'll have to develop >>>>> your own stitching: segment *overlapping* tiles independently in memory, >>>>> and when it comes time to write to disk, load the tile and overlapping >>>>> tiles that have already been segmented, and resolve label mapping then... >>>>> >>>>> Generally, think of it this way: tile i has already been segmented and >>>>> written out. We now want to write out tile j, which overlaps tile i. Labels >>>>> from tile i that intersect labels from tile j in the overlap region should >>>>> be matched. labels in tile j that *don't* intersect tile i should be >>>>> relabelled to ensure they are unique with respect to tile i. >>>>> >>>>> Of course this gets a bit more complicated in 2D or 3D... >>>>> >>>>> Juan. >>>>> >>>>> >>>>> >>>>> >>>>> On Fri, Mar 13, 2015 at 7:20 PM, Yuta Sato >>>>> wrote: >>>>> >>>>>> Dear SKIMAGE Developers and Users: >>>>>> >>>>>> I want to use the following algorithm in a large binary image that >>>>>> does not fit into my PC memory. So, I am thinking to split my large image >>>>>> into tiles and apply algorithm one by one. However, the original border >>>>>> definition change when I do it in parts. I need the result as applied in >>>>>> original full image. How can I do it efficiently? >>>>>> >>>>>> skimage.segmentation.clear_border(image, buffer_size=0, bgval=0) >>>>>> >>>>>> Thanks for your ideas. >>>>>> >>>>>> -- >>>>>> You received this message because you are subscribed to the Google >>>>>> Groups "scikit-image" group. >>>>>> To unsubscribe from this group and stop receiving emails from it, >>>>>> send an email to scikit-image+unsubscribe at googlegroups.com. >>>>>> For more options, visit https://groups.google.com/d/optout. >>>>>> >>>>> >>>>> -- >>>> You received this message because you are subscribed to the Google >>>> Groups "scikit-image" group. >>>> To unsubscribe from this group and stop receiving emails from it, send >>>> an email to scikit-image+unsubscribe at googlegroups.com. >>>> For more options, visit https://groups.google.com/d/optout. >>>> >>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "scikit-image" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to scikit-image+unsubscribe at googlegroups.com. >>> For more options, visit https://groups.google.com/d/optout. >>> >> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. >> > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. 
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yutaxsato at gmail.com Sat Apr 4 09:47:25 2015
From: yutaxsato at gmail.com (Yuta Sato)
Date: Sat, 4 Apr 2015 22:47:25 +0900
Subject: Range of beta values in segmentation algorithm?
In-Reply-To: 
References: <1428152318555.6a208de2@Nodemailer>
Message-ID: 

Thanks, Emmanuelle Gouillart, for explaining the whole hack!

On Sat, Apr 4, 2015 at 10:22 PM, Emmanuelle Gouillart <
emmanuelle.gouillart at normalesup.org> wrote:

> Hi Yuta,
>
> beta has to take a positive value. In the algorithm, the weight on a graph
> edge is given by exp(- beta * diff) where diff is the absolute value of
> pixel differences on both sides of the edge. Furthermore, the value of
> beta you give is normalized by ten times the standard deviation of the
> image, so that you don't have to worry about the image range (I know this
> sounds a bit weird, but that's how it's coded. I might even be responsible
> for this hack :-).
>
> Therefore, if you put a large value of beta there will be a very small
> weight on edges for which pixels have different values, and diffusion will
> be difficult. On the other hand, for small values diffusion will be easy
> and regions will be "flooded" from markers, no matter the gradients. A
> larger value of beta means that boundaries are more likely to lie on pixels
> with a strong gradient. I would advise that you start with a small value of
> beta (1 for example) and look at the result. If you feel like boundaries are
> "leaky" it means that diffusion is too fast and you should increase beta.
>
> Hope this helps
> Emma
>
> 2015-04-04 14:58 GMT+02:00 Juan Nunez-Iglesias :
>
>> Hi Yuta,
>>
>> Sorry, this slipped through the cracks. I haven't used random walker
>> segmentation so I can't give you advice here... You might want to read the
>> original publication [1], or, more practically, try out different betas on
>> a logarithmic scale.
>>
>> Juan.
>>
>> [1] http://webdocs.cs.ualberta.ca/~nray1/CMPUT615/MRF/grady2006random.pdf
>>
>> On Sat, Apr 4, 2015 at 12:56 AM, Yuta Sato wrote:
>>
>>> Dear skimage developers:
>>> I would really appreciate hearing an answer to my question, if it is
>>> worth answering.
>>>
>>> Thanks
>>>
>>> On Thu, Mar 12, 2015 at 4:04 PM, Yuta Sato wrote:
>>>
>>>> In the following skimage.segmentation.random_walker algorithm:
>>>> What is the range of 'beta' values that can be supplied?
>>>> I am working with a single band 8bit unsigned image.
>>>>
>>>> Is it 0 to 255?
>>>>
>>>> skimage.segmentation.random_walker(data, labels, beta=130, mode='bf',
>>>> tol=0.001, copy=True, multichannel=False, return_full_prob=False,
>>>> spacing=None)
>>>>
>>>> beta : float [Penalization coefficient for the random walker motion
>>>> (the greater beta, the more difficult the diffusion)]
>>>>
>>>> Thanks for your support.
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "scikit-image" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to scikit-image+unsubscribe at googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "scikit-image" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to scikit-image+unsubscribe at googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout. 
>>
> --
> You received this message because you are subscribed to the Google Groups
> "scikit-image" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to scikit-image+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ciaran.robb at googlemail.com Tue Apr 7 16:35:09 2015
From: ciaran.robb at googlemail.com (ciaran.robb at googlemail.com)
Date: Tue, 7 Apr 2015 13:35:09 -0700 (PDT)
Subject: regionprops - displaying region properties
In-Reply-To: 
References: <46469c78-2cfb-4c8c-913a-a639745c4ab9@googlegroups.com>
 <8B196A79-5ED9-48DB-ADA6-1C57EAFA3944@demuc.de>
 <5b40325e-aff4-4b49-9533-7722efba9905@googlegroups.com>
Message-ID: <9598de83-e3b6-4384-a9cf-bdddbc5d70a0@googlegroups.com>

Hi again,

Here is a demo of the routine using a skimage.data example. I guess it'd be
a case of incorporating the loop, or something like it, somewhere....

from skimage import graph, data,segmentation
from matplotlib import pyplot as plt
import numpy as np
#creating a segmented image
im = data.immunohistochemistry()
seg = segmentation.felzenszwalb(im, scale=200, sigma=0.7, min_size=50)
BW = segmentation.find_boundaries(seg)
im[BW==1]=0
plt.imshow(im)
plt.show()

from skimage.measure import regionprops
Props = regionprops(seg,['Area'])

#here is the code for creating the regionprops image
labels = np.unique(seg) #a vector of label vals
PropIM = np.zeros_like(seg) # allocated blank array
for label in labels:
    propval=Props[label-1]['Area']
    PropIM[seg==label]=propval

#for visualising with segment boundaries
PropIM[BW==1]=0
plt.imshow(PropIM, vmin=PropIM.min(), vmax=PropIM.max())
plt.colorbar()
plt.show()

On Monday, March 2, 2015 at 11:38:21 PM UTC, Johannes Schönberger wrote:
>
> Maybe, there is a way to elegantly integrate this into the RegionProperty
> class?
>
> Could you share your current implementation, so we can decide on a good
> strategy?
>
> > On Mar 2, 2015, at 6:02 PM, ciara... at googlemail.com
> wrote:
> >
> > Hi Johannes,
> >
> > Yeah of course. Would it be best placed in module color?
> >
> > Ciaran
> >
> > On Monday, March 2, 2015 at 5:26:12 PM UTC, Johannes Schönberger wrote:
> > That sounds great. Would you be willing to work on integrating this into
> skimage?
> >
> > Thanks.
> >
> > > On Feb 26, 2015, at 11:51 AM, ciara... at googlemail.com wrote:
> > >
> > > Hi
> > > Adding to my own post but hey....
> > >
> > > I have since written my own code which allows visualising of region
> properties (eg area, eccentricity etc) via colormap, if anyone is
> interested let me know!
> > >
> > > Ciaran
> > >
> > > On Sunday, February 1, 2015 at 11:45:44 PM UTC,
> ciara... at googlemail.com wrote:
> > > Hello everyone,
> > >
> > > I have recently been attempting to modify some existing skimage code
> to display regionprops for a labeled image (e.g. area or eccentricity)
> > >
> > > I initially tried to translate a vectorized bit of old matlab code I
> had, but gave up on that and decided to alter the existing label2rgb
> skimage function
> > >
> > > I am attempting to change each label value to its area property value
> similar to the label2rgb "avg" function. 
> > > so I have:
> > > labels = a labeled image
> > >
> > > out = np.zeros_like(labels) #a blank array
> > > labels2 = np.unique(labels) #a vector of label vals
> > > out = np.zeros_like(labels)
> > > Props = regionprops(labels, ['Area'])
> > > bg_label=0
> > > bg = (labels2 == bg_label)
> > > if bg.any():
> > >     labels2 = labels2[labels2 != bg_label]
> > >     out[bg] = 0
> > > for label in labels2:
> > >     mask = (labels == label).nonzero()
> > >     color = Props[label].area
> > >     out[mask] = color
> > > but the "out" props image does not correspond to the correct area
> values?
> > > Can anyone help me with this?
> > > It also throws the following error:
> > > "list index out of range"
> > > It would certainly be useful to have a way to view the spatial
> distribution of label properties in this way - perhaps in a future skimage
> version?
> > >
> > >
> > > --
> > > You received this message because you are subscribed to the Google
> Groups "scikit-image" group.
> > > To unsubscribe from this group and stop receiving emails from it, send
> an email to scikit-image... at googlegroups.com.
> > > For more options, visit https://groups.google.com/d/optout.
> >
> > --
> > You received this message because you are subscribed to the Google
> Groups "scikit-image" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to scikit-image... at googlegroups.com .
> > For more options, visit https://groups.google.com/d/optout.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fars.rg at gmail.com Wed Apr 8 07:03:27 2015
From: fars.rg at gmail.com (Forest Applied Remote Sensing RG (FARS))
Date: Wed, 8 Apr 2015 04:03:27 -0700 (PDT)
Subject: Multiple peaks with peak_local_max
Message-ID: <2cf5e44b-e236-4fad-ae2d-fa0cf60f233f@googlegroups.com>

Hi,

I'm trying to use peak_local_max for tree detection. The only problem is
that I keep getting multiple peaks (image attached).
I have already tried a combination of different filters, but with no
success. Can anyone help me out?

Thank you
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Tree_detection.png
Type: image/png
Size: 312357 bytes
Desc: not available
URL: 

From stefanv at berkeley.edu Wed Apr 8 16:56:07 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Wed, 08 Apr 2015 13:56:07 -0700
Subject: Multiple peaks with peak_local_max
In-Reply-To: <2cf5e44b-e236-4fad-ae2d-fa0cf60f233f@googlegroups.com>
References: <2cf5e44b-e236-4fad-ae2d-fa0cf60f233f@googlegroups.com>
Message-ID: <87r3rul4pk.fsf@berkeley.edu>

On 2015-04-08 04:03:27, Forest Applied Remote Sensing RG (FARS) wrote:
> I'm trying to use peak_local_max for tree detection. The only problem is
> that I keep getting multiple peaks (image attached).
> I have already tried a combination of different filters, but with no
> success. Can anyone help me out?

The peak detector needs some attention. Here are other discussions
about the topic:

https://github.com/scikit-image/scikit-image/issues/1246
https://github.com/scikit-image/scikit-image/pull/1248

And ideas, feedback and especially PRs welcome. 
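In the meantime, a common workaround is to smooth before the peak search
(a rough sketch, not an official recipe; the sigma and min_distance values
are guesses you would tune for your canopy data):

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max

rng = np.random.RandomState(0)
chm = rng.rand(200, 200)                            # stand-in height raster

smoothed = gaussian_filter(chm, sigma=2)            # suppress pixel noise
coords = peak_local_max(smoothed, min_distance=10)  # (row, col) per peak
print(coords.shape)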
Stéfan

From fars.rg at gmail.com Thu Apr 9 09:25:35 2015
From: fars.rg at gmail.com (Forest Applied Remote Sensing RG (FARS))
Date: Thu, 9 Apr 2015 06:25:35 -0700 (PDT)
Subject: Multiple peaks with peak_local_max
In-Reply-To: <87r3rul4pk.fsf at berkeley.edu>
References: <2cf5e44b-e236-4fad-ae2d-fa0cf60f233f at googlegroups.com>
 <87r3rul4pk.fsf at berkeley.edu>
Message-ID: <3e7ff68f-fa93-4141-af04-c9d7f8a13ea7 at googlegroups.com>

Stefan,

Thanks for your help, but I ended up solving the problem. I combined the
gaussian filter plus the max filter. The result now is much better.

Now I'm struggling to export the local maxima points. Is there a function
to export the points from the local maxima?

Cheers,

JP
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Capturar.PNG
Type: image/png
Size: 192155 bytes
Desc: not available
URL: 

From silvertrumpet999 at gmail.com Thu Apr 9 11:22:51 2015
From: silvertrumpet999 at gmail.com (Josh Warner)
Date: Thu, 9 Apr 2015 08:22:51 -0700 (PDT)
Subject: Multiple peaks with peak_local_max
In-Reply-To: <3e7ff68f-fa93-4141-af04-c9d7f8a13ea7 at googlegroups.com>
References: <2cf5e44b-e236-4fad-ae2d-fa0cf60f233f at googlegroups.com>
 <87r3rul4pk.fsf at berkeley.edu>
 <3e7ff68f-fa93-4141-af04-c9d7f8a13ea7 at googlegroups.com>
Message-ID: 

@FARS - My recommendation was going to be applying some blur first, I'm
glad that worked for you.

How have you labeled the red points in the image above? If they are in a
separate - possibly boolean - array, you can extract the coordinate indices
directly via `np.where` or `np.nonzero`. If not, we'll need a little more
information about those red dots to advise.
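For example (a sketch; `peaks_bool` here is a made-up stand-in for your
red-dot mask):

import numpy as np

peaks_bool = np.zeros((5, 5), dtype=bool)   # stand-in boolean mask
peaks_bool[1, 2] = peaks_bool[3, 4] = True

rows, cols = np.nonzero(peaks_bool)         # integer pixel indices
coords = np.column_stack([rows, cols])      # (N, 2) array, one row per dot
print(coords)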
Josh

On Thursday, April 9, 2015 at 10:12:29 AM UTC-5, Forest Applied Remote
Sensing RG (FARS) wrote:
> Stefan,
> Thanks for your help, but I ended up solving the problem. I combined the
> gaussian filter plus the max filter. The result now is much better.
> Now I'm struggling to export the local maxima points. Is there a function
> to export the points from the local maxima?
> Cheers,
> JP
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fars.rg at gmail.com Thu Apr 9 11:39:10 2015
From: fars.rg at gmail.com (Forest Applied Remote Sensing RG (FARS))
Date: Thu, 9 Apr 2015 08:39:10 -0700 (PDT)
Subject: Multiple peaks with peak_local_max
In-Reply-To: 
References: <2cf5e44b-e236-4fad-ae2d-fa0cf60f233f at googlegroups.com>
 <87r3rul4pk.fsf at berkeley.edu>
 <3e7ff68f-fa93-4141-af04-c9d7f8a13ea7 at googlegroups.com>
Message-ID: <5a4e29d9-3dd7-4996-bee5-45387b839e7f at googlegroups.com>

Thank you for your answer Josh,

these red dots are actually an array, where each cell has a coordinate x
and y.
To be honest I wanted to export these red dots with the following structure:

590600,00 6890408,00 1019,04

In the image I'm using, each pixel has a geographic coordinate. But the
moment I use the image in the script, the coordinates are lost and only
basic pixel coordinates remain (i.e. 40, 412, 210).
I'm quite new at scikit and python. So I'm trying to learn things with
practice.

Thanks for your attention

Em quinta-feira, 9 de abril de 2015 17:22:51 UTC+2, Josh Warner escreveu:
>
> @FARS - My recommendation was going to be applying some blur first, I'm
> glad that worked for you.
>
> How have you labeled the red points in the image above? If they are in a
> separate - possibly boolean - array, you can extract the coordinate indices
> directly via `np.where` or `np.nonzero`. If not, we'll need a little more
> information about those red dots to advise.
>
> Josh
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From silvertrumpet999 at gmail.com Thu Apr 9 14:51:11 2015
From: silvertrumpet999 at gmail.com (Josh Warner)
Date: Thu, 9 Apr 2015 11:51:11 -0700 (PDT)
Subject: Multiple peaks with peak_local_max
In-Reply-To: <5a4e29d9-3dd7-4996-bee5-45387b839e7f at googlegroups.com>
References: <2cf5e44b-e236-4fad-ae2d-fa0cf60f233f at googlegroups.com>
 <87r3rul4pk.fsf at berkeley.edu>
 <3e7ff68f-fa93-4141-af04-c9d7f8a13ea7 at googlegroups.com>
 <5a4e29d9-3dd7-4996-bee5-45387b839e7f at googlegroups.com>
Message-ID: <0a26b20d-0d86-46e5-8544-602cdfea2cb8 at googlegroups.com>

NumPy exclusively uses zero-indexed integers for indexing. What format
does your raw data come from which has the coordinates?

However, assuming this is a regularly sampled array you should be able to
map the raw integer coordinate indices to true coordinates. This should be
a fairly simple operation, but complicated somewhat if rotation is included.

Less efficient in terms of memory, you could separate out known x/y
coordinates as two separate NumPy arrays. Then directly index those with
the raw coordinates to return your known good, calibrated values.
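For a north-up raster with no rotation, that mapping is just a shift and a
scale (a sketch; every georeferencing number below is invented, the real
ones would come from your GIS export):

import numpy as np

z = 1000.0 + 50.0 * np.random.rand(100, 100)  # stand-in elevation raster
coords = np.array([[40, 12], [80, 55]])       # stand-in peak (row, col) pairs

x0, y0 = 590000.0, 6891000.0                  # map coords of pixel (0, 0)
dx, dy = 1.0, -1.0                            # pixel size; dy negative, north-up

rows, cols = coords[:, 0], coords[:, 1]
easting = x0 + (cols + 0.5) * dx              # +0.5 targets pixel centers
northing = y0 + (rows + 0.5) * dy
height = z[rows, cols]

np.savetxt('peaks.txt', np.column_stack([easting, northing, height]),
           fmt='%.2f', delimiter='\t')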
From fars.rg at gmail.com Fri Apr 10 04:25:11 2015
From: fars.rg at gmail.com (Forest Applied Remote Sensing RG (FARS))
Date: Fri, 10 Apr 2015 01:25:11 -0700 (PDT)
Subject: Multiple peaks with peak_local_max
In-Reply-To: <0a26b20d-0d86-46e5-8544-602cdfea2cb8@googlegroups.com>
References: <2cf5e44b-e236-4fad-ae2d-fa0cf60f233f@googlegroups.com> <87r3rul4pk.fsf@berkeley.edu> <3e7ff68f-fa93-4141-af04-c9d7f8a13ea7@googlegroups.com> <5a4e29d9-3dd7-4996-bee5-45387b839e7f@googlegroups.com> <0a26b20d-0d86-46e5-8544-602cdfea2cb8@googlegroups.com>
Message-ID: <8a03a112-bb52-498c-852f-2da3ed987694@googlegroups.com>

Josh,

My data is originally a bmp image exported from ArcGIS. The image is
georeferenced, so every pixel has a 3D coordinate: East and West (planar)
coordinates, plus a third coordinate, height (x, y, z).

Basically I want to import the image, run the peak_local_max algorithm,
get the local maxima, and export the points with the original 3D
coordinates to a txt file. So far I have been able to do everything except
the export part. That is where I have problems.

On Thursday, April 9, 2015 at 20:51:11 UTC+2, Josh Warner wrote:
> NumPy exclusively uses zero-indexed integers for indexing. What format
> does your raw data come from which has the coordinates?
>
> However, assuming this is a regularly sampled array, you should be able
> to map the raw integer coordinate indices to true coordinates. This
> should be a fairly simple operation, though complicated somewhat if
> rotation is involved.
>
> Less efficient in terms of memory, you could separate out the known x/y
> coordinates as two separate NumPy arrays, then directly index those with
> the raw coordinates to return your known good, calibrated values.
>
> Josh
>
> On Thursday, April 9, 2015 at 10:43:47 AM UTC-5, Forest Applied Remote
> Sensing RG (FARS) wrote:
>> Thank you for your answer, Josh.
>>
>> These red dots are actually an array, where each cell has an x and a y
>> coordinate. To be honest, I want to export these red dots with the
>> following structure:
>>
>> 590600,00 6890408,00 1019,04
>>
>> In the image I'm using, each pixel has a geographic coordinate. But the
>> moment I use the image in the script, the coordinates are lost and only
>> basic pixel coordinates remain (i.e. 40, 412, 210).
>>
>> I'm quite new to scikit and Python, so I'm trying to learn things with
>> practice.
>>
>> Thanks for your attention.
>>
>> On Thursday, April 9, 2015 at 17:22:51 UTC+2, Josh Warner wrote:
>>> @FARS - My recommendation was going to be applying some blur first,
>>> I'm glad that worked for you.
>>>
>>> How have you labeled the red points in the image above? If they are in
>>> a separate - possibly boolean - array, you can extract the coordinate
>>> indices directly via `np.where` or `np.nonzero`. If not, we'll need a
>>> little more information about those red dots to advise.
>>>
>>> Josh
>>>
>>> On Thursday, April 9, 2015 at 10:12:29 AM UTC-5, Forest Applied Remote
>>> Sensing RG (FARS) wrote:
>>>> Stefan,
>>>>
>>>> Thanks for your help, but I ended up solving the problem: I combined
>>>> the Gaussian filter with the max filter. The result is now much
>>>> better.
>>>>
>>>> Now I'm struggling to export the local maxima points. Is there a
>>>> function to export the points from the local maxima?
>>>>
>>>> Cheers,
>>>>
>>>> JP
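A minimal sketch of the export step discussed in this thread, assuming a
north-up grid with no rotation; the corner coordinates and pixel sizes are
placeholders that would come from the ArcGIS world file, `smoothed` stands
for the blurred image from earlier in the thread, and `heights` for the
raster of z-values:

    import numpy as np
    from skimage.feature import peak_local_max

    peaks = peak_local_max(smoothed)          # (n_peaks, 2) array of (row, col)
    rows, cols = peaks[:, 0], peaks[:, 1]

    x0, y0 = 590000.0, 6891000.0              # top-left corner (placeholder)
    dx, dy = 1.0, -1.0                        # pixel size in x and y (placeholder)
    easting = x0 + cols * dx
    northing = y0 + rows * dy
    z = heights[rows, cols]                   # raster value at each peak

    np.savetxt('peaks.txt', np.column_stack([easting, northing, z]), fmt='%.2f')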
From ross.m.mckinney at gmail.com Fri Apr 10 16:03:57 2015
From: ross.m.mckinney at gmail.com (Ross McKinney)
Date: Fri, 10 Apr 2015 13:03:57 -0700 (PDT)
Subject: Fast(er) Radon Transform
Message-ID:

Hello All,

Does anyone know of a more efficient way to implement the
skimage.transform.radon() function? Here are some comparisons of
computation times for an image that is 384 x 512 pixels:

Python (scikit_image-0.10.1-py2.7-macosx-10.5-x86_64):
    skimage.transform.radon(image) -- 4.295662 sec

MATLAB (R2014a):
    radon(image) -- 0.204158 sec

I am trying to rotationally align a large series of images (>10,000) by
taking their radon projections and then converting them into the frequency
domain. Unfortunately, this is going to take a really long time using the
current implementation of the radon() function. I've started to take a
look at the code myself, but I'm not an expert on the subject. Also, if
there are any alternative methods of rotational alignment (that are
computationally efficient), I'd love to hear them!

Thanks for the help/input,
-Ross

From stefanv at berkeley.edu Mon Apr 13 03:13:02 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Mon, 13 Apr 2015 00:13:02 -0700
Subject: Fast(er) Radon Transform
In-Reply-To:
References:
Message-ID: <87a8yctsap.fsf@berkeley.edu>

Hi Ross

On 2015-04-10 13:03:57, Ross McKinney wrote:
> Does anyone know of a more efficient way to implement the
> skimage.transform.radon() function?

Beylkin's 1987 paper already describes a faster way of computing it; this
1993 paper improves further upon those ideas:

http://dx.doi.org/10.1109/83.236530

Unfortunately, we do not have an implementation of that paper available.

> I am trying to rotationally align a large series of images (>10,000) by
> taking their radon projections and then converting them into the
> frequency domain. Unfortunately, this is going to take a really long
> time using the current implementation of the radon() function. I've
> started to take a look at the code myself, but I'm not an expert on the
> subject. Also, if there are any alternative methods of rotational
> alignment (that are computationally efficient), I'd love to hear them!

Rotation correction can be done without the help of the radon transform.
E.g., rotation can be estimated directly from the Fourier transform. The
log-polar transform allows for estimates of rotation and scale. With
offset, it becomes a bit more tricky, but can still be done.

Perhaps show us the kinds of images you have in mind?

Regards
Stéfan
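For readers wanting to try the Fourier route sketched above, here is one
rough, library-light way to estimate relative rotation, not an skimage
API: sample the translation-invariant Fourier magnitude on a polar grid
and locate the angular shift by circular correlation. The grid resolution
is an arbitrary choice, and because the magnitude spectrum of a real image
is point-symmetric, the estimate is only defined modulo 180 degrees:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def angular_profile(image, n_theta=360, n_r=64):
        F = np.abs(np.fft.fftshift(np.fft.fft2(image)))   # |F| ignores translation
        cy, cx = np.array(F.shape) / 2.0
        theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        r = np.linspace(2, min(cy, cx) - 1, n_r)
        ys = cy + r[:, None] * np.sin(theta)[None, :]
        xs = cx + r[:, None] * np.cos(theta)[None, :]
        # collapse radius to get a 1-D profile over angle
        return map_coordinates(F, [ys, xs], order=1).sum(axis=0)

    def rotation_degrees(img_a, img_b, n_theta=360):
        a = angular_profile(img_a, n_theta)
        b = angular_profile(img_b, n_theta)
        # circular cross-correlation via FFT; the peak gives the angular shift
        corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
        return (360.0 * np.argmax(corr) / n_theta) % 180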
From schaabou87 at gmail.com Mon Apr 13 13:59:55 2015
From: schaabou87 at gmail.com (souZou)
Date: Mon, 13 Apr 2015 10:59:55 -0700 (PDT)
Subject: reconstruct image after preprocessing
Message-ID:

Hello,

I'm a beginner. I have an image on which I did some preprocessing with
sklearn:

img_scaled = preprocessing.scale(img)

My question: how can I reconstruct my original image just from img_scaled?
Is it possible or not? Is there a function that reverses
preprocessing.scale?

Thanks for replying.

From stefanv at berkeley.edu Mon Apr 13 14:32:56 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Mon, 13 Apr 2015 11:32:56 -0700
Subject: reconstruct image after preprocessing
In-Reply-To:
References:
Message-ID: <87vbgzswtj.fsf@berkeley.edu>

On 2015-04-13 10:59:55, souZou wrote:
> I'm a beginner. I have an image on which I did some preprocessing with
> sklearn:
>
> img_scaled = preprocessing.scale(img)
>
> My question: how can I reconstruct my original image just from
> img_scaled? Is it possible or not?

I think this question will be better answered on the scikit-learn mailing
list.

Regards
Stéfan
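For the record, `preprocessing.scale` is not invertible from its output
alone: it subtracts each column's mean and divides by its standard
deviation, and those statistics are discarded. A sketch of the usual
workaround, keeping the statistics in a `StandardScaler` (scikit-learn,
not scikit-image; `img` is assumed to be the 2-D array that was scaled):

    from sklearn.preprocessing import StandardScaler

    scaler = StandardScaler()
    img_scaled = scaler.fit_transform(img.astype(float))  # same values as preprocessing.scale(img)
    img_back = scaler.inverse_transform(img_scaled)       # recovers the original image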
From matt.newville at gmail.com Mon Apr 20 07:59:06 2015
From: matt.newville at gmail.com (Matthew Newville)
Date: Mon, 20 Apr 2015 04:59:06 -0700 (PDT)
Subject: speed of iradon transform
Message-ID: <4795e56d-0383-43e4-971b-5455cd14e581@googlegroups.com>

Hi,

Some time ago, I raised Issue #929 on github about the interpolation step
in skimage.transforms.radon_transform.iradon being too slow. After some
on-and-off investigation, I found two opportunities to speed this up.

First, numpy's linear interpolation routine interp() was very slow for
repeated interpolations of well-ordered input -- this is now fixed in the
github master repo for numpy, and gives roughly a 3x to 4x speed-up.

Second, the trigonometric calculations can be cached when doing repeated
calls to iradon() for the same geometry (image size, number of angles).
This can give another factor of 1.5 to 2.0x speed-up. This is PR #1474.

Some preliminary timing results: Python 2.7.8 (linux), with an image shape
of (500, 500) and a sinogram shape of (500, 361). I used iradon() options
of dict(filter='shepp-logan', interpolation='linear') and tried both
circle=True and False. The "workspace?" column here indicates whether the
iradon_workspace() function introduced in PR 1474 was used.

with circle=True:

  numpy     skimage     workspace?   Best, Worst of 5 (s)
|---------+-----------+------------+---------------------|
  1.9.2     master      N/A          6.085, 6.132
  master    master      N/A          1.784, 1.813
  master    PR1474      No           1.788, 1.809
  master    PR1474      Yes          1.103, 1.160
|---------+-----------+------------+---------------------|

with circle=False:

  numpy     skimage     workspace?   Best, Worst of 5 (s)
|---------+-----------+------------+---------------------|
  1.9.2     master      N/A          2.736, 2.767
  master    master      N/A          0.820, 0.831
  master    PR1474      No           0.802, 0.814
  master    PR1474      Yes          0.535, 0.540
|---------+-----------+------------+---------------------|

That is, the cumulative improvement is about 5x. Also note that not using
the workspace reverts to the older behavior, and that using the workspace
for 1 run of iradon() gives basically no improvement over not using the
workspace.

Oddly, iradon() is now faster than radon() on this machine/image size
(radon() takes about 3.5 sec). I don't understand why that would be.
Anybody understand why radon() is so slow?

Cheers,
--Matt Newville

From stefanv at berkeley.edu Mon Apr 20 13:48:45 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Mon, 20 Apr 2015 10:48:45 -0700
Subject: speed of iradon transform
In-Reply-To: <4795e56d-0383-43e4-971b-5455cd14e581@googlegroups.com>
References: <4795e56d-0383-43e4-971b-5455cd14e581@googlegroups.com>
Message-ID: <874moavgg2.fsf@berkeley.edu>

Hi Matt

On 2015-04-20 04:59:06, Matthew Newville wrote:
> Some preliminary timing results:

Thank you very much for your detailed investigation!

>   numpy     skimage     workspace?   Best, Worst of 5 (s)
> |---------+-----------+------------+---------------------|
>   1.9.2     master      N/A          6.085, 6.132
>   master    master      N/A          1.784, 1.813
>   master    PR1474      No           1.788, 1.809
>   master    PR1474      Yes          1.103, 1.160
> |---------+-----------+------------+---------------------|

It looks like one gets about 1.5x for using the workspace. I am always
careful about added code complexity for a relatively small gain, but in
this case your refactoring *improves* legibility of the code--so +1 from
me.

> Oddly, iradon() is now faster than radon() on this machine/image size
> (radon() takes about 3.5 sec). I don't understand why that would be.
> Anybody understand why radon() is so slow?

I'm afraid we don't implement an optimized version of the forward radon
transform, such as

  Brady, "A Fast Discrete Approximation Algorithm for the Radon
  Transform", SIAM J. Comput., 27(1), 1998

  Götz, WA and Druckmüller, HJ, "A fast digital Radon transform--An
  efficient means for evaluating the Hough transform", Pattern
  Recognition, 1996

This overview from William Press of Numerical Recipes fame is helpful:

  http://www.pnas.org/content/103/51/19249.full

Regards
Stéfan
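The workspace idea is a generic caching pattern: whenever the projection
geometry repeats, the per-angle trigonometry can be computed once and
reused. A toy illustration only - these names are invented here and are
not the PR #1474 interface:

    import numpy as np

    _geometry_cache = {}  # hypothetical module-level cache

    def projection_trig(theta_deg):
        # the cos/sin tables depend only on the projection angles
        key = tuple(theta_deg)
        if key not in _geometry_cache:
            t = np.deg2rad(np.asarray(theta_deg))
            _geometry_cache[key] = (np.cos(t), np.sin(t))
        return _geometry_cache[key]

    # repeated reconstructions with the same angles now share the trig work
    cos_t, sin_t = projection_trig(range(361))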
From luecks at gmail.com Wed Apr 22 03:19:12 2015
From: luecks at gmail.com (Snowflake)
Date: Wed, 22 Apr 2015 00:19:12 -0700 (PDT)
Subject: Object detection in images (HOG)
Message-ID: <8c9a0278-f754-47da-a3ef-f55a6f889f28@googlegroups.com>

Hi!

I am new to machine learning and I need some help.

I want to detect objects inside cells of microscopy images. I have a lot
of annotated images (approx. 50,000 images with an object and 500,000
without an object).

So far I have tried extracting features using HOG and classifying using
logistic regression and LinearSVC. I have tried several parameters for HOG
and several color spaces (RGB, HSV, LAB), but I don't see a big
difference; the prediction rate is about 70%.

I have several questions. How many images should I use to train the
descriptor? How many images should I use to test the prediction? I have
tried with about 1,000 images for training, which gives me 55% positive,
and 5,000, which gives me about 72% positive. However, it also depends a
lot on the test set; sometimes a test set can reach 80-90% positively
detected images.

Here are two examples containing an object and two images without an
object:

Object01
object02
cell01
cell02

Another problem is that sometimes the images contain several objects:

objects

Should I try to increase the examples in the learning set? How should I
choose the images for the training set - just at random? What else could I
try?

Any help or tips would be very appreciated, thank you very much in
advance!

From jni.soma at gmail.com Wed Apr 22 06:42:24 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Wed, 22 Apr 2015 03:42:24 -0700 (PDT)
Subject: Object detection in images (HOG)
In-Reply-To: <8c9a0278-f754-47da-a3ef-f55a6f889f28@googlegroups.com>
References: <8c9a0278-f754-47da-a3ef-f55a6f889f28@googlegroups.com>
Message-ID: <1429699343770.f8070e9e@Nodemailer>

Hello!

Firstly, please sign up to the mailing list before posting - if you don't,
every post from you has to be manually filtered through.

On to your problem!

So, it looks like there should be plenty of signal to distinguish between
object/no-object. It's key to understand the features you're using. HOG
may not be appropriate here: it measures gradients, not image
intensity/color. In this case, it looks like there will be many more dark
pixels in the object images. What I would do, based on the examples you
showed, is to just take the Lab-transformed image, compute a histogram,
and use the histogram as the feature vector.

You have a lot of labelled images, so use them! I would split your set
into 40k training / 10k test, then do 4-fold cross-validation on the
training set. scikit-learn has nice classes for doing cross-validation
automatically.

As to the choice of classifier, it might be worth asking their list, but
*by far* the easiest to use "out-of-the-box", without fiddling with
parameters, is the Random Forest.

Hope that helped!

Juan.

On Wed, Apr 22, 2015 at 8:21 PM, Snowflake wrote:
> Hi!
> I am new to machine learning and I need some help.
> I want to detect objects inside cells of microscopy images. I have a lot
> of annotated images (approx. 50,000 images with an object and 500,000
> without an object).
> So far I have tried extracting features using HOG and classifying using
> logistic regression and LinearSVC. I have tried several parameters for
> HOG and several color spaces (RGB, HSV, LAB), but I don't see a big
> difference; the prediction rate is about 70%.
> I have several questions. How many images should I use to train the
> descriptor? How many images should I use to test the prediction?
> I have tried with about 1,000 images for training, which gives me 55%
> positive, and 5,000, which gives me about 72% positive. However, it also
> depends a lot on the test set; sometimes a test set can reach 80-90%
> positively detected images.
> Here are two examples containing an object and two images without an
> object:
> Object01
> object02
> cell01
> cell02
> Another problem is that sometimes the images contain several objects:
> objects
> Should I try to increase the examples in the learning set? How should I
> choose the images for the training set - just at random? What else could
> I try?
> Any help or tips would be very appreciated, thank you very much in
> advance!
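A rough sketch of Juan's recipe with scikit-learn as it was at the time
(the `cross_validation` module has since become `model_selection`);
`images` and `labels` are stand-ins for your own data:

    import numpy as np
    from skimage import color
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.cross_validation import cross_val_score

    def lab_histogram(rgb_image, bins=16):
        lab = color.rgb2lab(rgb_image)
        # fixed bin ranges so histograms are comparable across images
        ranges = [(0, 100), (-128, 128), (-128, 128)]   # L, a, b channels
        return np.concatenate([
            np.histogram(lab[..., c], bins=bins, range=ranges[c], density=True)[0]
            for c in range(3)])

    X = np.array([lab_histogram(im) for im in images])  # images: list of RGB arrays
    y = np.array(labels)                                # 1 = object, 0 = no object
    clf = RandomForestClassifier(n_estimators=100)
    print(cross_val_score(clf, X, y, cv=4).mean())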
From stefanv at berkeley.edu Wed Apr 22 15:09:58 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Wed, 22 Apr 2015 12:09:58 -0700
Subject: Website designer volunteer
In-Reply-To:
References:
Message-ID: <87383s7zeh.fsf@berkeley.edu>

Hi Tony

On 2015-03-27 23:01:01, Tony Yu wrote:
> Do you have some ideas for what improvements you're looking for?

Sorry, I seem to have missed your reply! Yes, I am looking for someone to
work on the readability of the docs (stylesheet + font design) and to look
at the structure of the site overall (making sure important information is
easily accessible, etc.).

If you know of someone, or if you are interested yourself, please let me
know!

Thanks
Stéfan

> On Wed, Mar 25, 2015 at 9:09 PM, Stéfan van der Walt wrote:
>
>> Hi folks,
>>
>> I know it's a bit of a long shot, but I'd like to find a volunteer to
>> work on the layout and readability of our website.
>>
>> If you know of anyone interested in doing design work the same way we
>> do software development, please let me know. They will have to be able
>> to work with our current workflow, so good technical chops are a must.
>>
>> Thanks,
>> Stéfan

From luecks at gmail.com Thu Apr 23 01:49:24 2015
From: luecks at gmail.com (Snowflake)
Date: Wed, 22 Apr 2015 22:49:24 -0700 (PDT)
Subject: Object detection in images (HOG)
In-Reply-To: <1429699343770.f8070e9e@Nodemailer>
References: <8c9a0278-f754-47da-a3ef-f55a6f889f28@googlegroups.com> <1429699343770.f8070e9e@Nodemailer>
Message-ID:

Hello Juan!

Thank you for your reply! I am sorry about the technical problem; Google
told me that I was signed up for this group, I did not realize. I hope
this message will be recognized as coming from a member.

I really appreciate your tips and experience. However, I have one concern
about using only intensity/color. I have several images where the cell and
the object are very lightly stained, and others where objects I don't want
to detect are very darkly stained; that's why I used HOG (the object I am
looking for always has a kind of finger structure). I am giving Lab
features a try at the moment and I will see :-)

Thanks a lot for the cross-validation tip and the advice on how many
images to use, this was very helpful.

Cheers,
Stefanie

On Wednesday, April 22, 2015 at 12:42:25 UTC+2, Juan Nunez-Iglesias wrote:
> Hello!
>
> Firstly, please sign up to the mailing list before posting - if you
> don't, every post from you has to be manually filtered through.
>
> On to your problem!
>
> So, it looks like there should be plenty of signal to distinguish
> between object/no-object. It's key to understand the features you're
> using. HOG may not be appropriate here: it measures gradients, not image
> intensity/color. In this case, it looks like there will be many more
> dark pixels in the object images. What I would do, based on the examples
> you showed, is to just take the Lab-transformed image, compute a
> histogram, and use the histogram as the feature vector.
>
> You have a lot of labelled images, so use them! I would split your set
> into 40k training / 10k test, then do 4-fold cross-validation on the
> training set. scikit-learn has nice classes for doing cross-validation
> automatically.
>
> As to the choice of classifier, it might be worth asking their list, but
> *by far* the easiest to use "out-of-the-box", without fiddling with
> parameters, is the Random Forest.
>
> Hope that helped!
>
> Juan.
>
> On Wed, Apr 22, 2015 at 8:21 PM, Snowflake wrote:
>> Hi!
>>
>> I am new to machine learning and I need some help.
>>
>> I want to detect objects inside cells of microscopy images. I have a
>> lot of annotated images (approx. 50,000 images with an object and
>> 500,000 without an object).
>>
>> So far I have tried extracting features using HOG and classifying using
>> logistic regression and LinearSVC. I have tried several parameters for
>> HOG and several color spaces (RGB, HSV, LAB), but I don't see a big
>> difference; the prediction rate is about 70%.
>>
>> I have several questions. How many images should I use to train the
>> descriptor? How many images should I use to test the prediction?
>>
>> I have tried with about 1,000 images for training, which gives me 55%
>> positive, and 5,000, which gives me about 72% positive. However, it
>> also depends a lot on the test set; sometimes a test set can reach
>> 80-90% positively detected images.
>>
>> Here are two examples containing an object and two images without an
>> object:
>>
>> Object01
>> object02
>> cell01
>> cell02
>>
>> Another problem is that sometimes the images contain several objects:
>>
>> objects
>> Should I try to increase the examples in the learning set? How should I
>> choose the images for the training set - just at random? What else
>> could I try?
>>
>> Any help or tips would be very appreciated, thank you very much in
>> advance!

From claiborne.morton at gmail.com Thu Apr 23 18:59:43 2015
From: claiborne.morton at gmail.com (Claiborne Morton)
Date: Thu, 23 Apr 2015 15:59:43 -0700 (PDT)
Subject: How can I resample an Image in Scikit?
Message-ID: <67ebc545-2545-494b-8e2f-fd648b10f92d@googlegroups.com>

Hey guys,

I was wondering if there is a way to resample images in scikit-image. I'm
not even sure that is the correct terminology for what I am trying to do.
Basically my goal is to increase the size of images by a factor of 4, and
in doing so to smooth the edges of shapes within my images by increasing
the number of pixels.

The images I am working with are binary, and I would like them to remain
binary when the image is expanded. I have tried doing this using the
resize function, but the result is an image with grayscale pixels, and the
rough edges are only magnified. Some time in the past I was able to get
these results using Fiji and Preview (Mac), but am not exactly sure how.

I have attached both a zoomed image of the resize function's output as
well as a zoom on the results I have made in the past. Also attached is
the image that I am trying to scale up by a factor of 4
(binary_filled_bp). Sorry for the confusing post. Please let me know if I
can clear anything up!

Thanks,
Clay

[attachments: Zoom on Good.png (16084 bytes), Zoom on Resize.png (26713
bytes), binary_filled_bp.png (112692 bytes), all image/png]

From jni.soma at gmail.com Thu Apr 23 23:25:37 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Thu, 23 Apr 2015 20:25:37 -0700 (PDT)
Subject: How can I resample an Image in Scikit?
In-Reply-To: <67ebc545-2545-494b-8e2f-fd648b10f92d@googlegroups.com>
References: <67ebc545-2545-494b-8e2f-fd648b10f92d@googlegroups.com>
Message-ID: <1429845937326.31386dfa@Nodemailer>

Hey Claiborne,

You can use transform.rescale (instead of resize) to scale by a given
factor. You can use order=0 to ensure a binary image. That will give you
the zoomed-in blocks without the grayscale. To get smoothing, I would try
one of two things:

- get the blocky image, then apply a binary closing with a disk of radius
  2-4; or
- rescale with order>2 and then threshold the image to get a binary image
  again.

I'm not sure whether either of these will give you a nice enough result,
though. But, give it a try!

On Fri, Apr 24, 2015 at 8:59 AM, Claiborne Morton wrote:
> Hey guys,
> I was wondering if there is a way to resample images in scikit-image.
> I'm not even sure that is the correct terminology for what I am trying
> to do. Basically my goal is to increase the size of images by a factor
> of 4, and in doing so to smooth the edges of shapes within my images by
> increasing the number of pixels.
> The images I am working with are binary, and I would like them to remain
> binary when the image is expanded. I have tried doing this using the
> resize function, but the result is an image with grayscale pixels, and
> the rough edges are only magnified. Some time in the past I was able to
> get these results using Fiji and Preview (Mac), but am not exactly sure
> how.
> I have attached both a zoomed image of the resize function's output as
> well as a zoom on the results I have made in the past. Also attached is
> the image that I am trying to scale up by a factor of 4
> (binary_filled_bp). Sorry for the confusing post. Please let me know if
> I can clear anything up!
> Thanks,
> Clay
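Both suggestions in a compact form, assuming `binary` is the 2-D boolean
image; the closing radius is a knob to tune:

    from skimage import transform, morphology

    img = binary.astype(float)

    # option 1: nearest-neighbour zoom (stays binary), then smooth the block edges
    blocky = transform.rescale(img, 4, order=0).astype(bool)
    smoothed = morphology.binary_closing(blocky, morphology.disk(3))

    # option 2: cubic zoom of the float image, then re-threshold to binary
    smoothed2 = transform.rescale(img, 4, order=3) > 0.5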
From stefanv at berkeley.edu Mon Apr 27 15:39:01 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Mon, 27 Apr 2015 12:39:01 -0700
Subject: Euroscipy tutorial on scikit-image
In-Reply-To: <20150427191656.GC711301@phare.normalesup.org>
References: <20150427191656.GC711301@phare.normalesup.org>
Message-ID: <87k2wx744q.fsf@berkeley.edu>

Hi Emmanuelle

On 2015-04-27 12:16:56, Emmanuelle Gouillart wrote:
> I'm thinking of submitting an abstract for a tutorial on scikit-image
> for the next Euroscipy in Cambridge (24-28 August). I know there was
> already an excellent tutorial by Stéfan last year :-), but as new people
> keep on joining the community, it still might be interesting for a wide
> audience. There is also the possibility to focus on a more specific or
> more advanced aspect of image processing, such as image segmentation, 3D
> images, etc.

An excellent idea! Josh, Steven and I will be preparing more material for
the US SciPy conference, and you are more than welcome to use as much of
it as you need. The existing teaching repo is at:

https://github.com/scikit-image/skimage-tutorials

Stéfan

From stefanv at berkeley.edu Mon Apr 27 17:48:56 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Mon, 27 Apr 2015 14:48:56 -0700
Subject: Google Summer of Code 2015
Message-ID: <87egn56y47.fsf@berkeley.edu>

Hi all,

scikit-image is again participating in Google's Summer of Code under the
umbrella of the PSF. Google today announced the accepted students, and
we'd like to congratulate Daniil Pakhomov (project: "Implementing a
patent-free Face Detection algorithm") and Aman Singh (project: "rewriting
scipy.ndimage in cython")!

We are now in the community bonding period [0], so please join us on
Gitter (https://gitter.im/scikit-image/scikit-image) and let's have some
fun coding together before the projects officially kick off!

To the students who did not get accepted this year: thank you for all the
hard work you put into your proposals, and please stay involved and in
touch. We look forward to working with you!

Here's to a great few months ahead!
Stéfan

[0] http://googlesummerofcode.blogspot.nl/2007/04/so-what-is-this-community-bonding-all.html

From jni.soma at gmail.com Mon Apr 27 21:22:38 2015
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Mon, 27 Apr 2015 18:22:38 -0700 (PDT)
Subject: Euroscipy tutorial on scikit-image
In-Reply-To: <20150427191656.GC711301@phare.normalesup.org>
References: <20150427191656.GC711301@phare.normalesup.org>
Message-ID: <1430184157788.1dd5322f@Nodemailer>

Hi Emmanuelle!

Happily, I'll be at EuroSciPy this year! So I'll be very happy to help you
with the tutorial! I think an intermediate tutorial works fine; as you
say, there are always new users coming in! =)

One idea: I think these tutes can be a really great way to gain
contributors. Depending on how much time we get, we could have a
mini-sprint at the end, where we get people to work on pre-selected "easy"
issues on the repo.

Juan.

On Tue, Apr 28, 2015 at 5:17 AM, Emmanuelle Gouillart
<emmanuelle.gouillart at nsup.org> wrote:
> Dear all,
> I'm thinking of submitting an abstract for a tutorial on scikit-image
> for the next Euroscipy in Cambridge (24-28 August). I know there was
> already an excellent tutorial by Stéfan last year :-), but as new people
> keep on joining the community, it still might be interesting for a wide
> audience. There is also the possibility to focus on a more specific or
> more advanced aspect of image processing, such as image segmentation, 3D
> images, etc.
> Would anybody be interested in giving a joint tutorial with me on
> scikit-image at Euroscipy 2015? Ideas on subjects important to cover on
> such a tutorial are also very welcome.
> Cheers,
> Emmanuelle

From emmanuelle.gouillart at nsup.org Mon Apr 27 15:16:56 2015
From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart)
Date: Mon, 27 Apr 2015 21:16:56 +0200
Subject: Euroscipy tutorial on scikit-image
Message-ID: <20150427191656.GC711301@phare.normalesup.org>

Dear all,

I'm thinking of submitting an abstract for a tutorial on scikit-image for
the next Euroscipy in Cambridge (24-28 August). I know there was already
an excellent tutorial by Stéfan last year :-), but as new people keep on
joining the community, it still might be interesting for a wide audience.
There is also the possibility to focus on a more specific or more advanced
aspect of image processing, such as image segmentation, 3D images, etc.

Would anybody be interested in giving a joint tutorial with me on
scikit-image at Euroscipy 2015? Ideas on subjects important to cover on
such a tutorial are also very welcome.

Cheers,
Emmanuelle

From doron.yotam at googlemail.com Tue Apr 28 03:36:33 2015
From: doron.yotam at googlemail.com (Yotam Doron)
Date: Tue, 28 Apr 2015 00:36:33 -0700 (PDT)
Subject: A DataFrame-like container for image data
Message-ID: <72789451-29b2-4eac-b138-2ae3b1579e65@googlegroups.com>

Hi all,

I wrote the beginnings of a DataFrame-like container for image processing.
It's an experiment for the moment, a small wrapper around a dictionary of
arrays, and I'd be interested in any kind of feedback:

https://github.com/yotam/pictureframe
https://pypi.python.org/pypi/pictureframe/0.1.0
https://github.com/yotam/pictureframe/blob/master/examples/quickstart.py

I wrote this because in my work I either end up with

- procedural code where I pass around lots of image-shaped arrays, or
- classes that have lots of image-shaped arrays as member variables

I wanted to avoid having to write repeated indexing code whenever I work
on a subset, and to have some guarantees about the shape of the data. I
also wanted to be able to quickly generate a scaled-down version of all my
arrays, and to leave the door open for higher dimensional data like voxel
grids.

Main differences from Pandas DataFrame:

- Arrays can have varying dimensions; only the first `fixed_dim`
  dimensions must match. So you can keep together data such as RGB, depth,
  label distributions, weight maps and so on.
- Higher dimensional data, not just a tabular structure.

Main differences from scikit-image ImageCollection and MultiImage:

- Slicing and indexing operate on the underlying array data rather than
  selecting a subset of the images.
- Images are constrained to match on first dimensions.
- Not constrained to image data, e.g. indexing can return a PictureFrame
  with fewer constrained dimensions.

Any thoughts, suggestions or "aren't you just reimplementing library X?"
are very welcome. This is the first time I've tried to release a package,
so if anything seems unusual please let me know.

Thanks,
Yotam

From emmanuelle.gouillart at nsup.org Tue Apr 28 02:41:17 2015
From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart)
Date: Tue, 28 Apr 2015 08:41:17 +0200
Subject: Euroscipy tutorial on scikit-image
In-Reply-To: <1430184157788.1dd5322f@Nodemailer>
References: <20150427191656.GC711301@phare.normalesup.org> <1430184157788.1dd5322f@Nodemailer>
Message-ID: <20150428064117.GA85873@phare.normalesup.org>

That's great news! I'll be writing a first version of the abstract today,
since the deadline is this Thursday, and I'll send it to you.

+1 for having a short "pre-sprint" after the tutorial (then we should ask
that it is the last one of the day), and maybe a longer sprint on Sunday
30.

Emma

On Mon, Apr 27, 2015 at 06:22:38PM -0700, Juan Nunez-Iglesias wrote:
> Hi Emmanuelle!
> Happily, I'll be at EuroSciPy this year! So I'll be very happy to help
> you with the tutorial! I think an intermediate tutorial works fine; as
> you say, there are always new users coming in! =)
> One idea: I think these tutes can be a really great way to gain
> contributors. Depending on how much time we get, we could have a
> mini-sprint at the end, where we get people to work on pre-selected
> "easy" issues on the repo.
> Juan.
> On Tue, Apr 28, 2015 at 5:17 AM, Emmanuelle Gouillart
> <emmanuelle.gouillart at nsup.org> wrote:
> Dear all,
> I'm thinking of submitting an abstract for a tutorial on scikit-image
> for the next Euroscipy in Cambridge (24-28 August). I know there was
> already an excellent tutorial by Stéfan last year :-), but as new people
> keep on joining the community, it still might be interesting for a wide
> audience. There is also the possibility to focus on a more specific or
> more advanced aspect of image processing, such as image segmentation, 3D
> images, etc.
> Would anybody be interested in giving a joint tutorial with me on
> scikit-image at Euroscipy 2015? Ideas on subjects important to cover on
> such a tutorial are also very welcome.
> Cheers,
> Emmanuelle

From emmanuelle.gouillart at nsup.org Tue Apr 28 03:57:49 2015
From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart)
Date: Tue, 28 Apr 2015 09:57:49 +0200
Subject: A DataFrame-like container for image data
In-Reply-To: <72789451-29b2-4eac-b138-2ae3b1579e65@googlegroups.com>
References: <72789451-29b2-4eac-b138-2ae3b1579e65@googlegroups.com>
Message-ID: <20150428075749.GB85873@phare.normalesup.org>

Hi Yotam,

thanks for bringing your package to the list's attention! I hope you will
get some comments and users. Your post made me think that it would be
interesting to learn from list members about their usage patterns for
image processing of multiple images.

For now, we tend to avoid having too many abstractions on top of numpy
arrays in scikit-image, since one of the very nice things about the
package is the super-easy API image_array_out = function(image_array_in,
params). But it's very interesting to know what kind of framework people
use for their image processing tasks.

Cheers,
Emmanuelle

On Tue, Apr 28, 2015 at 12:36:33AM -0700, Yotam Doron wrote:
> Hi all,
> I wrote the beginnings of a DataFrame-like container for image
> processing. It's an experiment for the moment, a small wrapper around a
> dictionary of arrays, and I'd be interested in any kind of feedback
> https://github.com/yotam/pictureframe
> https://pypi.python.org/pypi/pictureframe/0.1.0
> https://github.com/yotam/pictureframe/blob/master/examples/quickstart.py
> I wrote this because in my work I either end up with
> - procedural code where I pass around lots of image-shaped arrays, or
> - classes that have lots of image-shaped arrays as member variables
> I wanted to avoid having to write repeated indexing code whenever I work
> on a subset, and to have some guarantees about the shape of the data. I
> also wanted to be able to quickly generate a scaled-down version of all
> my arrays, and to leave the door open for higher dimensional data like
> voxel grids.
> Main differences from Pandas DataFrame:
> - Arrays can have varying dimensions; only the first `fixed_dim`
>   dimensions must match. So you can keep together data such as RGB,
>   depth, label distributions, weight maps and so on.
> - Higher dimensional data, not just a tabular structure.
> Main differences from scikit-image ImageCollection and MultiImage:
> - Slicing and indexing operate on the underlying array data rather than
>   selecting a subset of the images.
> - Images are constrained to match on first dimensions.
> - Not constrained to image data, e.g. indexing can return a PictureFrame
>   with fewer constrained dimensions.
> Any thoughts, suggestions or "aren't you just reimplementing library X?"
> are very welcome. This is the first time I've tried to release a
> package, so if anything seems unusual please let me know.
> Thanks,
> Yotam

From stefanv at berkeley.edu Tue Apr 28 15:15:00 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Tue, 28 Apr 2015 12:15:00 -0700
Subject: A DataFrame-like container for image data
In-Reply-To: <72789451-29b2-4eac-b138-2ae3b1579e65@googlegroups.com>
References: <72789451-29b2-4eac-b138-2ae3b1579e65@googlegroups.com>
Message-ID: <87r3r42hfv.fsf@berkeley.edu>

Hi Yotam

On 2015-04-28 00:36:33, Yotam Doron wrote:
> I wrote the beginnings of a DataFrame-like container for image
> processing.
> It's an experiment for the moment, a small wrapper around a dictionary
> of arrays, and I'd be interested in any kind of feedback
>
> https://github.com/yotam/pictureframe
> https://pypi.python.org/pypi/pictureframe/0.1.0
> https://github.com/yotam/pictureframe/blob/master/examples/quickstart.py

This looks interesting. You should also have a chat with Stephan Hoyer,
the author of X-Ray.

Stéfan

From oscarjdm19 at gmail.com Tue Apr 28 15:28:21 2015
From: oscarjdm19 at gmail.com (Oscar J. Delgado)
Date: Tue, 28 Apr 2015 12:28:21 -0700 (PDT)
Subject: Python count crystals (total and by color)
Message-ID: <411ea465-d17a-4a75-82dc-bcea03cdd4c4@googlegroups.com>

Good morning to all,

I have pictures from a microscope and I am trying to count the total
number of crystals, as well as those with specific colors (red, blue or
yellow). In principle it looks possible to do it just by looking at the
picture, but considering I have to do it for over 200 pictures, I believe
a script would be more effective.

My first approach was to follow the examples in the skimage gallery, but I
have a big issue: the crystal color and the background color (an alcohol
solution) are almost the same (transparent). So far I have successfully
segmented the particles using the following code:

import matplotlib.pyplot as plt
import numpy as np
from skimage.morphology import disk
from skimage.filters import rank, threshold_adaptive
from skimage.util import img_as_ubyte
from skimage.measure import label
from skimage.color import label2rgb
import pylab
from PIL import Image

path = r".../11-3.jpg"
img = pylab.array(Image.open(path).convert('L'))
image = img_as_ubyte(img)

# denoise image
denoised = rank.median(image, disk(10))
# local gradient
gradient = rank.gradient(denoised, disk(2))
# threshold_adaptive filter
threshold = threshold_adaptive(gradient, 15000)
# label image regions
label_image = label(threshold)
image_label_overlay = label2rgb(label_image, image=image)

# display results
fig, axe = plt.subplots(ncols=1, figsize=(10, 5))
axe.imshow(image, cmap=plt.cm.gray, interpolation='nearest')
axe.imshow(gradient, interpolation='nearest', cmap=plt.cm.spectral, alpha=0.2)
axe.imshow(threshold, interpolation='nearest', cmap=plt.cm.binary_r, alpha=0.2)
axe.imshow(label_image, alpha=0.2)
axe.axis('off')
fig.subplots_adjust(hspace=0.01, wspace=0.01, top=1, bottom=0, left=0, right=1)
plt.show()

Please note I had to increase the threshold_adaptive block size to more
than 10,000 in order to get the crystals, but this is really slow. I am
still working on how to identify whether the segmented particles are blue
tinted (nothing yet!).

I really, really would appreciate it if anyone could give me a hand with
this. I have even tried modifying the way I take the pictures, but
nothing. Again, I have two objectives: count the total number of
particles, and then count those particles that have a tint (red, blue or
yellow).

Please find below one typical picture:

[image: Cell Image]

Best!
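One plausible route for the colour question, sketched here under the
assumption that `rgb` is the original colour image and `label_image` the
labelling computed in the code above; the tint test is an ad-hoc guess to
tune per dataset:

    import numpy as np
    from skimage.measure import regionprops

    total = blue = 0
    for region in regionprops(label_image):
        total += 1
        # mean colour over the pixels belonging to this particle
        r, g, b = rgb[region.coords[:, 0], region.coords[:, 1]].mean(axis=0)
        if b > 1.1 * r and b > 1.1 * g:      # crude "blue tint" test
            blue += 1
    print(total, blue)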
From doron.yotam at googlemail.com Tue Apr 28 17:54:26 2015
From: doron.yotam at googlemail.com (Yotam Doron)
Date: Tue, 28 Apr 2015 14:54:26 -0700 (PDT)
Subject: A DataFrame-like container for image data
In-Reply-To: <87r3r42hfv.fsf@berkeley.edu>
References: <72789451-29b2-4eac-b138-2ae3b1579e65@googlegroups.com> <87r3r42hfv.fsf@berkeley.edu>
Message-ID:

Hi Stéfan,

Thanks for the pointer, X-Ray does look like a very close match. I'll read
up on it and get in touch with Stephan.

Regards,
Yotam

On Tuesday, 28 April 2015 20:15:07 UTC+1, stefanv wrote:
> Hi Yotam
>
> On 2015-04-28 00:36:33, Yotam Doron wrote:
>> I wrote the beginnings of a DataFrame-like container for image
>> processing. It's an experiment for the moment, a small wrapper around a
>> dictionary of arrays, and I'd be interested in any kind of feedback
>>
>> https://github.com/yotam/pictureframe
>> https://pypi.python.org/pypi/pictureframe/0.1.0
>> https://github.com/yotam/pictureframe/blob/master/examples/quickstart.py
>
> This looks interesting. You should also have a chat with Stephan Hoyer,
> the author of X-Ray.
>
> Stéfan

From viral.j.parikh at gmail.com Tue Apr 28 21:47:28 2015
From: viral.j.parikh at gmail.com (Viral Parikh)
Date: Tue, 28 Apr 2015 18:47:28 -0700 (PDT)
Subject: Blog post on scikit-image by Eric Chiang
In-Reply-To:
References: <20140202184126.GF14501@gmail.com>
Message-ID: <03b7d89a-8b43-4bc3-9cbe-d93fc6cb88c4@googlegroups.com>

Hi All,

Has anyone run the part of the code below, mentioned in the blog?

ggplot(pd.DataFrame(), aes(fill=True, alpha=0.5)) + \
    geom_density(aes(x=gray_image.flatten()), color='green') + \
    geom_density(aes(x=equalized_image.flatten()), color='orange') + \
    ggtitle("Histogram Equalization Process\n(From Green to Orange)") + \
    xlab("pixel intensity") + \
    ylab("density")

I run it and keep getting errors. Can anyone tell me what pd.DataFrame()
means over here?

Thank you in advance!

Best,
Viral

On Monday, February 3, 2014 at 9:19:26 AM UTC-6, Ankit Agrawal wrote:
> Cool blog post!! +1 for inclusion of a feature detection, extraction and
> matching example in a future post.
>
> Regards,
> Ankit Agrawal,
> Communication and Signal Processing,
> IIT Bombay.
>
> On Mon, Feb 3, 2014 at 8:30 PM, Johannes Schönberger wrote:
>> Really nice, the blog post should definitely get a follow up with the
>> new feature detection, extraction and matching capabilities.
>>
>> On Sun, Feb 2, 2014 at 1:41 PM, Stéfan van der Walt wrote:
>>> Have a look at this great blog post:
>>>
>>> http://blog.yhathq.com/posts/image-processing-with-scikit-image.html
From viral.j.parikh at gmail.com Tue Apr 28 22:59:40 2015
From: viral.j.parikh at gmail.com (Viral Parikh)
Date: Tue, 28 Apr 2015 19:59:40 -0700 (PDT)
Subject: Blog post on scikit-image by Eric Chiang
In-Reply-To:
References: <20140202184126.GF14501@gmail.com>
Message-ID: <78c0b71d-034e-4d90-886e-66009bedfd91@googlegroups.com>

Hi all,

The blog post is good, but it's unclear how he is using pd.DataFrame() in
step 10:

ggplot(pd.DataFrame(), aes(fill=True, alpha=0.5)) + \
    geom_density(aes(x=gray_image.flatten()), color='green') + \
    geom_density(aes(x=equalized_image.flatten()), color='orange') + \
    ggtitle("Histogram Equalization Process\n(From Green to Orange)") + \
    xlab("pixel intensity") + \
    ylab("density")

What should be contained in the dataframe is unclear to me. I would
appreciate your response, thank you!

On Monday, February 3, 2014 at 9:19:26 AM UTC-6, Ankit Agrawal wrote:
> Cool blog post!! +1 for inclusion of a feature detection, extraction and
> matching example in a future post.
>
> Regards,
> Ankit Agrawal,
> Communication and Signal Processing,
> IIT Bombay.
>
> On Mon, Feb 3, 2014 at 8:30 PM, Johannes Schönberger wrote:
>> Really nice, the blog post should definitely get a follow up with the
>> new feature detection, extraction and matching capabilities.
>>
>> On Sun, Feb 2, 2014 at 1:41 PM, Stéfan van der Walt wrote:
>>> Have a look at this great blog post:
>>>
>>> http://blog.yhathq.com/posts/image-processing-with-scikit-image.html
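One guess at what the blog's code expects, given that yhat's ggplot port
wants an actual DataFrame: put the pixel values into a real DataFrame and
refer to them by column name, rather than handing raw arrays to aes()
alongside an empty frame (the column names here are arbitrary):

    import pandas as pd
    from ggplot import *  # yhat's ggplot port

    df = pd.DataFrame({'gray': gray_image.flatten(),
                       'equalized': equalized_image.flatten()})
    ggplot(df, aes(x='gray')) + geom_density(color='green') + \
        xlab("pixel intensity") + ylab("density")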
From fars.rg at gmail.com Wed Apr 29 05:47:36 2015
From: fars.rg at gmail.com (Forest Applied Remote Sensing RG (FARS))
Date: Wed, 29 Apr 2015 02:47:36 -0700 (PDT)
Subject: Scikit-image installation in Quantum GIS
Message-ID: <3cff2337-7096-438d-a24a-ae0df630bd10@googlegroups.com>

Hello everyone,

I am trying to use scikit-image in QGIS's Python console. I managed to
install many other packages with pip in the OSGeo4W Shell, but with
scikit-image the same error always comes up (attached).

Can anybody help me with this issue?

Thank you

From silvertrumpet999 at gmail.com Wed Apr 29 08:34:25 2015
From: silvertrumpet999 at gmail.com (Josh Warner)
Date: Wed, 29 Apr 2015 05:34:25 -0700 (PDT)
Subject: Scikit-image installation in Quantum GIS
In-Reply-To: <3cff2337-7096-438d-a24a-ae0df630bd10@googlegroups.com>
References: <3cff2337-7096-438d-a24a-ae0df630bd10@googlegroups.com>
Message-ID:

This appears to be a NumPy issue related to the LAPACK/BLAS library
requirements. Pip will try to install requirements before scikit-image.
Try `pip install numpy`, and if the error is reproduced, the issue is with
the installation of NumPy.

Josh

On Wednesday, April 29, 2015 at 5:35:25 AM UTC-5, Forest Applied Remote
Sensing RG (FARS) wrote:
> Hello everyone,
>
> I am trying to use scikit-image in QGIS's Python console. I managed to
> install many other packages with pip in the OSGeo4W Shell, but with
> scikit-image the same error always comes up (attached).
>
> Can anybody help me with this issue?
>
> Thank you

From julien.derr at gmail.com Wed Apr 29 12:21:00 2015
From: julien.derr at gmail.com (Julien Derr)
Date: Wed, 29 Apr 2015 18:21:00 +0200
Subject: problem with io.imread
Message-ID:

Hi everyone,

I have a very basic problem, but I have no idea how to solve it! I
installed scikit-image on a new computer, and I cannot manage to load an
image with the io module.

`from skimage import io` works fine:

Variable   Type     Data/Info
------------------------------
filename   str      ./small.jpg
io         module   skimage/io/__init__.pyc'>

but when I want to load my image, I get the error:

In [5]: camera = io.imread(filename)
ValueError: Could not load "./small.jpg"
Please see documentation at:
http://pillow.readthedocs.org/en/latest/installation.html#external-libraries

I went to that page, where they ask you to uninstall PIL before installing
Pillow. This is what I did, and still I get the error... any ideas?

Thanks a lot!

Julien

From emmanuelle.gouillart at nsup.org Wed Apr 29 12:26:54 2015
From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart)
Date: Wed, 29 Apr 2015 18:26:54 +0200
Subject: problem with io.imread
In-Reply-To:
References:
Message-ID: <20150429162654.GA827007@phare.normalesup.org>

Hi Julien,

how did you install scikit-image? Are you using a scientific Python
distribution such as Anaconda or Canopy? Did you try re-installing
scikit-image after installing Pillow? Which version of scikit-image did
you install?

Emma

On Wed, Apr 29, 2015 at 06:21:00PM +0200, Julien Derr wrote:
> Hi everyone,
> I have a very basic problem, but I have no idea how to solve it! I
> installed scikit-image on a new computer, and I cannot manage to load an
> image with the io module.
> `from skimage import io` works fine:
> Variable   Type     Data/Info
> ------------------------------
> filename   str      ./small.jpg
> io         module   skimage/io/__init__.pyc'>
> but when I want to load my image, I get the error:
> In [5]: camera = io.imread(filename)
> ValueError: Could not load "./small.jpg"
> Please see documentation at: http://pillow.readthedocs.org/en/latest/
> installation.html#external-libraries
> I went to that page, where they ask you to uninstall PIL before
> installing Pillow. This is what I did, and still I get the error ... any
> ideas?
> Thanks a lot!
> Julien

From stefanv at berkeley.edu Thu Apr 30 01:58:14 2015
From: stefanv at berkeley.edu (Stefan van der Walt)
Date: Wed, 29 Apr 2015 22:58:14 -0700
Subject: problem with io.imread
In-Reply-To:
References:
Message-ID: <87lhhayx6x.fsf@berkeley.edu>

Hi Julien

On 2015-04-29 09:21:00, Julien Derr wrote:
> ValueError: Could not load "./small.jpg"
> Please see documentation at:
> http://pillow.readthedocs.org/en/latest/installation.html#external-libraries

What that document tells you is that you should first install libjpeg,
then install (or re-install) PIL, and then the loader should work.

Alternatively, you can try:

from skimage import io
io.imread('./small.jpg', plugin='matplotlib')

Other options include:

- https://pypi.python.org/pypi/imageio
- https://pypi.python.org/pypi/imread

Regards
Stéfan