From deklerkmc at gmail.com Tue Jun 4 06:05:42 2013 From: deklerkmc at gmail.com (Marc de Klerk) Date: Tue, 4 Jun 2013 03:05:42 -0700 (PDT) Subject: GSoC Proposal In-Reply-To: <0f98605a-8ab3-4b0a-b1d7-376cc53ad807@googlegroups.com> References: <0f98605a-8ab3-4b0a-b1d7-376cc53ad807@googlegroups.com> Message-ID: <34a0cffa-51d0-4120-98e4-94fe26ff9f09@googlegroups.com> Hi Guys, Just wanted to say a big thanks for the opportunity to do a GSoC :) I'll be blogging about it on a weekly basis at http://mygsoc.blogspot.com. I'll report back as soon as I've sorted out the admin of setting up code repositories etc. Cheers, Marc On Saturday, May 4, 2013 8:07:21 AM UTC+2, Marc de Klerk wrote: > > Hello Everyone, > > I just wanted to post my proposal for the GSoC: > > https://google-melange.appspot.com/gsoc/proposal/review/google/gsoc2013/deklerkmc/1 > > I hope it sparks a bit of interest :) > > Looking forward to hearing any feedback... > > Cheers > Marc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jschoenberger at demuc.de Wed Jun 5 02:57:23 2013 From: jschoenberger at demuc.de (=?iso-8859-1?Q?Johannes_Sch=F6nberger?=) Date: Wed, 5 Jun 2013 08:57:23 +0200 Subject: Weird imsave inverting behaviour In-Reply-To: References: <51A2E1E5.3000804@gmail.com> Message-ID: <2D0C5C48-9800-4533-A26B-96958E91680A@demuc.de> What's the status of this discussion? Do we want to tackle this? Johannes Schönberger On 27.05.2013 at 15:48, Stéfan van der Walt wrote: > On Mon, May 27, 2013 at 3:08 PM, Juan Nunez-Iglesias wrote: >> I think you have the right approach, i.e. imread() should just work. However, >> what started this post is that it doesn't. =) > > Yes, that's a bug and should be fixed. > > Stéfan > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/groups/opt_out. > > From ronnie.ghose at gmail.com Wed Jun 5 10:04:11 2013 From: ronnie.ghose at gmail.com (Ronnie Ghose) Date: Wed, 5 Jun 2013 10:04:11 -0400 Subject: Color combine function In-Reply-To: <51AF4423.1030403@mitotic-machine.org> References: <51AF4423.1030403@mitotic-machine.org> Message-ID: just wondering what do you do if there's 2 grey level images? an option to choose which channels? Thanks, Ronnie On Wed, Jun 5, 2013 at 9:58 AM, Guillaume Gay wrote: > Hi list, > > I coded a very simple color combine function, that creates an RGB image > from one to three grey level images. Is there an interest to add it to > skimage? Would colorconv.py be the correct place to add it? > > Cheers, > > Guillaume > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe@**googlegroups.com > . > For more options, visit https://groups.google.com/**groups/opt_out > . > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronnie.ghose at gmail.com Wed Jun 5 14:26:35 2013 From: ronnie.ghose at gmail.com (Ronnie Ghose) Date: Wed, 5 Jun 2013 14:26:35 -0400 Subject: Color combine function In-Reply-To: <0F12566F-E6BF-46D0-9EB3-131A1967B515@demuc.de> References: <51AF4423.1030403@mitotic-machine.org> <51AF6494.4000602@mitotic-machine.org> <0F12566F-E6BF-46D0-9EB3-131A1967B515@demuc.de> Message-ID: Its just an alias on top of for ease of use I assume On Jun 5, 2013 12:54 PM, "Johannes Sch?nberger" wrote: > > I didn't know that one, my bad... So the only thing left is the data > handling (float conversion and possibly normalization), plus the cases > where you only provide 1 or 2 channels (which often the case in > fluorescence images), which is probably not worth a function. 
Thank's for > the tip anyway > > If the channel handling (when one of the channels is missing) is regarded > as a common use case, we could definitely add this function! > > Johannes Sch?nberger > > Am 05.06.2013 um 18:17 schrieb Guillaume Gay < > guillaume at mitotic-machine.org>: > > > > > Le 05/06/2013 17:34, Johannes Sch?nberger a ?crit : > >> np.dstack([R, G, B]) > > I didn't know that one, my bad... So the only thing left is the data > handling (float conversion and possibly normalization), plus the cases > where you only provide 1 or 2 channels (which often the case in > fluorescence images), which is probably not worth a function. Thank's for > the tip anyway > > > > G. > > > > -- > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image+unsubscribe at googlegroups.com. > > For more options, visit https://groups.google.com/groups/opt_out. > > > > > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at mitotic-machine.org Wed Jun 5 09:58:59 2013 From: guillaume at mitotic-machine.org (Guillaume Gay) Date: Wed, 05 Jun 2013 15:58:59 +0200 Subject: Color combine function Message-ID: <51AF4423.1030403@mitotic-machine.org> Hi list, I coded a very simple color combine function, that creates an RGB image from one to three grey level images. Is there an interest to add it to skimage? Would colorconv.py be the correct place to add it? 
Cheers, Guillaume From guillaume at mitotic-machine.org Wed Jun 5 10:49:54 2013 From: guillaume at mitotic-machine.org (Guillaume Gay) Date: Wed, 05 Jun 2013 16:49:54 +0200 Subject: Color combine function In-Reply-To: References: <51AF4423.1030403@mitotic-machine.org> Message-ID: <51AF5012.1020004@mitotic-machine.org> The way I did that is by accepting `None` as a value for each of the channels, and fill the corresponding output with zeros.. G Le 05/06/2013 16:04, Ronnie Ghose a ?crit : > just wondering what do you do if there's 2 grey level images? an > option to choose which channels? > > > Thanks, > Ronnie > > > On Wed, Jun 5, 2013 at 9:58 AM, Guillaume Gay > > > wrote: > > Hi list, > > I coded a very simple color combine function, that creates an RGB > image from one to three grey level images. Is there an interest to > add it to skimage? Would colorconv.py be the correct place to add it? > > Cheers, > > Guillaume > > -- > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, > send an email to scikit-image+unsubscribe at googlegroups.com > . > For more options, visit https://groups.google.com/groups/opt_out. > > > > -- > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > -------------- next part -------------- An HTML attachment was scrubbed... 
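Guillaume's `None`-default approach above could be sketched roughly like this (a hypothetical helper written for this thread, not an existing skimage function; the name and signature are assumptions):

```python
import numpy as np

def combine_channels(red=None, green=None, blue=None):
    """Combine up to three grey-level images into one RGB image.

    Channels passed as None are filled with zeros, as described above.
    """
    given = [c for c in (red, green, blue) if c is not None]
    if not given:
        raise ValueError("at least one channel image is required")
    shape = given[0].shape
    channels = [np.zeros(shape) if c is None else np.asarray(c, dtype=float)
                for c in (red, green, blue)]
    return np.dstack(channels)
```

With only a green image supplied, for instance, the result is an (M, N, 3) array whose red and blue planes are zero.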
URL: From jschoenberger at demuc.de Wed Jun 5 11:34:00 2013 From: jschoenberger at demuc.de (=?iso-8859-1?Q?Johannes_Sch=F6nberger?=) Date: Wed, 5 Jun 2013 17:34:00 +0200 Subject: Color combine function In-Reply-To: References: <51AF4423.1030403@mitotic-machine.org> Message-ID: Maybe I misunderstand the purpose of the function but what's wrong with np.dstack([R, G, B])? Johannes Sch?nberger Am 05.06.2013 um 16:52 schrieb Juan Nunez-Iglesias : > I'd certainly use such a function. I've done it by hand quite a few times. I agree that a two channel option where the user can specify the channels would be useful. I would say, for example, > > def combine_to_rgb(images, channels=(0, 1, 2)): > > do_something() > > > Then the user can specify which channel corresponds to which image. So, for two images, channels=(2, 0), the function would make the first image the blue channel and the second image the red channel. > > > On Thu, Jun 6, 2013 at 12:04 AM, Ronnie Ghose wrote: > just wondering what do you do if there's 2 grey level images? an option to choose which channels? > > > Thanks, > Ronnie > > > On Wed, Jun 5, 2013 at 9:58 AM, Guillaume Gay wrote: > Hi list, > > I coded a very simple color combine function, that creates an RGB image from one to three grey level images. Is there an interest to add it to skimage? Would colorconv.py be the correct place to add it? > > Cheers, > > Guillaume > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. 
> For more options, visit https://groups.google.com/groups/opt_out. > > > > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > From jni.soma at gmail.com Wed Jun 5 04:15:21 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Wed, 5 Jun 2013 18:15:21 +1000 Subject: Weird imsave inverting behaviour In-Reply-To: <2D0C5C48-9800-4533-A26B-96958E91680A@demuc.de> References: <51A2E1E5.3000804@gmail.com> <2D0C5C48-9800-4533-A26B-96958E91680A@demuc.de> Message-ID: I'd argue yes. ;) But I'm not sure any hard conclusions were reached as to what exactly needs to be done. I guess the first thing is fixing the TIFF input bug, before any more refactoring. But that might go away in a refactor anyway... On Wed, Jun 5, 2013 at 4:57 PM, Johannes Sch?nberger wrote: > What's the status of this discussion? Do we want to tackle this? > > Johannes Sch?nberger > > Am 27.05.2013 um 15:48 schrieb St?fan van der Walt : > > > On Mon, May 27, 2013 at 3:08 PM, Juan Nunez-Iglesias > wrote: > >> I think you have the right approach, ie imread() should just work. > However, > >> what started this post is that it doesn't. =) > > > > Yes, that's a bug and should be fixed. > > > > St?fan > > > > -- > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image+unsubscribe at googlegroups.com. > > For more options, visit https://groups.google.com/groups/opt_out. > > > > > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. 
> For more options, visit https://groups.google.com/groups/opt_out. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guillaume at mitotic-machine.org Wed Jun 5 12:17:24 2013 From: guillaume at mitotic-machine.org (Guillaume Gay) Date: Wed, 05 Jun 2013 18:17:24 +0200 Subject: Color combine function In-Reply-To: References: <51AF4423.1030403@mitotic-machine.org> Message-ID: <51AF6494.4000602@mitotic-machine.org> Le 05/06/2013 17:34, Johannes Sch?nberger a ?crit : > np.dstack([R, G, B]) I didn't know that one, my bad... So the only thing left is the data handling (float conversion and possibly normalization), plus the cases where you only provide 1 or 2 channels (which often the case in fluorescence images), which is probably not worth a function. Thank's for the tip anyway G. From jschoenberger at demuc.de Wed Jun 5 12:54:55 2013 From: jschoenberger at demuc.de (=?iso-8859-1?Q?Johannes_Sch=F6nberger?=) Date: Wed, 5 Jun 2013 18:54:55 +0200 Subject: Color combine function In-Reply-To: <51AF6494.4000602@mitotic-machine.org> References: <51AF4423.1030403@mitotic-machine.org> <51AF6494.4000602@mitotic-machine.org> Message-ID: <0F12566F-E6BF-46D0-9EB3-131A1967B515@demuc.de> > I didn't know that one, my bad... So the only thing left is the data handling (float conversion and possibly normalization), plus the cases where you only provide 1 or 2 channels (which often the case in fluorescence images), which is probably not worth a function. Thank's for the tip anyway If the channel handling (when one of the channels is missing) is regarded as a common use case, we could definitely add this function! Johannes Sch?nberger Am 05.06.2013 um 18:17 schrieb Guillaume Gay : > > Le 05/06/2013 17:34, Johannes Sch?nberger a ?crit : >> np.dstack([R, G, B]) > I didn't know that one, my bad... 
So the only thing left is the data handling (float conversion and possibly normalization), plus the cases where you only provide 1 or 2 channels (which often the case in fluorescence images), which is probably not worth a function. Thank's for the tip anyway > > G. > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > From aaaagrawal at gmail.com Wed Jun 5 22:41:12 2013 From: aaaagrawal at gmail.com (Ankit Agrawal) Date: Wed, 5 Jun 2013 19:41:12 -0700 (PDT) Subject: Help needed with Git Message-ID: <5edc2047-4e5f-47b9-8205-4a1a26450a8d@googlegroups.com> Hi everyone, For this PR , I was trying to rebase but messed up my local repository because of my noobness with git. My complete log can be viewed here. Probably Stefan got busy over the past two days and hence could not reply to my last query in the discussions of the above PR. It would be great if someone here could guide me resolving my repo. If a chat medium seems convenient for you to guide me through this mess, I am available on IRC channel #skimage. Thank you. Regards, Ankit Agrawal, Communication and Signal Processing, IIT Bombay. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronnie.ghose at gmail.com Wed Jun 5 20:45:41 2013 From: ronnie.ghose at gmail.com (Ronnie Ghose) Date: Wed, 5 Jun 2013 20:45:41 -0400 Subject: Color combine function In-Reply-To: References: <51AF4423.1030403@mitotic-machine.org> <51AF6494.4000602@mitotic-machine.org> <0F12566F-E6BF-46D0-9EB3-131A1967B515@demuc.de> Message-ID: Np dstack/hstack/vstack wasn't that bad for me but some of the things you can do with dimensions in numpy and how you do them efficiently / sometimes even do them at all ..wow o_o those still surprise me, .e.g. 
some of the ways to do things in _geometic.py :) On Wed, Jun 5, 2013 at 8:35 PM, Juan Nunez-Iglesias wrote: > I too was unaware of np.dstack for a long time, and was doing it via > np.concatenate([R[..., np.newaxis], G[..., np.newaxis], B[..., > np.newaxis]], axis=-1)... Which is a mouthful. ;) And even after I became > aware of it I still couldn't remember to use it! Sometimes aliases are > useful. And, as we mentioned, channel handling could be important, though I > usually have three channels. > > > On Thu, Jun 6, 2013 at 4:26 AM, Ronnie Ghose wrote: > >> Its just an alias on top of for ease of use I assume >> On Jun 5, 2013 12:54 PM, "Johannes Sch?nberger" >> wrote: >> >>> > I didn't know that one, my bad... So the only thing left is the data >>> handling (float conversion and possibly normalization), plus the cases >>> where you only provide 1 or 2 channels (which often the case in >>> fluorescence images), which is probably not worth a function. Thank's for >>> the tip anyway >>> >>> If the channel handling (when one of the channels is missing) is >>> regarded as a common use case, we could definitely add this function! >>> >>> Johannes Sch?nberger >>> >>> Am 05.06.2013 um 18:17 schrieb Guillaume Gay < >>> guillaume at mitotic-machine.org>: >>> >>> > >>> > Le 05/06/2013 17:34, Johannes Sch?nberger a ?crit : >>> >> np.dstack([R, G, B]) >>> > I didn't know that one, my bad... So the only thing left is the data >>> handling (float conversion and possibly normalization), plus the cases >>> where you only provide 1 or 2 channels (which often the case in >>> fluorescence images), which is probably not worth a function. Thank's for >>> the tip anyway >>> > >>> > G. >>> > >>> > -- >>> > You received this message because you are subscribed to the Google >>> Groups "scikit-image" group. >>> > To unsubscribe from this group and stop receiving emails from it, send >>> an email to scikit-image+unsubscribe at googlegroups.com. 
>>> > For more options, visit https://groups.google.com/groups/opt_out. >>> > >>> > >>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "scikit-image" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to scikit-image+unsubscribe at googlegroups.com. >>> For more options, visit https://groups.google.com/groups/opt_out. >>> >>> >>> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/groups/opt_out. >> >> >> > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaaagrawal at gmail.com Thu Jun 6 02:23:43 2013 From: aaaagrawal at gmail.com (Ankit Agrawal) Date: Wed, 5 Jun 2013 23:23:43 -0700 (PDT) Subject: Help needed with Git In-Reply-To: References: <5edc2047-4e5f-47b9-8205-4a1a26450a8d@googlegroups.com> Message-ID: <0980bf13-796b-46db-ad96-ef66ae77c2fb@googlegroups.com> Hi Puneeth, Thanks a lot!! Your described workflow worked smoothly and I enjoyed resolving the conflicts. @Others : Please review the PR and comment if any other changes/additions are needed. On Thursday, June 6, 2013 8:45:53 AM UTC+5:30, punchagan wrote: > > Hi Ankit, > > On Thu, Jun 6, 2013 at 8:11 AM, Ankit Agrawal > > wrote: > > Hi everyone, > > > > For this PR, I was trying to rebase but messed up my local > > repository because of my noobness with git. My complete log can be > viewed > > here. 
Probably Stefan got busy over the past two days and hence could > not > > reply to my last query in the discussions of the above PR. It would be > great > > if someone here could guide me resolving my repo. If a chat medium seems > > convenient for you to guide me through this mess, I am available on IRC > > channel #skimage. Thank you. > > Looks like you didn't go through, until the end of the rebase process, > to complete it and that messed up things quite a bit. > > Assuming, you haven't made any more changes, apart from what your bash > session log shows, you should be able to restore world-order by > following these steps: > > 1. Abort the previous rebase > `git rebase --abort` > > 2. If you had any local changes, stash them away > `git stash save messed-up rebase` # everything following save is > just a message > > 3. `git checkout gammaCorrect` > > 4. `git rebase master` > > 5. Now, don't add all the changes, without resolving conflicts! > Look at [1] and [2] for more information on how to resolve conflicts > > 6. Add all the files in which you have resolved conflicts. > `skimage/exposure/__init__.py` seems to be one of them > `git add skimage/exposure/__init__.py` > # Add other files, if any > > 7. `git rebase --continue` > > 8. There might be more conflicts, which will take you back to step 5. > Continue doing 5, 6, 7, until you are done. > > 9. Force push your re-based branch > `git push -f github gammaCorrect` > > 10. You are done! Notify the devs! > > Hope that helps, > Puneeth > > [1] - > http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging#Basic-Merge-Conflicts > [2] - http://githowto.com/resolving_conflicts > -------------- next part -------------- An HTML attachment was scrubbed... 
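Puneeth's recovery recipe can be exercised end to end in a throwaway repository (the branch name `gammaCorrect` is from the thread; everything else below is scratch setup for illustration, and the final force-push of step 9 is omitted since it needs the real remote):

```shell
set -e
# Scratch repository so nothing here touches a real checkout
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email "you@example.com"
git config user.name "You"
echo base > f.txt
git add f.txt
git commit -qm base
git branch -M master            # normalize the default branch name

git checkout -qb gammaCorrect   # feature branch, as in the PR
echo feature > f.txt
git commit -qam feature

git checkout -q master          # upstream moves on and conflicts
echo upstream > f.txt
git commit -qam upstream

git checkout -q gammaCorrect
git rebase master || true       # a first attempt hits a conflict
git rebase --abort              # step 1: abort the messed-up rebase
git rebase master || true       # step 4: redo the rebase
echo merged > f.txt             # step 5: resolve the conflict by hand
git add f.txt                   # step 6: mark the file resolved
GIT_EDITOR=true git rebase --continue   # step 7: finish the rebase
git log --oneline               # rebased 'feature' now sits on 'upstream'
```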
URL: From jni.soma at gmail.com Wed Jun 5 10:52:02 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 6 Jun 2013 00:52:02 +1000 Subject: Color combine function In-Reply-To: References: <51AF4423.1030403@mitotic-machine.org> Message-ID: I'd certainly use such a function. I've done it by hand quite a few times. I agree that a two channel option where the user can specify the channels would be useful. I would say, for example, def combine_to_rgb(images, channels=(0, 1, 2)): do_something() Then the user can specify which channel corresponds to which image. So, for two images, channels=(2, 0), the function would make the first image the blue channel and the second image the red channel. On Thu, Jun 6, 2013 at 12:04 AM, Ronnie Ghose wrote: > just wondering what do you do if there's 2 grey level images? an option to > choose which channels? > > > Thanks, > Ronnie > > > On Wed, Jun 5, 2013 at 9:58 AM, Guillaume Gay < > guillaume at mitotic-machine.org> wrote: > >> Hi list, >> >> I coded a very simple color combine function, that creates an RGB image >> from one to three grey level images. Is there an interest to add it to >> skimage? Would colorconv.py be the correct place to add it? >> >> Cheers, >> >> Guillaume >> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe@**googlegroups.com >> . >> For more options, visit https://groups.google.com/**groups/opt_out >> . >> >> >> > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > > -------------- next part -------------- An HTML attachment was scrubbed... 
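Juan's `combine_to_rgb` pseudocode above might be fleshed out like this (a sketch of the proposed behaviour only, not an actual skimage API):

```python
import numpy as np

def combine_to_rgb(images, channels=(0, 1, 2)):
    """Place each grey-level image into the RGB channel given by the
    corresponding entry of `channels` (0=red, 1=green, 2=blue).

    For two images and channels=(2, 0), the first image becomes the
    blue channel and the second the red channel, as in the example above.
    """
    images = [np.asarray(im, dtype=float) for im in images]
    out = np.zeros(images[0].shape + (3,))
    for im, ch in zip(images, channels):
        out[..., ch] = im
    return out
```

Unused channels simply stay zero, which also covers the one- and two-image cases discussed in the thread.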
URL: From jni.soma at gmail.com Wed Jun 5 10:52:27 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 6 Jun 2013 00:52:27 +1000 Subject: Color combine function In-Reply-To: References: <51AF4423.1030403@mitotic-machine.org> Message-ID: Haha or Guillaume's way works also, and well. ;) On Thu, Jun 6, 2013 at 12:52 AM, Juan Nunez-Iglesias wrote: > I'd certainly use such a function. I've done it by hand quite a few times. > I agree that a two channel option where the user can specify the channels > would be useful. I would say, for example, > > def combine_to_rgb(images, channels=(0, 1, 2)): > do_something() > > > Then the user can specify which channel corresponds to which image. So, > for two images, channels=(2, 0), the function would make the first image > the blue channel and the second image the red channel. > > > On Thu, Jun 6, 2013 at 12:04 AM, Ronnie Ghose wrote: > >> just wondering what do you do if there's 2 grey level images? an option >> to choose which channels? >> >> >> Thanks, >> Ronnie >> >> >> On Wed, Jun 5, 2013 at 9:58 AM, Guillaume Gay < >> guillaume at mitotic-machine.org> wrote: >> >>> Hi list, >>> >>> I coded a very simple color combine function, that creates an RGB image >>> from one to three grey level images. Is there an interest to add it to >>> skimage? Would colorconv.py be the correct place to add it? >>> >>> Cheers, >>> >>> Guillaume >>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "scikit-image" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to scikit-image+unsubscribe@**googlegroups.com >>> . >>> For more options, visit https://groups.google.com/**groups/opt_out >>> . >>> >>> >>> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. 
>> For more options, visit https://groups.google.com/groups/opt_out. >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From punchagan at gmail.com Wed Jun 5 23:15:53 2013 From: punchagan at gmail.com (Puneeth Chaganti) Date: Thu, 6 Jun 2013 08:45:53 +0530 Subject: Help needed with Git In-Reply-To: <5edc2047-4e5f-47b9-8205-4a1a26450a8d@googlegroups.com> References: <5edc2047-4e5f-47b9-8205-4a1a26450a8d@googlegroups.com> Message-ID: Hi Ankit, On Thu, Jun 6, 2013 at 8:11 AM, Ankit Agrawal wrote: > Hi everyone, > > For this PR, I was trying to rebase but messed up my local > repository because of my noobness with git. My complete log can be viewed > here. Probably Stefan got busy over the past two days and hence could not > reply to my last query in the discussions of the above PR. It would be great > if someone here could guide me resolving my repo. If a chat medium seems > convenient for you to guide me through this mess, I am available on IRC > channel #skimage. Thank you. Looks like you didn't go through, until the end of the rebase process, to complete it and that messed up things quite a bit. Assuming, you haven't made any more changes, apart from what your bash session log shows, you should be able to restore world-order by following these steps: 1. Abort the previous rebase `git rebase --abort` 2. If you had any local changes, stash them away `git stash save messed-up rebase` # everything following save is just a message 3. `git checkout gammaCorrect` 4. `git rebase master` 5. Now, don't add all the changes, without resolving conflicts! Look at [1] and [2] for more information on how to resolve conflicts 6. Add all the files in which you have resolved conflicts. `skimage/exposure/__init__.py` seems to be one of them `git add skimage/exposure/__init__.py` # Add other files, if any 7. `git rebase --continue` 8. There might be more conflicts, which will take you back to step 5. Continue doing 5, 6, 7, until you are done. 
9. Force push your re-based branch `git push -f github gammaCorrect` 10. You are done! Notify the devs! Hope that helps, Puneeth [1] - http://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging#Basic-Merge-Conflicts [2] - http://githowto.com/resolving_conflicts From jni.soma at gmail.com Wed Jun 5 20:35:03 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 6 Jun 2013 10:35:03 +1000 Subject: Color combine function In-Reply-To: References: <51AF4423.1030403@mitotic-machine.org> <51AF6494.4000602@mitotic-machine.org> <0F12566F-E6BF-46D0-9EB3-131A1967B515@demuc.de> Message-ID: I too was unaware of np.dstack for a long time, and was doing it via np.concatenate([R[..., np.newaxis], G[..., np.newaxis], B[..., np.newaxis]], axis=-1)... Which is a mouthful. ;) And even after I became aware of it I still couldn't remember to use it! Sometimes aliases are useful. And, as we mentioned, channel handling could be important, though I usually have three channels. On Thu, Jun 6, 2013 at 4:26 AM, Ronnie Ghose wrote: > Its just an alias on top of for ease of use I assume > On Jun 5, 2013 12:54 PM, "Johannes Sch?nberger" > wrote: > >> > I didn't know that one, my bad... So the only thing left is the data >> handling (float conversion and possibly normalization), plus the cases >> where you only provide 1 or 2 channels (which often the case in >> fluorescence images), which is probably not worth a function. Thank's for >> the tip anyway >> >> If the channel handling (when one of the channels is missing) is regarded >> as a common use case, we could definitely add this function! >> >> Johannes Sch?nberger >> >> Am 05.06.2013 um 18:17 schrieb Guillaume Gay < >> guillaume at mitotic-machine.org>: >> >> > >> > Le 05/06/2013 17:34, Johannes Sch?nberger a ?crit : >> >> np.dstack([R, G, B]) >> > I didn't know that one, my bad... 
So the only thing left is the data >> handling (float conversion and possibly normalization), plus the cases >> where you only provide 1 or 2 channels (which often the case in >> fluorescence images), which is probably not worth a function. Thank's for >> the tip anyway >> > >> > G. >> > >> > -- >> > You received this message because you are subscribed to the Google >> Groups "scikit-image" group. >> > To unsubscribe from this group and stop receiving emails from it, send >> an email to scikit-image+unsubscribe at googlegroups.com. >> > For more options, visit https://groups.google.com/groups/opt_out. >> > >> > >> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/groups/opt_out. >> >> >> -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders.bll at gmail.com Fri Jun 7 19:15:15 2013 From: anders.bll at gmail.com (Anders Boesen Lindbo Larsen) Date: Fri, 7 Jun 2013 16:15:15 -0700 (PDT) Subject: Feature Detectors and Descriptors in scikit-image In-Reply-To: References: Message-ID: On Friday, June 7, 2013 3:32:25 PM UTC+2, Ankit Agrawal wrote: > Hi all, > > I have some queries about feature Detectors and Descriptors that I > have to implement as a part of my GSoC project > . 
> > Practically, a descriptor is more useful around feature points that > are more distinctive in nature and hence will then produce a greater > accuracy for the tasks where descriptors are used - Correspondence matching > in two image frames for Stereo, Image Alignment, Object Recognition and > Tracking etc. On the same note, the feature descriptors in OpenCV take > input argument as a Keypoint object which is mainly a vector of > keypoints(extracted using a feature detector). OpenCV thus has classes for > Keypoint, > Feature Detector, > DescriptorExtractoretc. This enables the flexibility of using any FeatureDescriptor an > keypoints extracted using any FeatureDetector. > > I took a look at the implementationof Daisy feature descriptor in skimage and noticed that it finds > descriptors around points that are spread uniformly with density based on > step argument as the input. For this I checked its paper(Pg > 3, section 3) and it said it can be used around feature-points as well as > non feature-points. Almost all the Feature Descriptors(that I know of) > including the ones that I am going to implement are calculated about > keypoints. Because of the above reasons, I think an option should be > provided in the functions of feature detectors to return the output as a > vector containing location of feature-points. I would like to know the > views/suggestions of community members experienced in this part of Computer > Vision on this point and to suggest the best possible data-flow between > functions of Feature Detectors and Feature Descriptors. Thank you. > Feature description is a messy business - there is little consensus in the literature and in the implementations available! For an overview of the feature extraction pipeline, I recommend reading until and including section 2.3.2 in http://www.vlfeat.org/~vedaldi/assets/pubs/vedaldi10knowing.pdf Here, different types of interest points are described (disk, oriented disk, ellipse, etc.). 
Moreover, the feature description pipeline is divided into 3 steps (detection, canonization, description). This means that for each interest point type, you will have to make a canonization method that can bring the underlying image patch to a form suitable for the description algorithm, e.g. a 64x64 image patch. I recommend this approach because it is more flexible than if the detection and description code is combined as it is done in e.g. SIFT. However, I should mention that the approach is not ideal for 2 reasons: - It requires more computations. In SIFT, the scale-space pyramid generated in the detection step can be reused for description. - The canonization step introduces noise because we typically will have to warp the image. I hope some of it made sense. Returning to your question on the data flow between detectors and descriptors: I would recommend making the detectors return a list of interest points. This list of interest points can then be given to a descriptor function. It is up to the descriptor to canonize the interest points if needed. BTW, some time ago I wrote some code to canonize an affine interest point (ellipse): https://github.com/andersbll/jetdesc/blob/master/util.py#L50 Feel free to copy-paste whatever you might find useful in that repository. :) Cheers, Anders -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Fri Jun 7 11:08:17 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 7 Jun 2013 17:08:17 +0200 Subject: Feature Detectors and Descriptors in scikit-image In-Reply-To: References: Message-ID: Hi Ankit On Fri, Jun 7, 2013 at 3:32 PM, Ankit Agrawal wrote: > I took a look at the implementation of Daisy feature descriptor in > skimage and noticed that it finds descriptors around points that are spread > uniformly with density based on step argument as the input.
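[Editor's note: the data flow Anders recommends (detectors return an array of interest points, which a descriptor function then consumes) can be sketched with existing skimage primitives; the `patch_descriptor` helper below is a stand-in for illustration, not a real skimage function.]

```python
import numpy as np
from skimage import data
from skimage.feature import corner_harris, corner_peaks

# Detection step: the detector returns an (N, 2) array of (row, col)
# keypoint coordinates -- the "list of interest points" suggested above.
image = data.camera()
keypoints = corner_peaks(corner_harris(image), min_distance=5)

def patch_descriptor(image, keypoints, radius=4):
    """Toy descriptor: flatten the (2r+1)x(2r+1) patch around each keypoint."""
    descriptors = []
    for r, c in keypoints:
        patch = image[r - radius:r + radius + 1, c - radius:c + radius + 1]
        if patch.shape == (2 * radius + 1, 2 * radius + 1):  # skip borders
            descriptors.append(patch.ravel().astype(float))
    return np.array(descriptors)

# Description step: the descriptor consumes the image plus the keypoints,
# so any detector can be combined with any descriptor.
descriptors = patch_descriptor(image, keypoints)
```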
For this I > checked its paper (Pg 3, section 3) and it said it can be used around > feature-points as well as non feature-points. Almost all the Feature > Descriptors (that I know of) including the ones that I am going to implement > are calculated about keypoints. Because of the above reasons, I think an > option should be provided in the functions of feature detectors to return > the output as a vector containing location of feature-points. I would like > to know the views/suggestions of community members experienced in this part > of Computer Vision on this point and to suggest the best possible data-flow > between functions of Feature Detectors and Feature Descriptors. Thank you. I am in favor of developing a feature descriptor API that can calculate descriptors only at specified positions. As far as structures are concerned, we try and stick to ndarrays wherever possible; however, it may not always be possible. Hopefully, we'll discover a good and practical API via your project. Regards Stéfan From aaaagrawal at gmail.com Fri Jun 7 09:32:25 2013 From: aaaagrawal at gmail.com (Ankit Agrawal) Date: Fri, 7 Jun 2013 21:32:25 +0800 Subject: Feature Detectors and Descriptors in scikit-image Message-ID: Hi all, I have some queries about feature Detectors and Descriptors that I have to implement as a part of my GSoC project . Practically, a descriptor is more useful around feature points that are more distinctive in nature and hence will then produce a greater accuracy for the tasks where descriptors are used - Correspondence matching in two image frames for Stereo, Image Alignment, Object Recognition and Tracking etc. On the same note, the feature descriptors in OpenCV take input argument as a Keypoint object which is mainly a vector of keypoints (extracted using a feature detector). OpenCV thus has classes for Keypoint, Feature Detector, DescriptorExtractor etc.
This enables the flexibility of using any FeatureDescriptor on keypoints extracted using any FeatureDetector. I took a look at the implementation of Daisy feature descriptor in skimage and noticed that it finds descriptors around points that are spread uniformly with density based on step argument as the input. For this I checked its paper (Pg 3, section 3) and it said it can be used around feature-points as well as non feature-points. Almost all the Feature Descriptors (that I know of) including the ones that I am going to implement are calculated about keypoints. Because of the above reasons, I think an option should be provided in the functions of feature detectors to return the output as a vector containing location of feature-points. I would like to know the views/suggestions of community members experienced in this part of Computer Vision on this point and to suggest the best possible data-flow between functions of Feature Detectors and Feature Descriptors. Thank you. Regards, Ankit Agrawal, Communication and Signal Processing, IIT Bombay. -------------- next part -------------- An HTML attachment was scrubbed... URL: From deklerkmc at gmail.com Mon Jun 10 05:15:40 2013 From: deklerkmc at gmail.com (Marc de Klerk) Date: Mon, 10 Jun 2013 02:15:40 -0700 (PDT) Subject: GSOC Message-ID: <9eac6ad1-6872-430c-b0e7-de71d5d9715c@googlegroups.com> Hey guys, The 3 GSOC students, Chintak, Ankit and myself have created a channel, #skimage on freenode. There is an official GSOC channel #gsoc, but this is just to discuss a few things pertaining to scikit-image in realtime and get to know one another a bit better etc... So please do join in if you can, we promise not to fl00d! Cheers, Marc -------------- next part -------------- An HTML attachment was scrubbed...
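[Editor's note: the dense sampling Ankit describes in the Daisy thread above is visible directly in the function's signature: descriptors are computed on a regular grid controlled by `step`, with no argument for supplying detected keypoints. A minimal illustration, with arbitrary parameter values:]

```python
from skimage import data
from skimage.feature import daisy

image = data.camera()

# daisy() samples descriptors on a regular grid spaced by `step`;
# there is no way to pass in keypoints from a detector.
descriptors = daisy(image, step=32, radius=15, rings=2, histograms=6,
                    orientations=8)

# Shape is (grid_rows, grid_cols, P), with descriptor length
# P = (rings * histograms + 1) * orientations = (2*6 + 1) * 8 = 104.
print(descriptors.shape)
```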
URL: From aaaagrawal at gmail.com Tue Jun 11 17:41:05 2013 From: aaaagrawal at gmail.com (Ankit Agrawal) Date: Tue, 11 Jun 2013 14:41:05 -0700 (PDT) Subject: GSOC In-Reply-To: <20130611211916.GE18987@phare.normalesup.org> References: <9eac6ad1-6872-430c-b0e7-de71d5d9715c@googlegroups.com> <20130611211916.GE18987@phare.normalesup.org> Message-ID: Hi Emmanuelle, Our complete submitted proposals can be seen on these links - 1. Marc : https://google-melange.appspot.com/gsoc/proposal/review/google/gsoc2013/deklerkmc/1 2. Chintak : http://www.google-melange.com/gsoc/proposal/review/google/gsoc2013/chintak/1 3. Ankit : https://google-melange.appspot.com/gsoc/proposal/review/google/gsoc2013/aaaagrawal/8001 A (slightly)updated version of my proposal that I will keep on editing with time can be found on Github wiki : https://github.com/scikit-image/scikit-image/wiki/GSoC-2013-Ankit-Agrawal-Implementation-of-STAR-and-Binary-Feature-Detectors-and-Descriptors Regards, Ankit Agrawal, Communication and Signal Processing, IIT Bombay. On Wednesday, June 12, 2013 5:19:16 AM UTC+8, Emmanuelle Gouillart wrote: > > Dear GSOC students, > > when I log on Melange (Google's application for GSOC), your proposals now > only appear as short abstracts. Is it possible to find somewhere the long > version that was reviewed? > > Anyway, it seems like a good idea to write a thorough description of the > objectives and timeline of your project on your blog, as a good starting > point. Of course you can discuss it with mentors first. > > Cheers, > Emmanuelle > > On Mon, Jun 10, 2013 at 02:15:40AM -0700, Marc de Klerk wrote: > > Hey guys, > > > The 3 GSOC students, Chintak, Ankit and myself have created a channel, > #skimage > > on freenode. > > > There is an official GSOC channel #gsoc, but this is just to discuss a > fews > > things pertaining to scikit-image in realtime and get to know one > another a bit > > better etc... > > > So please do join in if you can, we promise not to fl00d! 
> > > Cheers, > > Marc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emmanuelle.gouillart at nsup.org Tue Jun 11 17:19:16 2013 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Tue, 11 Jun 2013 23:19:16 +0200 Subject: GSOC In-Reply-To: <9eac6ad1-6872-430c-b0e7-de71d5d9715c@googlegroups.com> References: <9eac6ad1-6872-430c-b0e7-de71d5d9715c@googlegroups.com> Message-ID: <20130611211916.GE18987@phare.normalesup.org> Dear GSOC students, when I log on Melange (Google's application for GSOC), your proposals now only appear as short abstracts. Is it possible to find somewhere the long version that was reviewed? Anyway, it seems like a good idea to write a thorough description of the objectives and timeline of your project on your blog, as a good starting point. Of course you can discuss it with mentors first. Cheers, Emmanuelle On Mon, Jun 10, 2013 at 02:15:40AM -0700, Marc de Klerk wrote: > Hey guys, > The 3 GSOC students, Chintak, Ankit and myself have created a channel, #skimage > on freenode. > There is an official GSOC channel #gsoc, but this is just to discuss a fews > things pertaining to scikit-image in realtime and get to know one another a bit > better etc... > So please do join in if you can, we promise not to fl00d! > Cheers, > Marc From emmanuelle.gouillart at nsup.org Tue Jun 11 17:43:20 2013 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Tue, 11 Jun 2013 23:43:20 +0200 Subject: GSOC In-Reply-To: References: <9eac6ad1-6872-430c-b0e7-de71d5d9715c@googlegroups.com> <20130611211916.GE18987@phare.normalesup.org> Message-ID: <20130611214320.GF18987@phare.normalesup.org> Thank you Ankit! On Tue, Jun 11, 2013 at 02:41:05PM -0700, Ankit Agrawal wrote: > Hi Emmanuelle, > Our complete submitted proposals can be seen on these links - > 1. Marc : https://google-melange.appspot.com/gsoc/proposal/review/google/ > gsoc2013/deklerkmc/1 > 2. 
Chintak : http://www.google-melange.com/gsoc/proposal/review/google/gsoc2013 > /chintak/1 > 3. Ankit : https://google-melange.appspot.com/gsoc/proposal/review/google/ > gsoc2013/aaaagrawal/8001 > A (slightly)updated version of my proposal that I will keep on editing with > time can be found on Github wiki : https://github.com/scikit-image/scikit-image > /wiki/ > GSoC-2013-Ankit-Agrawal-Implementation-of-STAR-and-Binary-Feature-Detectors-and-Descriptors > Regards, > Ankit Agrawal, > Communication and Signal Processing, > IIT Bombay. > On Wednesday, June 12, 2013 5:19:16 AM UTC+8, Emmanuelle Gouillart wrote: > Dear GSOC students, > when I log on Melange (Google's application for GSOC), your proposals now > only appear as short abstracts. Is it possible to find somewhere the long > version that was reviewed? > Anyway, it seems like a good idea to write a thorough description of the > objectives and timeline of your project on your blog, as a good starting > point. Of course you can discuss it with mentors first. > Cheers, > Emmanuelle > On Mon, Jun 10, 2013 at 02:15:40AM -0700, Marc de Klerk wrote: > > Hey guys, > > The 3 GSOC students, Chintak, Ankit and myself have created a channel, # > skimage > > on freenode. > > There is an official GSOC channel #gsoc, but this is just to discuss a > fews > > things pertaining to scikit-image in realtime and get to know one another > a bit > > better etc... > > So please do join in if you can, we promise not to fl00d! > > Cheers, > > Marc From stefan at sun.ac.za Wed Jun 12 04:44:54 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 12 Jun 2013 10:44:54 +0200 Subject: Fwd: failing tests of 0.8.2 with python3.2 and numpy 1.7.1 In-Reply-To: <20130611030909.GC10723@onerussian.com> References: <20130611030909.GC10723@onerussian.com> Message-ID: Does anyone have a moment to look over these? 
---------- Forwarded message ---------- From: Yaroslav Halchenko Date: Tue, Jun 11, 2013 at 5:09 AM Subject: failing tests of 0.8.2 with python3.2 and numpy 1.7.1 any of those look familiar? ====================================================================== ERROR: Test a scalar uint8 image ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/tests/test_exposure.py", line 83, in test_adapthist_scalar adapted = exposure.equalize_adapthist(img, clip_limit=0.02) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", line 82, in equalize_adapthist out = _clahe(*args) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", line 142, in _clahe aLUT /= bin_size TypeError: ufunc 'true_divide' output (typecode 'd') could not be coerced to provided output parameter (typecode 'l') according to the casting rule ''same_kind'' ====================================================================== ERROR: Test a grayscale float image ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/tests/test_exposure.py", line 103, in test_adapthist_grayscale nbins=128) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", line 74, in equalize_adapthist new_l = _clahe(*args).astype(float) File 
"/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", line 142, in _clahe aLUT /= bin_size TypeError: ufunc 'true_divide' output (typecode 'd') could not be coerced to provided output parameter (typecode 'l') according to the casting rule ''same_kind'' ====================================================================== ERROR: Test an RGB color uint16 image ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/tests/test_exposure.py", line 115, in test_adapthist_color adapted = exposure.equalize_adapthist(img, clip_limit=0.01) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", line 74, in equalize_adapthist new_l = _clahe(*args).astype(float) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", line 142, in _clahe aLUT /= bin_size TypeError: ufunc 'true_divide' output (typecode 'd') could not be coerced to provided output parameter (typecode 'l') according to the casting rule ''same_kind'' ====================================================================== ERROR: test_spath.test_basic ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/tests/test_spath.py", line 11, in test_basic path, cost = spath.shortest_path(x) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/spath.py", line 48, in shortest_path 
offsets = np.reshape(offset_indices, (arr.ndim, offset_size), order='F').T File "/usr/local/lib/python3.2/dist-packages/numpy/core/fromnumeric.py", line 172, in reshape return reshape(newshape, order=order) ValueError: total size of new array must be unchanged ====================================================================== ERROR: test_spath.test_reach ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/tests/test_spath.py", line 20, in test_reach path, cost = spath.shortest_path(x, reach=2) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/spath.py", line 48, in shortest_path offsets = np.reshape(offset_indices, (arr.ndim, offset_size), order='F').T File "/usr/local/lib/python3.2/dist-packages/numpy/core/fromnumeric.py", line 172, in reshape return reshape(newshape, order=order) ValueError: total size of new array must be unchanged ====================================================================== ERROR: test_spath.test_non_square ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/tests/test_spath.py", line 30, in test_non_square path, cost = spath.shortest_path(x, reach=2) File "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/spath.py", line 48, in shortest_path offsets = np.reshape(offset_indices, (arr.ndim, offset_size), order='F').T File "/usr/local/lib/python3.2/dist-packages/numpy/core/fromnumeric.py", line 172, in reshape return reshape(newshape, 
order=order) ValueError: total size of new array must be unchanged ---------------------------------------------------------------------- Ran 576 tests in 70.458s -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Senior Research Associate, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From stefan at sun.ac.za Wed Jun 12 08:52:10 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 12 Jun 2013 14:52:10 +0200 Subject: source compilation error: undefined references In-Reply-To: <955fd224-53d5-4d8f-a7f3-9406d9561588@googlegroups.com> References: <955fd224-53d5-4d8f-a7f3-9406d9561588@googlegroups.com> Message-ID: Hi Evan On Wed, Jun 12, 2013 at 2:33 PM, Evan wrote: > On the command line, I seem to avoid the errors by adding the math library, > -lm, at the end: If you modify the setup file as shown here: http://docs.cython.org/src/tutorial/external.html#dynamic-linking Does that help? If so, a pull request would be welcome. I'm not sure why the error occurs: I don't see the same problem on my Ubuntu machine. Stéfan From silvertrumpet999 at gmail.com Wed Jun 12 23:04:55 2013 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Wed, 12 Jun 2013 20:04:55 -0700 (PDT) Subject: source compilation error: undefined references In-Reply-To: <955fd224-53d5-4d8f-a7f3-9406d9561588@googlegroups.com> References: <955fd224-53d5-4d8f-a7f3-9406d9561588@googlegroups.com> Message-ID: <60a780a8-b029-4d2b-ae23-aed6ab4595ea@googlegroups.com> Did you use ATLAS/LAPACK installed from your distro's repo, or a personally compiled version? I'm wondering if something in your system environment might not quite be right, especially suspicious about LD_LIBRARY_PATH.
On Wednesday, June 12, 2013 7:33:22 AM UTC-5, Evan wrote: > > Hi, I get the below compilation error when compiling from source on a > Linux system. > > Linux marula 2.6.39.4-5.1-server #1 SMP Wed Jan 4 15:15:54 UTC 2012 x86_64 > x86_64 x86_64 GNU/Linux > > @marula scikit-image-0.8.2]$ python setup.py build > non-existing path in 'skimage/_shared': 'tests' > running build > running config_cc > ... > ... > build/temp.linux-x86_64-2.7/skimage/_shared/interpolation.o: In function > `__pyx_f_7skimage_7_shared_13interpolation_bilinear_interpolation': > /opt/python2.7/scikit/scikit-image-0.8.2/skimage/_shared/interpolation.c:559: > undefined reference to `floor' > /opt/python2.7/scikit/scikit-image-0.8.2/skimage/_shared/interpolation.c:568: > undefined reference to `floor' > /opt/python2.7/scikit/scikit-image-0.8.2/skimage/_shared/interpolation.c:577: > undefined reference to `ceil' > /opt/python2.7/scikit/scikit-image-0.8.2/skimage/_shared/interpolation.c:586: > undefined reference to `ceil' > collect2: ld returned 1 exit status > error: Command "gcc -pthread -shared -Wl,--as-needed -Wl,--no-undefined > -Wl,-z,relro -Wl,-O1 -Wl,--build-id -Wl,--enable-new-dtags > build/temp.linux-x86_64-2.7/skimage/_shared/interpolation.o -L/usr/lib64 > -lpython2.7 -o build/lib.linux-x86_64-2.7/skimage/_shared/interpolation.so" > failed with exit status 1 > > On the command line, I seem to avoid the errors by adding the math > library, -lm, at the end: > @marula scikit-image-0.8.2]$ gcc -pthread -shared -Wl,--as-needed > -Wl,--no-undefined -Wl,-z,relro -Wl,-O1 -Wl,--build-id > -Wl,--enable-new-dtags > build/temp.linux-x86_64-2.7/skimage/_shared/interpolation.o -L/usr/lib64 > -lpython2.7 -o build/lib.linux-x86_64-2.7/skimage/_shared/interpolation.so > -lm > @marula scikit-image-0.8.2]$ > > However, i don't know how to introduce this flag into the overall > compilation process. > > I'd be grateful for any help, thanks in advance. 
> > Evan > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Wed Jun 12 23:23:25 2013 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Wed, 12 Jun 2013 20:23:25 -0700 (PDT) Subject: failing tests of 0.8.2 with python3.2 and numpy 1.7.1 In-Reply-To: References: <20130611030909.GC10723@onerussian.com> Message-ID: <0a87bf70-9f6d-4a8c-aed8-0e99ac7d8a6f@googlegroups.com> The first three are inplace division casting errors supposedly fixed two years ago, according to the 4th and 5th posts in https://github.com/numpy/numpy/pull/99. Are you sure you're using NumPy 1.7.1, not something older like 1.5.1? An older version might explain the other error as well (harder to pin down without the old/new shape tuples). What does `import numpy; numpy.__version__` return using the same python you tried to build with? On Wednesday, June 12, 2013 3:44:54 AM UTC-5, Stefan van der Walt wrote: > > Does anyone have a moment to look over these? > > ---------- Forwarded message ---------- > From: Yaroslav Halchenko > Date: Tue, Jun 11, 2013 at 5:09 AM > Subject: failing tests of 0.8.2 with python3.2 and numpy 1.7.1 > > any of those look familiar? 
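[Editor's note: the first three failures Josh diagnoses are in-place true-division casting errors: `aLUT /= bin_size` computes a float result that NumPy refuses to write back into an integer array under the `same_kind` casting rule. A minimal reproduction of the pattern and its usual workaround; this is illustrative only, not the actual skimage patch.]

```python
import numpy as np

a_lut = np.arange(16)        # integer array, like the LUT in _adapthist.py
bin_size = 4

try:
    a_lut /= bin_size        # float64 result cannot be cast back to int
except TypeError as exc:     # NumPy raises a TypeError subclass here
    print("in-place division failed:", exc)

# Workaround: make the array float before dividing in place,
# or divide out-of-place so a new float array is allocated.
a_lut = np.arange(16, dtype=float)
a_lut /= bin_size            # fine: float / int stays float
```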
> > ====================================================================== > ERROR: Test a scalar uint8 image > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest > self.test(*self.arg) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/tests/test_exposure.py", > > line 83, in test_adapthist_scalar > adapted = exposure.equalize_adapthist(img, clip_limit=0.02) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", > > line 82, in equalize_adapthist > out = _clahe(*args) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", > > line 142, in _clahe > aLUT /= bin_size > TypeError: ufunc 'true_divide' output (typecode 'd') could not be > coerced to provided output parameter (typecode 'l') according to the > casting rule ''same_kind'' > > ====================================================================== > ERROR: Test a grayscale float image > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest > self.test(*self.arg) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/tests/test_exposure.py", > > line 103, in test_adapthist_grayscale > nbins=128) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", > > line 74, in equalize_adapthist > new_l = _clahe(*args).astype(float) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", > > line 142, in _clahe > aLUT /= bin_size > TypeError: ufunc 'true_divide' 
output (typecode 'd') could not be > coerced to provided output parameter (typecode 'l') according to the > casting rule ''same_kind'' > > ====================================================================== > ERROR: Test an RGB color uint16 image > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest > self.test(*self.arg) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/tests/test_exposure.py", > > line 115, in test_adapthist_color > adapted = exposure.equalize_adapthist(img, clip_limit=0.01) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", > > line 74, in equalize_adapthist > new_l = _clahe(*args).astype(float) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/exposure/_adapthist.py", > > line 142, in _clahe > aLUT /= bin_size > TypeError: ufunc 'true_divide' output (typecode 'd') could not be > coerced to provided output parameter (typecode 'l') according to the > casting rule ''same_kind'' > > ====================================================================== > ERROR: test_spath.test_basic > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest > self.test(*self.arg) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/tests/test_spath.py", > > line 11, in test_basic > path, cost = spath.shortest_path(x) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/spath.py", > > line 48, in shortest_path > offsets = np.reshape(offset_indices, (arr.ndim, offset_size), > order='F').T > File 
"/usr/local/lib/python3.2/dist-packages/numpy/core/fromnumeric.py", > line 172, in reshape > return reshape(newshape, order=order) > ValueError: total size of new array must be unchanged > > ====================================================================== > ERROR: test_spath.test_reach > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest > self.test(*self.arg) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/tests/test_spath.py", > > line 20, in test_reach > path, cost = spath.shortest_path(x, reach=2) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/spath.py", > > line 48, in shortest_path > offsets = np.reshape(offset_indices, (arr.ndim, offset_size), > order='F').T > File "/usr/local/lib/python3.2/dist-packages/numpy/core/fromnumeric.py", > line 172, in reshape > return reshape(newshape, order=order) > ValueError: total size of new array must be unchanged > > ====================================================================== > ERROR: test_spath.test_non_square > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python3/dist-packages/nose/case.py", line 198, in runTest > self.test(*self.arg) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/tests/test_spath.py", > > line 30, in test_non_square > path, cost = spath.shortest_path(x, reach=2) > File > "/home/yoh/deb/gits/build-area/skimage-0.8.2/debian/tmp/usr/lib/python3/dist-packages/skimage/graph/spath.py", > > line 48, in shortest_path > offsets = np.reshape(offset_indices, (arr.ndim, offset_size), > order='F').T > File "/usr/local/lib/python3.2/dist-packages/numpy/core/fromnumeric.py", > line 172, in reshape > return 
reshape(newshape, order=order) > ValueError: total size of new array must be unchanged > > ---------------------------------------------------------------------- > Ran 576 tests in 70.458s > > -- > Yaroslav O. Halchenko, Ph.D. > http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > Senior Research Associate, Psychological and Brain Sciences Dept. > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From google at terre-adelie.org Wed Jun 12 16:51:02 2013 From: google at terre-adelie.org (=?ISO-8859-1?Q?J=E9r=F4me?= Kieffer) Date: Wed, 12 Jun 2013 22:51:02 +0200 Subject: Sift on GPU Message-ID: <20130612225102.dae0cb98ca059395fe4ba22b@terre-adelie.org> Dear Pythonistas, We are porting the SIFT keypoints extraction algorithm (available from IPOL) to GPU using PyOpenCL. For the moment, the keypoint location works and shows a speed-up of 5 to 10x (without tuning so far, vs C++). A lot of work is remaining, especially: * limit the memory footprint (700MB/10Mpix image currently) * calculate the descriptor for each keypoint * keypoint matching and image alignment. * best interleave of IO/CPU/GPU but we managed to port the trickiest part to OpenCL (without using textures, which makes it run also on multi-core). I would like to thank the people who published their algorithm on IPOL, making unit testing possible. Last but not least, the code is open source and should have a BSD licence (even if there is a patent on the algorithm in the USA).
https://github.com/pierrepaleo/sift_pyocl Cheers, -- Jérôme Kieffer From evanmason at gmail.com Thu Jun 13 05:43:42 2013 From: evanmason at gmail.com (Evan) Date: Thu, 13 Jun 2013 02:43:42 -0700 (PDT) Subject: source compilation error: undefined references In-Reply-To: References: <955fd224-53d5-4d8f-a7f3-9406d9561588@googlegroups.com> Message-ID: <08827183-3627-49cc-bc8c-2208b6c3e7e5@googlegroups.com> On Wednesday, June 12, 2013 2:52:10 PM UTC+2, Stefan van der Walt wrote: > > Hi Evan > > On Wed, Jun 12, 2013 at 2:33 PM, Evan > > wrote: > > On the command line, I seem to avoid the errors by adding the math > library, > > -lm, at the end: > > If you modify the setup file as shown here: > > http://docs.cython.org/src/tutorial/external.html#dynamic-linking > > Does that help? If so, a pull request would be welcome. I'm not sure > why the error occurs: I don't see the same problem on my Ubuntu > machine. > > Stéfan > Thanks for your reply. Can you coach me a little on the modification I need to make? I assume the file to edit is: ... /scikit-image-0.8.2/skimage/_shared/setup.py But so far I don't see how to resolve what's needed from http://docs.cython.org/src/tutorial/external.html#dynamic-linking into the setup file. Thanks, Evan -------------- next part -------------- An HTML attachment was scrubbed... URL: From evanmason at gmail.com Thu Jun 13 05:44:21 2013 From: evanmason at gmail.com (Evan) Date: Thu, 13 Jun 2013 02:44:21 -0700 (PDT) Subject: source compilation error: undefined references In-Reply-To: <60a780a8-b029-4d2b-ae23-aed6ab4595ea@googlegroups.com> References: <955fd224-53d5-4d8f-a7f3-9406d9561588@googlegroups.com> <60a780a8-b029-4d2b-ae23-aed6ab4595ea@googlegroups.com> Message-ID: On Thursday, June 13, 2013 5:04:55 AM UTC+2, Josh Warner wrote: > > Did you use ATLAS/LAPACK installed from your distro's repo, or a > personally compiled version?
I'm wondering if something in your system > environment might not quite be right, especially suspicious about > LD_LIBRARY_PATH. > Thanks for your reply. The distro is Mandriva 2011, and ATLAS/LAPACK are from the repository, with dev versions also installed. Evan > > On Wednesday, June 12, 2013 7:33:22 AM UTC-5, Evan wrote: >> >> Hi, I get the below compilation error when compiling from source on a >> Linux system. >> >> Linux marula 2.6.39.4-5.1-server #1 SMP Wed Jan 4 15:15:54 UTC 2012 >> x86_64 x86_64 x86_64 GNU/Linux >> >> @marula scikit-image-0.8.2]$ python setup.py build >> non-existing path in 'skimage/_shared': 'tests' >> running build >> running config_cc >> ... >> ... >> build/temp.linux-x86_64-2.7/skimage/_shared/interpolation.o: In function >> `__pyx_f_7skimage_7_shared_13interpolation_bilinear_interpolation': >> /opt/python2.7/scikit/scikit-image-0.8.2/skimage/_shared/interpolation.c:559: >> undefined reference to `floor' >> /opt/python2.7/scikit/scikit-image-0.8.2/skimage/_shared/interpolation.c:568: >> undefined reference to `floor' >> /opt/python2.7/scikit/scikit-image-0.8.2/skimage/_shared/interpolation.c:577: >> undefined reference to `ceil' >> /opt/python2.7/scikit/scikit-image-0.8.2/skimage/_shared/interpolation.c:586: >> undefined reference to `ceil' >> collect2: ld returned 1 exit status >> error: Command "gcc -pthread -shared -Wl,--as-needed -Wl,--no-undefined >> -Wl,-z,relro -Wl,-O1 -Wl,--build-id -Wl,--enable-new-dtags >> build/temp.linux-x86_64-2.7/skimage/_shared/interpolation.o -L/usr/lib64 >> -lpython2.7 -o build/lib.linux-x86_64-2.7/skimage/_shared/interpolation.so" >> failed with exit status 1 >> >> On the command line, I seem to avoid the errors by adding the math >> library, -lm, at the end: >> @marula scikit-image-0.8.2]$ gcc -pthread -shared -Wl,--as-needed >> -Wl,--no-undefined -Wl,-z,relro -Wl,-O1 -Wl,--build-id >> -Wl,--enable-new-dtags >> build/temp.linux-x86_64-2.7/skimage/_shared/interpolation.o -L/usr/lib64 >> -lpython2.7 
-o build/lib.linux-x86_64-2.7/skimage/_shared/interpolation.so >> -lm >> @marula scikit-image-0.8.2]$ >> >> However, I don't know how to introduce this flag into the overall >> compilation process. >> >> I'd be grateful for any help, thanks in advance. >> >> Evan >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From evanmason at gmail.com Thu Jun 13 10:03:40 2013 From: evanmason at gmail.com (Evan) Date: Thu, 13 Jun 2013 07:03:40 -0700 (PDT) Subject: source compilation error: undefined references In-Reply-To: References: <955fd224-53d5-4d8f-a7f3-9406d9561588@googlegroups.com> <08827183-3627-49cc-bc8c-2208b6c3e7e5@googlegroups.com> Message-ID: <489490a8-4980-4a3e-8d6f-65829c699245@googlegroups.com> On Thursday, June 13, 2013 1:55:02 PM UTC+2, Stefan van der Walt wrote: > > On Thu, Jun 13, 2013 at 11:43 AM, Evan > > wrote: > > Thanks for your reply. Can you coach me a little on the modification I > need > > to make. I assume the file to edit is: > > ... /scikit-image-0.8.2/skimage/_shared/setup.py > > But so far I don't see how to resolve what's needed from > > http://docs.cython.org/src/tutorial/external.html#dynamic-linking into > the > > setup file. > > Yes, you need to add the 'libraries=["m"]' part. > > Stéfan > Ok, got it. Had to work through and edit each subdir + setup.py as needed. I'll send the pull request in the next day or two. Thanks very much for the help, Evan -------------- next part -------------- An HTML attachment was scrubbed...
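For anyone hitting the same linker error: the fix amounts to passing `libraries=["m"]` to each extension so that `-lm` is appended to the link line. A minimal sketch of what such a setup entry might look like (the module name here is illustrative, not the exact scikit-image build file):

```python
from setuptools import Extension

# libraries=["m"] appends -lm when linking, which resolves the
# undefined references to floor/ceil on systems where libm is not
# pulled in implicitly by the compiler driver.
ext = Extension(
    name="interpolation",          # illustrative module name
    sources=["interpolation.c"],   # Cython-generated C source
    libraries=["m"],               # emitted as: gcc ... -lm
)
# This Extension object is then passed to setup(..., ext_modules=[ext]).
```

The same keyword works with numpy.distutils-style configurations, which is what the skimage sub-package setup.py files used at the time.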
URL: From stefan at sun.ac.za Thu Jun 13 07:55:02 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 13 Jun 2013 13:55:02 +0200 Subject: source compilation error: undefined references In-Reply-To: <08827183-3627-49cc-bc8c-2208b6c3e7e5@googlegroups.com> References: <955fd224-53d5-4d8f-a7f3-9406d9561588@googlegroups.com> <08827183-3627-49cc-bc8c-2208b6c3e7e5@googlegroups.com> Message-ID: On Thu, Jun 13, 2013 at 11:43 AM, Evan wrote: > Thanks for your reply. Can you coach me a little on the modification I need > to make. I assume the file to edit is: > ... /scikit-image-0.8.2/skimage/_shared/setup.py > But so far I don't see how to resolve what's needed from > http://docs.cython.org/src/tutorial/external.html#dynamic-linking into the > setup file. Yes, you need to add the 'libraries=["m"]' part. St?fan From jni.soma at gmail.com Thu Jun 13 02:52:44 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 13 Jun 2013 16:52:44 +1000 Subject: Qhull error Message-ID: Hi all. I'm getting the below error when computing regionprops(..., ['Solidity']). Does anyone know what's going on? At first I thought maybe it was passing a singleton array or something, but this array looks like a reasonable thing to compute a convex hull on, no? If I'm not being totally stupid (which is a big if), perhaps we should add a try/except around the convex hull calculation and return a non-value, e.g. -1, when the calculation fails? In [49]: test_im = array([[ 0., 0., 0., 0., 1., 0., 0.], [ 0., 0., 1., 1., 1., 1., 1.], [ 1., 1., 1., 0., 0., 0., 0.]], np.int) In [50]: test_im Out[50]: array([[0, 0, 0, 0, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1], [1, 1, 1, 0, 0, 0, 0]]) In [51]: measure.regionprops(test_im.astype(int), ['Solidity']) QH6228 Qhull internal error (qh_findbestlower): all neighbors of facet 8 are flipped or upper Delaunay. Please report this error to qhull_bug at qhull.org with the input and all of the output. 
ERRONEOUS FACET: - f8 - flags: bottom simplicial upperDelaunay - normal: -0.8366 -0.5167 0.182 - offset: 1.254902 - center: 0.7380952380952381 2.30952380952381 3.054054054054054 - vertices: p30(v6) p56(v3) p28(v0) - neighboring facets: f3 f23 f22 While executing: | qhull d Qz Qbb Qt Options selected for Qhull 2010.1 2010/01/14: run-id 1680623163 delaunay Qz-infinity-point Qbbound-last Qtriangulate _pre-merge _zero-centrum Pgood _max-width 7 Error-roundoff 9.7e-15 _one-merge 6.8e-14 _near-inside 3.4e-13 Visible-distance 1.9e-14 U-coplanar-distance 1.9e-14 Width-outside 3.9e-14 _wide-facet 1.2e-13 Last point added to hull was p33. Last merge was #1. At error exit: Delaunay triangulation by the convex hull of 57 points in 3-d: Number of input sites and at-infinity: 10 Number of nearly incident points: 10 Number of Delaunay regions: 0 Number of non-simplicial Delaunay regions: 1 Statistics for: | qhull d Qz Qbb Qt Number of points processed: 10 Number of hyperplanes created: 27 Number of facets in hull: 15 Number of distance tests for qhull: 512 Number of distance tests for merging: 127 Number of distance tests for checking: 0 Number of merged facets: 1 --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in () ----> 1 measure.regionprops(test_im.astype(int), ['Solidity']) /Users/nuneziglesiasj/venv/husc/lib/python2.7/site-packages/scikit_image-0.9dev-py2.7-macosx-10.5-x86_64.egg/skimage/measure/_regionprops.pyc in regionprops(label_image, properties, intensity_image) 363 if 'Solidity' in properties: 364 if _convex_image is None: --> 365 _convex_image = convex_hull_image(array) 366 obj_props['Solidity'] = m[0, 0] / np.sum(_convex_image) 367 /Users/nuneziglesiasj/venv/husc/lib/python2.7/site-packages/scikit_image-0.9dev-py2.7-macosx-10.5-x86_64.egg/skimage/morphology/convex_hull.pyc in convex_hull_image(image) 52 53 # Find the convex hull ---> 54 chull = Delaunay(coords).convex_hull 55 v = 
coords[np.unique(chull)] 56 /Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/scipy/spatial/qhull.so in scipy.spatial.qhull.Delaunay.__init__ (scipy/spatial/qhull.c:4109)() /Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/scipy/spatial/qhull.so in scipy.spatial.qhull._construct_delaunay (scipy/spatial/qhull.c:1314)() RuntimeError: Qhull error > /Users/nuneziglesiasj/Data/images-work/qhull.pyx(172)scipy.spatial.qhull._construct_delaunay (scipy/spatial/qhull.c:1314)() -------------- next part -------------- An HTML attachment was scrubbed... URL: From deklerkmc at gmail.com Fri Jun 14 05:36:39 2013 From: deklerkmc at gmail.com (Marc de Klerk) Date: Fri, 14 Jun 2013 02:36:39 -0700 (PDT) Subject: Sift on GPU In-Reply-To: <20130612225102.dae0cb98ca059395fe4ba22b@terre-adelie.org> References: <20130612225102.dae0cb98ca059395fe4ba22b@terre-adelie.org> Message-ID: <41a3591f-2856-4c76-bcc9-a33b40e0a24f@googlegroups.com> Hi J?r?me, I cloned the repo and tried running test_all.py, Seems there are a couple bugs in test_image_functions.py that prevent it from executing properly. Is there an example somewhere that I can play with/ Cheers, Marc On Wednesday, June 12, 2013 10:51:02 PM UTC+2, Jerome Kieffer wrote: > > Dear Pythonistas, > > We are porting the SIFT keypoints extraction algorithm (available from > IPOL) > to GPU using PyOpenCL. For the moment, the keypoint location works and > shows a speed-up of 5 to 10x (without tuning so far, vs C++). > > A lot of work is remaining, especially: > * limit the memory footprint (700MB/10Mpix image currently) > * calculate the descriptor for each descriptor > * keypoint matching and image alignment. > * best interleave of IO/CPU/GPU > but we managed to port the most trickiest part to OpenCL (without using > textures, which makes it running also on multi-core). > > I would like to thank the people who published their algorithm on IPOL; > making unit testing possible. 
> > Last but not least, the code is open source and should have a BSD > licence (even if there is a patent on the algorithm in the USA). > https://github.com/pierrepaleo/sift_pyocl > > Cheers, > > -- > Jérôme Kieffer > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From google at terre-adelie.org Sat Jun 15 07:21:53 2013 From: google at terre-adelie.org (=?ISO-8859-1?Q?J=E9r=F4me?= Kieffer) Date: Sat, 15 Jun 2013 13:21:53 +0200 Subject: Sift on GPU In-Reply-To: <41a3591f-2856-4c76-bcc9-a33b40e0a24f@googlegroups.com> References: <20130612225102.dae0cb98ca059395fe4ba22b@terre-adelie.org> <41a3591f-2856-4c76-bcc9-a33b40e0a24f@googlegroups.com> Message-ID: <20130615132153.2bb9dc30d74a8a47b54be468@terre-adelie.org> Dear Marc, On Fri, 14 Jun 2013 02:36:39 -0700 (PDT) Marc de Klerk wrote: > I cloned the repo and tried running test_all.py, > Seems there are a couple bugs in test_image_functions.py that prevent it > from executing properly. This is highly possible: we still have small differences in the number of keypoints compared with the C++ implementation. Moreover, the keypoint localization can vary by up to 1 pixel (to be multiplied by the number of octaves). This looks like a rounding error, but we have not spotted it yet. > Is there an example somewhere that I can play with/ To get the reference implementation: git clone --branch numpy git://github.com/kif/imageAlignment.git cd imageAlignment python setup.py build sudo python setup.py install # or modify your PYTHONPATH cd .. git clone git://github.com/kif/sift_pyocl.git cd sift_pyocl/test python test_all.py # I got (failures=2, errors=2), mainly because the API changed faster than the tests python crash.py This should show you the keypoints (red and blue arrows represent the orientation and the scale; in green are our errors). Tell me whether you are making progress (or not).
Cheers, -- J?r?me Kieffer From yongda.chen at gmail.com Mon Jun 17 14:01:38 2013 From: yongda.chen at gmail.com (Yongda Chen) Date: Mon, 17 Jun 2013 11:01:38 -0700 (PDT) Subject: How to get full size image which is transformed by warp function Message-ID: <105e7fad-69b6-47b1-b1e5-4a818c3ae9b2@googlegroups.com> I am trying to do geometric transformation with warp in scikit-image, but I cannot get the full size transformed image which has different image size to original image. Even after setting up output_shape parameter, it still lost part of image which is transformed to negative coordinates. My question is how to get full size transformed image when doing affine transform, projective transform with warp function? Following is the example code I am working on import matplotlib.pyplot as plt import skimage.transform import * import skimage import data import numpy as np checkboard = data.checkboard() tform = AffineTransform(rotation = np.pi/6) checkborad_transformed = warp(checkboard, tform, output_shape=(400,400)) fig = plt.figure() plt.imshow(checkboard_transformed, cmap = plt.cm.gray) Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From jschoenberger at demuc.de Mon Jun 17 15:44:56 2013 From: jschoenberger at demuc.de (=?iso-8859-1?Q?Johannes_Sch=F6nberger?=) Date: Mon, 17 Jun 2013 21:44:56 +0200 Subject: How to get full size image which is transformed by warp function In-Reply-To: <105e7fad-69b6-47b1-b1e5-4a818c3ae9b2@googlegroups.com> References: <105e7fad-69b6-47b1-b1e5-4a818c3ae9b2@googlegroups.com> Message-ID: Hi, Have a look at the code of the `skimage.transform.rotate` function in `skimage/transform/_warps.py`. The code in there is applicable to arbitrary transformations. Johannes Sch?nberger Am 17.06.2013 um 20:01 schrieb Yongda Chen : > I am trying to do geometric transformation with warp in scikit-image, but I cannot get the full size transformed image which has different image size to original image. 
Even after setting up output_shape parameter, it still lost part of image which is transformed to negative coordinates. > My question is how to get full size transformed image when doing affine transform, projective transform with warp function? > Following is the example code I am working on > > import matplotlib.pyplot as plt > import skimage.transform import * > import skimage import data > import numpy as np > > checkboard = data.checkboard() > tform = AffineTransform(rotation = np.pi/6) > checkborad_transformed = warp(checkboard, tform, output_shape=(400,400)) > fig = plt.figure() > plt.imshow(checkboard_transformed, cmap = plt.cm.gray) > > > > Thanks > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > From aaaagrawal at gmail.com Tue Jun 18 12:13:34 2013 From: aaaagrawal at gmail.com (Ankit Agrawal) Date: Tue, 18 Jun 2013 09:13:34 -0700 (PDT) Subject: Feature Detectors and Descriptors in scikit-image In-Reply-To: References: Message-ID: <1b313cc9-2ce5-4954-af13-3f1d8828f455@googlegroups.com> Hi Anders, Thanks a lot for this helpful reply. I know I am replying pretty late, and that is because I did not read the VLFeat article on Features Detection and Descriptors completely until now. Feature description is a messy business - there is little consensus in the > literature and in the implementations available! > > For an overview of the feature extraction pipeline, I recommend reading > until and including section 2.3.2 in > http://www.vlfeat.org/~vedaldi/assets/pubs/vedaldi10knowing.pdf > Here, different types of interest points are described (disk, oriented > disk, ellipse, etc.). Moreover, the feature description pipeline is divided > into 3 steps (detection, canonization, description). 
This means that for > each interest point type, you will have to make a canonization method that > can bring the underlying image patch to a form suitable for the > description algorithm, e.g. a 64x64 image patch. > I recommend this approach because it is more flexible than if the > detection and description code is combined as it is done in e.g. SIFT. > However, I should mention that the approach is not ideal for 2 reasons: > - It requires more computations. In SIFT, the scale-space pyramid > generated in the detection step can be reused for description. > - The canonization step introduces noise because we typically will have > to warp the image. > > I hope some of it made sense. Returning to your question on the data flow > between detectors and descriptors: I would recommend making the detectors > return a list of interest points. This list of interest points can then be > given to a descriptor function. It is up to the descriptor to canonize the > interest points if needed. > Meanwhile, it would be great if you could review the initial implementation of the BRIEF descriptor. The data structure that we have decided to use across skimage for storing keypoints/interest points is an (N, 2) numpy array. Thanks a lot again!! BTW, some time ago I wrote some code to canonize an affine interest point > (ellipse): > https://github.com/andersbll/jetdesc/blob/master/util.py#L50 > Feel free to copy-paste whatever you might find useful in that repository. > :) > > Cheers, > Anders > Regards, Ankit Agrawal, Communication and Signal Processing, IIT Bombay. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stefan at sun.ac.za Tue Jun 18 03:24:46 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 18 Jun 2013 09:24:46 +0200 Subject: How to get full size image which is transformed by warp function In-Reply-To: <105e7fad-69b6-47b1-b1e5-4a818c3ae9b2@googlegroups.com> References: <105e7fad-69b6-47b1-b1e5-4a818c3ae9b2@googlegroups.com> Message-ID: On Mon, Jun 17, 2013 at 8:01 PM, Yongda Chen wrote: > I am trying to do geometric transformation with warp in scikit-image, but I > cannot get the full size transformed image which has different image size to > original image. Even after setting up output_shape parameter, it still lost > part of image which is transformed to negative coordinates. I'd represent the rotation as a transformation matrix, and then also add in a certain amount of translation to account for the negative coordinates. I believe this is what Johannes also recommended, and what is coded up in `skimage.transform.rotate`. St?fan From yongda.chen at gmail.com Wed Jun 19 14:34:21 2013 From: yongda.chen at gmail.com (Yongda Chen) Date: Wed, 19 Jun 2013 11:34:21 -0700 (PDT) Subject: How to get full size image which is transformed by warp function In-Reply-To: References: <105e7fad-69b6-47b1-b1e5-4a818c3ae9b2@googlegroups.com> Message-ID: <30bc73b5-9ec0-4c54-85c5-1d040fda6a51@googlegroups.com> Hi Johannes, Thank you very much for your quick reply. The link you provided not only solve my problem, but also is a good pointer from which I understand the skimage more. Thank you very much Yongda On Monday, June 17, 2013 12:44:56 PM UTC-7, Johannes Sch?nberger wrote: > > Hi, > > Have a look at the code of the `skimage.transform.rotate` function in > `skimage/transform/_warps.py`. The code in there is applicable to arbitrary > transformations. 
> > Johannes Sch?nberger > > Am 17.06.2013 um 20:01 schrieb Yongda Chen >: > > > > I am trying to do geometric transformation with warp in scikit-image, > but I cannot get the full size transformed image which has different image > size to original image. Even after setting up output_shape parameter, it > still lost part of image which is transformed to negative coordinates. > > My question is how to get full size transformed image when doing affine > transform, projective transform with warp function? > > Following is the example code I am working on > > > > import matplotlib.pyplot as plt > > import skimage.transform import * > > import skimage import data > > import numpy as np > > > > checkboard = data.checkboard() > > tform = AffineTransform(rotation = np.pi/6) > > checkborad_transformed = warp(checkboard, tform, output_shape=(400,400)) > > fig = plt.figure() > > plt.imshow(checkboard_transformed, cmap = plt.cm.gray) > > > > > > > > Thanks > > > > -- > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image... at googlegroups.com . > > For more options, visit https://groups.google.com/groups/opt_out. > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongda.chen at gmail.com Wed Jun 19 14:36:31 2013 From: yongda.chen at gmail.com (Yongda Chen) Date: Wed, 19 Jun 2013 11:36:31 -0700 (PDT) Subject: How to get full size image which is transformed by warp function In-Reply-To: References: <105e7fad-69b6-47b1-b1e5-4a818c3ae9b2@googlegroups.com> Message-ID: Hi Stefan, Thank you very much for your suggestion. I am trying to figure out how to add in a certain amount of translation for different transforms. 
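To make the translation concrete: one way to keep the whole warped image in frame (a sketch along the lines of what `skimage.transform.rotate` does internally, not code taken from skimage itself) is to push the image corners through the forward transform, take their bounding box, and compose the transform with a translation that moves that box to positive coordinates:

```python
import numpy as np
from skimage import data
from skimage.transform import AffineTransform, warp

image = data.checkerboard()          # note: the function is `checkerboard`
tform = AffineTransform(rotation=np.pi / 6)

# Push the image corners (in x, y order) through the forward transform
# to find the bounding box of the warped image.
rows, cols = image.shape
corners = np.array([[0, 0], [cols, 0], [cols, rows], [0, rows]])
warped_corners = tform(corners)
min_xy = warped_corners.min(axis=0)
max_xy = warped_corners.max(axis=0)

# Compose with a translation that removes the negative coordinates,
# and size the output to hold the full bounding box.
shift = AffineTransform(translation=-min_xy)
out_shape = np.ceil(max_xy - min_xy)[::-1].astype(int)  # (rows, cols)
full = warp(image, (tform + shift).inverse, output_shape=out_shape)
```

For a 200x200 checkerboard rotated by 30 degrees this gives a 274x274 output with no clipped corners; the same corner-mapping trick works for any projective transform, not just rotations.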
Thanks Yongda On Tuesday, June 18, 2013 12:24:46 AM UTC-7, Stefan van der Walt wrote: > > On Mon, Jun 17, 2013 at 8:01 PM, Yongda Chen > > wrote: > > I am trying to do geometric transformation with warp in scikit-image, > but I > > cannot get the full size transformed image which has different image > size to > > original image. Even after setting up output_shape parameter, it still > lost > > part of image which is transformed to negative coordinates. > > I'd represent the rotation as a transformation matrix, and then also > add in a certain amount of translation to account for the negative > coordinates. I believe this is what Johannes also recommended, and > what is coded up in `skimage.transform.rotate`. > > St?fan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.pincus at yale.edu Wed Jun 19 15:07:17 2013 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Wed, 19 Jun 2013 15:07:17 -0400 Subject: Heap and Eikonal eqn In-Reply-To: References: Message-ID: > Hi Stefan, > > Thanks I'll check it out. Though, I will need to store 3 values in the heap. I see that BinaryHeap takes just a float value. I'll try and see if it can take a list perhaps. No, it can't I'm afraid -- the heap stores a flat C-style float* array internally. Your best option is to use the heap cython code as a starting point and either add two more flat arrays of floats, or change it to an array of pointers to some other data structure. If you don't want to hack on cython code just now, you could instead maintain three heaps (one for each value). This is obviously suboptimal, but at least it'll work with only constant-time slowdown so you can use it as a proof of concept. 
Zach From stefan at sun.ac.za Wed Jun 19 09:29:47 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 19 Jun 2013 15:29:47 +0200 Subject: Heap and Eikonal eqn In-Reply-To: References: Message-ID: Hi Chintak On Wed, Jun 19, 2013 at 12:50 PM, Chintak Sheth wrote: > How should I implement a heap structure in Python? In-built heapq is not > flexible enough. namedtuple seems to be an alternative. Although how should > I add the pointer support for next and prev? I need 2 ints and 1 float for > data. Any thoughts ? We already have some heap structures in skimage--have you had a look at those? St?fan From chintaksheth at gmail.com Wed Jun 19 06:50:11 2013 From: chintaksheth at gmail.com (Chintak Sheth) Date: Wed, 19 Jun 2013 16:20:11 +0530 Subject: Heap and Eikonal eqn Message-ID: Hi guys, I'm implementing image inpainting for my GSoC project. The proposal is scikit-image: Image Inpainting for Restoration. I am first wanting to implement it using Fast Marching Method (FMM). For this I need to solve the Eikonal equation and also implement a heap structure. I checked StackOverflow. ODE can be solved using scipy.integrate.odeint. Any other approaches? How should I implement a heap structure in Python? In-built heapq is not flexible enough. namedtuple seems to be an alternative. Although how should I add the pointer support for next and prev? I need 2 ints and 1 float for data. Any thoughts ? Thanks, Chintak -------------- next part -------------- An HTML attachment was scrubbed... URL: From chintaksheth at gmail.com Wed Jun 19 14:55:03 2013 From: chintaksheth at gmail.com (Chintak Sheth) Date: Thu, 20 Jun 2013 00:25:03 +0530 Subject: Heap and Eikonal eqn In-Reply-To: References: Message-ID: Hi Stefan, Thanks I'll check it out. Though, I will need to store 3 values in the heap. I see that BinaryHeap takes just a float value. I'll try and see if it can take a list perhaps. 
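One more option for prototyping before touching the Cython heap (an aside, not a claim about skimage's BinaryHeap): the stdlib heapq can hold the three values directly as (T, row, col) tuples, which compare on T first, and the missing decrease-key operation is conventionally worked around with lazy deletion. A sketch, with an illustrative helper name:

```python
import heapq

def fmm_pop_order(updates):
    """Pop (T, i, j) entries in increasing T, skipping stale duplicates.

    `updates` is a sequence of (T, i, j); a later entry for the same
    (i, j) supersedes earlier ones (lazy decrease-key).
    """
    heap = []
    best = {}  # (i, j) -> smallest T pushed so far
    for T, i, j in updates:
        if best.get((i, j), float("inf")) > T:
            best[(i, j)] = T
            heapq.heappush(heap, (T, i, j))  # old entries remain, now stale

    order = []
    done = set()
    while heap:
        T, i, j = heapq.heappop(heap)
        if (i, j) in done or best[(i, j)] != T:
            continue  # stale entry: a smaller T was pushed later
        done.add((i, j))
        order.append((T, i, j))
    return order
```

This is O(log n) per push/pop with only constant-factor overhead from the stale entries, so it is a reasonable proof of concept before rewriting the inner loop in Cython.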
Chintak -------------- next part -------------- An HTML attachment was scrubbed... URL: From deklerkmc at gmail.com Thu Jun 20 06:17:16 2013 From: deklerkmc at gmail.com (Marc de Klerk) Date: Thu, 20 Jun 2013 03:17:16 -0700 (PDT) Subject: Graph Cuts implementation Message-ID: <1776a04e-954c-413c-954e-04d83c424721@googlegroups.com> Hi Everyone, One of the items on my todo list for the first three weeks of my GSoC is to implement a CPU graph cut. I've made a couple of points from a preliminary literature survey and would like to get some feedback / recommendations / advice. Graph cut algorithms can generally be regarded as either push-relabel (PR) or augmenting-path (AP) style. - The go-to algorithm has typically been the Boykov and Kolmogorov algorithm (AP) [1] - The current state-of-the-art graph cut is packaged as GridCut, a product of [2] - Based on [1] - Multi-core (from [3]) - Cache-efficient memory layout (from [2]) - For grid-like topologies - A push-relabel variant exists from [4] - Multi-core (from [3]) - Not bound by memory constraints - For grid-like topologies - From what I can gather, BK typically outperforms other approaches such as the push-relabel algorithm, but to what extent I can't tell from the literature. - [4] describes memory layout strategies to optimize caching for both AP and PR style algorithms, but they chose to implement their strategies on BK. - PR style algorithms are not limited by memory constraints, whereas AP style algorithms are; however, doing graph cuts on a coarse-to-fine scale seems to be an answer. - PR is the only feasible way to do graph cuts on the GPU. So my suggestion is to implement the PR style algorithm of [4], using the strategies described in [2] to speed things up. This then lets us make a fairer comparison between the CPU and GPU versions of the graph cut. Does this sound like a good way forward? It does require some multi-core programming which I'm not familiar with - are there any examples of something similar in scikit-image?
[1] Boykov, Y., & Kolmogorov, V. (2004). An experimental comparison of min-cut/max- flow algorithms for energy minimization in vision. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 26(9), 1124?1137. doi:10.1109/TPAMI.2004.60 [2] Jamriska, O., Sykora, D., & Hornung, A. (2012). Cache-efficient graph cuts on structured grids (pp. 3673?3680). Presented at the Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. doi:10.1109/CVPR.2012.6248113 [3] Liu, J., & Sun, J. (2010). Parallel graph-cuts by adaptive bottom-up merging (pp. 2181?2188). Presented at the Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. doi:10.1109/CVPR.2010.5539898 [4] Delong, A., & Boykov, Y. (2008). A Scalable graph-cut algorithm for N-D grids (pp. 1?8). Presented at the Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on. doi:10.1109/CVPR.2008.4587464 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Thu Jun 20 19:03:28 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Thu, 20 Jun 2013 18:03:28 -0500 Subject: how to highlight/shade a segment in an image. In-Reply-To: <51C3841E.3080504@gmail.com> References: <51C3841E.3080504@gmail.com> Message-ID: On Thu, Jun 20, 2013 at 5:37 PM, Brickle Macho wrote: > I over segment an image using a superpixel algorithm. I region grow > using the superpixels to end up with a segmented image, a label-image. > I overlay the label boundaries using mark_boundaries(). I can click on > a segment/region and indicate it as either foreground or background. > This foreground/background information is maintained in a python dict. > How can I provide visual feedback, say tinting the clicked segment, in > the image. > > Short version, given a label, a lable-image and a image, how do I > shade/tint the label area. > You could try out the label2rgb PR: https://github.com/scikit-image/scikit-image/pull/485 Cheers, -Tony > Thanks, > > Michael. 
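Until something like label2rgb is merged, a hand-rolled tint is only a few lines of numpy (a hedged sketch, not the PR's implementation; the function name is illustrative): blend a colour into the image wherever the label image matches the clicked label.

```python
import numpy as np

def tint_label(image, label_image, label, color=(1.0, 0.0, 0.0), alpha=0.3):
    """Return an RGB copy of `image` with `label`'s region alpha-blended."""
    img = np.asarray(image, dtype=float)
    if img.ndim == 2:                  # grey level -> RGB
        img = np.dstack([img] * 3)
    out = img.copy()
    mask = np.asarray(label_image) == label
    # Blend: (1 - alpha) * original + alpha * overlay colour
    out[mask] = (1 - alpha) * out[mask] + alpha * np.asarray(color, dtype=float)
    return out
```

Clicking a segment then just means calling this with the label under the cursor and re-drawing, e.g. `plt.imshow(tint_label(img, labels, clicked_label))`.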
> -- > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe@**googlegroups.com > . > For more options, visit https://groups.google.com/**groups/opt_out > . > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Fri Jun 21 00:44:37 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Thu, 20 Jun 2013 23:44:37 -0500 Subject: how to highlight/shade a segment in an image. In-Reply-To: <51C3CC23.5090303@gmail.com> References: <51C3841E.3080504@gmail.com> <51C3CC23.5090303@gmail.com> Message-ID: On Thu, Jun 20, 2013 at 10:44 PM, Brickle Macho wrote: > On 21/06/13 7:03 AM, Tony Yu wrote: > > > Short version, given a label, a lable-image and a image, how do I >> shade/tint the label area. >> > > You could try out the label2rgb PR: > > https://github.com/scikit-image/scikit-image/pull/485 > > > Thanks. Look interesting. How do I try/pull/incorporate label2rgb code? > Assuming you're running skimage from git, you could add some config parameters to fetch PRs from skimage: https://gist.github.com/piscisaureus/3342247 Or you could add my repo as a remote and checkout a copy of my branch: git remote add tonysyu https://github.com/tonysyu/scikit-image.git git checkout image_label2rgb tonysyu/image_label2rgb (untested) Best, -Tony Michael. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bricklemacho at gmail.com Thu Jun 20 18:37:18 2013 From: bricklemacho at gmail.com (Brickle Macho) Date: Fri, 21 Jun 2013 06:37:18 +0800 Subject: how to highlight/shade a segment in an image. Message-ID: <51C3841E.3080504@gmail.com> I over segment an image using a superpixel algorithm. I region grow using the superpixels to end up with a segmented image, a label-image. I overlay the label boundaries using mark_boundaries(). 
I can click on a segment/region and indicate it as either foreground or background. This foreground/background information is maintained in a python dict. How can I provide visual feedback, say tinting the clicked segment, in the image. Short version, given a label, a lable-image and a image, how do I shade/tint the label area. Thanks, Michael. -- From bricklemacho at gmail.com Thu Jun 20 23:44:35 2013 From: bricklemacho at gmail.com (Brickle Macho) Date: Fri, 21 Jun 2013 11:44:35 +0800 Subject: how to highlight/shade a segment in an image. In-Reply-To: References: <51C3841E.3080504@gmail.com> Message-ID: <51C3CC23.5090303@gmail.com> On 21/06/13 7:03 AM, Tony Yu wrote: > > Short version, given a label, a lable-image and a image, how do I > shade/tint the label area. > > > You could try out the label2rgb PR: > > https://github.com/scikit-image/scikit-image/pull/485 Thanks. Look interesting. How do I try/pull/incorporate label2rgb code? Michael. -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jostein.floystad at gmail.com Sat Jun 22 04:31:50 2013 From: jostein.floystad at gmail.com (jostein.floystad at gmail.com) Date: Sat, 22 Jun 2013 01:31:50 -0700 (PDT) Subject: Inconsistent projection centre for the Radon transform Message-ID: <672d3beb-5db8-4463-88cb-679acbc4ce16@googlegroups.com> Hi everyone, I recently found out that scikit-image has a bug (and has had for a long time, it seems) in the Radon transform module (radon/iradon, not related to the discreete versions). The problem manifests itself as images being shifted by going through a forward and inverse Radon transform. In fact, the following code would shift an image N pixels both horizontally and vertically: # image = np.array(...) for i in range(N): sinogram = radon(image) image = iradon(sinogram) These issues can also easily lead to poorer reconstruction quality than what the data in the sinogram allows. The purpose of this message is two-fold: 1. 
To ask for review on my PR addressing this bug: https://github.com/scikit-image/scikit-image/pull/596 . I realize that this PR is probably not the easiest to review; I guess this reflects that it was tricky for me to get right. However, to me this just makes a review seem all the more important. 2. Alert any users that have based results on this code to check them carefully for systematic errors. If the radon_transform module has been used for tomography simulations, it is quite likely that the results are affected. In the case of using the module for reconstructing tomography data obtained otherwise, the results may be affected and they may not; this will depend on the specifics of the situation. Jostein -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Sun Jun 23 00:01:30 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Sat, 22 Jun 2013 23:01:30 -0500 Subject: SciPy sprint for scikit-image Message-ID: Hi everyone, So it looks like there will be a pretty good collection of scikit-image contributors at SciPy 2013. (I'm looking at you St?fan van der Walt, Josh Warner, Marianne Corvellec, James Bergstra... others?) And any lurkers on the list of whatever level should join in for sure. We should have a pretty good sprint this year. Now might be a good time to propose some ideas and see what we can knock out in a day or two. I think it'd be great to expand the docs and maybe try to improve the docs (with start-to-finish type examples). Also, so improvements to the IO infrastructure and video IO. Cheers! -Tony -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tsyu80 at gmail.com Sun Jun 23 00:04:08 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Sat, 22 Jun 2013 23:04:08 -0500 Subject: SciPy sprint for scikit-image In-Reply-To: References: Message-ID: On Sat, Jun 22, 2013 at 11:01 PM, Tony Yu wrote: > Hi everyone, > > So it looks like there will be a pretty good collection of scikit-image > contributors at SciPy 2013. (I'm looking at you St?fan van der Walt, Josh > Warner, Marianne Corvellec, James Bergstra... others?) And any lurkers on > the list of whatever level should join in for sure. > > We should have a pretty good sprint this year. Now might be a good time to > propose some ideas and see what we can knock out in a day or two. > > I think it'd be great to expand the docs and maybe try to improve the docs > (with start-to-finish type examples). Also, so improvements to the IO > infrastructure and video IO. > > Cheers! > -Tony > Sorry, I've been out drinking tonight. There may have been a bit of repetition in that last paragraph ;) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkin at walla.co.il Sun Jun 23 05:16:55 2013 From: dkin at walla.co.il (Dan) Date: Sun, 23 Jun 2013 02:16:55 -0700 (PDT) Subject: 1st and 2st order statistical texture features of an image Message-ID: <79de3091-00d6-435e-9552-42cf32436789@googlegroups.com> Hi, I wish to perform first (histogram based mean, stdev, smoothness, skewness, uniformity and entropy) and second order (GLCM based contrast, correlation, energy, homogeneity) statistical texture features of an image. is it possible in scikit-image? If so a small script will be a huge help. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... 
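The second-order features asked about here are available via `skimage.feature.greycomatrix` and `greycoprops`; the first-order ones fall straight out of the normalized grey-level histogram. The sketch below computes both with plain numpy so the definitions are explicit (function names are illustrative; `smoothness` follows the Gonzalez & Woods definition, and the image is assumed to hold integers in [0, levels)):

```python
import numpy as np

def first_order_stats(image, levels=256):
    """Histogram-based mean, std, smoothness, skewness, uniformity, entropy."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()                  # normalized histogram
    g = np.arange(levels)
    mean = (g * p).sum()
    var = ((g - mean) ** 2 * p).sum()
    return {
        "mean": mean,
        "std": np.sqrt(var),
        "smoothness": 1 - 1 / (1 + var),   # R in Gonzalez & Woods
        # third central moment; divide by std**3 for standardized skewness
        "skewness": ((g - mean) ** 3 * p).sum(),
        "uniformity": (p ** 2).sum(),
        "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum(),
    }

def glcm_props(image, levels, offset=(0, 1)):
    """Normalized co-occurrence matrix and a few Haralick-style features."""
    dr, dc = offset                        # (row, col) displacement, >= 0
    a = image[:image.shape[0] - dr, :image.shape[1] - dc]
    b = image[dr:, dc:]
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    P /= P.sum()
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    si = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return {
        "contrast": ((i - j) ** 2 * P).sum(),
        "energy": np.sqrt((P ** 2).sum()),
        "homogeneity": (P / (1 + (i - j) ** 2)).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (si * sj),
    }
```

In practice the `greycomatrix`/`greycoprops` pair replaces `glcm_props` (with control over distances, angles, and symmetry); the hand-rolled version is only meant to show where the numbers come from.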
URL: From stefan at sun.ac.za Sun Jun 23 00:19:11 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 23 Jun 2013 06:19:11 +0200 Subject: SciPy sprint for scikit-image In-Reply-To: References: Message-ID: Hey, Tony On Sun, Jun 23, 2013 at 6:01 AM, Tony Yu wrote: > So it looks like there will be a pretty good collection of scikit-image > contributors at SciPy 2013. (I'm looking at you Stéfan van der Walt, Josh > Warner, Marianne Corvellec, James Bergstra... others?) And any lurkers on > the list of whatever level should join in for sure. Very exciting! We've got such a fantastic team and great momentum going at the moment, and I can't wait to work with all of you in person again. > We should have a pretty good sprint this year. Now might be a good time to > propose some ideas and see what we can knock out in a day or two. Let's keep track of the sprint ideas on the wiki page: https://github.com/scikit-image/scikit-image/wiki/SciPy2013-Sprint > I think it'd be great to expand the docs and maybe try to improve the docs > (with start-to-finish type examples). Also, so improvements to the IO > infrastructure and video IO. All good ideas! The video I/O especially needs a lot of love. The CV backend works quite well, but the GStreamer backend is fundamentally broken. Also, it would be nice to figure out how to do real-time video display in the viewer module. I'm in the middle of a 30-hour itinerary, so I'll be out of touch for the next day or two. Stéfan From stefan at sun.ac.za Sun Jun 23 00:20:59 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 23 Jun 2013 06:20:59 +0200 Subject: SciPy sprint for scikit-image In-Reply-To: References: Message-ID: On Sun, Jun 23, 2013 at 6:04 AM, Tony Yu wrote: > Sorry, I've been out drinking tonight. There may have been a bit of > repetition in that last paragraph ;) As with most late-night posting, it's often only the second one that gives you away :) Cheers!
Stéfan From stefan at sun.ac.za Sun Jun 23 00:30:47 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 23 Jun 2013 06:30:47 +0200 Subject: Inconsistent projection centre for the Radon transform In-Reply-To: <672d3beb-5db8-4463-88cb-679acbc4ce16@googlegroups.com> References: <672d3beb-5db8-4463-88cb-679acbc4ce16@googlegroups.com> Message-ID: Dear Jostein On Sat, Jun 22, 2013 at 10:31 AM, wrote: > 2. Alert any users that have based results on this code to check them > carefully for systematic errors. If the radon_transform module has been used > for tomography simulations, it is quite likely that the results are > affected. In the case of using the module for reconstructing tomography data > obtained otherwise, the results may be affected and they may not; this will > depend on the specifics of the situation. Thank you for bringing this issue to everyone's attention. Rigorous code review, which leads to the uncovering of such mistakes, is what makes open source science so compelling to me. I'll review your PR, so that we can get this fixed ASAP. Regards Stéfan From deklerkmc at gmail.com Sun Jun 23 18:31:41 2013 From: deklerkmc at gmail.com (Marc de Klerk) Date: Sun, 23 Jun 2013 15:31:41 -0700 (PDT) Subject: Graph Cuts implementation In-Reply-To: <20130623214403.GB3302@phare.normalesup.org> References: <1776a04e-954c-413c-954e-04d83c424721@googlegroups.com> <20130623214403.GB3302@phare.normalesup.org> Message-ID: Hi Emmanuelle, I'll have to go have a look whether it's that simple, but thanks for pointing me in the direction of joblib... In the meantime I got a lot of the scaffolding down and put together a demo for Grow Cuts on the GPU. Installation instructions are on my gsoc blog - http://mygsoc.blogspot.com/2013/06/a-kickstart-to-first-week.html Cheers! Marc On Sunday, June 23, 2013 11:44:03 PM UTC+2, Emmanuelle Gouillart wrote: > > Hi Mark, > > thanks for this first survey.
I'll read articles [2] and [4] in the next > days, so that I can give you more feedback. For the multicore > programming, is it an embarrassingly parallel problem where you could use > joblib.Parallel (or simply multiprocessing), or do you have a large > number of low-level "small operations" that need to be performed > together? Anyway, maybe you can write a first version of the code that is > not parallel, convince yourself that the implementation is correct, and > only after think about the parallelization? > > Cheers, > Emmanuelle > > On Thu, Jun 20, 2013 at 03:17:16AM -0700, Marc de Klerk wrote: > > Hi Everyone, > > > One of items on my todo list in first three weeks of my GSOC is to > implement a > > CPU Graph-cuts. > > I've made a couple points from a preliminary literature survey and would > like > > to get some feedback / recommendations / advice. > > > Graph cut algorithms can generally be regarded as either begin > push-relabel > > (PR) or augmenting paths (AP) style. > > - The goto algorithm has typically been the Boykov and Kolmogorov > algorithm > > (AP) [1] > > - The current state-of-art graph-cut is packaged as GridCut, product of > [2] > > - Based on [1] > > - Multi core (from [3]) > > - Cache efficient memory layout (from [2]) > > - For grid-like topologies > > - A push relabel variant exists from [4] > > - Multi core (from [3]) > > - Not bound by memory constraints > > - For grid-like topologies > > - From what I can gather BK typically outperforms other approaches such > as the > > push-relabel algorithm, but to what extent I'm can't tell from the > literature? > > - [4] describes memory layout strategies to optimize caching for both AP > and PR > > style algorithms, but that choose the implement their strategies on BK. > > - PR style algorithms are not limited by memory constraints, whereas AP > style > > algorithms are, however doing graph cuts on a coarse to fine scale seem > to be > > an answer. 
> > - PR is the only feasible way to do graph cuts on the GPU. > > > So I'm suggestions to implement the PR style algorithm of [4] using the > > strategies described in [2] to speed things up. This then let's us make > a > > fairer comparison between CPU and GPU versions of the graph cut > > > Does this sound like a good way forward? > > It does require some multi-core programming which I'm not familiar with > - are > > there any examples of something similar in scikit-image? > > > [1] Boykov, Y., & Kolmogorov, V. (2004). An experimental comparison of > min-cut/ > > max- flow algorithms for energy minimization in vision. Pattern Analysis > and > > Machine Intelligence, IEEE Transactions on, 26(9), 1124?1137. > doi:10.1109/ > > TPAMI.2004.60 > > [2] Jamriska, O., Sykora, D., & Hornung, A. (2012). Cache-efficient > graph cuts > > on structured grids (pp. 3673?3680). Presented at the Computer Vision > and > > Pattern Recognition (CVPR), 2012 IEEE Conference on. doi:10.1109/ > > CVPR.2012.6248113 > > [3] Liu, J., & Sun, J. (2010). Parallel graph-cuts by adaptive bottom-up > > merging (pp. 2181?2188). Presented at the Computer Vision and Pattern > > Recognition (CVPR), 2010 IEEE Conference on. > doi:10.1109/CVPR.2010.5539898 > > [4] Delong, A., & Boykov, Y. (2008). A Scalable graph-cut algorithm for > N-D > > grids (pp. 1?8). Presented at the Computer Vision and Pattern > Recognition, > > 2008. CVPR 2008. IEEE Conference on. doi:10.1109/CVPR.2008.4587464 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marianne.corvellec at ens-lyon.org Sun Jun 23 23:28:02 2013 From: marianne.corvellec at ens-lyon.org (Marianne Corvellec) Date: Sun, 23 Jun 2013 20:28:02 -0700 (PDT) Subject: SciPy sprint for scikit-image In-Reply-To: References: Message-ID: Hello, Yay! I am super excited at SciPy 2013 as well. 
:D Is providing answers on StackOverFlow (which make use of scikit-image) still {under consideration, considered {a priority, an important thing to do}}? I have some interest in 2d -> 3d stuff; the video stuff sounds great -- I recently contributed (1 line :p) to PiTiVi (http://www.pitivi.org/) so I'm in the mood. ;) Otherwise, I'm all for improving the docs -- you can count on me. This sprint looks promising! :) Can't wait to see some of you in person, Marianne On Sunday, June 23, 2013 12:20:59 AM UTC-4, Stefan van der Walt wrote: > > On Sun, Jun 23, 2013 at 6:04 AM, Tony Yu > > wrote: > > Sorry, I've been out drinking tonight. There may have been a bit of > > repetition in that last paragraph ;) > > As with most late-night posting, it's often only the second one that > gives you away :) > > Cheers! > Stéfan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emmanuelle.gouillart at nsup.org Sun Jun 23 17:44:03 2013 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Sun, 23 Jun 2013 23:44:03 +0200 Subject: Graph Cuts implementation In-Reply-To: <1776a04e-954c-413c-954e-04d83c424721@googlegroups.com> References: <1776a04e-954c-413c-954e-04d83c424721@googlegroups.com> Message-ID: <20130623214403.GB3302@phare.normalesup.org> Hi Mark, thanks for this first survey. I'll read articles [2] and [4] in the next days, so that I can give you more feedback. For the multicore programming, is it an embarrassingly parallel problem where you could use joblib.Parallel (or simply multiprocessing), or do you have a large number of low-level "small operations" that need to be performed together? Anyway, maybe you can write a first version of the code that is not parallel, convince yourself that the implementation is correct, and only afterwards think about the parallelization?
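For the embarrassingly parallel case the pattern really is only a few lines. An illustrative sketch with plain stdlib multiprocessing (the per-tile function here is made up, just a stand-in for whatever independent work each chunk of the image needs; joblib.Parallel looks almost identical):

```python
import multiprocessing

import numpy as np


def process_tile(tile):
    # Hypothetical per-tile operation: stands in for whatever
    # independent work each chunk of the image needs.
    return tile.mean()


if __name__ == "__main__":
    image = np.random.rand(512, 512)
    # Split the image into independent horizontal strips.
    tiles = [image[i:i + 128] for i in range(0, 512, 128)]
    # Each strip is processed in a separate worker process.
    with multiprocessing.Pool() as pool:
        results = pool.map(process_tile, tiles)
    # Equal-sized strips, so the mean of strip means equals the global mean.
    assert abs(np.mean(results) - image.mean()) < 1e-9
```

The same sketch works for per-label or per-seed work; the only requirement is that the tasks don't need to communicate mid-run.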
Cheers, Emmanuelle On Thu, Jun 20, 2013 at 03:17:16AM -0700, Marc de Klerk wrote: > Hi Everyone, > One of items on my todo list in first three weeks of my GSOC is to implement a > CPU Graph-cuts. > I've made a couple points from a preliminary literature survey and would like > to get some feedback / recommendations / advice. > Graph cut algorithms can generally be regarded as either begin push-relabel > (PR) or augmenting paths (AP) style. > - The goto algorithm has typically been the Boykov and Kolmogorov algorithm > (AP) [1] > - The current state-of-art graph-cut is packaged as GridCut, product of [2] > - Based on [1] > - Multi core (from [3]) > - Cache efficient memory layout (from [2]) > - For grid-like topologies > - A push relabel variant exists from [4] > - Multi core (from [3]) > - Not bound by memory constraints > - For grid-like topologies > - From what I can gather BK typically outperforms other approaches such as the > push-relabel algorithm, but to what extent I'm can't tell from the literature? > - [4] describes memory layout strategies to optimize caching for both AP and PR > style algorithms, but that choose the implement their strategies on BK. > - PR style algorithms are not limited by memory constraints, whereas AP style > algorithms are, however doing graph cuts on a coarse to fine scale seem to be > an answer. > - PR is the only feasible way to do graph cuts on the GPU. > So I'm suggestions to implement the PR style algorithm of [4] using the > strategies described in [2] to speed things up. This then let's us make a > fairer comparison between CPU and GPU versions of the graph cut > Does this sound like a good way forward? > It does require some multi-core programming which I'm not familiar with - are > there any examples of something similar in scikit-image? > [1] Boykov, Y., & Kolmogorov, V. (2004). An experimental comparison of min-cut/ > max- flow algorithms for energy minimization in vision. 
Pattern Analysis and > Machine Intelligence, IEEE Transactions on, 26(9), 1124–1137. doi:10.1109/ > TPAMI.2004.60 > [2] Jamriska, O., Sykora, D., & Hornung, A. (2012). Cache-efficient graph cuts > on structured grids (pp. 3673–3680). Presented at the Computer Vision and > Pattern Recognition (CVPR), 2012 IEEE Conference on. doi:10.1109/ > CVPR.2012.6248113 > [3] Liu, J., & Sun, J. (2010). Parallel graph-cuts by adaptive bottom-up > merging (pp. 2181–2188). Presented at the Computer Vision and Pattern > Recognition (CVPR), 2010 IEEE Conference on. doi:10.1109/CVPR.2010.5539898 > [4] Delong, A., & Boykov, Y. (2008). A Scalable graph-cut algorithm for N-D > grids (pp. 1–8). Presented at the Computer Vision and Pattern Recognition, > 2008. CVPR 2008. IEEE Conference on. doi:10.1109/CVPR.2008.4587464 From jschoenberger at demuc.de Mon Jun 24 01:19:23 2013 From: jschoenberger at demuc.de (=?iso-8859-1?Q?Johannes_Sch=F6nberger?=) Date: Mon, 24 Jun 2013 07:19:23 +0200 Subject: SciPy sprint for scikit-image In-Reply-To: References: Message-ID: Unfortunately, I cannot join you for this sprint, but here are my ideas / suggestions: - Full Python 3 support - Refactor IO module - Overall consistency improvements for documentation Cheers and have fun! Johannes Schönberger On 24.06.2013 at 05:28, Marianne Corvellec wrote: > Hello, > > Yay! I am super excited at SciPy 2013 as well. :D > > Is providing answers on StackOverFlow (which make use of scikit-image) still {under consideration, considered {a priority, an important thing to do}}? > > I have some interest for 2d -> 3d stuff; the video stuff sounds great -- I recently contributed (1 line :p) to PiTiVi (http://www.pitivi.org/) so I'm in the mood. ;) > Otherwise, I'm all for improving the docs -- you can count on me. > > This sprint looks promising!
:) > > Can't wait to see some of you in person, > Marianne > > On Sunday, June 23, 2013 12:20:59 AM UTC-4, Stefan van der Walt wrote: > On Sun, Jun 23, 2013 at 6:04 AM, Tony Yu wrote: > > Sorry, I've been out drinking tonight. There may have been a bit of > > repetition in that last paragraph ;) > > As with most late-night posting, it's often only the second one that > gives you away :) > > Cheers! > Stéfan > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > From jni.soma at gmail.com Mon Jun 24 14:30:43 2013 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Mon, 24 Jun 2013 14:30:43 -0400 Subject: SciPy sprint for scikit-image In-Reply-To: References: Message-ID: Wish I could come! Last year was awesome and it really changed my life – you might remember I'd never done a PR before that sprint! =) I'll be at EuroSciPy this year, anyone else? On Mon, Jun 24, 2013 at 1:19 AM, Johannes Schönberger < jschoenberger at demuc.de> wrote: > Unfortunately, I cannot join you with this sprint, but my ideas / > suggestions: > > - Full Python 3 support > - Refactor IO module > - Overall consistency improvements for documentation > > Cheers and have fun! > > Johannes Schönberger > > Am 24.06.2013 um 05:28 schrieb Marianne Corvellec < > marianne.corvellec at ens-lyon.org>: > > > Hello, > > > > Yay! I am super excited at SciPy 2013 as well. :D > > > > Is providing answers on StackOverFlow (which make use of scikit-image) > still {under consideration, considered {a priority, an important thing to > do}}? > > > > I have some interest for 2d -> 3d stuff; the video stuff sounds great -- > I recently contributed (1 line :p) to PiTiVi (http://www.pitivi.org/) so > I'm in the mood.
> ;) > > Otherwise, I'm all for improving the docs -- you can count on me. > > > > This sprint looks promising! :) > > > > Can't wait to see some of you in person, > > Marianne > > > > On Sunday, June 23, 2013 12:20:59 AM UTC-4, Stefan van der Walt wrote: > > On Sun, Jun 23, 2013 at 6:04 AM, Tony Yu wrote: > > > Sorry, I've been out drinking tonight. There may have been a bit of > > > repetition in that last paragraph ;) > > > > As with most late-night posting, it's often only the second one that > > gives you away :) > > > > Cheers! > > Stéfan > > > > -- > > You received this message because you are subscribed to the Google > Groups "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an email to scikit-image+unsubscribe at googlegroups.com. > > For more options, visit https://groups.google.com/groups/opt_out. > > > > > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Mon Jun 24 23:50:43 2013 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Mon, 24 Jun 2013 20:50:43 -0700 (PDT) Subject: SciPy sprint for scikit-image In-Reply-To: <20130624215905.GB3172@phare.normalesup.org> References: <20130624215905.GB3172@phare.normalesup.org> Message-ID: <8fa20c71-9d9d-4315-9fbd-102fabf51b2d@googlegroups.com> I'm here! I'd love to get some additional 2d -> 3d work done, and this might be a good excuse to touch up and submit several legacy deconvolution algorithms (Wiener, Lucy-Richardson, etc.) I've been sitting on.
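For anyone unfamiliar, the core of Lucy-Richardson is just two convolutions per iteration; here is a minimal, textbook-form numpy/scipy sketch (illustrative only, not the tuned implementations mentioned above):

```python
import numpy as np
from scipy.signal import fftconvolve


def richardson_lucy(data, psf, iterations=50):
    """Textbook Richardson-Lucy deconvolution (illustrative sketch)."""
    data = data.astype(float)
    psf = psf / psf.sum()                # normalize the point-spread function
    psf_mirror = psf[::-1, ::-1]         # flipped PSF for the correlation step
    estimate = np.full(data.shape, 0.5)  # flat, nonnegative starting estimate
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = data / (reblurred + 1e-12)  # guard against division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate


# Toy check: blur an impulse with a box PSF, then deconvolve it again.
impulse = np.zeros((32, 32))
impulse[16, 16] = 1.0
psf = np.ones((5, 5)) / 25.0
blurred = fftconvolve(impulse, psf, mode='same')
restored = richardson_lucy(blurred, psf)
```

The restored image re-concentrates the flux that the PSF spread out, which is the behavior a test suite for such a module would want to pin down.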
Looking forward to it, Josh On Monday, June 24, 2013 4:59:05 PM UTC-5, Emmanuelle Gouillart wrote: > > On Mon, Jun 24, 2013 at 02:30:43PM -0400, Juan Nunez-Iglesias wrote: > > Wish I could come! Last year was awesome and it really changed my life ? > you > > might remember I'd never done a PR before that sprint! =) > > > I'll be at EuroSciPy this year, anyone else? > > I'll be there! Anyone else? > > Emmanuelle > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thouis at gmail.com Mon Jun 24 21:22:52 2013 From: thouis at gmail.com (Thouis (Ray) Jones) Date: Mon, 24 Jun 2013 21:22:52 -0400 Subject: Graph Cuts implementation In-Reply-To: References: <1776a04e-954c-413c-954e-04d83c424721@googlegroups.com> <20130623214403.GB3302@phare.normalesup.org> Message-ID: I would suggest implementing some sort of graph simplification, as in: http://www.3dtv-con2009.org/papers/data/883/paper_EMMCVPR_11.pdf or http://hal.archives-ouvertes.fr/docs/00/78/66/55/PDF/main.pdf These can be quite effective at reducing the complexity of the graph for planar st-mincut problems, without being too difficult to implement. Ray Jones On Sun, Jun 23, 2013 at 6:31 PM, Marc de Klerk wrote: > Hi Emmanuelle, > > I'll have to go have a look weather it's that simple, but thanks for the > pointing me in the direction of joblib... > In the mean I got a lot of the scaffolding down and put together a demo > for Grow Cuts on the GPU. > Installation instructions are on my gsoc blog - > http://mygsoc.blogspot.com/2013/06/a-kickstart-to-first-week.html > > Cheers! > Marc > > > On Sunday, June 23, 2013 11:44:03 PM UTC+2, Emmanuelle Gouillart wrote: >> >> Hi Mark, >> >> thanks for this first survey. I'll read articles [2] and [4] in the next >> days, so that I can give you more feedback. 
For the multicore >> programming, is it an embarrassingly parallel problem where you could use >> joblib.Parallel (or simply multiprocessing), or do you have a large >> number of low-level "small operations" that need to be performed >> together? Anyway, maybe you can write a first version of the code that is >> not parallel, convince yourself that the implementation is correct, and >> only after think about the parallelization? >> >> Cheers, >> Emmanuelle >> >> On Thu, Jun 20, 2013 at 03:17:16AM -0700, Marc de Klerk wrote: >> > Hi Everyone, >> >> > One of items on my todo list in first three weeks of my GSOC is to >> implement a >> > CPU Graph-cuts. >> > I've made a couple points from a preliminary literature survey and >> would like >> > to get some feedback / recommendations / advice. >> >> > Graph cut algorithms can generally be regarded as either begin >> push-relabel >> > (PR) or augmenting paths (AP) style. >> > - The goto algorithm has typically been the Boykov and Kolmogorov >> algorithm >> > (AP) [1] >> > - The current state-of-art graph-cut is packaged as GridCut, product of >> [2] >> > - Based on [1] >> > - Multi core (from [3]) >> > - Cache efficient memory layout (from [2]) >> > - For grid-like topologies >> > - A push relabel variant exists from [4] >> > - Multi core (from [3]) >> > - Not bound by memory constraints >> > - For grid-like topologies >> > - From what I can gather BK typically outperforms other approaches such >> as the >> > push-relabel algorithm, but to what extent I'm can't tell from the >> literature? >> > - [4] describes memory layout strategies to optimize caching for both >> AP and PR >> > style algorithms, but that choose the implement their strategies on BK. >> > - PR style algorithms are not limited by memory constraints, whereas AP >> style >> > algorithms are, however doing graph cuts on a coarse to fine scale seem >> to be >> > an answer. >> > - PR is the only feasible way to do graph cuts on the GPU. 
>> >> > So I'm suggestions to implement the PR style algorithm of [4] using the >> > strategies described in [2] to speed things up. This then let's us make >> a >> > fairer comparison between CPU and GPU versions of the graph cut >> >> > Does this sound like a good way forward? >> > It does require some multi-core programming which I'm not familiar with >> - are >> > there any examples of something similar in scikit-image? >> >> > [1] Boykov, Y., & Kolmogorov, V. (2004). An experimental comparison of >> min-cut/ >> > max- flow algorithms for energy minimization in vision. Pattern >> Analysis and >> > Machine Intelligence, IEEE Transactions on, 26(9), 1124?1137. >> doi:10.1109/ >> > TPAMI.2004.60 >> > [2] Jamriska, O., Sykora, D., & Hornung, A. (2012). Cache-efficient >> graph cuts >> > on structured grids (pp. 3673?3680). Presented at the Computer Vision >> and >> > Pattern Recognition (CVPR), 2012 IEEE Conference on. doi:10.1109/ >> > CVPR.2012.6248113 >> > [3] Liu, J., & Sun, J. (2010). Parallel graph-cuts by adaptive >> bottom-up >> > merging (pp. 2181?2188). Presented at the Computer Vision and Pattern >> > Recognition (CVPR), 2010 IEEE Conference on. >> doi:10.1109/CVPR.2010.5539898 >> > [4] Delong, A., & Boykov, Y. (2008). A Scalable graph-cut algorithm for >> N-D >> > grids (pp. 1?8). Presented at the Computer Vision and Pattern >> Recognition, >> > 2008. CVPR 2008. IEEE Conference on. doi:10.1109/CVPR.2008.4587464 >> > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emmanuelle.gouillart at nsup.org Mon Jun 24 17:59:05 2013 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Mon, 24 Jun 2013 23:59:05 +0200 Subject: SciPy sprint for scikit-image In-Reply-To: References: Message-ID: <20130624215905.GB3172@phare.normalesup.org> On Mon, Jun 24, 2013 at 02:30:43PM -0400, Juan Nunez-Iglesias wrote: > Wish I could come! Last year was awesome and it really changed my life ? you > might remember I'd never done a PR before that sprint! =) > I'll be at EuroSciPy this year, anyone else? I'll be there! Anyone else? Emmanuelle From r.t.wilson.bak at googlemail.com Tue Jun 25 11:37:20 2013 From: r.t.wilson.bak at googlemail.com (Robin Wilson) Date: Tue, 25 Jun 2013 08:37:20 -0700 (PDT) Subject: Issues with scaling images for canny edge detection Message-ID: Hi, *Summary: *I'm fairly new to skimage, and I'm trying to replicate some work I've done in IDL using the Canny edge detector. I've imported the same image into skimage and tried running the Canny function with the same parameters, but I either get a blank image, or very different results to IDL which don't change regardless of the parameters I use. I suspect my problems may be related to how I am scaling my image to make it between 0 and 1, as the documentation for the skimage Canny function requires. *More details:* The input image I used in both IDL and skimage is available at https://www.dropbox.com/s/xaiq9kitrf1b4cf/HOT_sub.tif. 
In IDL I called the CANNY function (documentation available at http://www.exelisvis.com/docs/CANNY.html) as follows: result = CANNY(image, HIGH=0.95, LOW=0.3, SIGMA=2) and got the following image: I loaded the image into skimage as follows: hot = skimage.io.imread("HOT_sub.tif") And removed all negative values by adding the absolute value of the minimum: abs_hot = hot + abs(np.min(hot)) From what I'd read in the documentation, the function img_as_float would then scale this between 0 and 1 in a sensible way, but it gave an error: C:\Python27\lib\site-packages\skimage\util\dtype.pyc in convert(image, dtype) 73 if kind_in == 'f': 74 if np.min(image) < 0 or np.max(image) > 1: ---> 75 raise ValueError("Images of type float must be between 0 and 1") 76 if kind == 'f': 77 # floating point -> floating point ValueError: Images of type float must be between 0 and 1 So I did it myself by simply dividing by the maximum value: im_hot = img_as_float(abs_hot/np.max(abs_hot)) However, running the Canny edge detector on this image produces an entirely blank edge image: edges = canny(im_hot, sigma=2, low_threshold=0.3, high_threshold=0.90) np.sum(edges) # Gives 0 showing there are no edges found Regardless of how I play with the parameters, I can't seem to get it to give me any edges. Interestingly, if I ignore the instructions to make sure that my input image is between 0 and 1, and just use the raw image: edges = canny(hot, sigma=2, low_threshold=0.3, high_threshold=0.90) I get a more sensible result (well, at least it isn't blank!): But this is very different to the result given by IDL - and furthermore, adjusting the parameters doesn't seem to change the output at all. What am I doing wrong here? I suspect it is something to do with the image scaling, but I'm not sure - it could be a conceptual problem with my image processing knowledge, or I could be using skimage improperly. Does anyone have any ideas or suggestions as to where to go from here?
If I manage to solve this I will, of course, write up the solution on my blog so that others can benefit too. Best regards, Robin University of Southampton, UK -------------- next part -------------- An HTML attachment was scrubbed... URL: From deklerkmc at gmail.com Tue Jun 25 18:20:13 2013 From: deklerkmc at gmail.com (Marc de Klerk) Date: Tue, 25 Jun 2013 15:20:13 -0700 (PDT) Subject: Graph Cuts implementation In-Reply-To: References: <1776a04e-954c-413c-954e-04d83c424721@googlegroups.com> <20130623214403.GB3302@phare.normalesup.org> Message-ID: <11fee39a-5140-4170-bdf5-dd530bd40f55@googlegroups.com> Thanks Ray, That was a great read! SlimCuts looks promising. Do you have any thoughts on the implementation? From what I gather, each node must be visited to check whether any adjacent edge has a capacity larger than the sum of the capacities of the remaining adjacent edges. This process is seemingly then performed repeatedly. This will break the regular structure of the graph, meaning that vertex indices will have to be looked up, and I can't figure out how they manage to attain the reported performance. I guess once they have the reduced graph, graph cuts are fast, but I can't see how creating the reduced graph can be fast. They did however mention that C source files are available for research purposes - does GSoC / scikit-image count as research? Marc On Tuesday, June 25, 2013 3:22:52 AM UTC+2, Thouis (Ray) Jones wrote: > > I would suggest implementing some sort of graph simplification, as in: > http://www.3dtv-con2009.org/papers/data/883/paper_EMMCVPR_11.pdf > or > http://hal.archives-ouvertes.fr/docs/00/78/66/55/PDF/main.pdf > > These can be quite effective at reducing the complexity of the graph for > planar st-mincut problems, without being too difficult to implement.
> > Ray Jones > > > > On Sun, Jun 23, 2013 at 6:31 PM, Marc de Klerk > > wrote: > >> Hi Emmanuelle, >> >> I'll have to go have a look weather it's that simple, but thanks for the >> pointing me in the direction of joblib... >> In the mean I got a lot of the scaffolding down and put together a demo >> for Grow Cuts on the GPU. >> Installation instructions are on my gsoc blog - >> http://mygsoc.blogspot.com/2013/06/a-kickstart-to-first-week.html >> >> Cheers! >> Marc >> >> >> On Sunday, June 23, 2013 11:44:03 PM UTC+2, Emmanuelle Gouillart wrote: >>> >>> Hi Mark, >>> >>> thanks for this first survey. I'll read articles [2] and [4] in the next >>> days, so that I can give you more feedback. For the multicore >>> programming, is it an embarrassingly parallel problem where you could >>> use >>> joblib.Parallel (or simply multiprocessing), or do you have a large >>> number of low-level "small operations" that need to be performed >>> together? Anyway, maybe you can write a first version of the code that >>> is >>> not parallel, convince yourself that the implementation is correct, and >>> only after think about the parallelization? >>> >>> Cheers, >>> Emmanuelle >>> >>> On Thu, Jun 20, 2013 at 03:17:16AM -0700, Marc de Klerk wrote: >>> > Hi Everyone, >>> >>> > One of items on my todo list in first three weeks of my GSOC is to >>> implement a >>> > CPU Graph-cuts. >>> > I've made a couple points from a preliminary literature survey and >>> would like >>> > to get some feedback / recommendations / advice. >>> >>> > Graph cut algorithms can generally be regarded as either begin >>> push-relabel >>> > (PR) or augmenting paths (AP) style. 
>>> > - The goto algorithm has typically been the Boykov and Kolmogorov >>> algorithm >>> > (AP) [1] >>> > - The current state-of-art graph-cut is packaged as GridCut, product >>> of [2] >>> > - Based on [1] >>> > - Multi core (from [3]) >>> > - Cache efficient memory layout (from [2]) >>> > - For grid-like topologies >>> > - A push relabel variant exists from [4] >>> > - Multi core (from [3]) >>> > - Not bound by memory constraints >>> > - For grid-like topologies >>> > - From what I can gather BK typically outperforms other approaches >>> such as the >>> > push-relabel algorithm, but to what extent I'm can't tell from the >>> literature? >>> > - [4] describes memory layout strategies to optimize caching for both >>> AP and PR >>> > style algorithms, but that choose the implement their strategies on >>> BK. >>> > - PR style algorithms are not limited by memory constraints, whereas >>> AP style >>> > algorithms are, however doing graph cuts on a coarse to fine scale >>> seem to be >>> > an answer. >>> > - PR is the only feasible way to do graph cuts on the GPU. >>> >>> > So I'm suggestions to implement the PR style algorithm of [4] using >>> the >>> > strategies described in [2] to speed things up. This then let's us >>> make a >>> > fairer comparison between CPU and GPU versions of the graph cut >>> >>> > Does this sound like a good way forward? >>> > It does require some multi-core programming which I'm not familiar >>> with - are >>> > there any examples of something similar in scikit-image? >>> >>> > [1] Boykov, Y., & Kolmogorov, V. (2004). An experimental comparison of >>> min-cut/ >>> > max- flow algorithms for energy minimization in vision. Pattern >>> Analysis and >>> > Machine Intelligence, IEEE Transactions on, 26(9), 1124?1137. >>> doi:10.1109/ >>> > TPAMI.2004.60 >>> > [2] Jamriska, O., Sykora, D., & Hornung, A. (2012). Cache-efficient >>> graph cuts >>> > on structured grids (pp. 3673?3680). 
Presented at the Computer Vision >>> and >>> > Pattern Recognition (CVPR), 2012 IEEE Conference on. doi:10.1109/ >>> > CVPR.2012.6248113 >>> > [3] Liu, J., & Sun, J. (2010). Parallel graph-cuts by adaptive >>> bottom-up >>> > merging (pp. 2181–2188). Presented at the Computer Vision and Pattern >>> > Recognition (CVPR), 2010 IEEE Conference on. >>> doi:10.1109/CVPR.2010.5539898 >>> > [4] Delong, A., & Boykov, Y. (2008). A Scalable graph-cut algorithm >>> for N-D >>> > grids (pp. 1–8). Presented at the Computer Vision and Pattern >>> Recognition, >>> > 2008. CVPR 2008. IEEE Conference on. doi:10.1109/CVPR.2008.4587464 >>> >> -- >> You received this message because you are subscribed to the Google Groups >> "scikit-image" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to scikit-image... at googlegroups.com . >> For more options, visit https://groups.google.com/groups/opt_out. >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Tue Jun 25 23:55:31 2013 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Tue, 25 Jun 2013 20:55:31 -0700 (PDT) Subject: Issues with scaling images for canny edge detection In-Reply-To: References: Message-ID: <6384b933-5da8-499b-84e7-c8ba707bde20@googlegroups.com> I can't duplicate this, but I may know what's going on. img_as_float converts non-float datatypes into floating-point images on the range [0, 1]. The traceback you note shows that a floating point array was passed to img_as_float, but the image had values outside [0, 1]. Try checking hot.dtype before running img_as_float; if it's an (unsigned) integer, everything should work fine. From the operation you listed, abs_hot = hot + abs(np.min(hot)), it seems like hot should still be an integer, but that traceback code path is only active for inputs where arr.dtype.kind == 'f' so abs_hot got converted to float at some point.
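In code, the shift-then-scale step with an explicit float cast looks like this (the random signed-integer array is just a hypothetical stand-in for HOT_sub.tif):

```python
import numpy as np

# Hypothetical stand-in for the TIFF: a signed image with negative values.
np.random.seed(0)
hot = np.random.randint(-500, 2000, size=(64, 64)).astype(np.int32)

# Shift so the minimum is zero, then scale to [0, 1] in float explicitly,
# so integer division can never silently truncate the result.
abs_hot = hot - hot.min()
im_hot = abs_hot.astype(float) / abs_hot.max()

assert im_hot.dtype.kind == 'f'
assert im_hot.min() == 0.0 and im_hot.max() == 1.0
```

An array like im_hot satisfies the "float in [0, 1]" contract, so it can be passed straight to canny.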
Check abs_hot.dtype right prior to running img_as_float; is it an integer or a float? My intuition for a blank canny result is that your division operation abs_hot / np.max(abs_hot) may have been between two integer types, resulting in a boolean array which would have almost no edges. Try assigning normhot = abs_hot / np.max(abs_hot) and checking the dtype; if it's boolean, cast one or both to float before the division and re-run. The other possibility is the canny parameters are pretty far off. I'm not sure what's going on with the raw result, but check the above and get back with us! Hopefully that helps to get things moving. Josh On Tuesday, June 25, 2013 10:37:20 AM UTC-5, Robin Wilson wrote: Hi, > > *Summary: *I'm fairly new to skimage, and I'm trying to replicate some > work I've done in IDL using the Canny edge detector. I've imported the same > image into skimage and tried running the Canny function with the same > parameters, but I either get a blank image, or very different results to > IDL which don't change regardless of the parameters I use. I suspect my > problems may be related to how I am scaling my image to make it between 0 > and 1, as the documentation for the skimage Canny function requires. > > *More details:* > The input image I used in both IDL and skimage is available at > https://www.dropbox.com/s/xaiq9kitrf1b4cf/HOT_sub.tif.
> > In IDL I called the CANNY function (documentation available at > http://www.exelisvis.com/docs/CANNY.html) as follows: > > result = CANNY(image, HIGH=0.95, LOW=0.3, SIGMA=2) > > and got the following image: > > > > > I loaded the image into skimage as follows: > > hot = skimage.io.imread("HOT_sub.tif") > > And removed all negative values by adding the absolute value of the > minimum: > > abs_hot = hot + abs(np.min(hot)) > > From what I'd read in the documentation, the function img_as_float would > then scale this between 0 and 1 in a sensible way, but it gave an error: > > C:\Python27\lib\site-packages\skimage\util\dtype.pyc in convert(image, > dtype) > 73 if kind_in == 'f': > 74 if np.min(image) < 0 or np.max(image) > 1: > ---> 75 raise ValueError("Images of type float must be between > 0 and 1") > 76 if kind == 'f': > 77 # floating point -> floating point > > ValueError: Images of type float must be between 0 and 1 > > So I did it myself by simply dividing by the maximum value: > > im_hot = img_as_float(abs_hot/np.max(abs_hot)) > > However, running the Canny edge detector on this image produces an > entirely blank edge image: > > edges = canny(im_hot, sigma=2, low_threshold=0.3, high_threshold=0.90) > np.sum(edges) # Gives 0 showing there are no edges found > > Regardless of how I play with the parameters, I can't seem to get it to give > me any edges. > > Interestingly, if I ignore the instructions to make sure that my input > image is between 0 and 1, and just use the raw image: > > edges = canny(hot, sigma=2, low_threshold=0.3, high_threshold=0.90) > > I get a more sensible result (well, at least it isn't blank!): > > > > But this is very different to the result given by IDL - and furthermore, > adjusting the parameters doesn't seem to change the output at all. > > What am I doing wrong here?
I suspect it is something to do with the image > scaling, but I'm not sure - it could be a conceptual problem with my image > processing knowledge, or I could be using skimage improperly. Does anyone > have any ideas or suggestions as to where to go from here? If I manage to > solve this I will, of course, write up the solution on my blog so that > others can benefit too. > > Best regards, > > Robin > University of Southampton, UK > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvertrumpet999 at gmail.com Tue Jun 25 23:59:09 2013 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Tue, 25 Jun 2013 20:59:09 -0700 (PDT) Subject: Issues with scaling images for canny edge detection In-Reply-To: <6384b933-5da8-499b-84e7-c8ba707bde20@googlegroups.com> References: <6384b933-5da8-499b-84e7-c8ba707bde20@googlegroups.com> Message-ID: Argh, correction: if normhot = abs_hot / np.max(abs_hot) were conducting truncating division the result wouldn't be a boolean array but an integer array, with all values 0 except the previous maximum value(s), which would be 1. Functionally boolean, but the dtype would be integer. On Tuesday, June 25, 2013 10:55:31 PM UTC-5, Josh Warner wrote: > > I can't duplicate this, but I may know what's going on. > > img_as_float converts non-float datatypes into floating-point images on > the range [0, 1]. The traceback you note shows that a floating point array > was passed to img_as_float, but the image had values outside [0, 1]. Try > checking hot.dtype before running img_as_float; if it's an (unsigned) > integer, everything should work fine. > > From the operation you listed, abs_hot = hot + abs(np.min(hot)), it seems > like hot should still be an integer, but that traceback code path is only > active for inputs where arr.dtype.kind == 'f' so abs_hot got converted to > float at some point. Check abs_hot.dtype right prior to running > img_as_float; is it an integer or a float?
> > My intuition for a blank canny result is that your division operation abs_hot > / np.max(abs_hot) may have been between two integer types, resulting in a > boolean array which would have almost no edges. Try assigning normhot = > abs_hot / np.max(abs_hot) and checking the dtype; if it's boolean, cast > one or both to float before the division and re-run. The other possibility > is the canny parameters are pretty far off. > > I'm not sure what's going on with the raw result, but check the above and > get back with us! Hopefully that helps to get things moving. > > Josh > > On Tuesday, June 25, 2013 10:37:20 AM UTC-5, Robin Wilson wrote: > > Hi, >> >> *Summary: *I'm fairly new to skimage, and I'm trying to replicate some >> work I've done in IDL using the Canny edge detector. I've imported the same >> image into skimage and tried running the Canny function with the same >> parameters, but I either get a blank image, or very different results to >> IDL which don't change regardless of the parameters I use. I suspect my >> problems may be related to how I am scaling my image to make it between 0 >> and 1, as the documentation for the skimage Canny function requires. >> >> *More details:* >> The input image I used in both IDL and skimage is available at >> https://www.dropbox.com/s/xaiq9kitrf1b4cf/HOT_sub.tif.
>> >> In IDL I called the CANNY function (documentation available at >> http://www.exelisvis.com/docs/CANNY.html) as follows: >> >> result = CANNY(image, HIGH=0.95, LOW=0.3, SIGMA=2) >> >> and got the following image: >> >> >> >> >> I loaded the image into skimage as follows: >> >> hot = skimage.io.imread("HOT_sub.tif") >> >> And removed all negative values by adding the absolute value of the >> minimum: >> >> abs_hot = hot + abs(np.min(hot)) >> >> From what I'd read in the documentation, the function img_as_float would >> then scale this between 0 and 1 in a sensible way, but it gave an error: >> >> C:\Python27\lib\site-packages\skimage\util\dtype.pyc in convert(image, >> dtype) >> 73 if kind_in == 'f': >> 74 if np.min(image) < 0 or np.max(image) > 1: >> ---> 75 raise ValueError("Images of type float must be >> between 0 and 1") >> 76 if kind == 'f': >> 77 # floating point -> floating point >> >> ValueError: Images of type float must be between 0 and 1 >> >> So I did it myself by simply dividing by the maximum value: >> >> im_hot = img_as_float(abs_hot/np.max(abs_hot)) >> >> However, running the Canny edge detector on this image produces an >> entirely blank edge image: >> >> edges = canny(im_hot, sigma=2, low_threshold=0.3, high_threshold=0.90) >> np.sum(edges) # Gives 0 showing there are no edges found >> >> Regardless how I play with the parameters, I can't seem to get it to give >> me any edges. >> >> Interestingly, if I ignore the instructions to make sure that my input >> image is between 0 and 1, and just use the raw image: >> >> edges = canny(hot, sigma=2, low_threshold=0.3, high_threshold=0.90) >> >> I get a more sensible result (well, at least it isn't blank!): >> >> >> >> But this is very different to the result given by IDL - and furthermore, >> adjusting the parameters doesn't seem to change the output at all. >> >> What am I doing wrong here?
I suspect it is something to do with the >> image scaling, but I'm not sure - it could be a conceptual problem with my >> image processing knowledge, or I could be using skimage improperly. Does >> anyone have any ideas or suggestions as to where to go from here? If I >> manage to solve this I will, of course, write up the solution on my blog so >> that others can benefit too. >> >> Best regards, >> >> Robin >> University of Southampton, UK >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.t.wilson.bak at googlemail.com Wed Jun 26 07:08:12 2013 From: r.t.wilson.bak at googlemail.com (Robin Wilson) Date: Wed, 26 Jun 2013 04:08:12 -0700 (PDT) Subject: Issues with scaling images for canny edge detection In-Reply-To: References: <6384b933-5da8-499b-84e7-c8ba707bde20@googlegroups.com> Message-ID: <2be59cfd-28d6-46de-80a5-99b77bf32b61@googlegroups.com> Hi Josh, Thanks for the responses - I think I've managed to sort out what is going on a bit more clearly now. In terms of the data types issue, the original image is a float array, with both positive and negative values. I didn't realise that img_as_float wouldn't take a floating point array as input - I thought if it got given a float it would just do the rescaling as appropriate. Given that the original image is a float, all of the derived arrays are floats too - including the result of the division operation. Anyway, I think your final comment about the canny parameters possibly being far off is the real problem. I've now managed to get a sensible result with both the scaled image, and the original image (which is interesting, as the documentation specifically states that the input image should be normalised to be in the range 0.0-1.0). 
I managed to do this by using radically different canny parameters to the ones I was using before: # For the scaled image edges = canny(normhot, sigma=2.0, low_threshold=0.0000000000000001, high_threshold=0.01) # For the original image edges = canny(hot, sigma=2.0, low_threshold=10, high_threshold=150) (for comparison, the parameters I was using earlier - which worked fine in IDL - were 0.3 and 0.9) I've worked out why those parameters worked in IDL but didn't give useful results when using skimage: the IDL documentation ( http://www.exelisvis.com/docs/CANNY.html) describes the parameters as: High: The high value used to calculate the high threshold during edge detection, given as a factor of the histogram of the magnitude array. Low: The low value used to calculate the low threshold during edge detection, given as a factor of the HIGH value. Thus, the IDL code expects parameters in the range 0-1, which are then used to find a percentile of the histogram of magnitude values, whereas the skimage code does thresholding on the raw magnitude values, and that is why the same parameters were giving such different results in IDL and skimage. I'm now engaged in looking into the IDL code to see exactly how their thresholding works, as my naive modification of the skimage canny code to try and replicate the IDL results doesn't give me quite the same answers. I think I've managed to solve most of my problems, but I have one question: the canny routine seems to work fine for my original image, without rescaling it from 0-1. Is there a reason that it should only work for images between 0-1, or am I safe to use my original images? Cheers, Robin On Wednesday, 26 June 2013 04:59:09 UTC+1, Josh Warner wrote: > > Argh, correction: if normhot = abs_hot / np.max(abs_hot) were conducting > truncating division the result wouldn't be a boolean array but an integer > array, with all values 0 except the previous maximum value(s), which would > be 1. 
Functionally boolean, but the dtype would be integer. > > > On Tuesday, June 25, 2013 10:55:31 PM UTC-5, Josh Warner wrote: >> >> I can't duplicate this, but I may know what's going on. >> >> img_as_float converts non-float datatypes into floating-point images on >> the range [0, 1]. The traceback you note shows that a floating point array >> was passed to img_as_float, but the image had values outside [0, 1]. Try >> checking hot.dtype before running img_as_float; if it's an (unsigned) >> integer, everything should work fine. >> >> From the operation you listed, abs_hot = hot + abs(np.min(hot)), it >> seems like hot should still be an integer, but that traceback code path >> is only active for inputs where arr.dtype.kind == 'f' so abs_hot got >> converted to float at some point. Check abs_hot.dtype right prior to >> running img_as_float; is it an integer or a float? >> >> My intuition for a blank canny result is that your division operation abs_hot >> / np.max(abs_hot) may have been between two integer types, resulting in >> a boolean array which would have almost no edges. Try assigning normhot >> = abs_hot / np.max(abs_hot) and checking the dtype; if it's boolean, >> cast one or both to float before the division and re-run. The other >> possibility is the canny parameters are pretty far off. >> >> I'm not sure what's going on with the raw result, but check the above and >> get back with us! Hopefully that helps to get things moving. >> >> Josh >> >> On Tuesday, June 25, 2013 10:37:20 AM UTC-5, Robin Wilson wrote: >> >> Hi, >>> >>> *Summary: *I'm fairly new to skimage, and I'm trying to replicate some >>> work I've done in IDL using the Canny edge detector. I've imported the same >>> image into skimage and tried running the Canny function with the same >>> parameters, but I either get a blank image, or very different results to >>> IDL which don't change regardless of the parameters I use.
I suspect my >>> problems may be related to how I am scaling my image to make it between 0 >>> and 1, as the documentation for the skimage Canny function requires. >>> >>> *More details:* >>> The input image I used in both IDL and skimage is available at >>> https://www.dropbox.com/s/xaiq9kitrf1b4cf/HOT_sub.tif. >>> >>> In IDL I called the CANNY function (documentation available at >>> http://www.exelisvis.com/docs/CANNY.html) as follows: >>> >>> result = CANNY(image, HIGH=0.95, LOW=0.3, SIGMA=2) >>> >>> and got the following image: >>> >>> >>> >>> I loaded the image into skimage as follows: >>> >>> hot = skimage.io.imread("HOT_sub.tif") >>> >>> And removed all negative values by adding the absolute value of the >>> minimum: >>> >>> abs_hot = hot + abs(np.min(hot)) >>> >>> From what I'd read in the documentation, the function img_as_float would >>> then scale this between 0 and 1 in a sensible way, but it gave an error: >>> >>> C:\Python27\lib\site-packages\skimage\util\dtype.pyc in convert(image, >>> dtype) >>> 73 if kind_in == 'f': >>> 74 if np.min(image) < 0 or np.max(image) > 1: >>> ---> 75 raise ValueError("Images of type float must be >>> between 0 and 1") >>> 76 if kind == 'f': >>> 77 # floating point -> floating point >>> >>> ValueError: Images of type float must be between 0 and 1 >>> >>> So I did it myself by simply dividing by the maximum value: >>> >>> im_hot = img_as_float(abs_hot/np.max(abs_hot)) >>> >>> However, running the Canny edge detector on this image produces an >>> entirely blank edge image: >>> >>> edges = canny(im_hot, sigma=2, low_threshold=0.3, high_threshold=0.90) >>> np.sum(edges) # Gives 0 showing there are no edges found >>> >>> Regardless how I play with the parameters, I can't seem to get it to >>> give me any edges.
>>> >>> Interestingly, if I ignore the instructions to make sure that my input >>> image is between 0 and 1, and just use the raw image: >>> >>> edges = canny(hot, sigma=2, low_threshold=0.3, high_threshold=0.90) >>> >>> I get a more sensible result (well, at least it isn't blank!): >>> >>> >>> >>> But this is very different to the result given by IDL - and furthermore, >>> adjusting the parameters doesn't seem to change the output at all. >>> >>> What am I doing wrong here? I suspect it is something to do with the >>> image scaling, but I'm not sure - it could be a conceptual problem with my >>> image processing knowledge, or I could be using skimage improperly. Does >>> anyone have any ideas or suggestions as to where to go from here? If I >>> manage to solve this I will, of course, write up the solution on my blog so >>> that others can benefit too. >>> >>> Best regards, >>> >>> Robin >>> University of Southampton, UK >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tsyu80 at gmail.com Wed Jun 26 18:48:18 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Wed, 26 Jun 2013 17:48:18 -0500 Subject: Issues with scaling images for canny edge detection In-Reply-To: <2be59cfd-28d6-46de-80a5-99b77bf32b61@googlegroups.com> References: <6384b933-5da8-499b-84e7-c8ba707bde20@googlegroups.com> <2be59cfd-28d6-46de-80a5-99b77bf32b61@googlegroups.com> Message-ID: On Wed, Jun 26, 2013 at 6:08 AM, Robin Wilson wrote: > I think I've managed to solve most of my problems, but I have one > question: the canny routine seems to work fine for my original image, > without rescaling it from 0-1. Is there a reason that it should only work > for images between 0-1, or am I safe to use my original images? > > Hi Robin, It may be OK to use the scaling in the original images for canny, but it's best to stick to (0, 1). 
For details, see: http://scikit-image.org/docs/dev/user_guide/data_types.html The easiest way to handle rescaling to the correct float range is using rescale_intensity: >>> from skimage import exposure >>> exposure.rescale_intensity(hot) By default, this takes the minimum and maximum values in the image (-6548.7, 4123.2) and rescales the image such that those values map to the data types' min/max values; (0, 1) or (-1, 1) for float images (depends on whether the input has negative values). Since your image has negative values, the rescaled result will be (-1, 1). Usually it's preferable to stick to (0, 1), so you might want to force the output range: >>> exposure.rescale_intensity(hot, out_range=(0, 1)) Note that most of your data are in the middle ranges. If you plot the image, it'll just appear gray. To fix that, you may want to clip the input range: >>> exposure.rescale_intensity(hot, in_range=(-100, 100), out_range=(0, 1)) Now everything at or below -100 in the original gets mapped to 0; everything at or above 100 gets mapped to 1; and everything in between -100 and 100 is linearly rescaled between 0 and 1. Hope that helps! -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Thu Jun 27 02:11:08 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 27 Jun 2013 01:11:08 -0500 Subject: storing keypoints In-Reply-To: <20130627073213.aefe3ced9b0db1ade4c5936b@terre-adelie.org> References: <20130627073213.aefe3ced9b0db1ade4c5936b@terre-adelie.org> Message-ID: Hi Jerome On Thu, Jun 27, 2013 at 12:32 AM, Jérôme Kieffer wrote: > I am wondering what is the best way to store (in python) SIFT keypoints > to exchange them (serialize, save, load-back, ...) SIFT is a keypoint > extraction algorithm for images so it transforms a 2D image into > n-keypoints; each keypoint being composed of 4 floats > (x, y, scale, orientation) and 128 uint8.
Probably an array with elements of dtype = np.dtype([('x', float), ('y', float), ('scale', float), ('orientation', float), ('feature', (np.uint8, 128))]) Regards Stéfan From ronnie.ghose at gmail.com Thu Jun 27 01:35:42 2013 From: ronnie.ghose at gmail.com (Ronnie Ghose) Date: Thu, 27 Jun 2013 01:35:42 -0400 Subject: storing keypoints In-Reply-To: <20130627073213.aefe3ced9b0db1ade4c5936b@terre-adelie.org> References: <20130627073213.aefe3ced9b0db1ade4c5936b@terre-adelie.org> Message-ID: http://docs.scipy.org/doc/numpy/reference/generated/numpy.load.html On Thu, Jun 27, 2013 at 1:32 AM, Jérôme Kieffer wrote: > Dear pythonistas, > > I am wondering what is the best way to store (in python) SIFT keypoints > to exchange them (serialize, save, load-back, ...) SIFT is a keypoint > extraction algorithm for images so it transforms a 2D image into > n-keypoints; each keypoint being composed of 4 floats > (x, y, scale, orientation) and 128 uint8. > > * I would like to use at best numpy machinery. > * record array look indicated but the 128xuint8 block is likely to be > tedious > * I would like to be able to separate directly the float from uint8. > > Maybe nx36 float with 32 float being actually 128uint and use > recordarrays and views ? > > Any idea? > Thanks. > > -- > Jérôme Kieffer > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From google at terre-adelie.org Thu Jun 27 01:32:13 2013 From: google at terre-adelie.org (=?ISO-8859-1?Q?J=E9r=F4me?= Kieffer) Date: Thu, 27 Jun 2013 07:32:13 +0200 Subject: storing keypoints Message-ID: <20130627073213.aefe3ced9b0db1ade4c5936b@terre-adelie.org> Dear pythonistas, I am wondering what is the best way to store (in python) SIFT keypoints to exchange them (serialize, save, load-back, ...) SIFT is a keypoint extraction algorithm for images so it transforms a 2D image into n-keypoints; each keypoint being composed of 4 floats (x, y, scale, orientation) and 128 uint8. * I would like to use at best numpy machinery. * record array look indicated but the 128xuint8 block is likely to be tedious * I would like to be able to separate directly the float from uint8. Maybe nx36 float with 32 float being actually 128uint and use recordarrays and views ? Any idea? Thanks. -- Jérôme Kieffer From google at terre-adelie.org Thu Jun 27 03:06:58 2013 From: google at terre-adelie.org (Jerome Kieffer) Date: Thu, 27 Jun 2013 09:06:58 +0200 Subject: storing keypoints In-Reply-To: References: <20130627073213.aefe3ced9b0db1ade4c5936b@terre-adelie.org> Message-ID: <20130627090658.8bdda446.google@terre-adelie.org> On Thu, 27 Jun 2013 01:11:08 -0500 Stéfan van der Walt wrote: > Hi Jerome > > On Thu, Jun 27, 2013 at 12:32 AM, Jérôme Kieffer > wrote: > > I am wondering what is the best way to store (in python) SIFT keypoints > > to exchange them (serialize, save, load-back, ...) SIFT is a keypoint > > extraction algorithm for images so it transforms a 2D image into > > n-keypoints; each keypoint being composed of 4 floats > > (x, y, scale, orientation) and 128 uint8. > > Probably an array with elements of > > dtype = np.dtype([('x', float), ('y', float), ('scale', float), > ('orientation', float), ('feature', (np.uint8, 128))]) Thanks a lot, I didn't know this was possible with record-arrays.
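For the archives, a small self-contained sketch of the dtype suggested above, including a serialize/load-back round trip; the keypoint values here are fabricated purely for illustration:

```python
import io
import numpy as np

# Structured dtype suggested above: four floats plus the 128-byte descriptor.
keypoint_dtype = np.dtype([('x', float), ('y', float), ('scale', float),
                           ('orientation', float), ('feature', (np.uint8, 128))])

# Fabricated keypoints, purely for illustration.
keypoints = np.zeros(5, dtype=keypoint_dtype)
keypoints['x'] = np.arange(5, dtype=float)
keypoints['feature'] = np.arange(128, dtype=np.uint8)  # broadcast to all rows

# Serialize and load back with plain numpy machinery (a file path works too).
buf = io.BytesIO()
np.save(buf, keypoints)
buf.seek(0)
restored = np.load(buf)

# The float part and the uint8 part separate directly by field name.
print(restored['feature'].shape)  # -> (5, 128)
print(restored['x'])              # -> [ 0.  1.  2.  3.  4.]
```

Field access keeps the descriptor block together as an (n, 128) uint8 array, so no reshaping is needed to hand it to a matcher.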
-- Jerome Kieffer From jostein.floystad at gmail.com Thu Jun 27 07:28:56 2013 From: jostein.floystad at gmail.com (=?ISO-8859-1?Q?Jostein_B=F8_Fl=F8ystad?=) Date: Thu, 27 Jun 2013 13:28:56 +0200 Subject: Inconsistent projection centre for the Radon transform In-Reply-To: References: <672d3beb-5db8-4463-88cb-679acbc4ce16@googlegroups.com> Message-ID: Dear Stefan, thanks for taking the time. I will be without internet connectivity until next Thursday, so there is no rush. I'm guessing you're at SciPy; enjoy the conference! Regards Jostein 2013/6/23 Stéfan van der Walt > Dear Jostein > > On Sat, Jun 22, 2013 at 10:31 AM, wrote: > > 2. Alert any users that have based results on this code to check them > > carefully for systematic errors. If the radon_transform module has been > used > > for tomography simulations, it is quite likely that the results are > > affected. In the case of using the module for reconstructing tomography > data > > obtained otherwise, the results may be affected and they may not; this > will > > depend on the specifics of the situation. > > Thank you for bringing this issue to everyone's attention. Rigorous > code review, which leads to the uncovering of such mistakes, is what > makes open source science so compelling to me. I'll review your PR, > so that we can get this fixed ASAP. > > Regards > Stéfan > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From r.t.wilson.bak at googlemail.com Fri Jun 28 12:04:52 2013 From: r.t.wilson.bak at googlemail.com (Robin Wilson) Date: Fri, 28 Jun 2013 09:04:52 -0700 (PDT) Subject: Algorithm to 'walk' along a line from an endpoint by N pixels Message-ID: <185fd7ca-86b0-4747-b891-28c2fd5b6600@googlegroups.com> Hi, Does anyone know if an algorithm to take an endpoint of a binary line in an image and 'walk' back along the line for N pixels already exists in skimage? (or in any of the related projects). I'm happy to go ahead and implement it, but it seems like the kind of thing that would have already been implemented, even though I can't find it in the documentation. Does this already exist? Cheers, Robin -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.t.wilson.bak at googlemail.com Fri Jun 28 12:26:09 2013 From: r.t.wilson.bak at googlemail.com (Robin Wilson) Date: Fri, 28 Jun 2013 09:26:09 -0700 (PDT) Subject: Algorithm to 'walk' along a line from an endpoint by N pixels In-Reply-To: References: <185fd7ca-86b0-4747-b891-28c2fd5b6600@googlegroups.com> Message-ID: Hi Stefan, Thanks for the really quick reply. I'm not sure how I can use the label function to do what I want. I have the image shown in the numpy array below: array([[0, 0, 1, 1, 0], [0, 0, 1, 0, 0], [0, 1, 1, 0, 0], [0, X, 0, 0, 0], [0, 0, 0, 0, 0]]) I'd like to start at the pixel that is marked with an X (it has a value of 1 really, but I've just marked it in this post as an example), and then find the pixel that is n pixels further down the line than that. So, for example, with n=3 I'd get the pixel marked with E below. array([[0, 0, 1, 1, 0], [0, 0, E, 0, 0], [0, 1, 1, 0, 0], [0, X, 0, 0, 0], [0, 0, 0, 0, 0]]) I can't see how the label function will do that (although it will let me extract just the line I want to work on - which will be handy as a first step), but I may well be entirely wrong. 
If the label function won't, is there any other function, or group of functions, that would help me do that? Cheers, Robin On Friday, 28 June 2013 17:08:04 UTC+1, Stefan van der Walt wrote: > > Hi Robin > > It sounds like you may be able to get away with a connected component > search. Have a look at the "skimage.morphology.label". > > Stéfan > > On Fri, Jun 28, 2013 at 11:04 AM, Robin Wilson > > wrote: > > Hi, > > > > Does anyone know if an algorithm to take an endpoint of a binary line in > an > > image and 'walk' back along the line for N pixels already exists in > skimage? > > (or in any of the related projects). I'm happy to go ahead and implement > it, > > but it seems like the kind of thing that would have already been > > implemented, even though I can't find it in the documentation. > > > > Does this already exist? > > > > Cheers, > > > > Robin > > > > -- > > You received this message because you are subscribed to the Google > Groups > > "scikit-image" group. > > To unsubscribe from this group and stop receiving emails from it, send > an > > email to scikit-image... at googlegroups.com . > > For more options, visit https://groups.google.com/groups/opt_out. > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Fri Jun 28 12:08:04 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 28 Jun 2013 11:08:04 -0500 Subject: Algorithm to 'walk' along a line from an endpoint by N pixels In-Reply-To: <185fd7ca-86b0-4747-b891-28c2fd5b6600@googlegroups.com> References: <185fd7ca-86b0-4747-b891-28c2fd5b6600@googlegroups.com> Message-ID: Hi Robin It sounds like you may be able to get away with a connected component search. Have a look at the "skimage.morphology.label".
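To make the walking step concrete, a rough sketch in plain numpy, assuming a single-pixel-wide, 4-connected line (for an image containing several lines, label() would first isolate the component of interest; the array is Robin's example):

```python
import numpy as np

# Robin's example: walk n pixels along the line starting from the endpoint X at (3, 1).
img = np.array([[0, 0, 1, 1, 0],
                [0, 0, 1, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 1, 0, 0, 0],
                [0, 0, 0, 0, 0]])

def walk(img, start, n):
    """Follow a 1-pixel-wide, 4-connected line from `start` for `n` steps."""
    visited = {start}
    current = start
    for _ in range(n):
        r, c = current
        # The four axis-aligned neighbours of the current pixel.
        step = [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))]
        nxt = [(rr, cc) for rr, cc in step
               if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
               and img[rr, cc] and (rr, cc) not in visited]
        if not nxt:
            break  # reached the other endpoint
        current = nxt[0]  # unique for an unbranched line
        visited.add(current)
    return current

print(walk(img, (3, 1), 3))  # -> (1, 2), the pixel marked E in the example
```

Diagonal (8-connected) segments would need the neighbour list extended, at which point "n pixels" becomes ambiguous, since a diagonal step covers more ground than an axial one.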
Stéfan On Fri, Jun 28, 2013 at 11:04 AM, Robin Wilson wrote: > Hi, > > Does anyone know if an algorithm to take an endpoint of a binary line in an > image and 'walk' back along the line for N pixels already exists in skimage? > (or in any of the related projects). I'm happy to go ahead and implement it, > but it seems like the kind of thing that would have already been > implemented, even though I can't find it in the documentation. > > Does this already exist? > > Cheers, > > Robin > > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > From stefan at sun.ac.za Fri Jun 28 12:51:24 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 28 Jun 2013 11:51:24 -0500 Subject: Algorithm to 'walk' along a line from an endpoint by N pixels In-Reply-To: References: <185fd7ca-86b0-4747-b891-28c2fd5b6600@googlegroups.com> Message-ID: Hi Robin On Fri, Jun 28, 2013 at 11:26 AM, Robin Wilson wrote: > Thanks for the really quick reply. I'm not sure how I can use the label > function to do what I want.
I have the image shown in the numpy array below: > > array([[0, 0, 1, 1, 0], > [0, 0, 1, 0, 0], > [0, 1, 1, 0, 0], > [0, X, 0, 0, 0], > [0, 0, 0, 0, 0]]) Right, so the label function will just identify the pixels, and then perhaps you can do something like this: http://stackoverflow.com/questions/8686926/python-image-processing-help-needed-for-corner-detection-in-preferably-pil-or Stéfan From stefan at sun.ac.za Fri Jun 28 16:48:41 2013 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 28 Jun 2013 15:48:41 -0500 Subject: Algorithm to 'walk' along a line from an endpoint by N pixels In-Reply-To: References: <185fd7ca-86b0-4747-b891-28c2fd5b6600@googlegroups.com> Message-ID: On Fri, Jun 28, 2013 at 3:38 PM, Anders Klint wrote: > However, the only mention of distance transform I find in the docs is an optional parameter > to morphology.medial_axis... Is there no distance transform in skimage, or is this the one > in a (for me, at least) unexpected place? Currently, distance transform can be found in scipy.ndimage. Stéfan From anders.c.klint at gmail.com Fri Jun 28 16:38:36 2013 From: anders.c.klint at gmail.com (Anders Klint) Date: Fri, 28 Jun 2013 22:38:36 +0200 Subject: Algorithm to 'walk' along a line from an endpoint by N pixels In-Reply-To: References: <185fd7ca-86b0-4747-b891-28c2fd5b6600@googlegroups.com> Message-ID: Hi all, Just some thoughts and a question: On 28 jun 2013, at 18:51, Stéfan van der Walt wrote: > Hi Robin > > On Fri, Jun 28, 2013 at 11:26 AM, Robin Wilson > wrote: >> Thanks for the really quick reply. I'm not sure how I can use the label >> function to do what I want.
I have the image shown in the numpy array below: >> >> array([[0, 0, 1, 1, 0], >> [0, 0, 1, 0, 0], >> [0, 1, 1, 0, 0], >> [0, X, 0, 0, 0], >> [0, 0, 0, 0, 0]]) > > Right, so the label function will just identify the pixels, and then > perhaps you can do something like this: > > http://stackoverflow.com/questions/8686926/python-image-processing-help-needed-for-corner-detection-in-preferably-pil-or > Could it help to use the endpoints identified as suggested above as background, calculate a distance transform and mask that with the initial line? That should give you lines with values corresponding to the distance from the endpoints, right? Locate the wanted distance values and you should have your pixels. In practice, you may have to do that one line at a time to avoid "interference" from other lines, it may not be efficient, but in general I think this could work. Or? However, the only mention of distance transform I find in the docs is an optional parameter to morphology.medial_axis... Is there no distance transform in skimage, or is this the one in a (for me, at least) unexpected place? /Anders > Stéfan > > -- > You received this message because you are subscribed to the Google Groups "scikit-image" group. > To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > From jeanpatrick.pommier at gmail.com Sat Jun 29 08:15:47 2013 From: jeanpatrick.pommier at gmail.com (Jean-Patrick Pommier) Date: Sat, 29 Jun 2013 05:15:47 -0700 (PDT) Subject: Algorithm to 'walk' along a line from an endpoint by N pixels In-Reply-To: <185fd7ca-86b0-4747-b891-28c2fd5b6600@googlegroups.com> References: <185fd7ca-86b0-4747-b891-28c2fd5b6600@googlegroups.com> Message-ID: <344fcc22-63e0-4f75-ad46-1988e97923bf@googlegroups.com> Hi, This is not advertising for my blog, but I faced the same kind of question here.
The idea would be first to convert the image of the curve into a set of ordered pixels; then counting pixels from that list is easy. jean-pat Le vendredi 28 juin 2013 18:04:52 UTC+2, Robin Wilson a écrit : > > Hi, > > Does anyone know if an algorithm to take an endpoint of a binary line in > an image and 'walk' back along the line for N pixels already exists in > skimage? (or in any of the related projects). I'm happy to go ahead and > implement it, but it seems like the kind of thing that would have already > been implemented, even though I can't find it in the documentation. > > Does this already exist? > > Cheers, > > Robin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Sat Jun 29 11:16:33 2013 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Sat, 29 Jun 2013 10:16:33 -0500 Subject: how to highlight/shade a segment in an image. In-Reply-To: <51C3841E.3080504@gmail.com> References: <51C3841E.3080504@gmail.com> Message-ID: Hi Michael On Thu, Jun 20, 2013 at 5:37 PM, Brickle Macho wrote: > I over-segment an image using a superpixel algorithm. I region grow > using the superpixels to end up with a segmented image, a label-image. > I overlay the label boundaries using mark_boundaries(). I'd be interested to hear more about your approach. We're also working on some segmentation for GSoC, and I'd be very interested in a region growing PR. Stéfan From silvertrumpet999 at gmail.com Sat Jun 29 17:58:22 2013 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Sat, 29 Jun 2013 14:58:22 -0700 (PDT) Subject: segmentation fault with the viewer In-Reply-To: <20130629212054.GC2073@phare.normalesup.org> References: <20130629212054.GC2073@phare.normalesup.org> Message-ID: I got it to segfault once using the tk backend for pylab, but it wasn't robustly repeatable (randomly happened once out of a lot of focus losses/regains, and never on close). 
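[Editor's note] Jean-Patrick's ordered-pixel approach from the 'walk along a line' thread is straightforward to prototype for a one-pixel-wide, unbranched line: start at an endpoint and repeatedly step to the only unvisited neighbour on the line, trying 4-connected neighbours before diagonals so the walk does not cut corners. A plain-numpy sketch; the function name is made up for illustration, not an existing skimage routine:

```python
import numpy as np

def walk_from_endpoint(line, endpoint, n):
    """Return up to n + 1 ordered pixel coordinates, walking along a
    one-pixel-wide, unbranched binary line starting at `endpoint`."""
    # 4-connected offsets first, diagonals last, so a diagonal shortcut
    # is never taken when an adjacent on-line pixel exists.
    offsets = [(-1, 0), (0, -1), (0, 1), (1, 0),
               (-1, -1), (-1, 1), (1, -1), (1, 1)]
    path = [endpoint]
    visited = {endpoint}
    r, c = endpoint
    for _ in range(n):
        for dr, dc in offsets:
            r2, c2 = r + dr, c + dc
            if (0 <= r2 < line.shape[0] and 0 <= c2 < line.shape[1]
                    and line[r2, c2] and (r2, c2) not in visited):
                break
        else:
            break                       # no unvisited neighbour: other endpoint
        r, c = r2, c2
        visited.add((r, c))
        path.append((r, c))
    return path

line = np.array([[0, 0, 1, 1, 0],
                 [0, 0, 1, 0, 0],
                 [0, 1, 1, 0, 0],
                 [0, 1, 0, 0, 0],
                 [0, 0, 0, 0, 0]], dtype=bool)
print(walk_from_endpoint(line, (3, 1), 2))   # -> [(3, 1), (2, 1), (2, 2)]
```

Unlike the distance-transform trick, this counts pixels along the curve itself, so it also works for U-shaped lines.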
It looked similar to routine segfaults I get for the tk backend, which I've never been able to track down. That's why I usually avoid the tk backend like the plague... Is this reproducible on any other backends? On Saturday, June 29, 2013 4:20:54 PM UTC-5, Emmanuelle Gouillart wrote: > > Dear all, > > I've started playing around with skimage's viewer, and I get a > segmentation fault when I try to close the viewer window, or when I click > on the window after having the focus on another window. Also, the command > viewer.show() is blocking (I'm running ipython --pylab - corresponding to > the TkAgg backend). > > Running a test script from ipython in gdb yields the following message: > > Program received signal SIGSEGV, Segmentation fault. > 0x00007fffd4aa511d in > Shiboken::Conversions::isPythonToCppConvertible(SbkConverter*, _object*) > () from /usr/lib/x86_64-linux-gnu/libshiboken-python2.7.so.1.1 > > I'm running Ubuntu 12.04 with Nvidia's proprietary drivers. > > Am I the only one having this problem? Any idea where the seg fault comes > from? I reproduce the test script below. > > Cheers, > Emmanuelle > > ***************** > > from skimage import data > from skimage.viewer import ImageViewer > > image = data.coins() > from skimage.filter import tv_denoise > from skimage.viewer.plugins.base import Plugin > > denoise_plugin = Plugin(image_filter=tv_denoise) > from skimage.viewer.widgets import Slider > from skimage.viewer.widgets.history import SaveButtons > > denoise_plugin += Slider('weight', 0.01, 0.5, update_on='release') > denoise_plugin += SaveButtons() > > viewer = ImageViewer(image) > viewer += denoise_plugin > viewer.show() > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From silvertrumpet999 at gmail.com Sat Jun 29 18:10:08 2013 From: silvertrumpet999 at gmail.com (Josh Warner) Date: Sat, 29 Jun 2013 15:10:08 -0700 (PDT) Subject: segmentation fault with the viewer In-Reply-To: References: <20130629212054.GC2073@phare.normalesup.org> <20130629220318.GD2073@phare.normalesup.org> Message-ID: <79b37a02-6c2b-4903-b9ae-07333bcd229e@googlegroups.com> I can confirm that `viewer.show()` is blocking on PySide, though it's quite stable for me on Qt. On Saturday, June 29, 2013 5:07:49 PM UTC-5, Tony S Yu wrote: > > > On Sat, Jun 29, 2013 at 5:03 PM, Emmanuelle Gouillart < > emmanuelle... at nsup.org > wrote: > >> On Sat, Jun 29, 2013 at 02:58:22PM -0700, Josh Warner wrote: >> > I got it to segfault once using the tk backend for pylab, but it wasn't >> > robustly repeatable (randomly happened once out of a lot of focus >> losses/ >> > regains, and never on close). >> >> > It looked similar to routine segfaults I get for the tk backend, which >> I've >> > never been able to track down. That's why I usually avoid the tk >> backend like >> > the plague... >> >> > Is this reproducible on any other backends? >> >> Yes, tk, wx, gtk, I get a segfault with all of them (using qt raises an >> error). >> >> Emmanuelle >> > > Are you on master? Specifically, a version later than yesterday morning? > > -Tony > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From r.t.wilson.bak at googlemail.com Sat Jun 29 11:28:24 2013 From: r.t.wilson.bak at googlemail.com (Robin Wilson) Date: Sat, 29 Jun 2013 16:28:24 +0100 Subject: Algorithm to 'walk' along a line from an endpoint by N pixels In-Reply-To: <344fcc22-63e0-4f75-ad46-1988e97923bf@googlegroups.com> References: <185fd7ca-86b0-4747-b891-28c2fd5b6600@googlegroups.com> <344fcc22-63e0-4f75-ad46-1988e97923bf@googlegroups.com> Message-ID: Hi all, Thanks for the responses. Stéfan - that is a lovely idea which I started to work with, but then realised that if my line is curved (for example, U-shaped) then the distance transform will give smaller distances to some of the points than should be given (e.g. starting at the top of the U on the left, the top of the U on the right would have a lower distance to the starting point than one of the pixels at the bottom of the U). Jean-Pat - your code looks very similar to the code I've started to write. I'll probably take some of your ideas and merge them into my code, if that's ok. Thanks, Robin On Sat, Jun 29, 2013 at 1:15 PM, Jean-Patrick Pommier < jeanpatrick.pommier at gmail.com> wrote: > Hi, > This is not advertising for my blog, but I faced the same kind of question > here. > The idea would be first to convert the image of the curve into a set of > ordered pixels, then counting pixels from that list is easy. > > jean-pat > > Le vendredi 28 juin 2013 18:04:52 UTC+2, Robin Wilson a écrit : >> Hi, >> >> Does anyone know if an algorithm to take an endpoint of a binary line in >> an image and 'walk' back along the line for N pixels already exists in >> skimage? (or in any of the related projects). I'm happy to go ahead and >> implement it, but it seems like the kind of thing that would have already >> been implemented, even though I can't find it in the documentation. >> >> Does this already exist? >> >> Cheers, >> >> Robin >> > -- > You received this message because you are subscribed to the Google Groups > "scikit-image" group. 
> To unsubscribe from this group and stop receiving emails from it, send an > email to scikit-image+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/groups/opt_out. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amueller at ais.uni-bonn.de Sat Jun 29 11:00:28 2013 From: amueller at ais.uni-bonn.de (Andreas Mueller) Date: Sat, 29 Jun 2013 17:00:28 +0200 Subject: Graph Cuts implementation In-Reply-To: <11fee39a-5140-4170-bdf5-dd530bd40f55@googlegroups.com> References: <1776a04e-954c-413c-954e-04d83c424721@googlegroups.com> <20130623214403.GB3302@phare.normalesup.org> <11fee39a-5140-4170-bdf5-dd530bd40f55@googlegroups.com> Message-ID: <51CEF68C.4020607@ais.uni-bonn.de> Hey Marc. I just saw that you are working on graph cuts and segmentation for skimage. That is pretty awesome. I'm super busy at the moment, so I only sort of follow skimage, and I can't see Melange. Could you shortly give an idea of what the goals of your GSoC are? Are you also planning to implement alpha-expansion? And what are the thoughts about the patent issues? Also, I seem to have completely missed the fact that skimage wants to do GPU now. Are there any pointers on how this is planned / what is already implemented? I implemented some of the segmentation algorithms in skimage and I'd really like to know what is happening and if I could help in any way. I also did some energy minimization stuff, but was a bit bummed out by the patent issues. Btw, the slic implementation is not identical to the reference implementation. If you want to work on segmentation, I think this should really be investigated... 
Cheers, Andy From amueller at ais.uni-bonn.de Sat Jun 29 11:05:20 2013 From: amueller at ais.uni-bonn.de (Andreas Mueller) Date: Sat, 29 Jun 2013 17:05:20 +0200 Subject: Graph Cuts implementation In-Reply-To: <51CEF68C.4020607@ais.uni-bonn.de> References: <1776a04e-954c-413c-954e-04d83c424721@googlegroups.com> <20130623214403.GB3302@phare.normalesup.org> <11fee39a-5140-4170-bdf5-dd530bd40f55@googlegroups.com> <51CEF68C.4020607@ais.uni-bonn.de> Message-ID: <51CEF7B0.9050005@ais.uni-bonn.de> ps: your blog says your GSoC is about graph cuts, grow cuts and quickshift. But quickshift is already implemented, right? There is also a CUDA implementation of Quickshift by Brian Fulkerson and a paper about it. I used the implementation regularly before switching to slic. Is there any reason you prefer quickshift over slic? Cheers, Andy From tsyu80 at gmail.com Sat Jun 29 18:07:49 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Sat, 29 Jun 2013 17:07:49 -0500 Subject: segmentation fault with the viewer In-Reply-To: <20130629220318.GD2073@phare.normalesup.org> References: <20130629212054.GC2073@phare.normalesup.org> <20130629220318.GD2073@phare.normalesup.org> Message-ID: On Sat, Jun 29, 2013 at 5:03 PM, Emmanuelle Gouillart < emmanuelle.gouillart at nsup.org> wrote: > On Sat, Jun 29, 2013 at 02:58:22PM -0700, Josh Warner wrote: > > I got it to segfault once using the tk backend for pylab, but it wasn't > > robustly repeatable (randomly happened once out of a lot of focus losses/ > > regains, and never on close). > > > It looked similar to routine segfaults I get for the tk backend, which > I've > > never been able to track down. That's why I usually avoid the tk backend > like > > the plague... > > > Is this reproducible on any other backends? > > Yes, tk, wx, gtk, I get a segfault with all of them (using qt raises an > error). > > Emmanuelle > Are you on master? Specifically, a version later than yesterday morning? 
The viewer-linking PR that was merged yesterday had a number of fixes for PySide. I don't see any issue on PyQt at the moment. -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL: From amueller at ais.uni-bonn.de Sat Jun 29 11:08:53 2013 From: amueller at ais.uni-bonn.de (Andreas Mueller) Date: Sat, 29 Jun 2013 17:08:53 +0200 Subject: how to highlight/shade a segment in an image. In-Reply-To: References: <51C3841E.3080504@gmail.com> <51C3CC23.5090303@gmail.com> Message-ID: <51CEF885.2020705@ais.uni-bonn.de> On 06/21/2013 06:44 AM, Tony Yu wrote: > > > > On Thu, Jun 20, 2013 at 10:44 PM, Brickle Macho > > wrote: > > On 21/06/13 7:03 AM, Tony Yu wrote: >> >> Short version, given a label, a label-image and an image, how do I >> shade/tint the label area. >> >> >> You could try out the label2rgb PR: >> >> https://github.com/scikit-image/scikit-image/pull/485 > > Thanks. Looks interesting. How do I try/pull/incorporate > label2rgb code? > > I know this reply is a bit late; still, I thought it might be useful. I usually do it using matplotlib and the alpha setting of imshow. I just plot an image and then plot the segments using some alpha value. That works quite well. Hth, Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From amueller at ais.uni-bonn.de Sat Jun 29 11:11:44 2013 From: amueller at ais.uni-bonn.de (Andreas Mueller) Date: Sat, 29 Jun 2013 17:11:44 +0200 Subject: Graph Cuts implementation In-Reply-To: <51CEF7B0.9050005@ais.uni-bonn.de> References: <1776a04e-954c-413c-954e-04d83c424721@googlegroups.com> <20130623214403.GB3302@phare.normalesup.org> <11fee39a-5140-4170-bdf5-dd530bd40f55@googlegroups.com> <51CEF68C.4020607@ais.uni-bonn.de> <51CEF7B0.9050005@ais.uni-bonn.de> Message-ID: <51CEF930.2070604@ais.uni-bonn.de> pps: have you read about CUDA cuts? 
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4563095&tag=1 From amueller at ais.uni-bonn.de Sat Jun 29 11:16:55 2013 From: amueller at ais.uni-bonn.de (Andreas Mueller) Date: Sat, 29 Jun 2013 17:16:55 +0200 Subject: Graph Cuts implementation In-Reply-To: <51CEF930.2070604@ais.uni-bonn.de> References: <1776a04e-954c-413c-954e-04d83c424721@googlegroups.com> <20130623214403.GB3302@phare.normalesup.org> <11fee39a-5140-4170-bdf5-dd530bd40f55@googlegroups.com> <51CEF68C.4020607@ais.uni-bonn.de> <51CEF7B0.9050005@ais.uni-bonn.de> <51CEF930.2070604@ais.uni-bonn.de> Message-ID: <51CEFA67.6020200@ais.uni-bonn.de> On 06/29/2013 05:11 PM, Andreas Mueller wrote: > pps: > have you read about CUDA cuts? > http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4563095&tag=1 > Also this: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.60.6159 Ok, now I'm going back to work and stop flooding the skimage list, sorry ;) From emmanuelle.gouillart at nsup.org Sat Jun 29 17:20:54 2013 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Sat, 29 Jun 2013 23:20:54 +0200 Subject: segmentation fault with the viewer Message-ID: <20130629212054.GC2073@phare.normalesup.org> Dear all, I've started playing around with skimage's viewer, and I get a segmentation fault when I try to close the viewer window, or when I click on the window after having the focus on another window. Also, the command viewer.show() is blocking (I'm running ipython --pylab - corresponding to the TkAgg backend). Running a test script from ipython in gdb yields the following message: Program received signal SIGSEGV, Segmentation fault. 0x00007fffd4aa511d in Shiboken::Conversions::isPythonToCppConvertible(SbkConverter*, _object*) () from /usr/lib/x86_64-linux-gnu/libshiboken-python2.7.so.1.1 I'm running Ubuntu 12.04 with Nvidia's proprietary drivers. Am I the only one having this problem? Any idea where the seg fault comes from? I reproduce the test script below. 
Cheers, Emmanuelle ***************** from skimage import data from skimage.viewer import ImageViewer image = data.coins() from skimage.filter import tv_denoise from skimage.viewer.plugins.base import Plugin denoise_plugin = Plugin(image_filter=tv_denoise) from skimage.viewer.widgets import Slider from skimage.viewer.widgets.history import SaveButtons denoise_plugin += Slider('weight', 0.01, 0.5, update_on='release') denoise_plugin += SaveButtons() viewer = ImageViewer(image) viewer += denoise_plugin viewer.show() From emmanuelle.gouillart at nsup.org Sat Jun 29 18:03:18 2013 From: emmanuelle.gouillart at nsup.org (Emmanuelle Gouillart) Date: Sun, 30 Jun 2013 00:03:18 +0200 Subject: segmentation fault with the viewer In-Reply-To: References: <20130629212054.GC2073@phare.normalesup.org> Message-ID: <20130629220318.GD2073@phare.normalesup.org> On Sat, Jun 29, 2013 at 02:58:22PM -0700, Josh Warner wrote: > I got it to segfault once using the tk backend for pylab, but it wasn't > robustly repeatable (randomly happened once out of a lot of focus losses/ > regains, and never on close). > It looked similar to routine segfaults I get for the tk backend, which I've > never been able to track down. That's why I usually avoid the tk backend like > the plague... > Is this reproducible on any other backends? Yes, tk, wx, gtk, I get a segfault with all of them (using qt raises an error). 
Emmanuelle From tsyu80 at gmail.com Sun Jun 30 23:11:28 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Sun, 30 Jun 2013 22:11:28 -0500 Subject: 1st and 2nd order statistical texture features of an image In-Reply-To: <79de3091-00d6-435e-9552-42cf32436789@googlegroups.com> References: <79de3091-00d6-435e-9552-42cf32436789@googlegroups.com> Message-ID: On Sun, Jun 23, 2013 at 4:16 AM, Dan wrote: > Hi, > > > I wish to perform first (histogram based mean, stdev, smoothness, > skewness, uniformity and entropy) and second order (GLCM based contrast, > correlation, energy, homogeneity) statistical texture features of an image. > Is it possible in scikit-image? > > If so, a small script would be a huge help. > > Thanks. > Hi Dan, Sorry for the delayed reply. GLCM is implemented, as shown in this example: http://scikit-image.org/docs/dev/auto_examples/plot_glcm.html Unfortunately, only a handful of statistics are currently calculated. It's not too difficult to calculate your own since you have access to the GLCM. More statistics should be added to the base implementation, though. As for first order statistics, I'm not completely sure what you mean. If you just want to calculate these on a given image patch, there are some numpy functions for a few of these (`np.mean`, `np.std`), but for others it would be fairly simple to implement your own (sorry, I don't have it in me to provide an example---it's been a long week). For local statistics, maybe you're looking for rank filters: http://scikit-image.org/docs/dev/auto_examples/applications/plot_rank_filters.html This only implements a few of the filters you'd want (and the entropy filter is a bit broken since it should really return float values---the output resolution/range is pretty low at the moment). Hope that helps, -Tony -------------- next part -------------- An HTML attachment was scrubbed... 
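[Editor's note] For the first-order (histogram-based) measures Dan lists, plain numpy is indeed enough, as Tony suggests. A sketch using commonly cited definitions (smoothness as R = 1 - 1/(1 + variance), uniformity as the sum of squared histogram probabilities, Shannon entropy in bits); the function name is made up for illustration, not a scikit-image API:

```python
import numpy as np

def first_order_stats(patch, levels=256):
    """Histogram-based texture measures of a uint8 image patch."""
    hist = np.bincount(patch.ravel(), minlength=levels)
    p = hist / hist.sum()                    # normalized histogram p(z)
    z = np.arange(levels)
    mean = (z * p).sum()
    var = ((z - mean) ** 2 * p).sum()
    # Smoothness R = 1 - 1/(1 + var); the variance is often rescaled to
    # [0, 1] first, which is omitted here for brevity.
    smoothness = 1 - 1 / (1 + var)
    skewness = ((z - mean) ** 3 * p).sum()   # third central moment
    uniformity = (p ** 2).sum()              # a.k.a. energy
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return dict(mean=mean, std=np.sqrt(var), smoothness=smoothness,
                skewness=skewness, uniformity=uniformity, entropy=entropy)

patch = np.array([[0, 0, 255, 255]], dtype=np.uint8)
stats = first_order_stats(patch)
```

For the two-level patch above, the histogram is symmetric, so the skewness is zero and the entropy is exactly one bit.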
URL: From tsyu80 at gmail.com Sun Jun 30 23:13:51 2013 From: tsyu80 at gmail.com (Tony Yu) Date: Sun, 30 Jun 2013 22:13:51 -0500 Subject: 1st and 2nd order statistical texture features of an image In-Reply-To: References: <79de3091-00d6-435e-9552-42cf32436789@googlegroups.com> Message-ID: On Sun, Jun 30, 2013 at 10:11 PM, Tony Yu wrote: > > > > On Sun, Jun 23, 2013 at 4:16 AM, Dan wrote: > >> Hi, >> >> >> I wish to perform first (histogram based mean, stdev, smoothness, >> skewness, uniformity and entropy) and second order (GLCM based contrast, >> correlation, energy, homogeneity) statistical texture features of an image. >> Is it possible in scikit-image? >> >> If so, a small script would be a huge help. >> >> Thanks. >> > > Hi Dan, > > Sorry for the delayed reply. GLCM is implemented, as shown in this example: > > http://scikit-image.org/docs/dev/auto_examples/plot_glcm.html > > Unfortunately, only a handful of statistics are currently calculated. It's > not too difficult to calculate your own since you have access to the GLCM. > More statistics should be added to the base implementation, though. > Also, the mahotas package implements GLCM-based Haralick features. Best, -Tony -------------- next part -------------- An HTML attachment was scrubbed... URL:
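[Editor's note] To make Tony's "calculate your own statistics since you have access to the GLCM" suggestion concrete, here is a self-contained numpy sketch of a grey-level co-occurrence matrix for a single pixel offset, plus three of the second-order statistics Dan lists. skimage's greycomatrix/greycoprops (shown in Tony's linked example) are the maintained implementations; this just spells out what is being computed:

```python
import numpy as np

def glcm(image, dr, dc, levels):
    """Grey-level co-occurrence matrix for one pixel offset (dr, dc),
    normalized so its entries sum to 1 (one direction, not symmetric)."""
    P = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[image[r, c], image[r2, c2]] += 1
    return P / P.sum()

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
P = glcm(img, 0, 1, levels=3)          # horizontal neighbour, offset (0, 1)

# Statistics derived from the GLCM: each weights P by a function of the
# grey-level pair (i, j).
i, j = np.indices(P.shape)
contrast = (P * (i - j) ** 2).sum()
energy = (P ** 2).sum()                # a.k.a. uniformity / ASM
homogeneity = (P / (1 + np.abs(i - j))).sum()
```

Any further statistic (correlation, Haralick features, etc.) is just another weighting of P in the same pattern.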