From kwadwoboamah76 at gmail.com  Tue Mar  6 07:54:09 2018
From: kwadwoboamah76 at gmail.com (emmanuel obeng)
Date: Tue, 6 Mar 2018 12:54:09 +0000
Subject: [scikit-image] Detailed Explanation of codes
Message-ID: <5a9e8f70.a2addf0a.56687.705d@mx.google.com>

Sent from Mail for Windows 10

[Attachment: PYTHON IMPLEMENTATION CODES.docx
 (application/vnd.openxmlformats-officedocument.wordprocessingml.document),
 15115 bytes]

From martin.sladecek at gmail.com  Mon Mar 19 13:03:15 2018
From: martin.sladecek at gmail.com (martin sladecek)
Date: Mon, 19 Mar 2018 18:03:15 +0100
Subject: [scikit-image] robust epipolar geometry estimation with ransac
Message-ID:

Hello,

I'm having trouble achieving robust performance with
`skimage.measure.ransac` when estimating the fundamental matrix for a
pair of images. I'm seeing highly varying results with different random
seeds when compared to OpenCV's `findFundamentalMat`.

I'm running both skimage's and opencv's ransac on the same sets of
keypoints and with (what I'm assuming are) equivalent parameters.
I'm using the same image pair as the OpenCV Python tutorials
(https://github.com/abidrahmank/OpenCV2-Python-Tutorials/tree/master/data).

Here's my demonstration script:

    import cv2
    import numpy as np

    from skimage import io
    from skimage.measure import ransac
    from skimage.feature import ORB, match_descriptors
    from skimage.transform import FundamentalMatrixTransform

    orb = ORB(n_keypoints=500)

    img1 = io.imread('images/right.jpg', as_grey=True)
    orb.detect_and_extract(img1)
    kp1 = orb.keypoints
    desc1 = orb.descriptors

    img2 = io.imread('images/left.jpg', as_grey=True)
    orb.detect_and_extract(img2)
    kp2 = orb.keypoints
    desc2 = orb.descriptors

    matches = match_descriptors(desc1, desc2, metric='hamming',
                                cross_check=True)
    kp1 = kp1[matches[:, 0]]
    kp2 = kp2[matches[:, 1]]

    n_iter = 10
    skimage_inliers = np.empty((n_iter, len(matches)))
    opencv_inliers = skimage_inliers.copy()

    for i in range(n_iter):
        fmat, inliers = ransac((kp1, kp2), FundamentalMatrixTransform,
                               min_samples=8, residual_threshold=3,
                               max_trials=5000, stop_probability=0.99,
                               random_state=i)
        skimage_inliers[i, :] = inliers

        cv2.setRNGSeed(i)
        fmat, inliers = cv2.findFundamentalMat(kp1, kp2,
                                               method=cv2.FM_RANSAC,
                                               param1=3, param2=0.99)
        opencv_inliers[i, :] = (inliers.ravel() == 1)

    skimage_sum_of_vars = np.sum(np.var(skimage_inliers, axis=0))
    opencv_sum_of_vars = np.sum(np.var(opencv_inliers, axis=0))

    print(f'Scikit-Image sum of inlier variances: {skimage_sum_of_vars:>8.3f}')
    print(f'OpenCV sum of inlier variances:       {opencv_sum_of_vars:>8.3f}')

And the output:

    Scikit-Image sum of inlier variances:   13.240
    OpenCV sum of inlier variances:          0.000

I use the sum of variances of inliers obtained from different random
seeds as the metric of robustness. I would expect this number to be very
close to zero, because a truly robust ransac should converge to the same
model independently of its random initialization.

How can I make skimage's `ransac` behave as robustly as opencv's?

Any other tips on this subject would be greatly appreciated.
Best regards,
Martin

(I originally posted this question on stackoverflow, but I'm not getting
much traction there, so I figured I'd try the mailing list.)

https://stackoverflow.com/questions/49342469/robust-epipolar-geometry-estimation-with-scikit-images-ransac

From jni.soma at gmail.com  Tue Mar 20 23:03:50 2018
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Tue, 20 Mar 2018 23:03:50 -0400
Subject: [scikit-image] robust epipolar geometry estimation with ransac
In-Reply-To:
References:
Message-ID:

@Martin, thanks for the ping. I don't know about other devs but I'm
easier to reach here, for sure. =) I added a comment to SO. Having said
that, I think Stéfan is more experienced with RANSAC. (My experience ends
at having attended Stéfan's tutorial on the topic. =P) But, can you
confirm that the fundamental matrix is also varying between runs of
skimage?

Generally, I'm concerned about whether the parameters are really the
same. I couldn't find an API reference for cv2, so I couldn't check for
differences. Can you point me to how you set up the cv2 ransac
parameters?

Thanks,

Juan.

On 19 Mar 2018, 1:03 PM -0400, martin sladecek wrote:
> [Martin's original message quoted in full; trimmed]

From martin.sladecek at gmail.com  Wed Mar 21 18:40:21 2018
From: martin.sladecek at gmail.com (martin sladecek)
Date: Wed, 21 Mar 2018 23:40:21 +0100
Subject: [scikit-image] robust epipolar geometry estimation with ransac
In-Reply-To:
References:
Message-ID: <112bfc6f-a2bd-f1cf-dd73-f6c03e566542@gmail.com>

Hi Juan,

thanks for your response. I can indeed confirm that the fundamental
matrix varies as well. Here are the variances for the same experiment as
before (after normalization):

    Scikit-Image variance of fundamental matrix:
    [[1.462e-11 4.067e-09 3.153e-04]
     [3.701e-09 2.891e-10 8.637e-06]
     [2.857e-03 3.343e-05 0.000e+00]]

    OpenCV variance of fundamental matrix:
    [[0.000e+00 1.148e-41 0.000e+00]
     [0.000e+00 0.000e+00 0.000e+00]
     [2.708e-35 0.000e+00 0.000e+00]]

It makes sense to me, because the inliers should be calculated based on
how well they comply with the epipolar constraint, here represented by
the fundamental matrix.

As for the parameters, I am also uncertain whether they are the same or
not. I chose the values based on the scikit-image fundamental matrix
estimation example (in that case the images are already rectified, unlike
mine) and the OpenCV epipolar geometry tutorial:

http://scikit-image.org/docs/dev/auto_examples/transform/plot_fundamental_matrix.html#sphx-glr-auto-examples-transform-plot-fundamental-matrix-py

https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_calib3d/py_epipolar_geometry/py_epipolar_geometry.html#epipolar-geometry

When deciding on the parameters I inspected both the cv2 and skimage
APIs:

https://docs.opencv.org/3.3.1/d9/d0c/group__calib3d.html#ga30ccb52f4e726daa039fd5cb5bf0822b

http://scikit-image.org/docs/dev/api/skimage.measure.html#ransac

The OpenCV API reference is for C++, but the Python bindings are
autogenerated from it, so the parameters should be the same.
Unfortunately I don't know enough C++ to go through the code and
understand all the differences between the two implementations.

~Martin

On 21/03/18 04:03, Juan Nunez-Iglesias wrote:
> [Juan's reply and the earlier thread quoted in full; trimmed]
From jsch at demuc.de  Thu Mar 22 05:07:49 2018
From: jsch at demuc.de (Johannes Schönberger)
Date: Thu, 22 Mar 2018 10:07:49 +0100
Subject: [scikit-image] robust epipolar geometry estimation with ransac
In-Reply-To: <112bfc6f-a2bd-f1cf-dd73-f6c03e566542@gmail.com>
References: <112bfc6f-a2bd-f1cf-dd73-f6c03e566542@gmail.com>
Message-ID: <4AF94AF8-FD46-4343-BC27-C7978C77696C@demuc.de>

Hi,

It seems like OpenCV computes the point-to-epipolar-line distance (see
https://github.com/opencv/opencv/blob/master/modules/calib3d/src/fundam.cpp#L205),
while we use the geometrically more meaningful Sampson error (see
https://github.com/scikit-image/scikit-image/blob/master/skimage/transform/_geometric.py#L367).
They are not equivalent, so your residual threshold is not consistent
between the two calls.

You could implement the point-to-epipolar-line residual yourself by
subclassing our FundamentalMatrixTransform.

Cheers,
Johannes

> On Mar 21, 2018, at 11:40 PM, martin sladecek wrote:
> [Martin's message and the earlier thread quoted in full; trimmed]
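A minimal sketch of the subclass Johannes describes might look like the
following (untested; the class name is invented here, and combining the
two per-point distances with a maximum is an assumption about OpenCV's
behaviour rather than a verified detail of fundam.cpp):

    import numpy as np
    from skimage.transform import FundamentalMatrixTransform

    class EpipolarLineDistanceTransform(FundamentalMatrixTransform):
        # Replaces the Sampson residual with a symmetric
        # point-to-epipolar-line distance, as suggested above.
        def residuals(self, src, dst):
            # Promote to homogeneous coordinates, shape (N, 3).
            src_h = np.column_stack([src, np.ones(src.shape[0])])
            dst_h = np.column_stack([dst, np.ones(dst.shape[0])])
            F = self.params
            # Epipolar lines: l2 = F x in image 2, l1 = F^T x' in image 1.
            l2 = src_h @ F.T
            l1 = dst_h @ F
            # The algebraic error |x'^T F x| is shared by both directions.
            alg = np.abs(np.sum(dst_h * l2, axis=1))
            # Dividing by the norm of each line's normal vector turns the
            # algebraic error into a geometric point-to-line distance.
            d2 = alg / np.hypot(l2[:, 0], l2[:, 1])
            d1 = alg / np.hypot(l1[:, 0], l1[:, 1])
            # Assumption: take the max, so a match is an outlier if it is
            # far from either epipolar line.
            return np.maximum(d1, d2)

Passing this class to `ransac` in place of `FundamentalMatrixTransform`
in Martin's script would make `residual_threshold=3` refer to a pixel
distance comparable to OpenCV's `param1=3`.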
From ahmedtabia2 at gmail.com  Thu Mar 29 05:43:09 2018
From: ahmedtabia2 at gmail.com (ahmed tabia)
Date: Thu, 29 Mar 2018 11:43:09 +0200
Subject: [scikit-image] saliency map
Message-ID:

Hello everyone,

I'm looking for someone who knows how to code a saliency map to help me.

Many thanks,
Ahmed

From jni.soma at gmail.com  Thu Mar 29 13:46:18 2018
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Thu, 29 Mar 2018 13:46:18 -0400
Subject: [scikit-image] data types
Message-ID: <5626815a-afa7-4b7d-9413-ef34d8b064a8@Spark>

I think maybe 50% of our bug reports/help requests have to do with image
data types. Does anyone want to express an opinion about how we can fix
things?

My humble (really) suggestions, *to start* (i.e. more needs to be done
than this):

* If a 16-bit or higher image has no values above 4096 or below 0, treat
the image as 12-bit. This is a very common image type for some reason.

* If an integer image has no values above 255, treat it as an 8-bit
image. This also happens a lot.

* If a floating point image has values outside [0, 1], don't croak, just
accept it. (This might have already happened?) If it has values only in
[0, 1/255], and the user wants to convert to uint8, use the input range
as the range. (A sketch of this conversion follows below.)

Some of these, especially the last one, may appear too magical, and in
some ways I think they are, but honestly, given the frequency of problems
that we get because of this, I think it's time to suck it up and really
work on doing what most of our users want most of the time. We don't need
to coddle the power users: they can be annoyed and micromanage the image
range properly. To paraphrase a tweet I saw once (sorry, couldn't find
attribution): "edge cases should be used to check the design, not drive
it."

Applied to this case, we shouldn't scale a uint32 image by 2**(-32) just
because we can come up with a test case where this is useful.

Some of these problems would be alleviated by some consistent metadata
conventions.

Juan.
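To make the last suggestion concrete, a conversion that "uses the input
range as the range" could look roughly like this (a hypothetical sketch,
not an existing skimage function):

    import numpy as np

    def float_to_uint8_by_range(image):
        # Hypothetical helper: convert a float image to uint8 by
        # stretching its own value range, so data confined to
        # [0, 1/255] does not collapse to an all-zero image.
        lo, hi = image.min(), image.max()
        if hi == lo:
            # Constant image: nothing to stretch.
            return np.zeros(image.shape, dtype=np.uint8)
        scaled = (image - lo) / (hi - lo)  # now in [0, 1]
        return np.round(scaled * 255).astype(np.uint8)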
From ralf.gommers at gmail.com  Thu Mar 29 22:46:58 2018
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 29 Mar 2018 19:46:58 -0700
Subject: [scikit-image] data types
In-Reply-To: <5626815a-afa7-4b7d-9413-ef34d8b064a8@Spark>
References: <5626815a-afa7-4b7d-9413-ef34d8b064a8@Spark>
Message-ID:

On Thu, Mar 29, 2018 at 10:46 AM, Juan Nunez-Iglesias wrote:
> [Juan's suggestions quoted in full; trimmed]

Do these things currently give warnings? What if they said something
like "this is a 16-bit image format, but from its values it appears to
be 12-bit; if this is the case you can convert it with <...>"? If the
conversions do happen based on value, that will introduce new issues
that also will require warnings. E.g. if one has stacks of 16-bit images
and some of those are dark acquisitions for normalization (that used to
be my typical use case), only those will get converted to 12-bit, which
then may introduce silent errors.

Ralf

From grlee77 at gmail.com  Fri Mar 30 10:10:44 2018
From: grlee77 at gmail.com (Gregory Lee)
Date: Fri, 30 Mar 2018 10:10:44 -0400
Subject: [scikit-image] data types
In-Reply-To: <5626815a-afa7-4b7d-9413-ef34d8b064a8@Spark>
References: <5626815a-afa7-4b7d-9413-ef34d8b064a8@Spark>
Message-ID:

On Thu, Mar 29, 2018 at 1:46 PM, Juan Nunez-Iglesias wrote:
> * If a 16-bit or higher image has no values above 4096 or below 0,
> treat the image as 12-bit. This is a very common image type for some
> reason.

One common source of 12-bit data is the DICOM standard used by industry
for medical imaging.

> * If a floating point image has values outside [0, 1], don't croak,
> just accept it. (This might have already happened?) If it has values
> only in [0, 1/255], and the user wants to convert to uint8, use the
> input range as the range.

I am in favor of accepting arbitrarily scaled floats unless the
algorithm depends on values being within a particular range (not sure if
we have many of these?). We do already allow unscaled floats in some
places (e.g. compare_nrmse, etc.), but it is not very consistent. For
example, I recently noticed that denoise_wavelet enforces floats to be
in [0, 1] (or [-1, 1]), but it would work equally well for unscaled
data.

> [rest of Juan's message quoted; trimmed]

From tcaswell at gmail.com  Fri Mar 30 16:18:53 2018
From: tcaswell at gmail.com (Thomas Caswell)
Date: Fri, 30 Mar 2018 20:18:53 +0000
Subject: [scikit-image] data types
In-Reply-To:
References: <5626815a-afa7-4b7d-9413-ef34d8b064a8@Spark>
Message-ID:

Automatically picking the bit depth based on values seems dangerous, but
a `guess_best_dtype(input_data: np.array) -> dtype` helper function
would be useful.

Tom

On Fri, Mar 30, 2018 at 10:10 AM Gregory Lee wrote:
> [Gregory's reply and the quoted thread trimmed]
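A rough sketch of the helper Tom proposes, using the value-based
heuristics from Juan's message (hypothetical; the name comes from Tom's
suggestion, and the thresholds and behaviour are illustrative, not part
of any released API):

    import numpy as np

    def guess_best_dtype(input_data):
        # Hypothetical sketch: guess a "natural" dtype from an array's
        # values, per the heuristics discussed in this thread.
        arr = np.asarray(input_data)
        if np.issubdtype(arr.dtype, np.floating):
            # Leave floats alone; any range-based rescaling is a
            # separate, explicit step.
            return arr.dtype
        lo, hi = arr.min(), arr.max()
        if lo < 0:
            return arr.dtype  # signed data: don't guess
        if hi <= 255:
            return np.dtype(np.uint8)
        if hi <= 4096:
            # Likely "12-bit" data stored in a wider integer type; there
            # is no native 12-bit dtype, so report uint16 and let the
            # caller treat 4095 as full scale.
            return np.dtype(np.uint16)
        return arr.dtype

In line with Ralf's concern, such a helper would serve as a basis for
warnings and explicit conversions rather than for silent rescaling.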