[scikit-learn] Fwd: SSIM with tolerances

Bill Ross ross at cgl.ucsf.edu
Mon Apr 3 14:39:05 EDT 2017


I wonder naively: if you can make rules, why train something to learn 
them rather than just implementing them directly? I'm really curious 
whether there's an advantage in logistics or performance (can meaningful 
extrapolation somehow occur?).

I think the answer for machine learning is not to make rules, but to 
gather examples based on perceptual experiments, assuming what you are 
after is noticeability. In that case, you will likely end up tolerating 
dropouts (I assume black pixels) more when they fall in dark areas, 
which may or may not be desirable, or might save the corporation a few 
pennies. :-) Those perceptual experiments might be costly, but they 
would spare you the angst of getting the rules right, and I wonder what 
sort of Quality Index you might derive beyond pass/fail. The data might 
also be leveraged for other applications. Or maybe you have existing 
data that could be used for training?
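
A very rough sketch of what I mean, just to make it concrete. Everything 
here is hypothetical: the feature choices, the labels, and the choice of 
RandomForestClassifier are placeholders, not a recipe:

from sklearn.ensemble import RandomForestClassifier

def train_noticeability_model(features, labels):
    """Learn 'noticeable vs. not' from perceptual-experiment data.

    features: one row per image pair, e.g. mean SSIM, size of the
              largest cluster of differing pixels, local luminance
              around the defect (all hypothetical choices).
    labels:   1 if human observers noticed the difference, else 0.
    """
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(features, labels)
    return clf

# predict_proba would then give a graded "Quality Index" rather
# than a hard pass/fail:
#   quality = clf.predict_proba(new_features)[:, 1]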

I've only made general comparisons of images (using color histograms at 
the moment for my interactive image associator), but I have the QA 
background to appreciate the motivation. I'd love to stay on top of this 
if a fellow learner could be of use.
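
For what it's worth, the histogram comparison I mean is nothing fancy; 
roughly this, with the bin count and the L1 distance being arbitrary 
choices on my part:

import numpy as np

def hist_distance(img_a, img_b, bins=16):
    """Compare two 8-bit RGB images by per-channel color histograms."""
    def hist(img):
        # One normalized histogram per channel, concatenated
        h = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
        h = np.concatenate(h).astype(float)
        return h / h.sum()
    # L1 distance between normalized histograms; 0.0 means identical
    return float(np.abs(hist(img_a) - hist(img_b)).sum())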

Regards,

Bill


On 4/3/17 11:04 AM, mat saunders wrote:
> Hi,
>
> I am using SSIM to compare two video streams / sets of images, and I 
> find it to be almost too accurate. I would like some fudge factor like 
> other image comparison tools have. I used to do this in an automated 
> test suite, but due to file sizes and the number of files I turned to 
> scikit.
>
> I do quality assurance on a render engine, and we just want to make 
> sure the images are meaningfully identical build to build. Currently 
> SSIM is flagging differences as small as 4 pixels across a 1920x1080 
> image. I would personally like to ignore those 4 pixels but still 
> catch meaningful items. Say, if 8 pixels near each other are off, 
> keep those; but if 8 pixels are scattered randomly through the image, 
> ignore them.
>
> Does this sound logical, say using pixel adjacency, with tolerance 
> values for color and number of pixels as arguments?
>
> See the attached image for an example of how little is different in 
> the entire image. It is a GIF zoomed in on the exact spot of 3 
> differing pixels, so hopefully it works.
>
> Regards,
> Mathew Saunders
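
P.S. On your adjacency idea above: here is a minimal sketch of one way 
it might look, assuming grayscale images, scikit-image's compare_ssim 
(full=True returns the per-pixel SSIM map alongside the mean score) and 
scipy.ndimage.label for grouping neighboring pixels. The thresholds are 
made-up placeholders you would have to tune:

import numpy as np
from scipy import ndimage
from skimage.measure import compare_ssim

def meaningfully_different(img_a, img_b, ssim_thresh=0.95, min_cluster=8):
    """Flag a difference only when enough low-SSIM pixels are adjacent."""
    # full=True returns the per-pixel SSIM map alongside the mean score
    score, ssim_map = compare_ssim(img_a, img_b, full=True)
    # Mark pixels whose local SSIM falls below the tolerance
    bad = ssim_map < ssim_thresh
    # Group touching "bad" pixels into connected components
    labels, n = ndimage.label(bad)
    if n == 0:
        return False
    # Count pixels per component; ignore clusters below min_cluster
    sizes = ndimage.sum(bad, labels, index=range(1, n + 1))
    return bool(np.any(np.asarray(sizes) >= min_cluster))

One caveat: SSIM is computed over local windows, so a single differing 
pixel smudges into a small low-SSIM blob; min_cluster would need tuning 
against real examples rather than being read literally as a pixel count.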
