Algorithm that achieves maximum compression of completely diffused data.

jonas.thornvall at gmail.com
Wed Oct 30 15:47:09 EDT 2013


On Wednesday, 30 October 2013 at 20:35:59 UTC+1, Tim Delaney wrote:
> On 31 October 2013 05:21,  <jonas.t... at gmail.com> wrote:
> 
> I am searching for a program or algorithm that achieves the best possible compression of completely diffused data (random noise), and I wonder what the state of the art in compression is.
> 
> I understand this is not the correct forum, but since I think I have an algorithm that can do this very well, and I do not know where to turn with such a question, I thought I would start here.
> 
> It is, of course, lossless compression I am speaking of.
> 
> This is not an appropriate forum for this question. If you know it's an inappropriate forum (as you stated), then do not post the question here. Do a search with your preferred search engine and look up lossless compression on Wikipedia. And read and understand the following link:
> 
> http://www.catb.org/esr/faqs/smart-questions.html
> 
> paying special attention to the following parts:
> 
> http://www.catb.org/esr/faqs/smart-questions.html#forum
> http://www.catb.org/esr/faqs/smart-questions.html#prune
> http://www.catb.org/esr/faqs/smart-questions.html#courtesy
> http://www.catb.org/esr/faqs/smart-questions.html#keepcool
> http://www.catb.org/esr/faqs/smart-questions.html#classic
> 
> If you have *Python* code implementing this algorithm and want help, post the parts you want help with (and preferably post the entire algorithm in a repository).
> 
> However, having just seen the following from you in a reply to Mark ("I do not follow instructions, I make them accessible to anyone"), I am not going to give a second chance - fail to learn from the above advice and you'll meet my spam filter.
> 
> If the data is truly completely random noise, then there is very little that lossless compression can do. On any individual truly random data set you might get a lot of compression, a small amount of compression, or even expansion, depending on what patterns have randomly occurred in the data set. But there is no current lossless compression algorithm that can take truly random data and systematically compress it to be smaller than the original.
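> 
> A quick way to see this for yourself is to run a standard compressor over bytes from the OS entropy source (a minimal sketch using Python's zlib; the exact sizes will vary from run to run):
> 
>     import os
>     import zlib
> 
>     # Truly random bytes: no patterns for the compressor to exploit.
>     random_data = os.urandom(100000)
>     packed = zlib.compress(random_data, 9)
>     print(len(random_data), len(packed))
>     # The second number is typically slightly *larger* than the
>     # first: zlib expands random input a little (format overhead).
> 
>     # Contrast with highly patterned data of about the same length:
>     patterned = b"abc" * 33334
>     print(len(patterned), len(zlib.compress(patterned, 9)))
>     # The patterned input compresses to a few hundred bytes.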
> 
> If you think you have an algorithm that can do this on truly random data, you're probably wrong - either your data has patterns the algorithm can exploit, or you've simply been lucky with the randomness of your data so far.
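> 
> The underlying counting (pigeonhole) argument can be checked directly (a sketch, nothing more):
> 
>     # There are 2**n bit strings of length n, but only
>     # 2**0 + 2**1 + ... + 2**(n-1) = 2**n - 1 strings of length < n.
>     # So no lossless scheme can map every n-bit input to a strictly
>     # shorter output: at least two inputs would share an output,
>     # and decompression would then be ambiguous.
>     n = 16
>     inputs = 2 ** n
>     shorter_outputs = sum(2 ** k for k in range(n))  # 2**n - 1
>     print(inputs, shorter_outputs)  # 65536 65535 -- one too few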
> 
> Tim Delaney

No, I am not wrong.
End of story.


