[Mailman-Users] Problem with archrunner using large %'s of cpu (read faq & archives)

Brad Knowles brad.knowles at skynet.be
Sat Nov 1 23:45:08 CET 2003


At 9:29 PM -0500 2003/10/31, Scott Lambert wrote:

>  If we were talking about more than 10,000 files, I might buy it.  But we
>  are talking about 1300 files.

	Many filesystems start to slow down significantly at around 1,000 
files, not 10,000.  Moreover, are you sure that this is the largest 
number of files you've ever had in that directory?

>                                 Also the processing goes something like
>  O(n), in reverse, slower as it processes the files in the directory.

	That is a bit strange, but it might be explained by holes left in 
the directory by deleted entries, which still have to be scanned past.
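	For what it's worth, here is a rough sketch of how you could 
measure that effect yourself.  It is pure illustration, not Mailman 
code -- the scratch directory, file names, and counts are all made 
up -- and on filesystems with hashed or indexed directories (UFS2 
with dirhash, ext3/ext4 with htree, and so on) the per-lookup cost 
will stay nearly flat, while on plain linear-scan directories it 
climbs with the entry count:

import os
import tempfile
import time

def stat_all(path):
    # Look up every entry by name.  On a linear-scan directory each
    # lookup walks the directory from the front, so total time grows
    # roughly quadratically with the number of entries.
    start = time.time()
    for name in os.listdir(path):
        os.stat(os.path.join(path, name))
    return time.time() - start

with tempfile.TemporaryDirectory() as scratch:
    created = 0
    for target in (1000, 10000):
        # Top the directory up to `target` small files, then time a
        # full pass of per-name lookups.
        while created < target:
            open(os.path.join(scratch, 'entry%06d' % created), 'w').close()
            created += 1
        elapsed = stat_all(scratch)
        print('%6d entries: %.1f microseconds per lookup'
              % (target, 1e6 * elapsed / target))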

>                                                                        I
>  might buy it staying slow if it started slow but it doesn't.

	I've seen mail servers at large freemail providers that had grown 
to very large sizes: they worked reasonably well with file counts in 
the low thousands, but seriously flaked out when pushed much beyond 
that.

	Move the directory aside, move the files to a new directory, and 
restart -- suddenly everything works like magic again.
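	Something along these lines should do it.  The path is a guess -- 
point it at wherever ArchRunner's queue actually lives on your system 
(often $prefix/qfiles/archive under Mailman 2.1) -- and stop 
mailmanctl before you run it, restarting it afterwards:

import os

# Assumed location of ArchRunner's queue; adjust to your installation.
QDIR = '/usr/local/mailman/qfiles/archive'
OLD = QDIR + '.old'

st = os.stat(QDIR)
os.rename(QDIR, OLD)                    # set the bloated directory aside
os.mkdir(QDIR, st.st_mode & 0o7777)     # fresh directory, same mode
os.chown(QDIR, st.st_uid, st.st_gid)    # same owner/group (needs root)
for name in os.listdir(OLD):            # move the queue entries back in
    os.rename(os.path.join(OLD, name), os.path.join(QDIR, name))
os.rmdir(OLD)                           # the old, oversized directory can go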


	Unless you know the filesystem code intimately, as well as the 
code that is using the filesystem, it can be difficult to predict 
how, when, or how badly things will break.

-- 
Brad Knowles, <brad.knowles at skynet.be>

"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety."
     -Benjamin Franklin, Historical Review of Pennsylvania.

