using mmap on large (> 2 Gig) files

"Martin v. Löwis" martin at v.loewis.de
Wed Oct 25 07:23:40 EDT 2006


sturlamolden wrote:
> 2. The OS may be stupid. Mapping a large file may be a major slowdown
> simply because the memory mapping is implemented suboptimally inside
> the OS. For example, it may try to load and synchronise huge portions
> of the file that you don't need.

Can you give an example of an operating system that behaves that way?
To my knowledge, all current systems integrate memory mapping with the
page/buffer cache in some fashion, using various strategies to write
back (or simply discard, if there were no writes) pages that haven't
been used for a while.
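
To illustrate what I mean by demand paging, here is a minimal sketch
(modern Python; 'big.bin' is a placeholder path): map an entire large
file, but touch only one byte every 256 MB. The OS faults in just the
pages that are actually accessed, so resident memory stays small no
matter how large the mapping is.

    import mmap
    import os

    # Sketch: map the whole file read-only, then read one byte every
    # 256 MB. Each access faults in a single page; the rest of the
    # file is never loaded into memory.
    with open("big.bin", "rb") as f:
        size = os.fstat(f.fileno()).st_size
        m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        try:
            for pos in range(0, size, 256 * 1024 * 1024):
                _ = m[pos]
        finally:
            m.close()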

> The missing offset argument is essential for getting adequate
> performance from a memory-mapped file object.

I very much question that statement. Do you have any numbers to
prove it?
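
For reference, here is a hedged sketch of what such a benchmark would
compare: mapping a bounded window at a given offset instead of the
whole file. Python's mmap.mmap() did later grow exactly this 'offset'
parameter (in Python 2.6); the path, window size, and helper name
below are illustrative only.

    import mmap

    WINDOW = 64 * 1024 * 1024            # 64 MB window, arbitrary choice
    GRAN = mmap.ALLOCATIONGRANULARITY    # offset must be a multiple of this

    def read_window(path, offset, length=WINDOW):
        """Map only [offset, offset + length) of the file, return bytes.

        The caller must keep the window within the file; mapping past
        EOF raises ValueError.
        """
        aligned = (offset // GRAN) * GRAN   # round down to the alignment
        delta = offset - aligned            # slack caused by the rounding
        with open(path, "rb") as f:
            m = mmap.mmap(f.fileno(), length + delta,
                          access=mmap.ACCESS_READ, offset=aligned)
            try:
                return m[delta:delta + length]
            finally:
                m.close()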

Regards,
Martin


