Newbie question about text encoding

Dave Angel davea at davea.name
Fri Feb 27 10:24:55 EST 2015


On 02/27/2015 09:22 AM, Chris Angelico wrote:
> On Sat, Feb 28, 2015 at 1:02 AM, Dave Angel <davea at davea.name> wrote:
>> The term "virtual memory" is used for many aspects of the modern memory
>> architecture.  But I presume you're using it in the sense of "running in a
>> swapfile" as opposed to running in physical RAM.
>
> Given that this started with a quote about "you can't fake what you
> ain't got", I would say that, yes, this refers to using hard disk to
> provide more RAM.
>
> If you're trying to use the pagefile/swapfile as if it's more memory
> ("I have 256MB of memory, but 10GB of swap space, so that's 10GB of
> memory!"), then yes, these performance considerations are huge. But
> suppose you need to run a program that's larger than your available
> RAM. On MS-DOS, sometimes you'd need to work with program overlays (a
> concept borrowed from older systems, but ones that I never worked on,
> so I'm going back no further than DOS here). You get a *massive*
> complexity hit the instant you start using them, whether your program
> would have been able to fit into memory on some systems or not. Just
> making it possible to have only part of your code in memory places
> demands on your code that you, the programmer, have to think about.
> With virtual memory, though, you just write your code as if it's all
> in memory, and some of it may, at some times, be on disk. Less code to
> debug = less time spent debugging. The performance question is largely
> immaterial (you'll be using the disk either way), but the savings on
> complexity are tremendous. And then when you do find yourself running
> on a system with enough RAM? No code changes needed, and full
> performance. That's where virtual memory shines.
>
> It's funny how the world changes, though. Back in the 90s, virtual
> memory was the key. No home computer ever had enough RAM. Today? A
> home-grade PC could easily have 16GB... and chances are you don't need
> all of that. So we go for the opposite optimization: disk caching.
> Apart from when I rebuild my "Audio-Only Frozen" project [1] and the
> caches get completely blasted through, heaps and heaps of my work can
> be done inside the disk cache. Hey, Sikorsky, got any files anywhere
> on the hard disk matching *Pastel*.iso case insensitively? *chug chug
> chug* Nope. Okay. Sikorsky, got any files matching *Pas5*.iso case
> insensitively? *zip* Yeah, here it is. I didn't tell the first search
> to hold all that file system data in memory; the operating system's disk
> cache managed it all for me, and I got the performance benefit. Same as the
> above: the main benefit is that this sort of thing requires zero
> application code complexity. It's all done in a perfectly generic way
> at a lower level.
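
That kind of case-insensitive wildcard hunt is also easy to script by hand in
Python, if you ever want it without a GUI search.  A rough sketch (the
function name, starting directory and pattern below are just made up for
illustration):

import fnmatch
import os

def find_ci(root, pattern):
    # Walk 'root' and yield every file whose name matches 'pattern',
    # ignoring case.  Lower-casing both sides and using fnmatchcase()
    # keeps the behaviour the same on every platform.
    pat = pattern.lower()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if fnmatch.fnmatchcase(name.lower(), pat):
                yield os.path.join(dirpath, name)

# e.g.  for hit in find_ci('/', '*pastel*.iso'): print(hit)

The second time you run something like that it flies, for exactly the reason
Chris describes: the file system metadata is already sitting in the cache.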

In 1973, I did manual swapping to an external 8k ramdisk.  It was a box 
that sat on the floor and contained 8k of core memory (not 
semiconductor).  The memory was non-volatile, so it contained the 
working copy of my code.  Then I built a small swapper that would bring 
in the set of routines currently needed.  My onboard RAM (semiconductor) 
was 1.5k, which had to hold the swapper, the code, and the data.  I was 
writing a GPS system for shipboard use, and the final version of the 
code had to fit entirely in EPROM, 2k of it.  But debugging EPROM code 
was a pain, since every small change meant half an hour spent making new chips.
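
The shape of that idea is easy to sketch in modern Python, if you squint.
Everything here (the Swapper class, the loader callable, the
least-recently-used eviction) is invented purely for illustration; the real
swapper looked nothing like this, but keeping only the routines currently
needed resident is the same trick:

from collections import OrderedDict

class Swapper:
    def __init__(self, loader, capacity=2):
        self.loader = loader           # callable that fetches a routine by name
        self.capacity = capacity      # how many routines fit "in RAM" at once
        self.resident = OrderedDict() # name -> routine, least recently used first

    def get(self, name):
        if name in self.resident:
            self.resident.move_to_end(name)        # most recently used goes last
        else:
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)  # evict the least recently used
            self.resident[name] = self.loader(name)
        return self.resident[name]

# e.g.  swap = Swapper(loader=load_routine_from_core, capacity=3)
#       swap.get('update_fix')(current_data)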

Later, I built my first PC with 512k of RAM, and usually set up much of it 
as a ramdisk, since programs didn't need nearly that much.


-- 
DaveA


