anything like C++ references?

Stephen Horne intentionally at blank.co.uk
Tue Jul 15 02:49:53 EDT 2003


On 15 Jul 2003 06:56:56 +0200, martin at v.loewis.de (Martin v. Löwis)
wrote:

>Stephen Horne <intentionally at blank.co.uk> writes:
>
>> Write...
>> 
>>   x = classname()
>>   y = x
>> 
>> ... and that would be an error.
>[...]
>> However...
>> 
>>   x = &classname()
>>   y = x
>> 
>> ... and everything is fine.
>
>If any kind of implicit reference assignment would be an error, then I
>assume
>
>x = classname()
>x.foo()
>
>would also be an error, because that assigns a reference of the object
>to the implicit 'self' parameter of the method?

The whole point of 'self' is to act as a name for the object (not the
value), so there is no problem in having it behave that way. It's
already special because it's implicit, so making it special in this
way too wouldn't be a big issue.

Implementation detail. Not a fundamental problem.
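
For what it's worth, here is a quick sketch (the class and method
names are made up) showing that the implicit 'self' really is bound
to the very same object that the caller names:

  class Classname:
      def foo(self):
          # 'self' names the same object that 'x' names below
          print(self is x)   # prints True

  x = Classname()
  x.foo()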

>
>> > This is completely different from the notion of
>> >values in C or C++, where each occurrence of the literal 5 creates a
>> >new value whose state is 5.
>> 
>> Not true. Each occurrence of the literal 5 creates a new 'symbol' which
>> represents the value 5. The immutability of values is preserved in
>> C++.
>
>That is not true. There are no "symbols" in C beyond those that you
>use to name functions and global variables. Every occurrence of an
>integer literal *does* create a temporary object, every time the
>literal is *executed*.

I was using the word 'symbol' in the theoretical sense - the sense in
which any computer might be called a machine for manipulating symbols.
Not in the sense of an identifier.

The data stored in RAM (or registers or whatever) may be considered
just a bunch of electrical charges or currents that may or may not be
present at certain points. But that ignores the fact that these
electrical signals have meaning.

These electrical signals have meaning because they form symbols which
represent integers, or floating point numbers, or strings or whatever.
The principle is no different to symbols written on paper in order to
form words, numbers or whatever.

In C, variables bind directly to precisely that kind of symbolic
representation of a value.

If you write...

  #include <stdlib.h>  /* for EXIT_SUCCESS */

  int main (int argc, char *argv [])
  {
    int x = 100;  /* x's memory now holds a symbol representing 100 */
    int y = x;    /* that symbol is copied into y's memory */

    y = 200;      /* y's symbol is replaced; x is untouched */

    return EXIT_SUCCESS;
  }

...then it is perfectly true that you wouldn't tend to think of the
patterns of bits in memory as symbols, but that is what they are. That
is why at the end of the program, the value 100 is still 100 and the
value of x is still 100. You have not manipulated values (in a highly
pedantic theoretical sense) because values are immutable and cannot be
changed. You have simply changed the symbol stored in the memory
associated with variable y to one that represents the value 200.
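
Python's integers behave the same way, incidentally - rebinding y
doesn't disturb x or the value 100 (a trivial sketch, just to draw
the parallel):

  x = 100
  y = x      # y now names the same value that x names
  y = 200    # y is rebound; the value 100, and x, are unchanged
  print(x)   # prints 100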

These days, symbols representing integers almost certainly use a
two's complement binary notation. But that isn't the only way of
representing such values. It could have been a binary coded decimal
notation. In COBOL, it could well be an ASCII or EBCDIC string of
characters - each of those characters in turn being symbolised by a
number, and so on. Even with two's complement, the representation may
be big-endian or little-endian.

An integer value can be represented by many symbolic notations, and
machines that manipulate symbolic representations (I have seen
computers defined in pretty much those words before) have used a
number of different symbolic representations for that purpose.
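
You can see two such notations for the same value using the standard
struct module (a minimal sketch; 258 is 0x0102, so the two byte
orders are easy to spot):

  import struct

  # The same value, 258, under two different symbolic notations:
  print(repr(struct.pack('>h', 258)))  # big-endian:    bytes 0x01 0x02
  print(repr(struct.pack('<h', 258)))  # little-endian: bytes 0x02 0x01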

The symbol isn't the value - it only represents the value.

In general, of course, it is not useful to think in these extremely
pedantic terms.




