GCD in Fractions

Robert E. Beaudoin rbeaudoin at comcast.net
Wed Sep 24 19:19:36 EDT 2014


On 09/24/14 09:25, blindanagram wrote:
> On 24/09/2014 12:44, Steven D'Aprano wrote:
>
>> blindanagram wrote:
> [snip]
>> - Mathworld says that GCD of two negative numbers is a negative number;
>>
>> - but Mathematica says that GCD of two negative numbers is a positive number;
>>
>> - Wikipedia agrees with Mathematica and disagrees with Mathworld;
>
> After looking at these (and several other) on-line mathematical sites, I
> realised that I would have to go back to long-standing mathematical
> references to find how the gcd is defined by those that explicitly cover
> the greatest common divisor for negative integers (I did this before
> raising the issue here).
>
> All four that I have so far looked at have definitions that lead to
> gcd(a, b) for integers being equal to gcd(|a|, |b|).  I hope to visit a
> university library shortly to review more.  Does anyone know of a
> reference that uses a definition that conflicts with gcd(a, b) for
> integers being equal to gcd(|a|, |b|)?
>

I doubt you'll find an advanced mathematical text that has such a 
definition.  I think most abstract algebra texts would give a definition 
equivalent to saying that for any integers n and m, the ideal generated 
by n and m is equal to the principal ideal generated by gcd(n, m); that 
is the heart of the matter mathematically, and this definition 
generalizes to so-called "principal ideal domains" other than the integers.

(Don't want to Google for "ideal" and "principal ideal"?  OK:  an ideal 
is a nonempty subset of the integers closed under addition of any two of 
its elements, and under multiplication of any of its elements by any 
integer; a set of integers generates an ideal if that ideal is the 
intersection of all ideals containing that set; and a principal ideal is 
an ideal generated by a singleton set.  For more elaboration, though, 
I'm going to point you at the internet, or better, some of those 
advanced texts in the library.)
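
If it helps to see that computed rather than stated, here is a small 
Python sketch; the function names and the window size are my own 
choices, and a finite check like this illustrates the claim rather than 
proving it.  Within a bounded window of integers, the combinations 
a*n + b*m turn out to be exactly the multiples of gcd(n, m):

    def euclid_gcd(a, b):
        """Classic Euclidean algorithm, normalized to be non-negative."""
        a, b = abs(a), abs(b)
        while b:
            a, b = b, a % b
        return a

    def generates_same_ideal(n, m, window=50):
        coeffs = range(-window, window + 1)
        # Elements of the ideal generated by n and m, restricted to the window.
        generated = {a * n + b * m for a in coeffs for b in coeffs
                     if abs(a * n + b * m) <= window}
        # The principal ideal generated by gcd(n, m), same restriction.
        g = euclid_gcd(n, m)
        principal = {k * g for k in coeffs if abs(k * g) <= window}
        return generated == principal

    print(generates_same_ideal(12, 18))    # True: both are the multiples of 6
    print(generates_same_ideal(-12, 18))   # True: signs don't change the ideal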

After working through what the definitions of ideals and principal 
ideals imply for the definition of gcd above, you get the statement 
that k is the (or, better, a) gcd of n and m if and only if k divides n 
and m, and any other integer that divides both n and m divides k.  The 
upshot is that, mathematically, gcd is only defined up to a change of 
sign, which helps explain why references may disagree with each other. 
Some authors may impose the restriction that gcd(n, m) >= 0 for all n 
and m (which does have the advantage that then "greatest" really means 
greatest and not just "greatest absolute value"), but that isn't really 
a necessary part of the definition as far as the mathematically 
important properties of gcd are concerned.  All that the abstract 
algebra requires is that

|gcd(n,m)| = |gcd(|n|,|m|)|.
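
Here is that characterization spelled out as a small Python check; the 
helper names are mine, and the brute-force search over divisors is for 
illustration only.  Notice that 6 and -6 both pass for the pair 
(-12, 18):

    def divides(d, x):
        return d != 0 and x % d == 0

    def is_a_gcd(k, n, m):
        """True iff k divides both n and m, and every common divisor
        of n and m divides k (the "up to sign" characterization)."""
        if not (divides(k, n) and divides(k, m)):
            return False
        common = [d for d in range(1, max(abs(n), abs(m)) + 1)
                  if divides(d, n) and divides(d, m)]
        return all(divides(d, k) for d in common)

    print(is_a_gcd(6, -12, 18))    # True
    print(is_a_gcd(-6, -12, 18))   # True: -6 qualifies just as well
    print(is_a_gcd(3, -12, 18))    # False: 6 divides -12 and 18 but not 3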

So implementations of gcd that sometimes return a negative value are 
not, it seems to me, mathematically broken, though they might violate 
the principle of least surprise.
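
On the implementation side, a bare Euclidean loop (which, as far as I 
can tell from the source, is essentially what fractions.gcd does) 
returns a result whose sign follows its inputs, because Python's % 
operator takes the sign of its second operand; normalizing it is then a 
single call to abs:

    def raw_gcd(a, b):
        """Euclid's algorithm with no sign normalization."""
        while b:
            a, b = b, a % b   # a % b has the sign of b in Python
        return a

    print(raw_gcd(12, 18))          # 6
    print(raw_gcd(-12, -18))        # -6: negative, same absolute value
    print(abs(raw_gcd(-12, -18)))   # 6: the non-negative convention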

for-whatever-it's-worth'ly-yrs,

R. Beaudoin



