win32all odbc/MS SQLServer/bigint

Bjorn Pettersen BPettersen at NAREX.com
Sun Apr 13 19:24:21 EDT 2003


> From: M.-A. Lemburg [mailto:mal at lemburg.com] 
> 
> Bjorn Pettersen wrote:
> >>From: M.-A. Lemburg [mailto:mal at lemburg.com] 
> >>
> >>Bjorn Pettersen wrote:
> > 
> > [...]
> > 
> 
> The odbc module returns a long in case floor(x) == x, but the
> interfacing at C level is done using a C double.

Ah, I see. Our account IDs are a combination of bitfields (thus using
all 64 bits) so they're always large enough for this to be true...
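A quick sketch of the precision problem (the ID value here is made up; any integer wider than a double's 53-bit mantissa shows it):

```python
# Hypothetical 64-bit account ID built from bitfields -- all bits used.
account_id = (0x12345678 << 32) | 0x9ABCDEF1

# A C double has only a 53-bit mantissa, so pushing the value through
# one (as the odbc module does at the C level) drops the low-order bits.
as_double = float(account_id)
print(account_id, int(as_double))  # the two values differ
```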

[...]

> Sure, but __int64 is only available in VC C++ AFAIK. Many
> compilers have a "long long" type which could be used, but
> then again, how do you know whether the ODBC driver was
> compiled with the same C type and layout as the application
> using it ?

I haven't found a compiler without a long long type, but then I haven't
checked e.g. the Palm or the Crays, so I'm assuming they're out there :-)
This isn't a concern for the win32all odbc module of course, but all
cross-platform projects I've seen so far have a #define LONGLONG to
either long long or __int64 somewhere (with an appropriate comment
<wink>).

If you call

  SQLColAttribute(hstmt, n, SQL_DESC_TYPE, NULL, 0, NULL,
                  (SQLPOINTER)&sqlType);

and sqlType == SQL_BIGINT, that would be a good indication for result
sets, and if

  SQLBindParameter(hstmt, num, SQL_PARAM_INPUT,
                   SQL_C_SBIGINT, SQL_BIGINT, 
                   sizeof(__int64), 0, 
                   (void*)val, sizeof(__int64), &m_nullInd);

returns either SQL_SUCCESS or SQL_SUCCESS_WITH_INFO that ought to be
good enough for parameters?
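The decision logic above, sketched in Python for clarity (the SQL_* values are copied from the ODBC headers; the helper names are made up):

```python
# Constants as defined in the ODBC headers (sql.h / sqlext.h).
SQL_BIGINT = -5
SQL_SUCCESS = 0
SQL_SUCCESS_WITH_INFO = 1

def column_is_bigint(sql_type):
    # True when SQLColAttribute reported SQL_DESC_TYPE as SQL_BIGINT,
    # i.e. the result-set column should be fetched as a 64-bit int.
    return sql_type == SQL_BIGINT

def bind_ok(ret):
    # True when SQLBindParameter signalled (possibly qualified) success,
    # i.e. the driver accepted the SQL_C_SBIGINT parameter binding.
    return ret in (SQL_SUCCESS, SQL_SUCCESS_WITH_INFO)
```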

> It's one of those MS things again... the half cooked, "works
> for me" kind of attitude.

And here I thought it was the "everyone else did it that way, so we'll
do it different just because we can" attitude <wink>.

[...]

> Seriously, performance on the Python processing side is usually
> not an issue (mxODBC is fast); it's network latency, database and
> query optimizations that do matter.

All true, but when all that is taken care of you have to squeeze where
there's juice left :-) (the reason we're using Python is that the
algorithm calls for ~200K dict inserts and lookups, times 10,000 passes
-- i.e. Python should be faster than C++ :-)
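For scale, a minimal sketch of one pass of that workload (the counts are from the post; the keys and values are made up):

```python
# ~200K dict inserts followed by ~200K lookups -- the inner loop the
# post says dominates the run time, repeated ~10,000 times in full.
d = {}
for i in range(200000):
    d[i] = i * 2                              # insert
total = sum(d[i] for i in range(200000))      # lookup
print(len(d), total)
```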

-- bjorn
