Selling Python Software

Bengt Richter bokr at oz.net
Tue Nov 4 14:53:33 EST 2003


On Tue, 04 Nov 2003 09:03:51 GMT, "Andrew Dalke" <adalke at mindspring.com> wrote:

>Bengt Richter:
>> OTOH, we are getting to the point where rather big functionality can be put
>> on a chip or tamper-proof-by-anyone-but-a-TLA-group module. I.e., visualize
>> the effect of CPUs' having secret-to-everyone private keys, along with
>> public keys,
>
>Actually, we aren't.  There have been various ways to pull data off
>of a smart card (I recall reading some on RISKS, but the hits I
>found are about 5+ years old).  In-circuit emulators get cheaper and
>faster, just like the chips themselves.  And when in doubt, you can
>buy or even build your own STM pretty cheap -- in hobbyist range
>even (a few thousand dollars).
Even if you knew exactly where on a chip to look, and it wasn't engineered
to have the key self-destruct when exposed, what would you do with the key?
You'd have the binary image of an executable meant to run in the secret-room
processing core. How would you make it available to anyone else? You could
re-encrypt it with someone else's specific public key, or distribute a program
that does that along with the clear binary. But what if the program contains an
auth challenge for the target executing system? Now you have to reverse engineer
the binary and see if you can remove the challenges and checks and still
re-encrypt it so other processors will execute it. Or you have to translate the
functionality into a program that runs in clear mode on the ordinary cores.
Sounds like real work to me, even if you have a decompyler and the inter-core
comm specs. Of course, someone will think it's fun work. And they would get to
start over on the next program, even assuming programs encrypted with the public
key of the compromised system would be provided -- so there had better not be a
watermark left in the warez images that would identify the compromised system.
Otherwise they would get to destroy another CPU module to find its key. Probably
easier the second time, assuming no self-destruct stuff ;-)
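The re-encrypt-for-another-chip step above amounts to hybrid encryption: the
program is sealed under a fresh session key, and only the session key is
wrapped with the target chip's public key. A toy Python sketch of that
structure (XOR stands in for the real ciphers, so "public" and "private" key
coincide here; all names are made up):

```python
import os

def xor(data, key):
    """Toy stand-in for a real cipher -- illustrates structure only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap_for_chip(program, chip_public_key):
    """Seal program under a fresh session key; wrap that key for one chip."""
    session_key = os.urandom(16)
    sealed_program = xor(program, session_key)
    wrapped_key = xor(session_key, chip_public_key)  # really RSA/ElGamal etc.
    return wrapped_key, sealed_program

def unwrap_on_chip(wrapped_key, sealed_program, chip_private_key):
    """What the secret core would do: recover the session key, then the code."""
    session_key = xor(wrapped_key, chip_private_key)
    return xor(sealed_program, session_key)

# With XOR the two keys are identical; a real asymmetric scheme would not be.
chip_key = os.urandom(16)
wrapped, sealed = wrap_for_chip(b"precious program code", chip_key)
assert unwrap_on_chip(wrapped, sealed, chip_key) == b"precious program code"
```

Re-targeting warez for a different chip then means re-wrapping only the
session key -- which is why the clear session key (or clear binary) is the
prize, not the wrapped message itself.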

>
>> and built so they can accept your precious program code wrapped in a
>> PGP-encrypted message that you have encrypted with its public key.
>
>Some of the tricks are subtle, like looking at the power draw.
>Eg, suppose the chip stops when it finds the key is invalid.  That
>time can be measured and gives clues as to how many steps it
>went through, and even what operations were done.  This can
>turn an exponential search of key space into a linear one.
That was then. Plus remember this would not be an isolated card chip that you
can probe; it's one or more specialized cores somewhere on a general-purpose
multi-CPU chip that you can't get at as a hobbyist, because opening it without
destroying what you want to look at requires non-hobby equipment, by design.
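Andrew's timing point boils down to an early-exit comparison: the number of
steps before the check fails leaks how long the matching prefix is, turning a
brute-force search into a byte-at-a-time one. A minimal Python sketch of the
leak and the usual fix (key and guesses made up; `steps` stands in for the
observable time or power trace):

```python
import hmac

def naive_check(secret, guess):
    """Early-exit comparison: steps taken leak the length of the matching prefix."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return secret == guess, steps

SECRET = b"s3cret-key"
_, fast = naive_check(SECRET, b"xxxxxxxxxx")  # wrong first byte: stops at once
_, slow = naive_check(SECRET, b"s3cretxxxx")  # 6-byte matching prefix: runs longer
# An attacker watching the timing recovers the key one byte at a time.
assert fast < slow

# The standard defence is a constant-time comparison:
assert hmac.compare_digest(SECRET, b"s3cret-key")
```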

>
>> This is not so magic. You could design a PC with a locked enclosure and
>> special BIOS to simulate this, except that that wouldn't be so hard to
>> break into. But the principle is there. Taking the idea to SOC silicon is
>> a matter of engineering, not an idea break-through (though someone will
>> probably try to patent on-chip stuff as if it were essentially different
>> and not obvious ;-/)
>
>But the counter principle (breaking into a locked box in an uncontrolled
>environment) is also there.  There are a lot of attacks against smart
>cards (eg, as used in pay TV systems), which cause improvements (new
>generation of cards), which are matched by counter attacks.
>
>These attacks don't require the resources of a No Such Agency,
>only dedicated hobbyists with experience and time on their hands.
>
Sounds like an article of faith ;-)

>                    Andrew
>                    dalke at dalkescientific.com
>P.S.
>  I did have fun breaking the license protection on a company's
>software.  Ended up changing one byte.  Took about 12 hours.
>Would have been less if I knew Solaris assembly.  And I did
>ask them for permission to do so.  :)

Changed a conditional jump to unconditional? Some schemes aren't so
static and centralized ...
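A one-byte crack like the one Andrew describes typically flips a conditional
jump's opcode. A toy Python sketch over made-up x86 bytes (layout and offsets
hypothetical; 0x74 is JE rel8, 0xEB is JMP rel8):

```python
# Hypothetical license check compiled to x86: JE skips the error path only
# when the check passed.
image = bytearray(
    b"\x3c\x01"                  # cmp al, 1   ; did the check pass?
    b"\x74\x05"                  # je  +5      ; yes -> skip the error path
    b"\xb8\x01\x00\x00\x00"      # mov eax, 1  ; error path (5 bytes)
)
offset = image.index(b"\x74")    # in real life: located with a disassembler
image[offset] = 0xEB             # JE -> JMP: the check now always "passes"
assert image[2] == 0xEB
```

Which is exactly why a scheme that checks once, in one place, statically, is
the easiest kind to defeat.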

I once ran into a scheme that IIRC involved a pre-execution snippet of code
that had to run full bore for _lots_ of cycles, doing things to locations and
values in the program-to-be-executed that depended on precise timing and
obscured info. I guess the idea was that if someone tried to trace or
single-step through it, it would generate wrong locations and info and also
stop short, so the attacker would have to set up to record memory addresses and
values off the wires to figure out what to change. But even then they would run
into code that did mysterious, randomly sprinkled milestone checks, so capturing
the core image after start wasn't a free lunch either. Plus it had to run in a
privileged CPU mode, and it wasn't a stand-alone app -- it was part of an OS
... that wasn't open source ... and you didn't have the tools to rebuild ...
This was just some code I stumbled on; I may have misunderstood, since I didn't
pursue it, being there for other reasons. But that was primitive compared to
what you could do with specialized chip design.
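The milestone-check idea can be sketched as code that periodically re-hashes
its own image and compares against a baked-in digest. A toy Python sketch (real
schemes hash the live text segment, scatter many differently-obfuscated checks,
and hide the expected values; everything here is made up):

```python
import hashlib

# Stand-in for the loaded program text an attacker might try to patch.
IMAGE = bytearray(b"pretend this is the loaded program text")
EXPECTED = hashlib.sha256(bytes(IMAGE)).digest()

def milestone_check():
    """One sprinkled check: does the running image still match what shipped?"""
    return hashlib.sha256(bytes(IMAGE)).digest() == EXPECTED

assert milestone_check()      # untouched image passes
IMAGE[0] ^= 0xFF              # an attacker patches a single byte...
assert not milestone_check()  # ...and a later milestone check catches it
```

The attacker's counter-move is to find and neuter every check, or to patch the
expected digests too -- which is the arms race the whole thread is about.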

Regards,
Bengt Richter



