rsa implementation question

Heiko Wundram heikowu at ceosg.de
Wed Aug 11 22:10:52 EDT 2004


On Thursday, 12 August 2004 02:36, Bryan Olson wrote:
> There is a notion of blocks in many public-key ciphers,
> including RSA.  The modulus n in RSA is composite, not prime.
> The "only the notion" statement implies that integer modular
> arithmetic is the only base for public-key cryptography, which
> is not true.

Pardon me, yeah, the modulus is composite; I should rather have said that most 
public-key ciphers work in Z_n. And I know that it's pretty easy to 
generalize ElGamal to work in rings other than Z, or take knapsack public-key 
cryptography, for example.

I know there are others, but: have you ever seen an implementation or use case 
of an algorithm that does not work in Z_n(*)? I have not, at least not when 
working with computers.

> Try the book you cited, section 11.2.3, Note 11.10, Example
> 11.11, and Remark 11.12.
>
>     http://www.cacr.math.uwaterloo.ca/hac/

Yup, I read those, and yes, I know what they are talking about, but that's 
not what I said. It's not about decrypting to sign and encrypting to verify; 
it's about the redundancy function. If you use a proper hash function (I mean 
keeping the identity as the redundancy function but hashing with SHA before 
signing), this attack plainly fails, because it would require finding hash 
collisions. Even if I can easily find an m~ for which I have a valid 
signature, finding a meaningful plaintext that hashes to m~ should be 
impossible, thanks to the hashing, for "long enough plaintext."
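
To make that concrete, here is a rough sketch (hypothetical Python, made-up 
names) of what an attacker holding a valid signature for some m~ is reduced 
to: searching for a plaintext that actually hashes to m~.

import hashlib

def find_plaintext_for(target_digest, tries=10**6):
    # The attacker already has a signature for target_digest (my m~ above);
    # to make use of it they still need *some* meaningful plaintext whose
    # SHA-1 digest equals target_digest -- essentially a second-preimage
    # search, which is what makes the attack fail in practice.
    for i in range(tries):
        candidate = ("candidate message %d" % i).encode()
        if hashlib.sha1(candidate).digest() == target_digest:
            return candidate
    return None  # expected outcome for any realistic number of tries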

And the example code I gave was not about leaving out hashing and putting the 
data through the identity function as "redundancy"; it was exactly about 
adding redundancy, which I probably explained badly, so I'll try again now.

> Don't do that, even for encryption.  See Bleichenbacher's
> attacks on RSA encryption:
>
>   http://www.bell-labs.com/user/bleichen/bib.html

Link doesn't work... But anyway, look at the same book on page 288, (ii):

"A pseudorandomly generated bitstring should be appended to the plaintext 
before encryption (this is also called salting)."

That's exactly what my little example was trying to do: append a salt to the 
data being encrypted. First, this gives an internal structure to the data 
being encrypted/decrypted (and thus makes it easy to discard invalid values): 
you just have to check that the two lowest bytes of the number coming out 
encode a length <= keysize//8. It also ensures that you don't lose 
information when, for example, you encrypt "what is this?" with a 768-bit 
key: when the number comes out of decryption, the string will have been 
"prepadded" with zeros, but since you know the actual length of the string 
(from the two lowest bytes), you can extract it properly. (I'll sketch this 
padding below, after the third point.)

Second, this also means that you may encrypt the same plaintext for the same 
recipient multiple times without an eavesdropper seeing that the plaintext is 
identical (because of the salt).

Thirdly, the attack stated on the given page (encrypting the same message to 
multiple recipients with the same encryption exponent) is also not 
successful, though that scenario doesn't arise in most common situations I 
need cryptography in anyway.

So, pre/postpadding with salt is certainly not the wrong way to go for 
encryption.
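
For illustration, here is a minimal sketch of the kind of salting/length-field 
padding I mean. The concrete layout (salt, then message, then a two-byte 
length field at the low end) and the names are just my own guess at one way 
to do it, not a standard scheme:

import os

def pad_with_salt(message, key_bytes):
    # Hypothetical layout: [random salt][message][2-byte length field].
    # The block is one byte shorter than the modulus, so the resulting
    # integer is always smaller than n.
    block_len = key_bytes - 1
    if len(message) > block_len - 3:
        raise ValueError("message too long for this key size")
    salt = os.urandom(block_len - len(message) - 2)
    return salt + message + len(message).to_bytes(2, "little")

def unpad_with_salt(block):
    # Recover the message; an out-of-range length field marks the block as
    # invalid (this is the "easy to discard invalid values" check).
    msg_len = int.from_bytes(block[-2:], "little")
    if msg_len > len(block) - 2:
        raise ValueError("invalid padding")
    return block[len(block) - 2 - msg_len : len(block) - 2]

Encrypting "what is this?" with a 768-bit key (keysize//8 == 96) would then be 
the usual textbook operation on the padded block, something like 
pow(int.from_bytes(pad_with_salt(b"what is this?", 96), "big"), e, n).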

Now I'd really like to hear from you why the same argument doesn't hold for 
signing. Because:

1) I generate a hash of the string "some string" with SHA.
2) I salt this hash with the same standard salting algorithm I described 
above, and "decrypt" it with my private key to get the signature.
3) I "encrypt" the signature with my public key to get back the salted hash, 
which can easily be unsalted and then checked for validity.
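
In code, the three steps would look roughly like this -- a sketch only, using 
textbook RSA via pow(), the pad_with_salt/unpad_with_salt helpers sketched 
above, and placeholder key parameters n, e, d, key_bytes:

import hashlib

def sign(message, n, d, key_bytes):
    # Step 1: hash the message with SHA-1 (160 bits).
    digest = hashlib.sha1(message).digest()
    # Step 2: salt the digest and "decrypt" it with the private exponent d.
    padded = int.from_bytes(pad_with_salt(digest, key_bytes), "big")
    return pow(padded, d, n)

def verify(message, signature, n, e, key_bytes):
    # Step 3: "encrypt" the signature with the public exponent e, strip the
    # salt, and compare against a fresh hash of the message.
    recovered = pow(signature, e, n)
    if recovered >> (8 * (key_bytes - 1)):
        return False  # too large to be a validly padded block
    block = recovered.to_bytes(key_bytes - 1, "big")
    try:
        digest = unpad_with_salt(block)
    except ValueError:
        return False
    return digest == hashlib.sha1(message).digest()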

Forging a valid signature would mean the following steps:

1) Find a pair of numbers (s, E(s)) for which the two lowest bytes of E(s) 
encode a length <= keysize//8. It's pretty easy to add further constraints 
here: for example, the string of randomized bits in E(s) has to have parity 
one to be accepted; that would make this step harder (though not all of the 
positions may be one, except when the bitstring is of length one, or 
something).
2) Now, from the pairs found in (1), find one whose value has the correct 
length (SHA digests are always 160 bits long).
3) Now, for this pair, find a message which corresponds to the hash value.

If you complete these three steps (well, you'd only have to find one pair in 
steps one and two, which should be hard enough), you'll have a forged 
signature. But not before. (A sketch of what steps (1) and (2) boil down to 
follows below.)
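
Concretely, under the padding layout I sketched earlier, steps (1) and (2) 
together amount to something like this (again a sketch, with placeholder key 
parameters N, e, key_bytes):

import os

def forgery_attempt(N, e, key_bytes, tries=10000):
    # Pick random candidate signatures s, "encrypt" them, and check whether
    # the result happens to satisfy the structural constraints of the
    # padding: small enough to be a padded block, and a length field that
    # says "160-bit SHA-1 digest".
    for _ in range(tries):
        s = int.from_bytes(os.urandom(key_bytes - 1), "big")
        candidate = pow(s, e, N)
        if candidate >> (8 * (key_bytes - 1)):
            continue  # too large to be a padded block
        block = candidate.to_bytes(key_bytes - 1, "big")
        if int.from_bytes(block[-2:], "little") == 20:
            return s, block  # steps (1)+(2) done; step (3) still remains
    return None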

I reckon that step three is computationally infeasible, and I also assume 
that finding a pair (s, E(s)) which has the properties I give it (stated 
above) and also turns out to be a valid hash value of some string I want it 
to be is computationally infeasible. But, as I stated above, even if step 1 
is dead easy (by using the identity function here), what is the use of easily 
finding a proper signature for a certain hash value if I have no plaintext 
for the hash value for which I have the signature? The problem is that I 
can't construct the signature from the hash value; rather, even when using 
identity redundancy, I have to have the signature first, before constructing 
the hash value.

At least that's what my professor taught me at university, and I agree with 
it.

> Then I'm guessing I won't see you at Crypto 04 next week ;)

No, you won't; I'm more or less a hobby cryptographer... And I guess I'm 
certainly not the smartest at this, but... I'd love to hear why you think the 
above argumentation is wrong, or be pointed to a paper which explains why 
salting a signature with an algorithm as simple as the one I proposed for 
salting a string before encryption won't make things "more secure" for 
signing as well...

Anyway, if you're up to some more explaining, feel free to contact me 
off-list... I'd be glad to learn... :-)

Heiko.


