Determine secure amount of private key modulo bias #71
Hi, |
Here are my ideas for the other points:
|
For 4 I don't see how |
Yes you are right it should be the order of the group. |
Regarding 1. A 2^-64 bias means that the statistical distance between the uniform distribution and the actual distribution (assuming a perfect underlying rng) is 2^-64 (see the appendix of https://eprint.iacr.org/2023/1254.pdf for a derivation of this bound). Here is a survey on the consequences of partial information leakage: https://eprint.iacr.org/2020/1506.pdf. That being said, I believe a bias of 2^-64 should be acceptable for all practical purposes!? (Even statistical security of 2^-40 can be considered practically secure.) |
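The statistical distance of a mod-reduced value can be computed exactly, which makes the "how many extra bits do I need" question concrete. If a B-bit value is reduced mod n, and 2^B = q·n + r, then residues 0..r-1 occur q+1 times and the rest occur q times, so (if I've done the counting right) the distance from uniform is r(n-r)/(n·2^B), at most n/2^(B+2). A small sketch with a toy modulus; the helper names here are mine:

```javascript
// Exact statistical distance between (B-bit value mod n) and uniform mod n.
// With 2^B = q*n + r: residues 0..r-1 occur q+1 times, the rest q times,
// giving SD = r*(n - r) / (n * 2^B).
function modBiasStatDistance(bitsB, n) {
  const total = 1n << BigInt(bitsB);
  const r = total % n;
  // Keep the result as an exact BigInt fraction to avoid float overflow.
  return { num: r * (n - r), den: n * total };
}

// Rough log2 of num/den via bit lengths (accurate to ~1 bit).
function approxLog2(num, den) {
  if (num === 0n) return -Infinity;
  return num.toString(2).length - den.toString(2).length;
}

const n = 1000003n; // toy "group order", roughly 2^20
for (const B of [20, 40, 84]) {
  const { num, den } = modBiasStatDistance(B, n);
  console.log(`B=${B}: SD ~ 2^${approxLog2(num, den)}`);
}
```

For the ~2^20 toy modulus, 64 surplus bits (B = 84) push the distance down near 2^-64, mirroring the 32+8-byte rule for a 256-bit order.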
Thanks Matthias, that's very helpful. |
Is it solved by 1545230ee59b6e360f0e660cc2eb567d46ff3805?
We've decreased bias, yes, but the security story still needs to be understood. |
Bias, in the context of cryptographic key generation, refers to the deviation from a perfectly uniform distribution when generating random numbers, which can have security implications. I'm sure you already have most of the explanation, but here's mine, hoping it'll help.

What does bias actually mean? Is there any realistic attack? How many keys would an attacker need to walk through before gaining a meaningful advantage?

Bias is the statistical deviation from a perfectly uniform distribution, and in key generation it can be problematic because it may introduce vulnerabilities: a bias of 2^-64 might seem small, but it could in principle be exploited by an attacker who knows about it. To see the scale involved, consider an attacker who can observe multiple key generation attempts. With a bias of 2^-64, the attacker would need to analyze around 2^64 keys before gaining a meaningful advantage - a massive number, generally considered secure for most practical purposes. That said, security depends on multiple factors, including the cryptographic primitives used, the specific context of the application, and the nature of the attack; the papers cited above (https://eprint.iacr.org/2023/1254.pdf for the bias bound, https://eprint.iacr.org/2020/1506.pdf for the consequences of partial information leakage) cover this in detail. For practical security, even a bias of 2^-40 is often considered acceptable, though the level required depends on the specific use case. We might also ensure entropy quality by collecting sufficient bits (B) from a high-entropy source; NIST SP 800-90B gives recommendations for estimating the entropy. If the collected entropy is H bits, the min-entropy is something like

How was the formula from the hash-to-curve draft calculated? Why does it consider 384 bits from CSPRNG to match 128-bit security for a 256-bit curve?
The formula in the hash-to-curve draft calculates the number of bytes needed from the CSPRNG to achieve a specific level of security. It takes into account the size of the prime p of the elliptic curve's field and the desired security level k (in bits). The formula for L is: L = ceil((ceil(log2(p)) + k) / 8).
For a 256-bit curve with 128-bit security (k = 128), the formula becomes: L = ceil((256 + 128) / 8) = 48 bytes, i.e. 384 bits.
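The arithmetic of the hash_to_field length rule from the hash-to-curve draft is simple enough to check in a couple of lines (function name here is mine, not the draft's):

```javascript
// L = ceil((ceil(log2(p)) + k) / 8): number of CSPRNG/XOF bytes so that
// reducing mod p leaves at most ~2^-k statistical bias.
function hashToFieldLen(pBits, k) {
  return Math.ceil((pBits + k) / 8);
}

console.log(hashToFieldLen(256, 128)); // → 48 (bytes), i.e. 384 bits
```

Plugging in a 381-bit prime (e.g. the BLS12-381 base field) with k = 128 gives 64 bytes, which matches the oversized scalars commonly seen there.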
This calculation ensures that the randomness obtained from the CSPRNG is sufficient to provide a 128-bit security level for the 256-bit curve.

What other standards are out there with regards to bias?

There are various standards and recommendations related to bias in cryptographic applications. NIST's guidelines for cryptographic algorithms (e.g., FIPS 140-2) provide recommendations for generating random numbers with minimal bias. Additionally, standards such as RFC 6979 offer a deterministic approach for generating cryptographic nonces, reducing the risk of bias. You can also refer to academic papers and recommendations from organizations like the IETF for more specific guidance.

DJB used bias for eddsa nonce: r = mod(sha512(prefix, msg), ed25519.l). Why not?

Bernstein used this construction in EdDSA (Edwards-curve Digital Signature Algorithm) to address concerns about the security of nonces: in EdDSA it's important to avoid reusing nonces to prevent certain attacks, and the modulo operation used in generating 'r' ensures that nonces stay within a specific range and avoids any bias that could be introduced by raw CSPRNG output.

How should a generic rule look like? We can't use Fp.BYTES + Math.ceil(Fp.BYTES/2) because it's field - not curve group.

A generic rule for obtaining random values for cryptographic operations, such as private key generation, should consider the following:
As demonstrated in the hash-to-curve draft, we could use a formula that calculates the number of bytes (or bits) needed from the CSPRNG based on the curve's size and the desired security level. It should, theoretically, ensure that the generated random values meet the required level of security and avoid bias. Matthias and Silvain provided some valuable insights, and you can refer to the provided papers for further details. To avoid bias inside the randomness extraction function (e.g., HMAC-DRBG), I'd suggest analyzing its properties: we need to ensure that the function's output is statistically uniform, ideally checked with statistical test suites like NIST SP 800-22. |
But there IS bias in eddsa. It's |
Oops! You're right. But I don't think it's always the case, no? I might be wrong but...
Where,
To my understanding, the 'mod' operation introduces this bias, and in practice the 2^(-259) bias in EdDSA is generally considered negligible and doesn't pose significant security risks. Again, I might be wrong and I'd be happy to be corrected.

SHA-512 has a fixed output length of 512 bits (64 bytes): regardless of the input size, the hash function always produces a 512-bit output. The nonce 'r' is calculated modulo 'ed25519.l' to ensure that it falls within a specific range. Because the SHA-512 output contains more bits than needed, taking the modulo of a large number ('sha512(prefix, msg)') by a smaller one ('ed25519.l') means some values in the output range have a slightly higher probability of occurrence than others. This introduces a bias, albeit a very small one, into the nonce generation process. It can be quantified as 2^(-259): some nonces are slightly more likely to be selected than others, which could theoretically be exploited by an attacker.

As always, just giving info - I forgot a lot about EdDSA, I need to restretch my brain 😅 |
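The EdDSA figure can be bounded the same way as any mod reduction: with r = 2^512 mod l, the statistical distance is r(l-r)/(l·2^512), whose maximum works out to roughly 2^-259, in line with the figure quoted above. A standalone check with BigInts (not based on any library's internals):

```javascript
// Ed25519 group order: l = 2^252 + 27742317777372353535851937790883648493.
const L = (1n << 252n) + 27742317777372353535851937790883648493n;

// Statistical distance of (512-bit hash output mod l) from uniform mod l:
// with 2^512 = q*l + r, SD = r*(l - r) / (l * 2^512).
const total = 1n << 512n;
const r = total % L;
const num = r * (L - r);
const den = L * total;

// Rough log2 of the fraction via bit lengths.
const sdLog2 = num.toString(2).length - den.toString(2).length;
console.log(`SD ~ 2^${sdLog2}`);
```

Since num < l^2 < 2^506 and den > 2^764, the distance is guaranteed below ~2^-258 no matter what r turns out to be.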
Understanding Bias in Cryptographic Keys
What is Bias in Cryptography?
Bias in Different Key Generation Methods
Analyzing the Bias
Generic Rule for Key Generation
Conclusion
Code Implementation (Conceptual)

```javascript
function generateSecureKey(curveProperties, securityLevel) {
  // Extra `securityLevel` bits beyond the order's size keep the mod bias ~2^-securityLevel.
  const requiredBits = Math.ceil(Math.log2(curveProperties.order)) + securityLevel;
  const requiredBytes = Math.ceil(requiredBits / 8);
  const randomBytes = getRandomBytes(requiredBytes); // CSPRNG
  const key = reduceMod(randomBytes, curveProperties.order); // big-integer reduction mod group order
  return key;
}
```

Note: This is a high-level conceptual representation and needs to be adapted to specific curve properties and cryptographic libraries. |
Your replies don't answer the questions. I highlight the important parts:
To summarize:
|
I understand your questions better now.
1. Bias in Cryptography
2. Formula from Hash-to-Curve Draft
3. Other Standards for Bias Management
4. Biased Nonce in EdDSA
5. Generic Rounding Formula for Various Curves
Conclusion
|
Did you copy this from chatgpt? |
For a curve like secp256k1, which has a 32-byte (256-bit) $G$, we take 32+8 bytes (256+64 bits) from the CSPRNG and mod-divide it by the curve order $n$. This follows FIPS guidelines and has $2^{-64}$ bias:
noble-curves/src/abstract/weierstrass.ts
Lines 847 to 855 in 8c48abe
I think it's fine to adjust the amount of bytes. Several questions still arise:
r = mod(sha512(prefix, msg), ed25519.l). Why not?
Fp.BYTES + Math.ceil(Fp.BYTES/2), because it's field - not curve group.