We’re told that it will take powerful quantum computers to break RSA encryption, so for now the world is safe. But I wondered, in an era of increasingly sophisticated models, might AI pose a threat? These systems excel at finding patterns in data that humans miss, and if there were any subtle weaknesses in key generation, I would think AI could detect them.
Yes, theory says it’s all but impossible to break RSA, because it relies on the computational hardness of factoring the product of two large primes. But theory and practice don’t always align, and sometimes the most interesting discoveries come from testing our assumptions. So I set up an experiment to test whether a transformer model could learn to reverse-engineer SSH private keys from their corresponding public keys.
Experiment and Results
I trained a T5-small transformer model (60 million parameters) on a dataset of 50,000 RSA-2048 SSH key pairs, split into 70% training, 15% validation, and 15% test. Given a public key as input, the model was asked to output the corresponding private key.
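Generating a dataset like this takes only a few lines of Python. Here’s a minimal sketch using the cryptography library; the JSONL output format and the helper name are illustrative, not necessarily what my actual script does:

```python
# Sketch: generate RSA-2048 SSH key pairs for the dataset.
# The JSONL file layout here is an assumption for illustration.
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def generate_pair() -> dict:
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    private_pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.OpenSSH,
        encryption_algorithm=serialization.NoEncryption(),
    )
    public_ssh = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.OpenSSH,
        format=serialization.PublicFormat.OpenSSH,
    )
    return {"public": public_ssh.decode(), "private": private_pem.decode()}

with open("keypairs.jsonl", "w") as f:
    for _ in range(50_000):
        f.write(json.dumps(generate_pair()) + "\n")
```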
For hardware reasons, I decided to work with a smaller model (I could provision more resources and try a larger model if the results indicated something interesting). As it was, the experiment ran “comfortably” on my Mac M1, with 1 epoch taking about 15 hours.
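For context, here is roughly what such a fine-tuning loop looks like with Hugging Face Transformers and PyTorch. The hyperparameters, sequence length, file name, and MPS device selection are assumptions rather than my exact settings:

```python
# Sketch: fine-tune T5-small to map public keys to private keys.
# Batch size, learning rate, and max_len are illustrative assumptions.
import json
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import T5ForConditionalGeneration, T5TokenizerFast

class KeyPairDataset(Dataset):
    def __init__(self, path, tokenizer, max_len=512):
        self.pairs = [json.loads(line) for line in open(path)]
        self.tok, self.max_len = tokenizer, max_len

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, i):
        pair = self.pairs[i]
        x = self.tok(pair["public"], truncation=True, max_length=self.max_len,
                     padding="max_length", return_tensors="pt")
        y = self.tok(pair["private"], truncation=True, max_length=self.max_len,
                     padding="max_length", return_tensors="pt")
        labels = y.input_ids.squeeze(0)
        labels[labels == self.tok.pad_token_id] = -100  # ignore padding in the loss
        return {"input_ids": x.input_ids.squeeze(0),
                "attention_mask": x.attention_mask.squeeze(0),
                "labels": labels}

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
device = "mps" if torch.backends.mps.is_available() else "cpu"  # Apple Silicon
model.to(device)

# "train.jsonl" stands in for the 70% training split of the key pairs.
loader = DataLoader(KeyPairDataset("train.jsonl", tokenizer),
                    batch_size=8, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

model.train()
for batch in loader:
    batch = {k: v.to(device) for k, v in batch.items()}
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```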
In the first 25% of training, the model learned quickly: loss dropped from 7.66 to 4.58. I suspected it was picking up superficial structure, such as base64 encoding patterns and the standard SSH private key file header and footer. But the loss soon plateaued, stabilizing around 4.54–4.56, with only tiny improvements for the remainder of training. The validation loss showed similar behavior, decreasing from 4.52 to 4.49 over the final 50% of training. This suggests the model had hit a barrier.
For RSA-2048, the probability of randomly guessing a private key is approximately 2⁻²⁰⁴⁸, which is essentially zero. So this sort of stagnation is exactly what we’d hope to see. The model had reached the point where it had exhausted all learnable patterns except the actual mathematical relationship.
When I tested the trained model on unseen public keys, the results were reassuring. The model generated outputs that were structurally correct but cryptographically invalid. The first ~80 characters matched the true keys, but that is just the standard OpenSSH private key header. Where the actual cryptographic material begins, the outputs diverged.
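To make that check concrete, here is a sketch of the kind of probe involved: generate a candidate key for an unseen public key, measure the matching prefix, and see whether the output even parses as an OpenSSH private key. It reuses the model, tokenizer, and device from the training sketch above, and test_public_key and true_private_key are placeholders:

```python
# Sketch: probe the trained model on an unseen public key and check
# structural vs. cryptographic validity of its output.
from cryptography.hazmat.primitives.serialization import load_ssh_private_key

model.eval()
inputs = tokenizer(test_public_key, return_tensors="pt").to(device)
output_ids = model.generate(**inputs, max_new_tokens=512)
generated = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Structural check: how long a prefix matches the true private key?
matching = next((i for i, (a, b) in enumerate(zip(generated, true_private_key))
                 if a != b), min(len(generated), len(true_private_key)))
print(f"Identical prefix: {matching} characters")  # ~80 in my runs: just the header

# Cryptographic check: does the output parse as an OpenSSH key at all?
try:
    load_ssh_private_key(generated.encode(), password=None)
    print("Parsed as a valid private key")  # never happened
except Exception as exc:
    print(f"Invalid key material: {exc}")
```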
Conclusion
I’d like to think this experiment provides empirical validation of RSA’s security against pattern-based attacks. A 60-million-parameter transformer, trained on 50,000 unique examples, could not find any exploitable patterns in SSH key generation. And the plateau suggests that further attempts with a larger model or more training would remain unsuccessful. The model did learn the formatting, but failed at the cryptographic content.
So much for any interesting discoveries. The world is indeed safe … at least for now.
Code for this experiment is available on GitHub.