19 July 2012
EVEN the most sophisticated electronic security can be defeated by forcing someone to reveal a password. But what if sensitive information could be stored in your brain in such a way that you couldn’t consciously disclose it, no matter how hard you tried?
That is the promise of a new technique that combines cryptography with neuroscience. In initial tests, volunteers learned a password and later used it to pass an authentication test, yet could not identify the password when asked to do so.
The system is based on implicit learning, a process by which people can unconsciously learn a pattern. Hristo Bojinov at Stanford University in California and his colleagues designed a game in which players intercept falling objects by pressing a key. The objects appear in one of six positions, each corresponding to a different key.
Unbeknownst to the players, the positions of the objects were not always random. Hidden within the game was a sequence of 30 successive positions that was repeated over 100 times during the 30 to 45 minutes of game play. Players made fewer errors when they encountered this sequence on successive rounds, and this learning persisted when the players were tested two weeks later.
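The structure of that trial stream can be sketched in code. The article does not say how the hidden sequence and the random trials are interleaved, so the `noise_trials` padding, the key-to-position mapping, and the function names below are illustrative assumptions, not the study's actual design:

```python
import random

KEYS = "sdfjkl"   # six keys, one per on-screen position (assumed mapping)
SEQ_LEN = 30      # the hidden sequence is 30 positions long
REPEATS = 100     # the sequence is repeated over 100 times during play

def make_secret_sequence(rng):
    """Generate a 30-item sequence over six positions, avoiding
    immediate repeats so every trial demands a fresh key press
    (the no-repeat rule is an assumption for this sketch)."""
    seq, prev = [], None
    for _ in range(SEQ_LEN):
        pos = rng.choice([k for k in KEYS if k != prev])
        seq.append(pos)
        prev = pos
    return seq

def training_stream(secret, rng, noise_trials=10):
    """Yield the full stream of falling-object positions: each pass
    through the secret sequence is followed by a burst of random
    'noise' trials (this padding scheme is an assumption)."""
    for _ in range(REPEATS):
        for pos in secret:
            yield pos
        for _ in range(noise_trials):
            yield rng.choice(KEYS)

rng = random.Random(42)
secret = make_secret_sequence(rng)
stream = list(training_stream(secret, rng))
```

Because the secret positions are buried among random ones, a player experiences the whole stream as noise even while their error rate on the hidden sequence quietly drops.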
The results suggest that the game could form the basis of a security system. Users would learn a sequence unique to them in an initial session and later prove that they know it by playing the same game. Crucially, previous studies have shown that people cannot recite sequences that are learned in this way.
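In outline, authentication would compare a player's accuracy on their own trained sequence with their accuracy on control sequences they never learned. The decision rule below, including the 0.05 margin, is a hypothetical illustration rather than the researchers' actual test:

```python
def authenticates(trained_accuracy, control_accuracy, margin=0.05):
    """Accept the user only if they perform reliably better on the
    trained sequence than on unfamiliar control sequences.
    The 0.05 margin is an assumed, illustrative threshold."""
    return trained_accuracy - control_accuracy > margin

# A legitimate user shows the implicit-learning advantage...
assert authenticates(0.81, 0.70)
# ...while an impostor performs the same on both kinds of sequence.
assert not authenticates(0.71, 0.70)
```

The key property is that the check measures a performance gap, not recall: there is nothing the user could recite under coercion.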
This phenomenon occurs in everyday life: consider, for example, how people can use a new word correctly in a sentence without being consciously aware of the grammatical rules they are applying.
A person could conceivably try to discover the sequence by forcing the password holder to play a similar game and watching to see when they make fewer errors. But because the sequence consists of 30 key presses in six different positions, the chances of piecing together the sequence are slim. The creators estimate that testing 100 users non-stop for a year would result in less than a 1 in 60,000 chance of extracting the sequence.
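The scale of the attacker's problem follows from simple counting: six possible positions at each of 30 steps gives 6^30 candidate sequences, treating each step as independent of the last:

```python
# Six positions per step, 30 steps in the hidden sequence:
# the raw space of candidate sequences an attacker must narrow down.
candidates = 6 ** 30
print(candidates)  # 221073919720733357899776, about 2.2 * 10**23
```

Observing error rates leaks only a little information per game, which is why even a year of continuous probing yields such long odds of recovery.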
The system needs to be more user-friendly before it can be deployed commercially. And like other security systems, it could be broken by hacking into the system used to authenticate users. For these reasons, Bojinov says it is more likely to find applications in high-risk scenarios where the code-holder needs to be physically present, such as gaining access to a nuclear or military facility.
The system has potential advantages over biometric methods, which rely on recognising a unique trait such as an iris pattern. “Authentication doesn’t require explicit effort on the part of the user,” says Ari Juels, director of RSA Laboratories in Cambridge, Massachusetts. “If the time required for training and authentication can be reduced, then some of the benefits of biometrics, namely effortlessness and minimal risk of loss, can be coupled with a feature that biometrics lack: the ability to replace a biometric that has been compromised.”
Bojinov will present his work on 8 August at the USENIX Security Symposium in Bellevue, Washington.