Breaking Audio CAPTCHAs

Part of Advances in Neural Information Processing Systems 21 (NIPS 2008)


Authors

Jennifer Tam, Jiri Simsa, Sean Hyde, Luis von Ahn

Abstract

CAPTCHAs are computer-generated tests that humans can pass but current computer systems cannot. CAPTCHAs provide a method for automatically distinguishing a human from a computer program, and therefore can protect Web services from abuse by so-called "bots." Most CAPTCHAs consist of distorted images, usually text, for which a user must provide some description. Unfortunately, visual CAPTCHAs limit access to the millions of visually impaired people using the Web. Audio CAPTCHAs were created to solve this accessibility issue; however, the security of audio CAPTCHAs was never formally tested. Some visual CAPTCHAs have been broken using machine learning techniques, and we propose using similar ideas to test the security of audio CAPTCHAs. Audio CAPTCHAs are generally composed of a set of words to be identified, layered on top of noise. We analyzed the security of current audio CAPTCHAs from popular Web sites by using AdaBoost, SVM, and k-NN, and achieved correct solutions for test samples with accuracy up to 71%. Such accuracy is enough to consider these CAPTCHAs broken. Training several different machine learning algorithms on different types of audio CAPTCHAs allowed us to analyze the strengths and weaknesses of the algorithms so that we could suggest a design for a more robust audio CAPTCHA.

1 Introduction

CAPTCHAs [1] are automated tests designed to tell computers and humans apart by presenting users with a problem that humans can solve but current computer programs cannot. Because CAPTCHAs can distinguish between humans and computers with high probability, they are used for many different security applications: they prevent bots from voting continuously in online polls, automatically registering for millions of spam email accounts, automatically purchasing tickets to buy out an event, etc. Once a CAPTCHA is broken (i.e., computer programs can successfully pass the test), bots can impersonate humans and gain access to services that they should not. Therefore, it is important for CAPTCHAs to be secure.

To pass the typical visual CAPTCHA, a user must correctly type the characters displayed in an image of distorted text. Many visual CAPTCHAs have been broken with machine learning techniques [2]-[3], though some remain secure against such attacks. Because visually impaired users who surf the Web using screen-reading programs cannot see this type of CAPTCHA, audio CAPTCHAs were created. Typical audio CAPTCHAs consist of one or several speakers saying letters or digits at randomly spaced intervals. A user must correctly identify the digits or characters spoken in the audio file to pass the CAPTCHA. To make this test difficult for current computer systems, specifically automatic speech recognition (ASR) programs, background noise is injected into the audio files.

Since no official evaluation of existing audio CAPTCHAs has been reported, we tested the security of audio CAPTCHAs used by many popular Web sites by running machine learning experiments designed to break them. In the next section, we provide an overview of the literature related to our project. Section 3 describes our methods for creating training data, and section 4 describes how we create classifiers that can recognize letters, digits, and noise. In section 5, we discuss how we evaluated our methods on widely used audio CAPTCHAs and we give our results. In particular, we show that the audio CAPTCHAs used by sites such as Google and Digg are susceptible to machine learning attacks. Section 6 mentions the proposed design of a new, more secure audio CAPTCHA based on our findings.

2 Literature review

To break the audio CAPTCHAs, we derive features from the CAPTCHA audio and use several machine learning techniques to perform ASR on segments of the CAPTCHA. There are many popular techniques for extracting features from speech. The three techniques we use are mel-frequency cepstral coefficients (MFCC), perceptual linear prediction (PLP), and relative spectral transform-PLP (RASTA-PLP). MFCC is one of the most popular speech feature representations in use. Similar to a fast Fourier transform (FFT), MFCC transforms an audio file into frequency bands, but (unlike FFT) MFCC uses mel-frequency bands, which better approximate the range of frequencies humans hear. PLP was designed to extract speaker-independent features from speech [4]. Therefore, by using PLP and a variant such as RASTA-PLP, we were able to train our classifiers to recognize letters and digits independently of who spoke them. Since many different people recorded the digits used in one of the types of audio CAPTCHAs we tested, PLP and RASTA-PLP were needed to extract the features that were most useful for solving them. In [4]-[5], the authors conducted experiments on recognizing isolated digits in the presence of noise using both PLP and RASTA-PLP. However, the noise used consisted of telephone or microphone static caused by recording in different locations. The audio CAPTCHAs we use contain this type of noise, as well as added vocal noise and/or music, which is supposed to make the automated recognition process much harder.

The authors of [3] emphasize how many visual CAPTCHAs can be broken by successfully splitting the task into two smaller tasks: segmentation and recognition. We follow a similar approach in that we first automatically split the audio into segments, and then we classify these segments as noise or words.

In early March 2008, concurrent to our work, the blog of Wintercore Labs [6] claimed to have successfully broken the Google audio CAPTCHA. After reading their Web article and viewing the video of how they solve the CAPTCHAs, we are unconvinced that the process is entirely automatic, and it is unclear what their exact pass rate is. Because we are unable to find any formal technical analysis of this program, we can be sure of neither its accuracy nor the extent of its automation.

3 Creation of training data

Since automated programs can attempt to pass a CAPTCHA repeatedly, a CAPTCHA is essentially broken when a program can pass it even a non-trivial fraction of the time; e.g., a 5% pass rate is enough. Our approach to breaking the audio CAPTCHAs began by first splitting the audio files into segments of noise or words; for our experiments, the words were spoken letters or digits. We used manual transcriptions of the audio CAPTCHAs to get information regarding the location of each spoken word within the audio file. We were able to label our segments accurately by using this information.

We gathered 1,000 audio CAPTCHAs from each of the following Web sites: google.com, digg.com, and an older version of the audio CAPTCHA in recaptcha.net. Each of the CAPTCHAs was annotated with the letter/digit location information provided by the manual transcriptions. For each type of CAPTCHA, we randomly selected 900 samples for training and used the remaining 100 for testing. Using the digit/letter location information provided in the manual CAPTCHA transcriptions, each training CAPTCHA is divided into segments of noise, the letters a-z, or the digits 0-9, and labeled as such. We ignore the annotation information of the CAPTCHAs we use for testing, and therefore we cannot identify the size of those segments. Instead, each test CAPTCHA is divided into a number of fixed-size segments, and the segments with the highest energy peaks are then classified using machine learning techniques (Figure 1). Since the size of a feature vector extracted from a segment generally depends on the size of the segment, using fixed-size segments allows each segment to be described with a feature vector of the same length. We chose the window size by listening to a few training segments and adjusting it so that a segment contained an entire digit or letter. There are undoubtedly better ways of selecting the window size; however, we were still able to break the three CAPTCHAs we tested with our method.
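As a rough illustration of this test-time segmentation, the sketch below splits a decoded waveform into non-overlapping fixed-size windows and keeps the highest-energy ones. The function name, the 0.5-second window, and the use of non-overlapping windows are our assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def top_energy_segments(signal, sr, window_sec=0.5, n_segments=12):
    """Split a test CAPTCHA into fixed-size windows and return the
    n_segments windows with the highest energy, in temporal order.
    `signal` is a mono float waveform (NumPy array) sampled at `sr` Hz."""
    win = int(window_sec * sr)
    n_windows = len(signal) // win          # drop the short tail, if any
    windows = signal[:n_windows * win].reshape(n_windows, win)
    energy = (windows ** 2).sum(axis=1)     # energy of each window
    top = np.sort(np.argsort(energy)[-n_segments:])
    return [windows[i] for i in top]
```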

Figure 1: A test audio CAPTCHA with the fixed-size segments containing the highest energy peaks highlighted.

The information provided in the manual transcriptions of the audio CAPTCHAs contains a list of the time intervals within which words are spoken. However, these intervals are of variable size, and a word might be spoken anywhere within its interval. To provide fixed-size segments for training, we developed the following heuristic. First, divide each file into variable-size segments using the time intervals provided and label each segment accordingly. Then, within each segment, detect the highest energy peak and return its fixed-size neighborhood labeled with the current segment's label. This heuristic achieved nearly perfect labeling accuracy for the training set. Rare mistakes occurred when the highest energy peak of a digit or letter segment corresponded to noise rather than to the digit or letter.

To summarize, an audio file is transformed into a set of fixed-size segments labeled as noise, a digit between 0 and 9, or a letter between a and z. These segments are then used for training. Classifiers are trained for one type of CAPTCHA at a time.
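A minimal sketch of this labeling heuristic, assuming the transcriptions have been parsed into `(start_sec, end_sec, label)` tuples; the names and the zero-padding at file boundaries are illustrative assumptions rather than details stated in the paper.

```python
import numpy as np

def labeled_training_segments(signal, sr, intervals, window_sec=0.5):
    """For each transcribed interval, locate the highest-energy sample and
    return its fixed-size neighborhood together with the interval's label."""
    half = int(window_sec * sr) // 2
    segments = []
    for start_sec, end_sec, label in intervals:
        lo, hi = int(start_sec * sr), int(end_sec * sr)
        peak = lo + int(np.argmax(signal[lo:hi] ** 2))   # highest energy peak
        left, right = max(0, peak - half), min(len(signal), peak + half)
        window = signal[left:right]
        # Zero-pad at the file edges so every segment has exactly 2 * half samples.
        window = np.pad(window, (half - (peak - left), half - (right - peak)))
        segments.append((window, label))
    return segments
```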

4 Classifier construction

From the training data we extracted five sets of features using twelve MFCCs and twelfth-order spectral (SPEC) and cepstral (CEPS) coefficients from PLP and RASTA-PLP. The Matlab functions for extracting these features were provided online at [7] and as part of the Voicebox package. We use AdaBoost, SVM, and k-NN algorithms to implement automated digit and letter recognition. We detail our implementation of each algorithm in the following subsections.
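As a sketch of how a fixed-size segment becomes a fixed-length feature vector, the snippet below computes twelve MFCCs per frame with the `librosa` package and flattens them. The paper used Matlab code from [7] and the Voicebox package (and PLP/RASTA-PLP would be computed analogously with such tools); the Python version and the flattening step are our assumptions for illustration.

```python
import librosa

def mfcc_feature_vector(segment, sr, n_mfcc=12):
    """Twelve MFCCs per frame, flattened into one fixed-length vector.
    Because every segment contains the same number of samples, the frame
    count (and hence the vector length) is identical across segments."""
    mfcc = librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=n_mfcc)
    return mfcc.flatten()
```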

4.1 AdaBoost

Using decision stumps as weak classifiers for AdaBoost, anywhere from 11 to 37 ensemble classifiers are built; the number depends on which type of CAPTCHA we are solving. Each classifier trains on all the segments associated with that type of CAPTCHA, and for the purpose of building a single classifier, segments are labeled either -1 (negative example) or +1 (positive example). Using cross-validation, we choose to run AdaBoost for 50 iterations. A segment can then be classified as a particular letter, digit, or noise according to the ensemble classifier that outputs the number closest to 1.
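A scikit-learn sketch of this one-vs-rest scheme (decision stumps, 50 boosting rounds); it approximates the authors' setup, and taking the ensemble with the largest decision score stands in for "the output closest to 1."

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def train_one_vs_rest_adaboost(X, y, n_rounds=50):
    """One binary AdaBoost ensemble per class: segments of that class are
    labeled +1 and all other segments -1."""
    ensembles = {}
    for label in np.unique(y):
        target = np.where(y == label, 1, -1)
        clf = AdaBoostClassifier(
            # decision stump weak learner (`base_estimator` in older scikit-learn)
            estimator=DecisionTreeClassifier(max_depth=1),
            n_estimators=n_rounds,
        )
        ensembles[label] = clf.fit(X, target)
    return ensembles

def classify_segment(ensembles, x):
    """Return the label whose ensemble is most confident about the +1 class."""
    scores = {label: clf.decision_function(x.reshape(1, -1))[0]
              for label, clf in ensembles.items()}
    return max(scores, key=scores.get)
```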

4.2 Support vector machine

To conduct digit recognition with SVM, we used the C++ implementation of libSVM [8], version 2.85, with C-SVC and an RBF kernel. First, all feature values are scaled to the range -1 to 1, as suggested by [8]. The scale parameters are stored so that test samples can be scaled accordingly. Then, a single multiclass classifier is created for each set of features using all the segments for a particular type of CAPTCHA. We use cross-validation and grid search to discover the optimal slack penalty (C = 32) and kernel parameter (γ = 0.011).
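A scikit-learn equivalent of this libSVM configuration (the original used the C++ libSVM tools directly); the helper name is ours.

```python
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def train_svm(X_train, y_train):
    """Scale features to [-1, 1], keep the scaler for later use on test
    samples, and fit one multiclass RBF-kernel SVM with C=32, gamma=0.011."""
    scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_train)
    clf = SVC(kernel="rbf", C=32, gamma=0.011)
    clf.fit(scaler.transform(X_train), y_train)
    return scaler, clf

# At test time, apply the stored scaling before predicting:
#   labels = clf.predict(scaler.transform(X_test))
```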

4.3 k-nearest neighbor (k-NN)

We use k-NN as our final method for classifying digits. For each type of CAPTCHA, five different classifiers are created by using all of the training data and the five sets of features associated with that particular type of CAPTCHA. Again we use cross-validation to discover the optimal parameter, in this case k = 1. We use Euclidean distance as our distance metric.
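The corresponding scikit-learn sketch is a single call; one such classifier would be fit per feature set.

```python
from sklearn.neighbors import KNeighborsClassifier

def train_knn(X_train, y_train):
    """1-nearest-neighbor classifier with Euclidean distance."""
    return KNeighborsClassifier(n_neighbors=1, metric="euclidean").fit(X_train, y_train)
```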

5 Assessment of current audio CAPTCHAs

Our method for solving CAPTCHAs iteratively extracts an audio segment from a CAPTCHA, inputs the segment to one of our digit or letter recognizers, and outputs the label for that segment. We continue this process until the maximum solution size is reached or there are no unlabeled segments left. Some of the CAPTCHAs we evaluated have solutions that vary in length; our method produces solutions of varying length that are never longer than the maximum solution length. A segment to be classified is identified by taking the neighborhood of the highest energy peak of an as yet unlabeled part of the CAPTCHA. Once a prediction of the solution to the CAPTCHA is computed, it is compared to the true solution. Given that at least one of the audio CAPTCHAs allows users to make a mistake in one of the digits (e.g., reCAPTCHA), we compute the pass rate for each of the different types of CAPTCHAs under all of the following conditions:

- The prediction matches the true solution exactly.
- Inserting one digit into the prediction would make it match the solution exactly.
- Replacing one digit in the prediction would make it match the solution exactly.
- Removing one digit from the prediction would make it match the solution exactly.
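Taken together, these four conditions amount to accepting any prediction within edit distance one of the true solution. A minimal sketch of that check follows; the example strings in the trailing comments are made up for illustration.

```python
def passes_one_mistake(prediction, solution):
    """True if the prediction equals the solution or differs from it by a
    single insertion, substitution, or deletion (edit distance <= 1)."""
    m, n = len(prediction), len(solution)
    if abs(m - n) > 1:
        return False
    prev = list(range(n + 1))                 # standard Levenshtein DP
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if prediction[i - 1] == solution[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n] <= 1

# passes_one_mistake("2573", "25173")  -> True  (one missing digit)
# passes_one_mistake("21573", "25173") -> False (two digits differ)
```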

However, since we are only sure that these conditions apply to reCAPTCHA audio CAPTCHAs, we also calculate the percentage of exact solution matches in our results for each type of audio CAPTCHA. These results are described in the following subsections.

Google audio CAP T C H A s consist of one speaker saying random digits 0-9, the phrase "once again," followed by the exact same recorded sequence of digits originally presented.

The background noise consists of human voices speaking backwards at varying volumes. A solution can range in length from five to eight words. We set our classifier to find the 12 loudest segments and classify these segments as digits or noise. Because the phrase "once again" marks the halfway point of the CAPTCHA, we preprocessed the audio to only serve this half of the CAP T C H A to our classifiers. It is important to note, however, that the classifiers were always able to identify the segment containing "once again," and these segments were identified before all other segments. Therefore, if necessary, we could have had our system cut the file in half after first labeling this segment. For AdaBoost, we create 12 classifiers: one classifier for each digit, one for noise, and one for the phrase "once again." Our results ( Tab le 1) show that at best we achieved a 90% pass rate using the "one mistake" passing conditions and a 66% exact solution match rate. Using SVM and the "one mistake" passing conditions, at best we achieve a 92% pass rate and a 67% exact solution match. For k-NN, the "one mistake" pass rate is 62% and the exact solution match rate is 26%. Table 1: Google audio CAP T C H A results: Maximum 67% accuracy was achieved by SVM. Classifiers Used AdaBoost One mistake MFCC PLPS PEC PLPCEP S RAS TA P L PS PEC RAS TA P L PCEP S 5 .2 Digg 88% 90% 90% 88% exact match 61% 66% 66% 48% SVM one mistake 92% 90% 92% 90% exact match 67% 67% 67% 61% k-NN one mistake 30% 60% 62% 29% exact match 1% 26% 23% 1%