# Breaking caesar ciphers

![center-aligned Caesar wheel](caesarwheel1.gif)

---

# Human vs Machine

Slow but clever vs Dumb but fast

## Human approach

Ciphertext | Plaintext
---|---
![left-aligned Ciphertext frequencies](c1a_frequency_histogram.png) | ![left-aligned English frequencies](english_frequency_histogram.png)

---

# Human vs machine

## Machine approach

Brute force. Try all keys.

* How many keys to try?

## Basic idea

```
for each key:
    decipher with this key
    how close is it to English?
    remember the best key
```

What steps do we know how to do?

---

# How close is it to English?

What does English look like?

* We need a model of English.

How do we define "closeness"?

---

# What does English look like?

## Abstraction: frequency of letter counts

Letter | Count
-------|------
a | 489107
b | 92647
c | 140497
d | 267381
e | 756288
. | .
. | .
. | .
z | 3575

Use this to predict the probability of each letter, and hence the probability of a sequence of letters.

---

# An infinite number of monkeys

What is the probability that this string of letters is a sample of English?

## Naive Bayes, or the bag of letters

Ignore letter order, just treat each letter individually.

Probability of a text is `\( \prod_i p_i \)`

Letter      | h       | e       | l       | l       | o       | hello
------------|---------|---------|---------|---------|---------|-------
Probability | 0.06645 | 0.12099 | 0.04134 | 0.04134 | 0.08052 | 1.10648239 × 10<sup>-6</sup>

Letter      | i       | f       | m       | m       | p       | ifmmp
------------|---------|---------|---------|---------|---------|-------
Probability | 0.06723 | 0.02159 | 0.02748 | 0.02748 | 0.01607 | 1.76244520 × 10<sup>-8</sup>

(Implementation issue: this can often underflow, so get in the habit of rephrasing it as `\( \sum_i \log p_i \)`)

Letter          | h       | e       | l       | l       | o       | hello
----------------|---------|---------|---------|---------|---------|-------
Log probability | -1.1774 | -0.9172 | -1.3836 | -1.3836 | -1.0940 | -5.956055
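As a quick sketch of the bag-of-letters scoring, assuming a dict of letter probabilities like the ones tabulated above (the names `naive_probability` and `log_probability` are illustrative, not part of the exercise files):

```python
import math

# Illustrative letter probabilities, taken from the tables above
english_probabilities = {'h': 0.06645, 'e': 0.12099, 'l': 0.04134, 'o': 0.08052,
                         'i': 0.06723, 'f': 0.02159, 'm': 0.02748, 'p': 0.01607}

def naive_probability(text):
    """Product of the individual letter probabilities: prone to underflow."""
    result = 1
    for letter in text:
        result *= english_probabilities[letter]
    return result

def log_probability(text):
    """Sum of log probabilities: same ordering of texts, no underflow."""
    return sum(math.log10(english_probabilities[letter]) for letter in text)

print(naive_probability('hello'))   # about 1.106e-06
print(log_probability('hello'))     # about -5.956
print(log_probability('ifmmp'))     # lower score, so less like English
```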
---

# Frequencies of English

But before then, how do we count the letters?

* Read a file into a string

```python
open()
.read()
```

* Count them

```python
import collections
```

Create the `language_models.py` file for this.
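A sketch of those two steps, using `collections.Counter` (counting the raw, unsanitised text here is deliberate: the next slide deals with the junk it picks up):

```python
import collections

# Read the whole file into one string
text = open('shakespeare.txt').read()

# Counter is a dict-like object mapping each character to its count;
# at this point it still includes spaces, punctuation and capitals
letter_counts = collections.Counter(text)
print(letter_counts.most_common(5))
```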
---

# Canonical forms

Counting letters in _War and Peace_ gives all manner of junk.

* Convert the text to canonical form (lower case, accents removed, non-letters stripped) before counting

```python
[l.lower() for l in text if ...]
```
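One possible shape for that canonical-form step (the name `sanitise` is illustrative; accent handling is covered on the next slides):

```python
import string

def sanitise(text):
    """Reduce a text to canonical form: lower-case letters only.
    (Accent removal is a separate problem; see the unaccent
    function on the following slides.)"""
    return ''.join(l.lower() for l in text if l.lower() in string.ascii_letters)

print(sanitise("Hello, World!"))  # 'helloworld'
```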
---

# Accents

```python
>>> import string
>>> 'é' in string.ascii_letters
>>> 'e' in string.ascii_letters
```

## Unicode, combining codepoints, and normal forms

Text encodings will bite you when you least expect it.

- **é** : LATIN SMALL LETTER E WITH ACUTE (U+00E9)
- **e** + **◌́** : LATIN SMALL LETTER E (U+0065) + COMBINING ACUTE ACCENT (U+0301)

* urlencoding is the other pain point.

---

# Five minutes on StackOverflow later...

```python
import unicodedata

def unaccent(text):
    """Remove all accents from letters.
    It does this by converting the unicode string to decomposed compatibility
    form, dropping all the combining accents, then re-encoding the bytes.

    >>> unaccent('hello')
    'hello'
    >>> unaccent('HELLO')
    'HELLO'
    >>> unaccent('héllo')
    'hello'
    >>> unaccent('héllö')
    'hello'
    >>> unaccent('HÉLLÖ')
    'HELLO'
    """
    return unicodedata.normalize('NFKD', text).\
        encode('ascii', 'ignore').\
        decode('utf-8')
```

---

# Find the frequencies of letters in English

1. Read from `shakespeare.txt`, `sherlock-holmes.txt`, and `war-and-peace.txt`.
2. Find the frequencies (`.update()`)
3. Sort by count
4. Write counts to `count_1l.txt` (`'text{}\n'.format()`)
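Putting the four steps together might look roughly like this (a sketch only: the corpus file names come from the list above, while the `sanitise` helper and the exact output format are assumptions):

```python
import collections

corpora = ['shakespeare.txt', 'sherlock-holmes.txt', 'war-and-peace.txt']
counts = collections.Counter()

for corpus in corpora:
    text = open(corpus).read()
    counts.update(sanitise(text))   # sanitise as sketched earlier

# Write the counts, most common letter first, one "letter count" per line
with open('count_1l.txt', 'w') as f:
    for letter, count in counts.most_common():
        f.write('{} {}\n'.format(letter, count))
```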
---

# Reading letter probabilities

1. Load the file `count_1l.txt` into a dict, with letters as keys.

2. Normalise the counts (components of vector sum to 1):
`$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|} = \frac{\mathbf{x}}{ \mathbf{x}_1 + \mathbf{x}_2 + \mathbf{x}_3 + \dots }$$`
    * Return a new dict
    * Remember the doctest!

3. Create a dict `Pl` that gives the log probability of a letter.

4. Create a function `Pletters` that gives the probability of an iterable of letters.
    * What preconditions should this function have?
    * Remember the doctest!
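A sketch of what those pieces could look like (`normalise`, `Pl` and `Pletters` are the names from the slide; the file format and the use of base-10 logs are assumptions):

```python
import math

def normalise(frequencies):
    """Scale a dict of counts so that its values sum to 1."""
    total = sum(frequencies.values())
    return {letter: count / total for letter, count in frequencies.items()}

# Load the letter counts written in the previous exercise
english_counts = {}
for line in open('count_1l.txt'):
    letter, count = line.split()
    english_counts[letter] = int(count)

normalised_english_counts = normalise(english_counts)
Pl = {letter: math.log10(p) for letter, p in normalised_english_counts.items()}

def Pletters(letters):
    """Log probability of an iterable of letters.
    Precondition: the text has already been sanitised to lower-case letters."""
    return sum(Pl[letter] for letter in letters)
```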
---

# Breaking caesar ciphers

## Remember the basic idea

```
for each key:
    decipher with this key
    how close is it to English?
    remember the best key
```

Try it on the text in `2013/1a.ciphertext`. Does it work?
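Translated into Python, the loop could look like this (a sketch assuming the `caesar_decipher` function from the earlier exercises and the `Pletters` function from the previous slide; the same shape reappears on a later slide):

```python
def caesar_break(message, fitness=Pletters):
    """Try all 26 shifts and keep the one whose decryption
    scores most like English."""
    best_shift, best_fit = 0, float('-inf')
    for shift in range(26):
        plaintext = caesar_decipher(message, shift)
        fit = fitness(plaintext)
        if fit > best_fit:
            best_shift, best_fit = shift, fit
    return best_shift, best_fit
```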
---

# Aside: Logging

Better than scattering `print()` statements through your code

```python
import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.FileHandler('cipher.log'))
logger.setLevel(logging.WARNING)

logger.debug('Caesar break attempt using key {0} gives fit of {1} '
             'and decrypt starting: {2}'.format(shift, fit, plaintext[:50]))
```

* Yes, it's ugly.

Use `logger.setLevel()` to change the level: CRITICAL, ERROR, WARNING, INFO, DEBUG

---

# Back to frequency of letter counts

Letter | Count
-------|------
a | 489107
b | 92647
c | 140497
d | 267381
e | 756288
. | .
. | .
. | .
z | 3575

Another way of thinking about this is as a 26-dimensional vector.

Create a vector of our text, and one of idealised English.

The distance between the vectors is how far from English the text is.
---

# Vector distances

.float-right[![right-aligned Vector subtraction](vector-subtraction.svg)]

Several different distance measures (__metrics__, also called __norms__):

* L<sub>2</sub> norm (Euclidean distance):
`\(\|\mathbf{a} - \mathbf{b}\| = \sqrt{\sum_i (\mathbf{a}_i - \mathbf{b}_i)^2} \)`

* L<sub>1</sub> norm (Manhattan distance, taxicab distance):
`\(\|\mathbf{a} - \mathbf{b}\| = \sum_i |\mathbf{a}_i - \mathbf{b}_i| \)`

* L<sub>3</sub> norm:
`\(\|\mathbf{a} - \mathbf{b}\| = \sqrt[3]{\sum_i |\mathbf{a}_i - \mathbf{b}_i|^3} \)`

The higher the power used, the more weight is given to the largest differences in components.

(Extends out to:

* L<sub>0</sub> norm (Hamming distance):
`$$\|\mathbf{a} - \mathbf{b}\| = \sum_i \left\{
\begin{matrix} 1 &\mbox{if}\ \mathbf{a}_i \neq \mathbf{b}_i , \\
0 &\mbox{if}\ \mathbf{a}_i = \mathbf{b}_i \end{matrix} \right. $$`

* L<sub>∞</sub> norm:
`\(\|\mathbf{a} - \mathbf{b}\| = \max_i{|\mathbf{a}_i - \mathbf{b}_i|} \)`

neither of which will be that useful here, but they keep cropping up.)
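A sketch of the three useful norms, treating the letter-frequency dicts as sparse vectors (the names match the `norms.l1`, `norms.l2` and `norms.l3` used later; the implementations are illustrative):

```python
def l1(frequencies1, frequencies2):
    """Manhattan distance between two frequency dicts."""
    return sum(abs(frequencies1.get(l, 0) - frequencies2.get(l, 0))
               for l in set(frequencies1) | set(frequencies2))

def l2(frequencies1, frequencies2):
    """Euclidean distance between two frequency dicts."""
    return sum((frequencies1.get(l, 0) - frequencies2.get(l, 0)) ** 2
               for l in set(frequencies1) | set(frequencies2)) ** 0.5

def l3(frequencies1, frequencies2):
    """L3 distance between two frequency dicts."""
    return sum(abs(frequencies1.get(l, 0) - frequencies2.get(l, 0)) ** 3
               for l in set(frequencies1) | set(frequencies2)) ** (1 / 3)
```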
---

# Normalisation of vectors

Frequency distributions drawn from different sources will have different lengths. For a fair comparison we need to scale them.

* Euclidean scaling (vector with unit length):
`$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|} = \frac{\mathbf{x}}{ \sqrt{\mathbf{x}_1^2 + \mathbf{x}_2^2 + \mathbf{x}_3^2 + \dots } }$$`

* Normalisation (components of vector sum to 1):
`$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|} = \frac{\mathbf{x}}{ \mathbf{x}_1 + \mathbf{x}_2 + \mathbf{x}_3 + \dots }$$`
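`normalise` was sketched after the "Reading letter probabilities" slide; `euclidean_scale` follows the same pattern with the Euclidean length (the name matches the `norms.euclidean_scale` used later, the body is illustrative):

```python
def euclidean_scale(frequencies):
    """Scale a frequency dict so that it has unit Euclidean length."""
    length = sum(v ** 2 for v in frequencies.values()) ** 0.5
    return {letter: value / length for letter, value in frequencies.items()}
```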
---

# Angle, not distance

Rather than looking at the distance between the vectors, look at the angle between them.

.float-right[![right-aligned Vector dot product](vector-dot-product.svg)]

Vector dot product shows how much of one vector lies in the direction of another:
`\( \mathbf{A} \bullet \mathbf{B} = \| \mathbf{A} \| \cdot \| \mathbf{B} \| \cos{\theta} \)`

But,
`\( \mathbf{A} \bullet \mathbf{B} = \sum_i \mathbf{A}_i \cdot \mathbf{B}_i \)`
and
`\( \| \mathbf{A} \| = \sqrt{\sum_i \mathbf{A}_i^2} \)`

A bit of rearranging gives the cosine similarity:
`$$ \cos{\theta} = \frac{ \mathbf{A} \bullet \mathbf{B} }{ \| \mathbf{A} \| \cdot \| \mathbf{B} \| } = \frac{\sum_i \mathbf{A}_i \cdot \mathbf{B}_i}{\sqrt{\sum_i \mathbf{A}_i^2} \times \sqrt{\sum_i \mathbf{B}_i^2}} $$`

This is independent of vector lengths!

Cosine similarity is 1 if the vectors are parallel, 0 if perpendicular, -1 if antiparallel.
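As a Python sketch (matching the `norms.cosine_similarity` name used later; the dict-as-vector representation is an assumption):

```python
def cosine_similarity(frequencies1, frequencies2):
    """Cosine of the angle between two frequency dicts viewed as vectors."""
    letters = set(frequencies1) | set(frequencies2)
    dot = sum(frequencies1.get(l, 0) * frequencies2.get(l, 0) for l in letters)
    length1 = sum(v ** 2 for v in frequencies1.values()) ** 0.5
    length2 = sum(v ** 2 for v in frequencies2.values()) ** 0.5
    return dot / (length1 * length2)
```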
---

# Which is best?

Metric | Euclidean | Normalised
-------|-----------|------------
L1     | x         | x
L2     | x         | x
L3     | x         | x
Cosine | x         | x

And the probability measure!

* Nine different ways of measuring fitness.

## Computing is an empirical science

Let's do some experiments to find the best solution!
---

# Experimental harness

## Step 1: build some other scoring functions

We need a way of passing the different functions to the keyfinding function.

## Step 2: find the best scoring function

Try them all on random ciphertexts, see which one works best.
---

# Functions are values!

```python
>>> Pletters
```

```python
def caesar_break(message, fitness=Pletters):
    """Breaks a Caesar cipher using frequency analysis
    ...
    """
    for shift in range(26):
        plaintext = caesar_decipher(message, shift)
        fit = fitness(plaintext)
```
---

# Changing the comparison function

* Must be a function that takes a text and returns a score
* Better fit must give higher score, opposite of the vector distance norms

```python
def make_frequency_compare_function(target_frequency, frequency_scaling, metric, invert):
    def frequency_compare(text):
        ...
        return score
    return frequency_compare
```
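One way the inner `frequency_compare` could be filled in (a sketch: `frequencies` is an assumed helper that counts the letters of a sanitised text, and the scaling and metric functions are the ones sketched above; the real exercise solution may differ):

```python
def make_frequency_compare_function(target_frequency, frequency_scaling, metric, invert):
    def frequency_compare(text):
        # Count and scale the letters of this text, then compare with the target
        freq = frequency_scaling(frequencies(text))
        score = metric(target_frequency, freq)
        # Distances: smaller is better, so negate them to get "higher is better"
        return -score if invert else score
    return frequency_compare
```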
---

# Data-driven processing

```python
metrics = [{'func': norms.l1, 'invert': True, 'name': 'l1'},
           {'func': norms.l2, 'invert': True, 'name': 'l2'},
           {'func': norms.l3, 'invert': True, 'name': 'l3'},
           {'func': norms.cosine_similarity, 'invert': False, 'name': 'cosine_similarity'}]

scalings = [{'corpus_frequency': normalised_english_counts,
             'scaling': norms.normalise,
             'name': 'normalised'},
            {'corpus_frequency': euclidean_scaled_english_counts,
             'scaling': norms.euclidean_scale,
             'name': 'euclidean_scaled'}]
```

Use this to make all nine scoring functions (the eight metric and scaling combinations, plus `Pletters`).
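A sketch of the table-driven construction (only the `metrics` and `scalings` structures come from the slide; the comprehension and the `scoring_functions` name are assumptions):

```python
# Build the eight metric x scaling scoring functions, then add Pletters
scoring_functions = [make_frequency_compare_function(scaling['corpus_frequency'],
                                                     scaling['scaling'],
                                                     metric['func'],
                                                     metric['invert'])
                     for metric in metrics
                     for scaling in scalings] + [Pletters]
```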