# Breaking Caesar ciphers

![center-aligned Caesar wheel](caesarwheel1.gif)

---
layout: true

.indexlink[[Index](index.html)]

---

# Human vs Machine

Slow but clever vs Dumb but fast

## Human approach

Ciphertext | Plaintext
---|---
![left-aligned Ciphertext frequencies](c1a_frequency_histogram.png) | ![left-aligned English frequencies](english_frequency_histogram.png)

---

# Human vs Machine

## Machine approach

Brute force. Try all keys.

* How many keys to try?

## Basic idea

```
for each key:
    decipher with this key
    how close is it to English?
    remember the best key
```

What steps do we know how to do?

---

# How close is it to English?

What does English look like?

* We need a model of English.

How do we define "closeness"?

## Here beginneth the yak shaving

---

# What does English look like?

## Abstraction: frequency of letter counts

Letter | Count
-------|------
a | 489107
b | 92647
c | 140497
d | 267381
e | 756288
. | .
. | .
. | .
z | 3575

Use this to predict the probability of each letter, and hence the probability of a sequence of letters.

---

.float-right[![right-aligned Typing monkey](typingmonkeylarge.jpg)]

# Naive Bayes, or the bag of letters

What is the probability that this string of letters is a sample of English?

Ignore letter order; just treat each letter individually.

Probability of a text is `\( \prod_i p_i \)`

Letter | h | e | l | l | o | hello
------------|---------|---------|---------|---------|---------|-------
Probability | 0.06645 | 0.12099 | 0.04134 | 0.04134 | 0.08052 | 1.10648239 × 10<sup>-6</sup>
Letter | i | f | m | m | p | ifmmp
------------|---------|---------|---------|---------|---------|-------
Probability | 0.06723 | 0.02159 | 0.02748 | 0.02748 | 0.01607 | 1.76244520 × 10<sup>-8</sup>
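As a rough sketch of this bag-of-letters scoring (the names `bag_probability` and `letter_probs` here are illustrative assumptions, not part of the course files):

```python
# Sketch only: `letter_probs` stands for a dict of letter -> probability,
# derived from counts like the table above.
def bag_probability(text, letter_probs):
    """Probability that `text` is a sample of English, ignoring letter order."""
    probability = 1
    for letter in text:
        probability *= letter_probs[letter]
    return probability

# bag_probability('hello', letter_probs) ≈ 1.1e-06
# bag_probability('ifmmp', letter_probs) ≈ 1.8e-08
```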
(Implementation issue: this can often underflow, so get into the habit of rephrasing it as `\( \sum_i \log p_i \)`)

Letter | h | e | l | l | o | hello
----------------|---------|---------|---------|---------|---------|-------
Log probability | -1.1774 | -0.9172 | -1.3836 | -1.3836 | -1.0940 | -5.956055

---

# Frequencies of English

But first, how do we count the letters?

* Read a file into a string
```python
open()
.read()
```
* Count them
```python
import collections
collections.Counter()
```

Create the `language_models.py` file for this.

---

# Canonical forms

Counting letters in _War and Peace_ gives all manner of junk.

* Convert the text into canonical form (lower case, accents removed, non-letters stripped) before counting

```python
[l.lower() for l in text if ...]
```

---

# Accents

```python
>>> 'é' in string.ascii_letters
>>> 'e' in string.ascii_letters
```

## Unicode, combining codepoints, and normal forms

Text encodings will bite you when you least expect it.

- **é** : LATIN SMALL LETTER E WITH ACUTE (U+00E9)
- **e** + **◌́** : LATIN SMALL LETTER E (U+0065) + COMBINING ACUTE ACCENT (U+0301)

* URL encoding is the other pain point.

---

# Five minutes on StackOverflow later...

```python
def unaccent(text):
    """Remove all accents from letters.
    It does this by converting the unicode string to decomposed
    compatibility form, dropping all the combining accents, then
    decoding the remaining bytes back into a string.

    >>> unaccent('hello')
    'hello'
    >>> unaccent('HELLO')
    'HELLO'
    >>> unaccent('héllo')
    'hello'
    >>> unaccent('héllö')
    'hello'
    >>> unaccent('HÉLLÖ')
    'HELLO'
    """
    return unicodedata.normalize('NFKD', text).\
        encode('ascii', 'ignore').\
        decode('utf-8')
```

---

# Find the frequencies of letters in English

1. Read from `shakespeare.txt`, `sherlock-holmes.txt`, and `war-and-peace.txt`.
2. Find the frequencies (`.update()`)
3. Sort by count (read the docs...)
4. Write counts to `count_1l.txt`

```python
with open('count_1l.txt', 'w') as f:
    for each letter...:
        f.write('{}\t{}\n'.format(letter, count))
```

---

# Reading letter probabilities

New file: `language_models.py`

1. Load the file `count_1l.txt` into a dict, with letters as keys.
2. Normalise the counts (so the components of the vector sum to 1):
`$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|_1} = \frac{\mathbf{x}}{x_1 + x_2 + x_3 + \dots} $$`
  * Return a new dict
  * Remember the doctest!
3. Create a dict `Pl` that gives the log probability of a letter
4. Create a function `Pletters` that gives the probability of an iterable of letters
  * What preconditions should this function have?
  * Remember the doctest!

---

# Breaking Caesar ciphers

New file: `cipherbreak.py`

## Remember the basic idea

```
for each key:
    decipher with this key
    how close is it to English?
    remember the best key
```

Try it on the text in `2013/1a.ciphertext`. Does it work?

---

# Aside: Logging

Better than scattering `print()` statements through your code

```python
import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.FileHandler('cipher.log'))
logger.setLevel(logging.WARNING)

logger.debug('Caesar break attempt using key {0} gives fit of {1} '
             'and decrypt starting: {2}'.format(shift, fit, plaintext[:50]))
```

* Yes, it's ugly.

Use `logger.setLevel()` to change the level: CRITICAL, ERROR, WARNING, INFO, DEBUG

Use `logger.debug()`, `logger.info()`, etc. to log a message.

---

# Homework: how much ciphertext do we need?

## Let's do an experiment to find out

1. Load the whole corpus into a string (sanitised)
2. Select a random chunk of plaintext and a random key
3. Encipher the text
4. Score 1 point if `caesar_cipher_break()` recovers the correct key
5. Repeat many times and with many plaintext lengths

```python
import csv

def show_results():
    with open('caesar_break_parameter_trials.csv', 'w') as f:
        writer = csv.DictWriter(f, ['name'] + message_lengths,
                                quoting=csv.QUOTE_NONNUMERIC)
        writer.writeheader()
        for scoring in sorted(scores.keys()):
            scores[scoring]['name'] = scoring
            writer.writerow(scores[scoring])
```
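---

# Sketch: the basic idea in code

One way the loop might look in `cipherbreak.py` (a sketch only, assuming the `caesar_decipher()` function from the earlier session and the `Pletters()` scorer from `language_models.py`):

```python
def caesar_cipher_break(message):
    """Try all 26 keys and keep the one whose decryption scores
    most like English. Sketch: assumes caesar_decipher() and Pletters()."""
    best_key, best_fit = 0, float('-inf')
    for key in range(26):
        plaintext = caesar_decipher(message, key)
        fit = Pletters(plaintext)  # or a sum of Pl values, to dodge underflow
        if fit > best_fit:
            best_key, best_fit = key, fit
    return best_key, best_fit
```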
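---

# Sketch: running the experiment

One possible trial loop to fill in the `scores` and `message_lengths` that `show_results()` expects. Again a sketch: `eval_message_length`, `corpus`, and `trials` are assumed names, and `caesar_encipher()` comes from the earlier session.

```python
import random

def eval_message_length(corpus, message_length, trials=100):
    """Sketch: proportion of trials in which the break recovers the key."""
    score = 0
    for _ in range(trials):
        # Pick a random chunk of plaintext and a random key
        start = random.randrange(len(corpus) - message_length)
        plaintext = corpus[start:start + message_length]
        key = random.randrange(1, 26)  # random non-trivial shift
        ciphertext = caesar_encipher(plaintext, key)
        # Score a point if the break recovers the key
        found_key, _ = caesar_cipher_break(ciphertext)
        if found_key == key:
            score += 1
    return score / trials
```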