---
-# Brute force
+# Human vs Machine
-How many keys to try?
+Slow but clever vs Dumb but fast
+
+## Human approach
+
+Ciphertext | Plaintext
+---|---
+![left-aligned Ciphertext frequencies](c1a_frequency_histogram.png) | ![left-aligned English frequencies](english_frequency_histogram.png)
+
+---
+
+# Human vs Machine
+
+## Machine approach
+
+Brute force.
+
+Try all keys.
+
+* How many keys to try?
## Basic idea
* Read a file into a string
```python
open()
-read()
+.read()
```
* Count the letters
```python
import collections
```
+Create the `language_models.py` file for this.
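+
+A minimal sketch of those two steps (the filename is just one of the source texts used later):
+
+```python
+import collections
+
+# Read the whole file into one string
+text = open('shakespeare.txt').read()
+
+# Count how often each character occurs
+counts = collections.Counter(text)
+```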
+
---
# Canonical forms
* Convert the text to canonical form (lower case, accents removed, non-letters stripped) before counting
+```python
+[l.lower() for l in text if ...]
+```
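+
+One possible completion of that hint (the name `sanitise` is a suggestion, and `unaccent` is the accent-stripping function defined two slides on):
+
+```python
+def sanitise(text):
+    """Reduce text to canonical form: lower-case letters only,
+    accents removed, everything else dropped."""
+    return ''.join(l.lower() for l in unaccent(text) if l.isalpha())
+```
+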
+---
+
+
+# Accents
+
+```python
+>>> import string
+>>> 'é' in string.ascii_letters
+False
+>>> 'e' in string.ascii_letters
+True
+```
+
+## Unicode, combining codepoints, and normal forms
+
+Text encodings will bite you when you least expect it.
+
+- **é** : LATIN SMALL LETTER E WITH ACUTE (U+00E9)
+
+- **e** + **◌́** : LATIN SMALL LETTER E (U+0065) + COMBINING ACUTE ACCENT (U+0301)
+
+- URL encoding is the other pain point.
+
+---
+
+# Five minutes on StackOverflow later...
+
+```python
+import unicodedata
+
+def unaccent(text):
+ """Remove all accents from letters.
+ It does this by converting the unicode string to decomposed compatibility
+    form, dropping the combining accents (and any other non-ASCII characters),
+    then decoding the bytes back to a string.
+
+ >>> unaccent('hello')
+ 'hello'
+ >>> unaccent('HELLO')
+ 'HELLO'
+ >>> unaccent('héllo')
+ 'hello'
+ >>> unaccent('héllö')
+ 'hello'
+ >>> unaccent('HÉLLÖ')
+ 'HELLO'
+ """
+ return unicodedata.normalize('NFKD', text).\
+ encode('ascii', 'ignore').\
+ decode('utf-8')
+```
+
+---
+
+# Find the frequencies of letters in English
+
+1. Read from `shakespeare.txt`, `sherlock-holmes.txt`, and `war-and-peace.txt`.
+2. Find the frequencies (`.update()`)
+3. Sort by count
+4. Write the counts to `count_1l.txt`, one letter per line (`'{}\t{}\n'.format()`)
+
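+A sketch of those four steps, reusing the `sanitise` idea from the canonical-forms slide (the tab-separated output format is an assumption):
+
+```python
+import collections
+
+counts = collections.Counter()
+for filename in ['shakespeare.txt', 'sherlock-holmes.txt', 'war-and-peace.txt']:
+    # Canonicalise each text before counting its letters
+    counts.update(sanitise(open(filename).read()))
+
+# most_common() returns (letter, count) pairs sorted by descending count
+with open('count_1l.txt', 'w') as f:
+    for letter, count in counts.most_common():
+        f.write('{}\t{}\n'.format(letter, count))
+```
+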
---
# Vector distances
Several different distance measures (__metrics__, also called __norms__):
* L<sub>2</sub> norm (Euclidean distance):
-`\(|\mathbf{a} - \mathbf{b}| = \sqrt{\sum_i (\mathbf{a}_i - \mathbf{b}_i)^2} \)`
+`\(\|\mathbf{a} - \mathbf{b}\| = \sqrt{\sum_i (\mathbf{a}_i - \mathbf{b}_i)^2} \)`
* L<sub>1</sub> norm (Manhattan distance, taxicab distance):
-`\(|\mathbf{a} - \mathbf{b}| = \sum_i |\mathbf{a}_i - \mathbf{b}_i| \)`
+`\(\|\mathbf{a} - \mathbf{b}\| = \sum_i |\mathbf{a}_i - \mathbf{b}_i| \)`
* L<sub>3</sub> norm:
-`\(|\mathbf{a} - \mathbf{b}| = \sqrt[3]{\sum_i |\mathbf{a}_i - \mathbf{b}_i|^3} \)`
+`\(\|\mathbf{a} - \mathbf{b}\| = \sqrt[3]{\sum_i |\mathbf{a}_i - \mathbf{b}_i|^3} \)`
The higher the power used, the more weight is given to the largest differences in components.
(The same idea extends to:
* L<sub>0</sub> norm (Hamming distance):
-`$$|\mathbf{a} - \mathbf{b}| = \sum_i \left\{
+`$$\|\mathbf{a} - \mathbf{b}\| = \sum_i \left\{
\begin{matrix} 1 &\mbox{if}\ \mathbf{a}_i \neq \mathbf{b}_i , \\
0 &\mbox{if}\ \mathbf{a}_i = \mathbf{b}_i \end{matrix} \right. $$`
* L<sub>∞</sub> norm:
-`\(|\mathbf{a} - \mathbf{b}| = \max_i{(\mathbf{a}_i - \mathbf{b}_i)} \)`
+`\(\|\mathbf{a} - \mathbf{b}\| = \max_i |\mathbf{a}_i - \mathbf{b}_i| \)`
-neither of which will be that useful.)
+neither of which will be that useful here, but they keep cropping up.)
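+
+A sketch of the first three norms as plain Python functions over equal-length frequency vectors:
+
+```python
+def l1(v1, v2):
+    """Manhattan distance between two equal-length vectors."""
+    return sum(abs(a - b) for a, b in zip(v1, v2))
+
+def l2(v1, v2):
+    """Euclidean distance between two equal-length vectors."""
+    return sum((a - b) ** 2 for a, b in zip(v1, v2)) ** 0.5
+
+def l3(v1, v2):
+    """L3 distance: the cube weights the largest differences most."""
+    return sum(abs(a - b) ** 3 for a, b in zip(v1, v2)) ** (1 / 3)
+```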
---
# Normalisation of vectors
## Computing is an empirical science
+Let's do some experiments to find the best solution!
+
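+Two candidate normalisations to compare in those experiments (a sketch; both names are suggestions):
+
+```python
+def normalise(frequencies):
+    """Scale a vector so its components sum to 1 (a probability distribution)."""
+    total = sum(frequencies)
+    return [f / total for f in frequencies]
+
+def euclidean_scale(frequencies):
+    """Scale a vector to unit length under the L2 norm."""
+    length = sum(f ** 2 for f in frequencies) ** 0.5
+    return [f / length for f in frequencies]
+```
+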
+---
+
+## Step 1: get **some** codebreaking working
+
+Let's start with the letter probability norm, because it's easy.
+
+## Step 2: build some other scoring functions
+
+We also need a way of passing the different functions to the keyfinding function (see the sketch after step 3).
+
+## Step 3: find the best scoring function
+
+Try them all on random ciphertexts and see which one works best.
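+
+Python functions are first-class values, so the scoring function can simply be a parameter of the keyfinding function. A minimal sketch, in which every name is an assumption rather than a fixed API:
+
+```python
+def find_key(message, all_keys, decipher, score):
+    """Try every key; return the one whose deciphered text scores
+    best (here, lowest) under the given scoring function."""
+    return min(all_keys, key=lambda k: score(decipher(message, k)))
+```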
</textarea>