color: #ff6666;
text-shadow: 0 0 20px #333;
padding: 2px 5px;
+ }
+ .indexlink {
+ position: absolute;
+ bottom: 1em;
+ left: 1em;
}
.float-right {
float: right;
---
+layout: true
+
+.indexlink[[Index](index.html)]
+
+---
+
# Human vs Machine
Slow but clever vs Dumb but fast
How do we define "closeness"?
+## Here beginneth the yak shaving
+
---
# What does English look like?
. | .
z | 3575
-One way of thinking about this is a 26-dimensional vector.
+Use this to predict the probability of each letter, and hence the probability of a sequence of letters.
+
+---
+
+.float-right[![right-aligned Typing monkey](typingmonkeylarge.jpg)]
+
+# Naive Bayes, or the bag of letters
+
+What is the probability that this string of letters is a sample of English?
+
+Ignore letter order, just treat each letter individually.
+
+Probability of a text is `\( \prod_i p_i \)`
-Create a vector of our text, and one of idealised English.
+Letter | h | e | l | l | o | hello
+------------|---------|---------|---------|---------|---------|-------
+Probability | 0.06645 | 0.12099 | 0.04134 | 0.04134 | 0.08052 | 1.10648239 × 10<sup>-6</sup>
+
+Letter | i | f | m | m | p | ifmmp
+------------|---------|---------|---------|---------|---------|-------
+Probability | 0.06723 | 0.02159 | 0.02748 | 0.02748 | 0.01607 | 1.76244520 × 10<sup>-8</sup>
+
+(Implementation issue: this can often underflow, so get in the habit of rephrasing it as `\( \sum_i \log p_i \)`)
+
+Letter | h | e | l | l | o | hello
+------------|---------|---------|---------|---------|---------|-------
+Log probability | -1.1774 | -0.9172 | -1.3836 | -1.3836 | -1.0940 | -5.956055
-The distance between the vectors is how far from English the text is.
---
* Count them
```python
import collections
+collections.Counter()
```
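+
+For example, counting the letters of a string in the REPL:
+
+```python
+>>> collections.Counter('hello')
+Counter({'l': 2, 'h': 1, 'e': 1, 'o': 1})
+```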
Create the `language_models.py` file for this.
---
-
# Accents
---

# Find the frequencies of letters in English
1. Read from `shakespeare.txt`, `sherlock-holmes.txt`, and `war-and-peace.txt`.
-2. Find the frequencies
-3. Sort by count (`sorted(, key=)` ; `.items()`, `.keys()`, `.values()`, `.get()`)
+2. Find the frequencies (`.update()`)
+3. Sort by count (read the docs...)
4. Write counts to `count_1l.txt`
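+
+A sketch of steps 1 and 2 (assuming the three corpus files are in the working directory, and folding everything to lowercase letters):
+
+```python
+import collections
+
+counts = collections.Counter()
+for filename in ['shakespeare.txt', 'sherlock-holmes.txt',
+                 'war-and-peace.txt']:
+    with open(filename) as f:
+        # count only letters, ignoring case (step 2: `.update()`)
+        counts.update(c for c in f.read().lower() if c.isalpha())
+```
+
+Steps 3 and 4 might then look like this: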
+```python
+with open('count_1l.txt', 'w') as f:
+    # one tab-separated "letter count" line per letter, most common
+    # first (assumes `counts` is the Counter built in steps 1-2)
+    for letter, count in counts.most_common():
+        f.write('{}\t{}\n'.format(letter, count))
+```
---
-# Vector distances
-
-.float-right[![right-aligned Vector subtraction](vector-subtraction.svg)]
-
-Several different distance measures (__metrics__, also called __norms__):
-
-* L<sub>2</sub> norm (Euclidean distance):
-`\(\|\mathbf{a} - \mathbf{b}\| = \sqrt{\sum_i (\mathbf{a}_i - \mathbf{b}_i)^2} \)`
-
-* L<sub>1</sub> norm (Manhattan distance, taxicab distance):
-`\(\|\mathbf{a} - \mathbf{b}\| = \sum_i |\mathbf{a}_i - \mathbf{b}_i| \)`
-
-* L<sub>3</sub> norm:
-`\(\|\mathbf{a} - \mathbf{b}\| = \sqrt[3]{\sum_i |\mathbf{a}_i - \mathbf{b}_i|^3} \)`
-
-The higher the power used, the more weight is given to the largest differences in components.
-
-(Extends out to:
-
-* L<sub>0</sub> norm (Hamming distance):
-`$$\|\mathbf{a} - \mathbf{b}\| = \sum_i \left\{
-\begin{matrix} 1 &\mbox{if}\ \mathbf{a}_i \neq \mathbf{b}_i , \\
- 0 &\mbox{if}\ \mathbf{a}_i = \mathbf{b}_i \end{matrix} \right. $$`
+# Reading letter probabilities
-* L<sub>∞</sub> norm:
-`\(\|\mathbf{a} - \mathbf{b}\| = \max_i |\mathbf{a}_i - \mathbf{b}_i| \)`
+1. Load the file `count_1l.txt` into a dict, with letters as keys.
-neither of which will be that useful here, but they keep cropping up.)
----
-
-# Normalisation of vectors
+2. Normalise the counts (components of vector sum to 1): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|} = \frac{\mathbf{x}}{ \mathbf{x}_1 + \mathbf{x}_2 + \mathbf{x}_3 + \dots }$$`
+ * Return a new dict
+ * Remember the doctest!
-Frequency distributions drawn from different sources will have different lengths. For a fair comparison we need to scale them.
+3. Create a dict `Pl` that gives the log probability of a letter
-* Euclidean scaling (vector with unit length): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|} = \frac{\mathbf{x}}{ \sqrt{\mathbf{x}_1^2 + \mathbf{x}_2^2 + \mathbf{x}_3^2 + \dots } }$$`
-
-* Normalisation (components of vector sum to 1): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|} = \frac{\mathbf{x}}{ \mathbf{x}_1 + \mathbf{x}_2 + \mathbf{x}_3 + \dots }$$`
+4. Create a function `Pletters` that gives the probability of an iterable of letters
+ * What preconditions should this function have?
+ * Remember the doctest!
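+
+One possible shape for all four steps (a sketch, not the only way: it uses base-10 logs so the numbers match the tables earlier, and it assumes `count_1l.txt` holds one tab-separated letter and count per line, as written above):
+
+```python
+import math
+
+def normalise(frequencies):
+    """Scale a dict of counts so that its values sum to 1.
+
+    >>> normalise({'a': 2, 'b': 2})
+    {'a': 0.5, 'b': 0.5}
+    """
+    total = sum(frequencies.values())
+    return {k: v / total for k, v in frequencies.items()}
+
+# step 1: read letter counts; steps 2-3: probabilities, then log probabilities
+english_counts = {}
+with open('count_1l.txt') as f:
+    for line in f:
+        letter, count = line.split('\t')
+        english_counts[letter] = int(count)
+Pl = {l: math.log10(p) for l, p in normalise(english_counts).items()}
+
+def Pletters(letters):
+    """Log probability that an iterable of letters is English.
+    Precondition: every item must be a key of Pl (lowercase letters).
+
+    >>> Pletters('hello')  # doctest: +ELLIPSIS
+    -5.95...
+    """
+    return sum(Pl[letter] for letter in letters)
+```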
---
-# Angle, not distance
-
-Rather than looking at the distance between the vectors, look at the angle between them.
-
-.float-right[![right-aligned Vector dot product](vector-dot-product.svg)]
-
-Vector dot product shows how much of one vector lies in the direction of another:
-`\( \mathbf{A} \bullet \mathbf{B} =
-\| \mathbf{A} \| \cdot \| \mathbf{B} \| \cos{\theta} \)`
+# Breaking Caesar ciphers
-But,
-`\( \mathbf{A} \bullet \mathbf{B} = \sum_i \mathbf{A}_i \cdot \mathbf{B}_i \)`
-and `\( \| \mathbf{A} \| = \sqrt{\sum_i \mathbf{A}_i^2} \)`
+New file: `cipherbreak.py`
-A bit of rearranging gives the cosine similarity:
-`$$ \cos{\theta} = \frac{ \mathbf{A} \bullet \mathbf{B} }{ \| \mathbf{A} \| \cdot \| \mathbf{B} \| } =
-\frac{\sum_i \mathbf{A}_i \cdot \mathbf{B}_i}{\sqrt{\sum_i \mathbf{A}_i^2} \times \sqrt{\sum_i \mathbf{B}_i^2}} $$`
+## Remember the basic idea
-This is independent of vector lengths!
+```
+for each key:
+ decipher with this key
+ how close is it to English?
+ remember the best key
+```
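+
+As Python, that loop might look like this (a sketch: it assumes `caesar_decipher()` from the previous session and `Pletters()` from `language_models.py`):
+
+```python
+from language_models import Pletters
+
+def caesar_cipher_break(message):
+    """Try all 26 shifts and keep the one whose decryption scores
+    closest to English under Pletters. Returns (best_key, best_fit).
+    """
+    best_key, best_fit = 0, float('-inf')
+    for shift in range(26):
+        plaintext = caesar_decipher(message, shift)
+        fit = Pletters(plaintext)
+        if fit > best_fit:
+            best_key, best_fit = shift, fit
+    return best_key, best_fit
+```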
-Cosine similarity is 1 if in parallel, 0 if perpendicular, -1 if antiparallel.
+Try it on the text in `2013/1a.ciphertext`. Does it work?
---
-# An infinite number of monkeys
-
-What is the probability that this string of letters is a sample of English?
-
-Given 'th', 'e' is about six times more likely than 'a' or 'i'.
+# Aside: Logging
-## Naive Bayes, or the bag of letters
+Better than scattering `print()` statements through your code
-Ignore letter order, just treat each letter individually.
-
-Probability of a text is `\( \prod_i p_i \)`
-
-(Implementation issue: this can often underflow, so get in the habit of rephrasing it as `\( \sum_i \log p_i \)`)
-
----
-
-# Which is best?
+```python
+import logging
- | Euclidean | Normalised
----|-----------|------------
-L1 | x | x
-L2 | x | x
-L3 | x | x
-Cosine | x | x
+logger = logging.getLogger(__name__)
+logger.addHandler(logging.FileHandler('cipher.log'))
+logger.setLevel(logging.WARNING)
-And the probability measure!
+    # inside the key-testing loop of caesar_cipher_break(), where
+    # shift, fit and plaintext are in scope:
+    logger.debug('Caesar break attempt using key {0} gives fit of {1} '
+                 'and decrypt starting: {2}'.format(shift, fit, plaintext[:50]))
-* Nine different ways of measuring fitness.
+```
+* Yes, it's ugly.
-## Computing is an empirical science
+Use `logger.setLevel()` to change the level: CRITICAL, ERROR, WARNING, INFO, DEBUG
-Let's do some experiments to find the best solution!
+Use `logger.debug()`, `logger.info()`, etc. to log a message.
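+
+For example:
+
+```python
+logger.setLevel(logging.DEBUG)  # let DEBUG-level messages through
+logger.debug('This now appears in cipher.log')
+```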
---
-## Step 1: get **some** codebreaking working
-
-Let's start with the letter probability norm, because it's easy.
-
-## Step 2: build some other scoring functions
+# Homework: how much ciphertext do we need?
-We also need a way of passing the different functions to the keyfinding function.
+## Let's do an experiment to find out
-## Step 3: find the best scoring function
-
-Try them all on random ciphertexts, see which one works best.
+1. Load the whole corpus into a string (sanitised)
+2. Select a random chunk of plaintext and a random key
+3. Encipher the text
+4. Score 1 point if `caesar_cipher_break()` recovers the correct key
+5. Repeat many times and with many plaintext lengths
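+
+A sketch of the trial loop (it assumes `sanitise()` and `caesar_encipher()` from earlier sessions and the `caesar_cipher_break()` above; the lengths and trial count here are illustrative):
+
+```python
+import collections
+import random
+
+corpus_files = ['shakespeare.txt', 'sherlock-holmes.txt', 'war-and-peace.txt']
+corpus = sanitise(''.join(open(f).read() for f in corpus_files))
+
+message_lengths = [300, 100, 50, 30, 20, 10, 5]
+trials = 100
+
+# scores[scoring_function_name][message_length] = number of successful breaks
+scores = {'Pletters': collections.Counter()}
+
+for length in message_lengths:
+    for _ in range(trials):
+        key = random.randrange(26)
+        start = random.randrange(len(corpus) - length)
+        ciphertext = caesar_encipher(corpus[start:start + length], key)
+        found_key, _ = caesar_cipher_break(ciphertext)
+        if found_key == key:
+            scores['Pletters'][length] += 1
+```
+
+The results can then be written out with `csv.DictWriter`: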
+```python
+import csv
+
+def show_results():
+    with open('caesar_break_parameter_trials.csv', 'w', newline='') as f:
+ writer = csv.DictWriter(f, ['name'] + message_lengths,
+ quoting=csv.QUOTE_NONNUMERIC)
+ writer.writeheader()
+ for scoring in sorted(scores.keys()):
+ scores[scoring]['name'] = scoring
+ writer.writerow(scores[scoring])
+```
</textarea>
<script src="http://gnab.github.io/remark/downloads/remark-0.6.0.min.js" type="text/javascript">