<title>Breaking Caesar ciphers</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<style type="text/css">
h1 { font-size: 3em; }
h2 { font-size: 2em; }
h3 { font-size: 1.6em; }
text-decoration: none;
-moz-border-radius: 5px;
-webkit-border-radius: 5px;
text-shadow: 0 0 20px #333;
text-shadow: 0 0 20px #333;
<textarea id="source">
# Breaking Caesar ciphers

![center-aligned Caesar wheel](caesarwheel1.gif)

Slow but clever vs. dumb but fast

Ciphertext | Plaintext
---|---
![left-aligned Ciphertext frequencies](c1a_frequency_histogram.png) | ![left-aligned English frequencies](english_frequency_histogram.png)
* How many keys to try?
* Decipher with each key
* How close is each result to English?

What steps do we know how to do?
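The brute-force attack above can be sketched in a few lines of Python. The names `caesar_decipher`, `score` and `caesar_break` are illustrative, and the scoring function here is a crude stand-in (counting common English letters) for the fitness measures developed on the following slides.

```python
# A minimal sketch of the brute-force attack: try all 26 keys,
# score each trial decryption, keep the best-scoring key.
import string

def caesar_decipher(text, key):
    """Shift each letter back by `key` places; leave other characters alone."""
    result = []
    for ch in text.lower():
        if ch in string.ascii_lowercase:
            result.append(chr((ord(ch) - ord('a') - key) % 26 + ord('a')))
        else:
            result.append(ch)
    return ''.join(result)

def score(text):
    """Crude placeholder fitness: count occurrences of common English letters."""
    return sum(text.count(c) for c in 'etaoin')

def caesar_break(ciphertext):
    """Return the key whose trial decryption scores highest."""
    return max(range(26), key=lambda k: score(caesar_decipher(ciphertext, k)))
```

Deciphering with a negative key doubles as enciphering, which is handy for testing.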
# How close is it to English?

What does English look like?

* We need a model of English.

How do we define "closeness"?
# What does English look like?

## Abstraction: frequency of letter counts

One way of thinking about this is as a 26-dimensional vector.

Create a vector of our text, and one of idealised English.

The distance between the vectors is how far from English the text is.
# Frequencies of English

But before that, how do we count the letters?

* Read a file into a string

Counting letters in _War and Peace_ gives all manner of junk.

* Convert the text into canonical form (lower case, accents removed, non-letters stripped) before counting
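The counting pipeline can be sketched with the standard library. The function names `sanitise` and `letter_frequencies` are illustrative; `unicodedata.normalize('NFD', …)` splits accented characters into a base letter plus a combining mark, which the filter then drops.

```python
# Canonicalise the text, then tally letters.
import collections
import string
import unicodedata

def sanitise(text):
    """Lower-case the text, strip accents, and drop non-letters."""
    decomposed = unicodedata.normalize('NFD', text.lower())
    return ''.join(ch for ch in decomposed if ch in string.ascii_lowercase)

def letter_frequencies(text):
    """Count each letter in the sanitised text."""
    return collections.Counter(sanitise(text))
```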
.float-right[![right-aligned Vector subtraction](vector-subtraction.svg)]

Several different distance measures (__metrics__, also called __norms__):
* L<sub>2</sub> norm (Euclidean distance): `\(\|\mathbf{a} - \mathbf{b}\| = \sqrt{\sum_i (\mathbf{a}_i - \mathbf{b}_i)^2} \)`

* L<sub>1</sub> norm (Manhattan distance, taxicab distance): `\(\|\mathbf{a} - \mathbf{b}\| = \sum_i |\mathbf{a}_i - \mathbf{b}_i| \)`

* L<sub>3</sub> norm: `\(\|\mathbf{a} - \mathbf{b}\| = \sqrt[3]{\sum_i |\mathbf{a}_i - \mathbf{b}_i|^3} \)`

The higher the power used, the more weight is given to the largest differences in components.
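The three norms above translate directly from the formulas; here the vectors are plain lists of numbers (e.g. 26 letter frequencies), and the function names are illustrative.

```python
# Distance measures written straight from the definitions.

def l2(a, b):
    """Euclidean distance: square root of the sum of squared differences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def l1(a, b):
    """Manhattan distance: sum of absolute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def l3(a, b):
    """Cube root of the sum of cubed absolute differences."""
    return sum(abs(x - y) ** 3 for x, y in zip(a, b)) ** (1 / 3)
```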
* L<sub>0</sub> norm (Hamming distance):
`$$\|\mathbf{a} - \mathbf{b}\| = \sum_i \left\{
\begin{matrix} 1 &\mbox{if}\ \mathbf{a}_i \neq \mathbf{b}_i , \\
0 &\mbox{if}\ \mathbf{a}_i = \mathbf{b}_i \end{matrix} \right. $$`

* L<sub>∞</sub> norm: `\(\|\mathbf{a} - \mathbf{b}\| = \max_i{|\mathbf{a}_i - \mathbf{b}_i|} \)`

(Neither of which will be that useful.)
# Normalisation of vectors

Frequency distributions drawn from different sources will have different lengths. For a fair comparison we need to scale them.

* Euclidean scaling (vector with unit length): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|} = \frac{\mathbf{x}}{ \sqrt{\mathbf{x}_1^2 + \mathbf{x}_2^2 + \mathbf{x}_3^2 + \dots } }$$`

* Normalisation (components of vector sum to 1): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|} = \frac{\mathbf{x}}{ \mathbf{x}_1 + \mathbf{x}_2 + \mathbf{x}_3 + \dots }$$`
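Both scalings can be sketched directly from the formulas; the function names are illustrative, and each expects a non-zero vector of counts.

```python
# Scale a vector of counts two ways: to unit length, or to sum 1.

def euclidean_scale(v):
    """Scale to unit length under the L2 norm."""
    length = sum(x ** 2 for x in v) ** 0.5
    return [x / length for x in v]

def normalise(v):
    """Scale so the components sum to 1 (a probability distribution)."""
    total = sum(v)
    return [x / total for x in v]
```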
# Angle, not distance

Rather than looking at the distance between the vectors, look at the angle between them.

.float-right[![right-aligned Vector dot product](vector-dot-product.svg)]

Vector dot product shows how much of one vector lies in the direction of another:
`\( \mathbf{A} \bullet \mathbf{B} = \| \mathbf{A} \| \cdot \| \mathbf{B} \| \cos{\theta} \)`

`\( \mathbf{A} \bullet \mathbf{B} = \sum_i \mathbf{A}_i \cdot \mathbf{B}_i \)`
and `\( \| \mathbf{A} \| = \sqrt{\sum_i \mathbf{A}_i^2} \)`
A bit of rearranging gives the cosine similarity:
`$$ \cos{\theta} = \frac{ \mathbf{A} \bullet \mathbf{B} }{ \| \mathbf{A} \| \cdot \| \mathbf{B} \| } =
\frac{\sum_i \mathbf{A}_i \cdot \mathbf{B}_i}{\sqrt{\sum_i \mathbf{A}_i^2} \times \sqrt{\sum_i \mathbf{B}_i^2}} $$`

This is independent of vector lengths!
Cosine similarity is 1 if the vectors are parallel, 0 if they are perpendicular, and -1 if they are antiparallel.
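Written out from the rearranged formula (the function name is illustrative): because the lengths divide out, neither vector needs to be scaled first.

```python
# Cosine similarity straight from the formula above.

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b."""
    numerator = sum(x * y for x, y in zip(a, b))
    length_a = sum(x ** 2 for x in a) ** 0.5
    length_b = sum(y ** 2 for y in b) ** 0.5
    return numerator / (length_a * length_b)
```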
# An infinite number of monkeys

What is the probability that this string of letters is a sample of English?

Given 'th', 'e' is about six times more likely than 'a' or 'i'.

## Naive Bayes, or the bag of letters

Ignore letter order; just treat each letter individually.

Probability of a text is `\( \prod_i p_i \)`

(Implementation issue: this can often underflow, so get into the habit of rephrasing it as `\( \sum_i \log p_i \)`)
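The underflow fix in practice: multiplying many small probabilities drifts towards zero, but summing their logs stays well-behaved. The letter probabilities below are made-up illustrative values, not real English frequencies, and the names are assumptions.

```python
# Sum of log-probabilities instead of a product of probabilities.
import math

letter_probs = {'e': 0.12, 't': 0.09, 'a': 0.08}  # toy model, not real frequencies

def log_probability(text, probs):
    """Sum of log p_i over the letters of the text (unknown letters skipped)."""
    return sum(math.log(probs[ch]) for ch in text if ch in probs)
```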
 | Euclidean | Normalised
---|-----------|------------

And the probability measure!

* Nine different ways of measuring fitness.

## Computing is an empirical science
<script src="http://gnab.github.io/remark/downloads/remark-0.6.0.min.js" type="text/javascript">
</script>
<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML&delayStartupUntil=configured"></script>
<script type="text/javascript">
var slideshow = remark.create({ ratio: "16:9" });
skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
MathJax.Hub.Queue(function() {
  $(MathJax.Hub.getAllJax()).map(function(index, elem) {
    return(elem.SourceElement());
  }).parent().addClass('has-jax');
});
MathJax.Hub.Configured();