<title>Breaking Caesar ciphers</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<style type="text/css">
h1 { font-size: 3em; }
h2 { font-size: 2em; }
h3 { font-size: 1.6em; }
text-decoration: none;
-moz-border-radius: 5px;
-webkit-border-radius: 5px;
text-shadow: 0 0 20px #333;
text-shadow: 0 0 20px #333;
<textarea id="source">
# Breaking Caesar ciphers
![center-aligned Caesar wheel](caesarwheel1.gif)
* decipher with this key
* how close is it to English?

What steps do we know how to do?
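A minimal sketch of that brute-force loop in Python. The helper names and the crude letter-count score here are illustrative stand-ins, not the course's own functions; the real "how close to English" measures are developed in the rest of these slides.

```python
import string

def caesar_decipher(text, key):
    """Undo a Caesar shift of `key` places (lower-case letters only)."""
    return ''.join(
        chr((ord(c) - ord('a') - key) % 26 + ord('a'))
        if c in string.ascii_lowercase else c
        for c in text)

def englishness(text):
    """Crude stand-in score: count occurrences of common English letters."""
    return sum(text.count(letter) for letter in 'etaoinshr')

def caesar_break(ciphertext):
    """Try every key, score each candidate, keep the best."""
    best_key = max(range(26),
                   key=lambda k: englishness(caesar_decipher(ciphertext, k)))
    return best_key, caesar_decipher(ciphertext, best_key)

print(caesar_break('wkh vhfuhw phhwlqj lv dw gdzq'))
# (3, 'the secret meeting is at dawn')
```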
# How close is it to English?

What does English look like?

* We need a model of English.
How do we define "closeness"?
# What does English look like?

## Abstraction: frequency of letter counts

One way of thinking about this is as a 26-dimensional vector.

Create a vector of our text, and one of idealised English.

The distance between the vectors is how far from English the text is.
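For example, a sketch that turns a mapping of letter counts into such a vector (the name `to_vector` is illustrative; scaling the vectors fairly is dealt with below):

```python
import string

def to_vector(counts):
    """One component per letter a-z: a 26-dimensional vector of counts."""
    return [counts.get(letter, 0) for letter in string.ascii_lowercase]

print(to_vector({'b': 2, 'a': 1}))  # [1, 2, 0, 0, ..., 0]
```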
# Frequencies of English

But before that, how do we count the letters?

* Read a file into a string

Counting letters in _War and Peace_ gives all manner of junk.

* Convert the text into canonical form (lower case, accents removed, non-letters stripped) before counting
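A possible sketch of that canonical form using only the standard library (the course's own sanitisation may differ in detail):

```python
import collections
import string
import unicodedata

def sanitise(text):
    """Lower-case, strip accents, drop everything that isn't a letter."""
    no_accents = (unicodedata.normalize('NFKD', text)
                  .encode('ascii', 'ignore').decode('ascii'))
    return ''.join(c for c in no_accents.lower()
                   if c in string.ascii_lowercase)

def letter_counts(text):
    """Count letters in the sanitised text."""
    return collections.Counter(sanitise(text))

print(letter_counts("Café au lait, s'il vous plaît!"))
# Counter({'a': 4, 'i': 3, 'l': 3, ...})
```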
.float-right[![right-aligned Vector subtraction](vector-subtraction.svg)]

Several different distance measures (__metrics__, also called __norms__):
* L<sub>2</sub> norm (Euclidean distance):
`\(|\mathbf{a} - \mathbf{b}| = \sqrt{\sum_i (\mathbf{a}_i - \mathbf{b}_i)^2} \)`

* L<sub>1</sub> norm (Manhattan distance, taxicab distance):
`\(|\mathbf{a} - \mathbf{b}| = \sum_i |\mathbf{a}_i - \mathbf{b}_i| \)`

* L<sub>3</sub> norm:
`\(|\mathbf{a} - \mathbf{b}| = \sqrt[3]{\sum_i |\mathbf{a}_i - \mathbf{b}_i|^3} \)`
The higher the power used, the more weight is given to the largest differences in components.
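All three norms fit one sketch, assuming equal-length vectors (the name `lp_distance` is illustrative):

```python
def lp_distance(a, b, p=2):
    """General L_p distance between two equal-length vectors."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

# Higher p weights the largest component differences more heavily:
a, b = [0.5, 0.5, 0.0], [0.9, 0.05, 0.05]
print(lp_distance(a, b, 1))  # L1 ≈ 0.90
print(lp_distance(a, b, 2))  # L2 ≈ 0.60
print(lp_distance(a, b, 3))  # L3 ≈ 0.54
```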
* L<sub>0</sub> norm (Hamming distance):
`$$|\mathbf{a} - \mathbf{b}| = \sum_i \left\{
\begin{matrix} 1 &\mbox{if}\ \mathbf{a}_i \neq \mathbf{b}_i , \\
0 &\mbox{if}\ \mathbf{a}_i = \mathbf{b}_i \end{matrix} \right. $$`
* L<sub>∞</sub> norm:
`\(|\mathbf{a} - \mathbf{b}| = \max_i{|\mathbf{a}_i - \mathbf{b}_i|} \)`
(neither of which will be that useful.)
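For completeness, sketches of those two as well:

```python
def hamming_distance(a, b):
    """L0 norm: how many components differ."""
    return sum(1 for x, y in zip(a, b) if x != y)

def chebyshev_distance(a, b):
    """L-infinity norm: the single largest component difference."""
    return max(abs(x - y) for x, y in zip(a, b))

print(hamming_distance([1, 2, 3], [1, 9, 3]))    # 1
print(chebyshev_distance([1, 2, 3], [1, 9, 3]))  # 7
```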
# Normalisation of vectors

Frequency distributions drawn from different sources will have different lengths. For a fair comparison we need to scale them.
* Euclidean scaling (vector with unit length): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|} = \frac{\mathbf{x}}{ \sqrt{\mathbf{x}_1^2 + \mathbf{x}_2^2 + \mathbf{x}_3^2 + \dots } }$$`

* Normalisation (components of vector sum to 1): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{ \mathbf{x}_1 + \mathbf{x}_2 + \mathbf{x}_3 + \dots }$$`
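Both scalings as Python sketches over plain lists of numbers:

```python
import math

def scale_euclidean(v):
    """Scale to unit length under the L2 norm."""
    length = math.sqrt(sum(x ** 2 for x in v))
    return [x / length for x in v]

def scale_normalise(v):
    """Scale so the components sum to 1 (a probability distribution)."""
    total = sum(v)
    return [x / total for x in v]

print(scale_euclidean([3, 4]))  # [0.6, 0.8]: length 1
print(scale_normalise([3, 4]))  # [0.428..., 0.571...]: sums to 1
```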
# Angle, not distance

Rather than looking at the distance between the vectors, look at the angle between them.

.float-right[![right-aligned Vector dot product](vector-dot-product.svg)]

Vector dot product shows how much of one vector lies in the direction of another:
`\( \mathbf{A} \bullet \mathbf{B} = \| \mathbf{A} \| \cdot \| \mathbf{B} \| \cos{\theta} \)`
`\( \mathbf{A} \bullet \mathbf{B} = \sum_i \mathbf{A}_i \cdot \mathbf{B}_i \)`
and `\( \| \mathbf{A} \| = \sqrt{\sum_i \mathbf{A}_i^2} \)`
A bit of rearranging gives the cosine similarity:
`$$ \cos{\theta} = \frac{ \mathbf{A} \bullet \mathbf{B} }{ \| \mathbf{A} \| \cdot \| \mathbf{B} \| } =
\frac{\sum_i \mathbf{A}_i \cdot \mathbf{B}_i}{\sqrt{\sum_i \mathbf{A}_i^2} \times \sqrt{\sum_i \mathbf{B}_i^2}} $$`
This is independent of vector lengths!
Cosine similarity is 1 if the vectors are parallel, 0 if perpendicular, and -1 if antiparallel.
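A sketch of the rearranged formula, checking the three cases:

```python
import math

def cosine_similarity(a, b):
    """cos(theta) between two vectors: dot product over product of lengths."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x ** 2 for x in a))
                  * math.sqrt(sum(y ** 2 for y in b)))

print(cosine_similarity([1, 0], [2, 0]))   # 1.0: parallel
print(cosine_similarity([1, 0], [0, 3]))   # 0.0: perpendicular
print(cosine_similarity([1, 0], [-1, 0]))  # -1.0: antiparallel
```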
# An infinite number of monkeys

What is the probability that this string of letters is a sample of English?

Given 'th', 'e' is about six times more likely than 'a' or 'i'.
## Naive Bayes, or the bag of letters

Ignore letter order, just treat each letter individually.

Probability of a text is `\( \prod_i p_i \)`
(Implementation issue: this can often underflow, so get in the habit of rephrasing it as `\( \sum_i \log p_i \)`)
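A sketch of that log-sum scoring (here `letter_probs` is assumed to map each letter to its relative frequency in English, summing to 1; it isn't defined in these slides):

```python
import math

def log_probability(text, letter_probs):
    """Score a text by summing log p_i instead of multiplying the p_i,
    so long texts don't underflow to 0.0."""
    return sum(math.log(letter_probs[letter]) for letter in text)

# Higher (less negative) scores mean more plausible English.
```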
 | Euclidean | Normalised
---|-----------|------------
And the probability measure!

* Nine different ways of measuring fitness.

## Computing is an empirical science
<script src="http://gnab.github.io/remark/downloads/remark-0.6.0.min.js" type="text/javascript">
<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML&delayStartupUntil=configured"></script>
<script type="text/javascript">
var slideshow = remark.create({ ratio: "16:9" });
skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
MathJax.Hub.Queue(function() {
  $(MathJax.Hub.getAllJax()).map(function(index, elem) {
    return(elem.SourceElement());
  }).parent().addClass('has-jax');
});
MathJax.Hub.Configured();