<!DOCTYPE html>
<html>
  <head>
    <title>Breaking Caesar ciphers</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
    <style type="text/css">
      /* Slideshow styles */
      body {
        font-size: 20px;
      }
      h1, h2, h3 {
        font-weight: 400;
        margin-bottom: 0;
      }
      h1 { font-size: 3em; }
      h2 { font-size: 2em; }
      h3 { font-size: 1.6em; }
      a, a > code {
        text-decoration: none;
      }
      code {
        -moz-border-radius: 5px;
        -webkit-border-radius: 5px;
        background: #e7e8e2;
        border-radius: 5px;
        font-size: 16px;
      }
      .plaintext {
        background: #272822;
        color: #80ff80;
        text-shadow: 0 0 20px #333;
        padding: 2px 5px;
      }
      .ciphertext {
        background: #272822;
        color: #ff6666;
        text-shadow: 0 0 20px #333;
        padding: 2px 5px;
      }
      .float-right {
        float: right;
      }
    </style>
  </head>
  <body>
    <textarea id="source">

# Breaking Caesar ciphers

![center-aligned Caesar wheel](caesarwheel1.gif)

---

# Human vs Machine

Slow but clever vs Dumb but fast

## Human approach

Ciphertext | Plaintext
---|---
![left-aligned Ciphertext frequencies](c1a_frequency_histogram.png) | ![left-aligned English frequencies](english_frequency_histogram.png)

---

# Human vs Machine

## Machine approach

Brute force.

Try all keys.

* How many keys to try?

## Basic idea

```
for each key:
    decipher with this key
    how close is it to English?
    remember the best key
```

What steps do we know how to do?
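
In Python, the loop might look like this. A minimal sketch: `caesar_decipher` and a fitness function `english_fitness` are assumed to exist; the rest of these slides is about what `english_fitness` should be.

```python
def caesar_break(ciphertext):
    """Sketch of a brute-force break: try every key, keep the one
    whose deciphered text scores most like English."""
    best_key, best_fit = 0, float('-inf')
    for key in range(26):
        plaintext = caesar_decipher(ciphertext, key)  # assumed helper
        fit = english_fitness(plaintext)              # assumed scorer
        if fit > best_fit:
            best_key, best_fit = key, fit
    return best_key
```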

---

# How close is it to English?

What does English look like?

* We need a model of English.

How do we define "closeness"?

---

# What does English look like?

## Abstraction: letter frequency counts

Letter | Count
-------|------
a | 489107
b | 92647
c | 140497
d | 267381
e | 756288
. | .
. | .
. | .
z | 3575

One way of thinking about this is as a 26-dimensional vector.

Create one vector for our text and one for idealised English.

The distance between the vectors measures how far our text is from English.

---

# Frequencies of English

But before that, how do we count the letters?

* Read a file into a string
```python
open()
read()
```
* Count them
```python
import collections
```

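Put together, counting letters might look like this (a sketch; `collections.Counter` does the tallying, and the filename is only illustrative):

```python
import collections

# Read a sample text and tally every character in it
# (the filename here is just an example)
text = open('sample-english-text.txt').read()
counts = collections.Counter(text.lower())
print(counts.most_common(5))
```
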
---

# Canonical forms

Counting letters in _War and Peace_ gives all manner of junk.

* Convert the text to canonical form (lower case, accents removed, non-letters stripped) before counting

```python
[l.lower() for l in text if ...]
```
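
One possible completion (a sketch; it assumes the `unaccent` helper defined two slides on):

```python
def sanitise(text):
    """Reduce text to canonical form: lower-cased letters only."""
    return ''.join(l.lower() for l in unaccent(text) if l.isalpha())
```
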
---

# Accents

```python
>>> caesar_encipher_letter('é', 1)
```
What does it produce?

What should it produce?

## Unicode, combining codepoints, and normal forms

Text encodings will bite you when you least expect it.

* URL encoding is the other pain point.

---

# Five minutes on StackOverflow later...

```python
import unicodedata

def unaccent(text):
    """Remove all accents from letters.
    Works by converting the string to decomposed compatibility form
    (NFKD), encoding to bytes as ASCII while dropping the combining
    accents, then decoding the bytes back into a string.

    >>> unaccent('hello')
    'hello'
    >>> unaccent('HELLO')
    'HELLO'
    >>> unaccent('héllo')
    'hello'
    >>> unaccent('héllö')
    'hello'
    >>> unaccent('HÉLLÖ')
    'HELLO'
    """
    return unicodedata.normalize('NFKD', text).\
        encode('ascii', 'ignore').\
        decode('utf-8')
```

---

# Vector distances

.float-right[![right-aligned Vector subtraction](vector-subtraction.svg)]

Several different distance measures (__metrics__, also called __norms__):

* L<sub>2</sub> norm (Euclidean distance):
`\(\|\mathbf{a} - \mathbf{b}\| = \sqrt{\sum_i (\mathbf{a}_i - \mathbf{b}_i)^2} \)`

* L<sub>1</sub> norm (Manhattan distance, taxicab distance):
`\(\|\mathbf{a} - \mathbf{b}\| = \sum_i |\mathbf{a}_i - \mathbf{b}_i| \)`

* L<sub>3</sub> norm:
`\(\|\mathbf{a} - \mathbf{b}\| = \sqrt[3]{\sum_i |\mathbf{a}_i - \mathbf{b}_i|^3} \)`

The higher the power used, the more weight is given to the largest differences in components.

(The family extends to:

* L<sub>0</sub> norm (Hamming distance):
`$$\|\mathbf{a} - \mathbf{b}\| = \sum_i \left\{
\begin{matrix} 1 &amp;\mbox{if}\ \mathbf{a}_i \neq \mathbf{b}_i , \\
0 &amp;\mbox{if}\ \mathbf{a}_i = \mathbf{b}_i \end{matrix} \right. $$`

* L<sub>&infin;</sub> norm:
`\(\|\mathbf{a} - \mathbf{b}\| = \max_i{|\mathbf{a}_i - \mathbf{b}_i|} \)`

neither of which will be that useful.)
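
In code, the family might look like this (a sketch; frequency vectors here are plain lists of 26 numbers):

```python
def lp_distance(a, b, p=2):
    """Sketch: L-p distance between two frequency vectors."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

# p=1 gives Manhattan distance, p=2 Euclidean, p=3 the L3 norm
```
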
---

# Normalisation of vectors

Frequency distributions drawn from different sources will have different lengths. For a fair comparison we need to scale them.

* Euclidean scaling (vector with unit length): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|} = \frac{\mathbf{x}}{ \sqrt{\mathbf{x}_1^2 + \mathbf{x}_2^2 + \mathbf{x}_3^2 + \dots } }$$`

* Normalisation (components of vector sum to 1): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\sum_i \mathbf{x}_i} = \frac{\mathbf{x}}{ \mathbf{x}_1 + \mathbf{x}_2 + \mathbf{x}_3 + \dots }$$`

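Both scalings as sketches (vectors are plain lists, as before):

```python
def scale_euclidean(v):
    """Scale a vector to unit Euclidean length."""
    magnitude = sum(x ** 2 for x in v) ** 0.5
    return [x / magnitude for x in v]

def scale_normalised(v):
    """Scale a vector so its components sum to 1."""
    total = sum(v)
    return [x / total for x in v]
```
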
---

# Angle, not distance

Rather than looking at the distance between the vectors, look at the angle between them.

.float-right[![right-aligned Vector dot product](vector-dot-product.svg)]

The vector dot product shows how much of one vector lies in the direction of another:
`\( \mathbf{A} \bullet \mathbf{B} =
\| \mathbf{A} \| \cdot \| \mathbf{B} \| \cos{\theta} \)`

But,
`\( \mathbf{A} \bullet \mathbf{B} = \sum_i \mathbf{A}_i \cdot \mathbf{B}_i \)`
and `\( \| \mathbf{A} \| = \sqrt{\sum_i \mathbf{A}_i^2} \)`

A bit of rearranging gives the cosine similarity:
`$$ \cos{\theta} = \frac{ \mathbf{A} \bullet \mathbf{B} }{ \| \mathbf{A} \| \cdot \| \mathbf{B} \| } =
\frac{\sum_i \mathbf{A}_i \cdot \mathbf{B}_i}{\sqrt{\sum_i \mathbf{A}_i^2} \times \sqrt{\sum_i \mathbf{B}_i^2}} $$`

This is independent of the vectors' lengths!

Cosine similarity is 1 if the vectors are parallel, 0 if they are perpendicular, and -1 if they are antiparallel.

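As a sketch:

```python
def cosine_similarity(a, b):
    """Sketch: cosine of the angle between two frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    mag_a = sum(x ** 2 for x in a) ** 0.5
    mag_b = sum(x ** 2 for x in b) ** 0.5
    return dot / (mag_a * mag_b)
```
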
---

# An infinite number of monkeys

What is the probability that this string of letters is a sample of English?

Given 'th', 'e' is about six times more likely than 'a' or 'i'.

## Naive Bayes, or the bag of letters

Ignore letter order: treat each letter independently.

The probability of a text is `\( \prod_i p_i \)`

(Implementation issue: this product can easily underflow, so get in the habit of rephrasing it as `\( \sum_i \log p_i \)`)

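A sketch of the log form (`letter_probs`, a dict mapping each letter to its relative frequency in English, is assumed):

```python
import math

def log_probability(text, letter_probs):
    """Sketch: log-probability of a text under the bag-of-letters model."""
    return sum(math.log(letter_probs[l]) for l in text if l in letter_probs)
```
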
---

# Which is best?

Measure | Euclidean scaling | Normalised scaling
--------|-------------------|--------------------
L1      | x                 | x
L2      | x                 | x
L3      | x                 | x
Cosine  | x                 | x

And the probability measure!

* Nine different ways of measuring fitness.

## Computing is an empirical science
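
The only way to choose is to test. A hypothetical harness (it assumes `random_ciphertext`, returning a ciphertext and its true key, and a variant of `caesar_break` that takes the fitness function as an argument):

```python
def compare_measures(measures, trials=100):
    """Sketch: count how often each fitness measure finds the right key."""
    scores = {name: 0 for name in measures}
    for _ in range(trials):
        ciphertext, true_key = random_ciphertext()   # assumed helper
        for name, fitness in measures.items():
            if caesar_break(ciphertext, fitness) == true_key:
                scores[name] += 1
    return scores
```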


    </textarea>
    <script src="http://gnab.github.io/remark/downloads/remark-0.6.0.min.js" type="text/javascript">
    </script>

    <script type="text/javascript"
      src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML&delayStartupUntil=configured"></script>

    <script type="text/javascript">
      var slideshow = remark.create({ ratio: "16:9" });

      // Setup MathJax
      MathJax.Hub.Config({
        tex2jax: {
          skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
        }
      });
      MathJax.Hub.Queue(function() {
        $(MathJax.Hub.getAllJax()).map(function(index, elem) {
          return(elem.SourceElement());
        }).parent().addClass('has-jax');
      });
      MathJax.Hub.Configured();
    </script>
  </body>
</html>