From: Neil Smith
Date: Mon, 10 Mar 2014 22:41:38 +0000 (-0400)
Subject: Finished for the day
X-Git-Url: https://git.njae.me.uk/?a=commitdiff_plain;h=7fc1274a02de18710febd4a1f7d89ba7eaea6192;p=cipher-training.git

Finished for the day
---

diff --git a/slides/aims.html b/slides/aims.html
index e830227..fcf0c70 100644
--- a/slides/aims.html
+++ b/slides/aims.html
@@ -70,7 +70,7 @@ This course will cover four things, in increasing order of importance.

diff --git a/slides/caesar-break.html b/slides/caesar-break.html
index 5c95fc2..d552209 100644
--- a/slides/caesar-break.html
+++ b/slides/caesar-break.html
@@ -71,12 +71,113 @@

What does English look like?

How do we define "closeness"?

---

# What does English look like?

## Abstraction: frequency of letter counts

Letter | Count
-------|------
a | 489107
b | 92647
c | 140497
d | 267381
e | 756288
. | .
. | .
. | .
z | 3575

One way of thinking about these counts is as a 26-dimensional vector.

Create one vector from our text, and another from idealised English.

The distance between the two vectors is a measure of how far the text is from English.

---

# Vector distances

## FIXME! Diagram of vector subtraction here

There are several different distance measures (__metrics__, also called __norms__):

* L2 norm (Euclidean distance): `\(|\mathbf{x} - \mathbf{y}| = \sqrt{\sum_i (\mathbf{x}_i - \mathbf{y}_i)^2} \)`

* L1 norm (Manhattan distance, taxicab distance): `\(|\mathbf{x} - \mathbf{y}| = \sum_i |\mathbf{x}_i - \mathbf{y}_i| \)`

* L3 norm: `\(|\mathbf{x} - \mathbf{y}| = \sqrt[3]{\sum_i |\mathbf{x}_i - \mathbf{y}_i|^3} \)`

The higher the power used, the more weight is given to the largest differences in components.

(The family extends to:

* L0 norm (Hamming distance): `\(|\mathbf{x} - \mathbf{y}| = \sum_i \left\{\begin{matrix} 1 &\mbox{if}\ \mathbf{x}_i \neq \mathbf{y}_i , \\ 0 &\mbox{if}\ \mathbf{x}_i = \mathbf{y}_i \end{matrix} \right. \)`

* L∞ norm: `\(|\mathbf{x} - \mathbf{y}| = \max_i{|\mathbf{x}_i - \mathbf{y}_i|} \)`

neither of which will be that useful here.)

---

# Normalisation of vectors

Frequency distributions drawn from different sources will have different lengths. For a fair comparison we need to scale them.

* Euclidean scaling (vector with unit length): `\( \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|} = \frac{\mathbf{x}}{ \sqrt{\mathbf{x}_1^2 + \mathbf{x}_2^2 + \mathbf{x}_3^2 + \dots } }\)`

* Normalisation (components of vector sum to 1): `\( \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|_1} = \frac{\mathbf{x}}{ \mathbf{x}_1 + \mathbf{x}_2 + \mathbf{x}_3 + \dots }\)`

---

# Angle, not distance

Rather than looking at the distance between the vectors, look at the angle between them.

The vector dot product shows how much of one vector lies in the direction of another:
`\( \mathbf{A} \bullet \mathbf{B} = \| \mathbf{A} \| \cdot \| \mathbf{B} \| \cos{\theta} \)`

But,
`\( \mathbf{A} \bullet \mathbf{B} = \sum_i \mathbf{A}_i \cdot \mathbf{B}_i \)`
and `\( \| \mathbf{A} \| = \sqrt{\sum_i \mathbf{A}_i^2} \)`

A bit of rearranging gives the cosine similarity:
`\( \cos{\theta} = \frac{ \mathbf{A} \bullet \mathbf{B} }{ \| \mathbf{A} \| \cdot \| \mathbf{B} \| } =
\frac{\sum_i \mathbf{A}_i \cdot \mathbf{B}_i}{\sqrt{\sum_i \mathbf{A}_i^2} \times \sqrt{\sum_i \mathbf{B}_i^2}} \)`

This is independent of the lengths of the vectors!

Cosine similarity is 1 if the vectors point in the same direction, 0 if they are at 90°, and -1 if they are antiparallel.

## FIXME! Cosine distance bug: frequencies2 length not squared.

## FIXME! Diagram of vector dot product here
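To make the last three slides concrete, here is a minimal sketch in Python. This is not code from the repository: the names `letter_frequencies`, `lp_distance`, `scale_to_unit_length`, and `scale_to_unit_sum` are illustrative. It builds a frequency vector from a text, then applies the Lp distances and the two scalings described above.

```python
# A minimal sketch, not the repository's code: all names are illustrative.
import collections
import string

def letter_frequencies(text):
    """Count occurrences of each letter, giving a 26-dimensional vector
    (a dict keyed by letter), ignoring case and non-letter characters."""
    counts = collections.Counter(c for c in text.lower()
                                 if c in string.ascii_lowercase)
    return {letter: counts[letter] for letter in string.ascii_lowercase}

def lp_distance(xs, ys, p=2):
    """Lp distance between two frequency vectors: the pth root of the sum
    of the pth powers of the component differences.
    p=1 is the Manhattan distance, p=2 the Euclidean distance."""
    return sum(abs(xs[k] - ys[k]) ** p for k in xs) ** (1 / p)

def scale_to_unit_length(xs):
    """Euclidean scaling: divide by the L2 norm, giving a unit-length vector."""
    length = sum(v ** 2 for v in xs.values()) ** 0.5
    return {k: v / length for k, v in xs.items()}

def scale_to_unit_sum(xs):
    """Normalisation: divide by the component sum, so components sum to 1."""
    total = sum(xs.values())
    return {k: v / total for k, v in xs.items()}
```

With an idealised-English vector (call it `english`, built from the counts in the table above), `lp_distance(scale_to_unit_sum(letter_frequencies(text)), scale_to_unit_sum(english))` would then score a candidate decipherment: the smaller the distance, the more English-like the text.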
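The cosine distance FIXME above flags that the length of `frequencies2` is computed without squaring its components. A corrected version might look like the sketch below, again with illustrative names rather than the repository's code: both vector lengths square each component, sum, and then take the square root, matching the corrected formula on the slide.

```python
import math

def cosine_similarity(frequencies1, frequencies2):
    """Cosine of the angle between two frequency vectors: the dot product
    divided by the product of the two vector lengths. Each length squares
    its components before summing, which is the bug the FIXME flags."""
    keys = set(frequencies1) | set(frequencies2)
    numerator = sum(frequencies1.get(k, 0) * frequencies2.get(k, 0)
                    for k in keys)
    length1 = math.sqrt(sum(frequencies1.get(k, 0) ** 2 for k in keys))
    length2 = math.sqrt(sum(frequencies2.get(k, 0) ** 2 for k in keys))
    return numerator / (length1 * length2)
```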
diff --git a/slides/caesar-encipher.html b/slides/caesar-encipher.html
index c4e8fb1..f45a166 100644
--- a/slides/caesar-encipher.html
+++ b/slides/caesar-encipher.html
@@ -120,6 +120,49 @@ if __name__ == "__main__":

---

# Accents

```python
>>> caesar_encipher_letter('é', 1)
```

What does it produce?

What should it produce?

## Unicode, combining codepoints, and normal forms

Text encodings will bite you when you least expect it.

* URL encoding is the other pain point.

---

# Five minutes on StackOverflow later...

```python
import unicodedata

def unaccent(text):
    """Remove all accents from letters.
    It does this by converting the unicode string to decomposed compatibility
    form, dropping all the combining accents, then re-encoding the bytes.

    >>> unaccent('hello')
    'hello'
    >>> unaccent('HELLO')
    'HELLO'
    >>> unaccent('héllo')
    'hello'
    >>> unaccent('héllö')
    'hello'
    >>> unaccent('HÉLLÖ')
    'HELLO'
    """
    return unicodedata.normalize('NFKD', text).\
        encode('ascii', 'ignore').\
        decode('utf-8')
```

---

# Doing all the letters

## Test-first development

@@ -186,7 +229,7 @@
ciphertext = [caesar_encipher_letter(p, key) for p in plaintext]
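To round off the final hunk, here is a sketch of how that list comprehension might sit inside a whole-message function. The body of `caesar_encipher_letter` is not shown in this diff, so the single-letter function below is an illustrative stand-in, not the repository's definition.

```python
import string

def caesar_encipher_letter(letter, shift):
    """Illustrative stand-in: shift a single letter round the alphabet,
    leaving anything that isn't an ASCII letter unchanged."""
    if letter in string.ascii_lowercase:
        base = ord('a')
    elif letter in string.ascii_uppercase:
        base = ord('A')
    else:
        return letter
    return chr((ord(letter) - base + shift) % 26 + base)

def caesar_encipher(plaintext, key):
    """Encipher a whole message by enciphering each letter in turn."""
    ciphertext = [caesar_encipher_letter(p, key) for p in plaintext]
    return ''.join(ciphertext)
```

Under this sketch, `caesar_encipher(unaccent('héllo'), 1)` gives `'ifmmp'`, and `caesar_encipher_letter('é', 1)` returns `'é'` unchanged, which is one plausible answer to the Accents slide's "what should it produce?".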
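On the combining-codepoints point from the Accents slide: 'é' can arrive either as a single precomposed codepoint or as 'e' followed by a combining acute accent, which is why `unaccent` normalises to the NFKD form before stripping. A small standard-library demonstration, not from the slides:

```python
import unicodedata

single = '\u00e9'     # 'é' as one precomposed codepoint
combined = 'e\u0301'  # 'e' followed by a combining acute accent

# The two render identically but compare unequal as codepoint sequences.
print(single == combined)                                  # False
# NFKD decomposes the precomposed form into base letter plus accent.
print(unicodedata.normalize('NFKD', single) == combined)   # True
```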