---

# Back to frequency of letter counts

Letter | Count
-------|------
a | 489107
b | 92647
c | 140497
d | 267381
e | 756288
. | .
. | .
. | .
z | 3575

Another way of thinking about this is as a 26-dimensional vector.

Create one vector from our text, and another from idealised English.

The distance between the two vectors measures how far the text is from English.
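A minimal sketch of building such a vector (the names and the tiny stand-in corpus here are illustrative, not from the original):

```python
import string
from collections import Counter

def letter_vector(text):
    """Return the 26-dimensional vector of counts of 'a'..'z' in text."""
    counts = Counter(c for c in text.lower() if c in string.ascii_lowercase)
    return [counts[letter] for letter in string.ascii_lowercase]

# Stand-ins: in practice, build these from a large English corpus
# and from the candidate decryption respectively.
english = letter_vector('the quick brown fox jumps over the lazy dog')
sample = letter_vector('some candidate plaintext')
```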

---

# Vector distances

.float-right[![right-aligned Vector subtraction](vector-subtraction.svg)]

Several different distance measures (__metrics__, each derived from a __norm__), all sketched in code below:

* L<sub>2</sub> norm (Euclidean distance):
`\(\|\mathbf{a} - \mathbf{b}\| = \sqrt{\sum_i (\mathbf{a}_i - \mathbf{b}_i)^2} \)`

* L<sub>1</sub> norm (Manhattan distance, taxicab distance):
`\(\|\mathbf{a} - \mathbf{b}\| = \sum_i |\mathbf{a}_i - \mathbf{b}_i| \)`

* L<sub>3</sub> norm:
`\(\|\mathbf{a} - \mathbf{b}\| = \sqrt[3]{\sum_i |\mathbf{a}_i - \mathbf{b}_i|^3} \)`

The higher the power used, the more weight is given to the largest differences in components.

(The family extends out to:

* L<sub>0</sub> norm (Hamming distance):
`$$\|\mathbf{a} - \mathbf{b}\| = \sum_i \begin{cases} 1 & \mbox{if}\ \mathbf{a}_i \neq \mathbf{b}_i \\ 0 & \mbox{if}\ \mathbf{a}_i = \mathbf{b}_i \end{cases}$$`

* L<sub>∞</sub> norm:
`\(\|\mathbf{a} - \mathbf{b}\| = \max_i{|\mathbf{a}_i - \mathbf{b}_i|} \)`

neither of which will be that useful here, but they keep cropping up.)
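
A minimal sketch of all five distances, assuming equal-length vectors (function names are mine):

```python
def l1(a, b):
    """Manhattan distance."""
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

def l2(a, b):
    """Euclidean distance."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def l3(a, b):
    """Weights the largest component differences more heavily than L2."""
    return sum(abs(ai - bi) ** 3 for ai, bi in zip(a, b)) ** (1 / 3)

def l0(a, b):
    """Hamming distance: how many components differ."""
    return sum(1 for ai, bi in zip(a, b) if ai != bi)

def linf(a, b):
    """L-infinity: the single largest component difference."""
    return max(abs(ai - bi) for ai, bi in zip(a, b))
```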

---

# Normalisation of vectors

Frequency counts drawn from texts of different lengths will have different magnitudes. For a fair comparison we need to scale them; both scalings are sketched in code after the list.

* Euclidean scaling (vector with unit length): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|_2} = \frac{\mathbf{x}}{ \sqrt{\mathbf{x}_1^2 + \mathbf{x}_2^2 + \mathbf{x}_3^2 + \dots } }$$`

* Normalisation (components of vector sum to 1): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|_1} = \frac{\mathbf{x}}{ \mathbf{x}_1 + \mathbf{x}_2 + \mathbf{x}_3 + \dots }$$`
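
A sketch of both scalings (again, names are mine):

```python
def scale_euclidean(v):
    """Scale v to unit length under the L2 norm."""
    length = sum(x ** 2 for x in v) ** 0.5
    return [x / length for x in v]

def normalise(v):
    """Scale v so that its components sum to 1."""
    total = sum(v)
    return [x / total for x in v]
```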

---

# Angle, not distance

Rather than looking at the distance between the vectors, look at the angle between them.

.float-right[![right-aligned Vector dot product](vector-dot-product.svg)]

Vector dot product shows how much of one vector lies in the direction of another:
`\( \mathbf{A} \bullet \mathbf{B} =
\| \mathbf{A} \| \cdot \| \mathbf{B} \| \cos{\theta} \)`

But,
`\( \mathbf{A} \bullet \mathbf{B} = \sum_i \mathbf{A}_i \cdot \mathbf{B}_i \)`
and `\( \| \mathbf{A} \| = \sqrt{\sum_i \mathbf{A}_i^2} \)`

A bit of rearranging gives the cosine similarity:
`$$ \cos{\theta} = \frac{ \mathbf{A} \bullet \mathbf{B} }{ \| \mathbf{A} \| \cdot \| \mathbf{B} \| } =
\frac{\sum_i \mathbf{A}_i \cdot \mathbf{B}_i}{\sqrt{\sum_i \mathbf{A}_i^2} \times \sqrt{\sum_i \mathbf{B}_i^2}} $$`

This is independent of the vectors' lengths!

Cosine similarity is 1 if the vectors are parallel, 0 if perpendicular, and -1 if antiparallel.
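
A direct transcription of the formula, assuming the same vector representation as before:

```python
def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b."""
    dot = sum(ai * bi for ai, bi in zip(a, b))
    mag_a = sum(ai ** 2 for ai in a) ** 0.5
    mag_b = sum(bi ** 2 for bi in b) ** 0.5
    return dot / (mag_a * mag_b)
```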

---

# Which is best?

Metric | Euclidean scaling | Normalised
-------|-------------------|------------
L1 | x | x
L2 | x | x
L3 | x | x
Cosine | x | x

And the probability measure!

* Nine different ways of measuring fitness.

## Computing is an empirical science

Let's do some experiments to find the best solution!
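
One possible shape for the experiment, reusing the sketches above (everything here is an assumption, not the original code; the probability measure is whichever one was defined earlier):

```python
# Assumes letter_vector, english, the distance functions, and the scalings
# from the earlier sketches. Cosine is negated so smaller = more English-like,
# matching the distance measures.
measures = {
    'l1': l1,
    'l2': l2,
    'l3': l3,
    'cosine': lambda a, b: -cosine_similarity(a, b),
}
scalings = {
    'euclidean': scale_euclidean,
    'normalised': normalise,
}

def distance_to_english(text, measure, scaling):
    """Score a candidate plaintext: lower means closer to English."""
    return measures[measure](scalings[scaling](letter_vector(text)),
                             scalings[scaling](english))
```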