## Abstraction: frequency of letter counts
.float-right[![Letter frequencies](letter-frequency-treemap.png)]

Letter | Count
-------|------
a | 489107
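Counts like the one above can be produced in a few lines; a minimal sketch using `collections.Counter` (the sample sentence is just an illustration, not the corpus behind the table):

```python
import collections

def letter_counts(text):
    """Count occurrences of each lowercase letter in the text."""
    return collections.Counter(c for c in text.lower() if c.isalpha())

counts = letter_counts("A man, a plan, a canal: Panama")
print(counts['a'])  # 'a' dominates this sample: 10 occurrences
```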
Letter      | h       | e       | l       | l       | o       | hello
------------|---------|---------|---------|---------|---------|-------
Probability | 0.06723 | 0.02159 | 0.02748 | 0.02748 | 0.01607 | 1.76244520 × 10<sup>-8</sup>

(Implementation issue: the product `\( \prod_i p_i \)` can often underflow, so we rephrase it as `\( \sum_i \log p_i \)`)
# Five minutes on StackOverflow later...
```python
import unicodedata

def unaccent(text):
    """Remove all accents from letters.

    It does this by converting the unicode string to decomposed compatibility
    form (NFKD), then dropping the combining marks.
    """
    return ''.join(c for c in unicodedata.normalize('NFKD', text)
                   if not unicodedata.combining(c))
```
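A quick check of the function above (redefined here so the snippet is self-contained):

```python
import unicodedata

def unaccent(text):
    """Strip accents via NFKD decomposition, as above."""
    return ''.join(c for c in unicodedata.normalize('NFKD', text)
                   if not unicodedata.combining(c))

print(unaccent('café naïve Zoë'))  # cafe naive Zoe
```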
# Reading letter probabilities
New file: `language_models.py`

1. Load the file `count_1l.txt` into a dict, with letters as keys.
2. Normalise the counts (components of the vector sum to 1): `$$ \hat{\mathbf{x}} = \frac{\mathbf{x}}{\| \mathbf{x} \|_1} = \frac{\mathbf{x}}{x_1 + x_2 + x_3 + \dots} $$`
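The two steps above can be sketched as follows; the exact layout of `count_1l.txt` (one letter and count per line, whitespace-separated) is an assumption:

```python
def read_letter_probabilities(filename):
    """Load letter counts from a file and normalise them into probabilities.

    Assumes each line holds a letter and its count, whitespace-separated.
    """
    counts = {}
    with open(filename) as f:
        for line in f:
            letter, count = line.split()
            counts[letter] = int(count)
    total = sum(counts.values())  # the L1 norm of the count vector
    return {letter: count / total for letter, count in counts.items()}
```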