Lexical Diversity Measurements

On this page, you can calculate a wide range of lexical diversity measures. A reference is given for each measure; if you want to read about the why and how of these calculations, look up the references. If you use this tool for your research, please cite it as follows.

Reuneker, A. (2017). Lexical Diversity Measurements. Retrieved June 12, 2025, from https://www.reuneker.nl/files/ld.

For other calculation tools (such as ngrams, wordlists and keyword analysis), and for contact details, see https://www.reuneker.nl.

Input

Options

* The sample texts are the first chapter of George Orwell's Animal Farm, Sir Arthur Conan Doyle's A Scandal in Bohemia and the first chapter of Charles Darwin's On the Origin of Species. See References for Gutenberg links.

Please take note of the pre-processing (i.e., the steps applied before calculation) performed here.

Output

Tokenization ... text split into words
Frequency list ... all words and their frequencies
Tokens ... total number of words
Types ... number of unique words
Hapax legomena ... number of words occurring only once
Dis legomena ... number of words occurring twice
Type-token ratio (TTR) ... number of types divided by number of tokens
Mean word frequency (MWF) ... number of tokens divided by number of types; these basic counts are sketched after this list
Mean segmental TTR (MSTTR) ... mean TTR for the text segmented into successive chunks of 100 words; only computed for texts longer than 100 words (Johnson 1944)
Moving average TTR (MATTR) ... mean TTR for successive windows of the text; window size set to 25 words (Covington 2007; Covington & McFall 2010); both measures are sketched after this list
Guiraud's Index ... Guiraud (1954); this and the five indices below are sketched after this list
Herdan's C ... Herdan (1960; 1964)
Yule's I ... Yule (1944); Gries (2004)
Yule's K ... Yule (1944); Oakes (2004, pp. 203-5)
Maas's a2 ... Maas (1972); Tweedie & Baayen (1998); Treffers-Daller (2013)
Dugast's U2 ... Dugast (1978; 1979)
Measure of textual lexical diversity (MTLD) ... McCarthy & Jarvis (2010); sketched after this list
Compression rate (GZ) ... The deflate function from zlib is used instead of gzcompress, because the latter adds headers, which penalize shorter texts more severely. The compression rate (0-1) is calculated by subtracting the length of the compressed text divided by the length of the original, uncompressed text from 1. The higher the rate, the higher the compression, meaning the text contains more repetition, and vice versa.
Compression rate (LZW) ... The Lempel–Ziv–Welch algorithm (Welch, 1984) is used. The compression rate is calculated in the same way as above. The higher the rate, the higher the compression, meaning the text contains more repetition, and vice versa. Both rates are sketched after this list.
Processing time ... Yes, a script like this takes only milliseconds.
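
To make the basic counts concrete, here is a minimal sketch in Python. The page's own script is not written in Python, so this is an illustration of the definitions above, not the actual implementation. It takes a list of word tokens, as produced by the tokenization step described under 'Tokenized text' below.

    from collections import Counter

    def basic_counts(tokens):
        # Compute the basic counts listed above from a list of word tokens.
        freq = Counter(tokens)
        n_tokens = len(tokens)                            # Tokens
        n_types = len(freq)                               # Types
        hapax = sum(1 for f in freq.values() if f == 1)   # Hapax legomena
        dis = sum(1 for f in freq.values() if f == 2)     # Dis legomena
        ttr = n_types / n_tokens                          # Type-token ratio
        mwf = n_tokens / n_types                          # Mean word frequency
        return {"tokens": n_tokens, "types": n_types, "hapax": hapax,
                "dis": dis, "TTR": ttr, "MWF": mwf}

For example, basic_counts("the cat saw the other cat".split()) yields 6 tokens, 4 types, a TTR of about 0.67, two hapax legomena ("saw", "other") and two dis legomena ("the", "cat").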
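MSTTR and MATTR both average the TTR over fixed-size stretches of text; the difference is that MSTTR uses successive non-overlapping segments, while MATTR slides a window one token at a time. A sketch, with the sizes as parameters whose defaults mirror the settings stated above:

    def msttr(tokens, seg_len=100):
        # Mean segmental TTR (Johnson 1944): mean TTR over successive,
        # non-overlapping segments; a trailing partial segment is discarded.
        if len(tokens) < seg_len:
            raise ValueError("text shorter than one segment")
        segments = [tokens[i:i + seg_len]
                    for i in range(0, len(tokens) - seg_len + 1, seg_len)]
        return sum(len(set(s)) / len(s) for s in segments) / len(segments)

    def mattr(tokens, window=25):
        # Moving-average TTR (Covington & McFall 2010): mean TTR over every
        # window of `window` consecutive tokens; requires at least one full window.
        ttrs = [len(set(tokens[i:i + window])) / window
                for i in range(len(tokens) - window + 1)]
        return sum(ttrs) / len(ttrs)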
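The six indices above are all computed from the number of tokens (N), the number of types (V) and, for Yule's measures, the frequency spectrum V(i), the number of types occurring exactly i times. Below is a sketch of the textbook formulas. Note that Yule's I is given in its commonly implemented form, I = V² / (M₂ − V); verify against Gries (2004) if exact agreement with the tool matters.

    import math
    from collections import Counter

    def diversity_indices(tokens):
        n = len(tokens)                       # N: number of tokens
        freq = Counter(tokens)
        v = len(freq)                         # V: number of types
        spectrum = Counter(freq.values())     # V(i): types occurring i times
        m2 = sum(i * i * vi for i, vi in spectrum.items())  # second moment
        log_n, log_v = math.log(n), math.log(v)
        return {
            "Guiraud": v / math.sqrt(n),                # Guiraud (1954)
            "Herdan C": log_v / log_n,                  # Herdan (1960)
            "Yule I": v * v / (m2 - v),                 # undefined if every type is a hapax
            "Yule K": 10_000 * (m2 - n) / (n * n),      # Yule (1944)
            "Maas a2": (log_n - log_v) / log_n ** 2,    # Maas (1972)
            "Dugast U2": log_n ** 2 / (log_n - log_v),  # Dugast (1978); undefined when V == N
        }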
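MTLD is algorithmic rather than a closed formula: the text is read token by token, and each stretch over which the running TTR stays above a threshold (0.72 in McCarthy & Jarvis 2010) counts as one "factor"; MTLD is the token count divided by the factor count, averaged over a forward and a backward pass. A sketch:

    def mtld(tokens, threshold=0.72):
        # MTLD (McCarthy & Jarvis 2010): mean length of the token stretches
        # that keep the running TTR above the threshold.
        def one_pass(seq):
            factors, types, count = 0.0, set(), 0
            for tok in seq:
                count += 1
                types.add(tok)
                if len(types) / count < threshold:
                    factors += 1              # factor complete: TTR fell below threshold
                    types, count = set(), 0
            if count > 0:                     # partial credit for the leftover stretch
                factors += (1 - len(types) / count) / (1 - threshold)
            # No completed factor means the text never repeats enough to measure.
            return len(seq) / factors if factors > 0 else float("inf")
        return (one_pass(tokens) + one_pass(tokens[::-1])) / 2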
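Both compression rates can be sketched as follows. The page's script uses PHP's zlib deflate; the Python equivalent of a raw, headerless deflate stream is zlib.compressobj with a negative window-size parameter. The LZW sketch assumes 12-bit codes when estimating the compressed size, so its numbers will not match the tool's exactly.

    import zlib

    def deflate_rate(text):
        # Raw DEFLATE without zlib/gzip headers, so short texts are not
        # penalized by fixed header overhead.
        data = text.encode("utf-8")
        co = zlib.compressobj(9, zlib.DEFLATED, -15)   # wbits=-15: raw stream
        compressed = co.compress(data) + co.flush()
        return 1 - len(compressed) / len(data)

    def lzw_rate(text):
        # Minimal LZW (Welch 1984): one output code per longest dictionary match.
        data = text.encode("utf-8")
        dictionary = {bytes([i]): i for i in range(256)}
        w, codes = b"", []
        for b in data:
            wc = w + bytes([b])
            if wc in dictionary:
                w = wc
            else:
                codes.append(dictionary[w])
                dictionary[wc] = len(dictionary)
                w = bytes([b])
        if w:
            codes.append(dictionary[w])
        # Assume 12 bits (1.5 bytes) per code for the compressed size.
        return 1 - (len(codes) * 1.5) / len(data)

A repetitive input such as "word " repeated 200 times yields a rate close to 1 with either function, while varied prose compresses far less.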

Tokenized text

Below you can see the text in tokenized form. This means that each numbered item should represent a word. The tokenized text is the main ingredient for all analyses of lexical diversity, but tokenization is not always perfect. Therefore, I consider it a good habit to inspect the tokenized text.
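
The exact tokenization rules of the tool are not spelled out here, but a minimal version of the idea, splitting lowercased text into runs of letters, looks like this; real tokenizers handle many more edge cases (hyphens, numbers, Unicode).

    import re

    def tokenize(text):
        # Lowercase, then extract runs of letters, allowing internal
        # apostrophes (so "don't" stays one token). Illustrative only;
        # the tool's actual rules may differ.
        return re.findall(r"[a-z]+(?:'[a-z]+)*", text.lower())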

Frequency list

Below you can see a list of the most frequent words. Most of the time, the top of the list is occupied by function words (determiners, common verbs, et cetera). The list is capped at a maximum number of words and is sorted from most to least frequent.
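
Building such a list is a one-liner with a counting dictionary; the cap of 50 items below is an arbitrary choice for illustration, not the tool's setting.

    from collections import Counter

    def frequency_list(tokens, max_items=50):
        # Most to least frequent, capped at max_items entries.
        return Counter(tokens).most_common(max_items)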

Hapax legomena

Below you can see a list of words that occur only once (hapax legomena).

Dis legomena

Below you can see a list of words that occur twice (dis legomena).
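
Both this list and the hapax list above fall out of the same frequency dictionary. A sketch, where k=1 gives the hapax legomena and k=2 the dis legomena:

    from collections import Counter

    def legomena(tokens, k):
        # Words occurring exactly k times: k=1 for hapax, k=2 for dis legomena.
        return sorted(word for word, f in Counter(tokens).items() if f == k)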