Stretching back into the murky history of the Voynich Manuscript, however, is the lurking suspicion that it is a fraud; either a modern fabrication or, perhaps, a hoax by a contemporary scribe.

One of the more well-known arguments for the authenticity of the manuscript, in addition to its manufacture with period parchment and inks, is that the text appears to follow certain statistical properties associated with human language that were unknown at the time of its creation.

The most well-known of these properties is the claim that word frequencies in the Voynich Manuscript follow a phenomenon known as *Zipf’s Law*, whereby the frequency of a word’s occurrence in the text is inversely proportional to its rank in the list of words ordered by frequency.

In this post, we will scrutinise the extent to which the expected statistical properties of natural languages hold for the arcane glyphs presented by the Voynich manuscript.

Zipf’s Law is an example of a discrete power law probability distribution. Power laws have been found to lurk beneath a sinister variety of ostensibly natural phenomena, from the relative size of human settlements to the diversity of species descended from a particular ancestral freshwater fish.

In its original context of human language, Zipf’s Law states that the most common word in a given language is likely to be roughly twice as common as the second most common word, and three times as common as the third most common word. More precisely, this law holds *for much of the corpus*, as it tends to break down somewhat at both the most-frequent and least-frequent ends of the corpus^{1}. Despite this, we will focus on the original, simpler Zipfian characterisation in this analysis.

The most well-known, if highly flawed, method to determine whether a distribution follows a power law is to plot it with both axes expressed as a log-scale: a so-called log-log plot. A power law, represented in such a way, will appear linear. Unfortunately, a hideous menagerie of other distributions will also appear linear in such a setting.
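Although our analyses proper are conducted in R, the geometry of this claim can be sketched in Python with synthetic Zipfian data: a power law \(f = C r^{-\alpha}\) becomes the straight line \(\log f = \log C - \alpha \log r\), so a linear fit to the logged values recovers the exponent.

```python
import numpy as np

# Synthetic, ideal Zipfian frequencies: f(r) = C / r for ranks 1..1000.
ranks = np.arange(1, 1001)
freqs = 1000.0 / ranks

# On log-log axes a power law is linear; the slope of a straight-line
# fit estimates the (negated) exponent.
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(round(slope, 2))  # -1.0 for ideal Zipfian data
```

For real corpora the fit is never this clean, which is precisely why visual linearity alone proves so little.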

More generally, it is rarely sensible to claim that any natural phenomenon *follows* a given distribution or model, but instead to demonstrate that a distribution presents *a useful model* for a given set of observations. Indeed, it is possible to fit any set of observations to a power law; the fit may simply be poor. Ultimately, we can do little more than demonstrate that a given model is the best simulacrum of observed reality, subject to the uses to which it will be put. Certainly, a more Bayesian approach would advocate building a range of models and demonstrating whether the power law is the most accurate among them. All truth, it seems, is relative.

Faced with the awful statistical horror of the universe, we are reduced to seeking evidence *against* a phenomenon’s adherence to a given distribution. Our first examination, then, is to see whether the basic log-log plot supports or undermines the Voynich Manuscript.

A crude visual analysis certainly supports the argument that, for much of the upper half of the Voynich corpus, there is a linear relationship on the log-log plot consistent with Zipf’s Law. As mentioned, however, this superficial appeal to our senses leaves a gnawing lack of certainty in the conclusion. We must turn to less fallible tools.

The poweRlaw package for R is designed specifically to exorcise these particular demons. This package attempts to fit a power law distribution to a series of observations, in our case the word frequencies observed in the corpus of Voynich text. With the fitted model, we then attempt to *disprove* the null hypothesis that the data is drawn from a power law. If this attempt to betray our own model fails, then we attain an inverse enlightenment: there is insufficient evidence that the data is *not* drawn from a power law.

This is an inversion of the more typical frequentist null hypothesis scenario. In such approaches, we usually hope for a low p-value, below 0.05 or even 0.001, showing that the chance of observing such data under the null hypothesis is extremely low. For this test, we instead hope that our p-value is *insufficiently* low to make such a claim, and thus that a power law *is* consistent with the data.
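The poweRlaw package performs this goodness-of-fit test internally via bootstrapping, following Clauset et al. A simplified sketch of the logic in Python, using a continuous power law and synthetic data standing in for word frequencies (the discrete case demanded by real counts is analogous but fiddlier):

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_alpha(x, xmin):
    # Continuous maximum-likelihood estimate of the exponent.
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

def ks_stat(x, alpha, xmin):
    # Kolmogorov-Smirnov distance between data and the fitted CDF.
    x = np.sort(x)
    cdf = 1.0 - (x / xmin) ** (1.0 - alpha)
    ecdf = np.arange(1, len(x) + 1) / len(x)
    return np.max(np.abs(ecdf - cdf))

# Synthetic power-law sample standing in for observed frequencies.
xmin, alpha_true, n = 1.0, 2.5, 500
data = xmin * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))

alpha_hat = fit_alpha(data, xmin)
d_obs = ks_stat(data, alpha_hat, xmin)

# Bootstrap: simulate from the fitted model, refit, and compare KS
# distances. A large p-value means we cannot reject the power law.
exceed = 0
for _ in range(200):
    sim = xmin * (1.0 - rng.random(n)) ** (-1.0 / (alpha_hat - 1.0))
    if ks_stat(sim, fit_alpha(sim, xmin), xmin) >= d_obs:
        exceed += 1
p_value = exceed / 200
```

The real package also estimates `xmin` from the data rather than assuming it, which complicates the bootstrap considerably.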

The diagram above shows a fitted parameterisation of the power law according to the poweRlaw package. In addition to the visually appealing fit of the line, the weirdly inverted logic of the above test provides a p-value of `0.151`. We thus have as much confidence as we can have, via this approach, that a power law is a reasonable model for the text in the Voynich corpus.

Led further down twisting paths by this initial taste of success, we can now present the Voynich corpus against other human-language corpora to gain a faint impression of how similar it is to known languages. The following plot compares the frequency of words in the Voynich Manuscript to those of the twenty most popular languages in Wikipedia, taken from the dataset available here.

The Voynich text seems consistent with the behaviour of known natural languages from Wikipedia. The most striking difference is the clustering of Voynich word frequencies in the lower half of the diagram, resulting from the smaller corpus of words in the Voynich Manuscript. In particular, this causes many lower-frequency words to occur an identical number of times, resulting in vertical leaps towards the lower end of the frequency graph.

To highlight this phenomenon, we can apply a similar technique to another widely-translated short text: the United Nations Declaration of Human Rights.

The above arguments might at first appear compelling. The surface incomprehensibility of the Voynich Manuscript succumbs to the deep currents of statistical laws, and reveals an underlying pattern amongst the chaos of the text.

Sadly, however, as with all too many arguments in the literature regarding power law distributions arising in nature, there is a complication to this argument that again highlights the difference between proof and the failure to disprove. Certainly, if a power law had proved incompatible with the Voynich Manuscript then we would have doubted its authenticity. With its apparent adherence to such a distribution, however, we have taken only one hesitant step towards confidence.

Rugg has argued that certain random mechanisms can produce text that adheres to Zipf’s Law, and has demonstrated a simple mechanical procedure for doing so. A more compelling argument is presented, without reference to the Voynich Manuscript, by Li (1992)^{2}, who demonstrates that a text drawn entirely at random from any given alphabet of symbols that includes a space will adhere to some form of Zipf-like distribution. We cannot hang our confidence on such a slender thread.

While Zipf’s Law has been shown to hold for human language text, and a text that does not demonstrate it is certainly suspect, it is far from being the only telltale statistical property of natural language. We have already briefly examined sequences of repeated words in the text; we will now delve further.

Another curious distortion of human languages is that they demonstrate a preference for shorter words. The precise mechanism that results in this apparently universal property is unclear, but likely relates somehow to efficiency of communication. Regardless of the deeper causality, in most natural language texts there is a markedly higher frequency of short words than longer words.

As demonstrated by Sigurd, Eeg-Olofsson, and van Weijer (2004)^{3}, however, the very shortest words are not the most common. Instead, at least for English, Swedish, and German, words between 3 and 5 letters in length dominate. This property can be accurately modelled by an appropriately parameterised Gamma distribution.

Notably, and conveniently, this property does not hold for random texts as described above. Such purely stochastic texts would be expected to produce, in the long term, a monotonically decreasing frequency as word length increases.

To demonstrate this effect, we can simulate a purely random text along the lines discussed by Li, and show its correspondingly naïve descending plot.
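A minimal sketch of such a simulation, in Python rather than the R of our analyses, with a hypothetical ten-letter alphabet:

```python
import random

random.seed(1)

# Random text in the spirit of Li (1992): characters drawn uniformly
# from a hypothetical ten-letter alphabet plus the space character.
alphabet = "abcdefghij "
text = "".join(random.choice(alphabet) for _ in range(200_000))
words = text.split()

# Tally word lengths; for purely random text the expected counts
# decrease geometrically, so one-letter pseudo-words dominate.
length_counts = {}
for w in words:
    length_counts[len(w)] = length_counts.get(len(w), 0) + 1

for length in sorted(length_counts)[:6]:
    print(length, length_counts[length])
```

Word lengths here follow a geometric distribution, since each additional letter simply requires one more non-space draw in succession.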

As can be seen, the word-length frequency of this random text forms a comfortingly simple exponential curve, with pseudo-words of one letter being by far the most common. It is also notable that, at the far reaches of the probability distribution, this cacophonous experiment will produce words of almost four-hundred letters in length. While adherence to Zipf’s Law would have misled us into supporting this as an apparently natural language, even a cursory glance at this plot would have convinced us otherwise.

How, then, does the Voynich Manuscript adhere to the expected Gamma distribution of Sigurd et al.? We can employ the excellent fitdistrplus package to peel back this particular veil.
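For those who prefer Python’s particular horrors, an equivalent maximum-likelihood Gamma fit can be sketched with SciPy; the word-length tallies below are hypothetical stand-ins for the Voynich counts, not the real data:

```python
import numpy as np
from scipy import stats

# Hypothetical word-length tallies peaking at four letters, standing
# in for the distribution described in the text.
counts = {1: 2, 2: 10, 3: 24, 4: 30, 5: 20, 6: 8, 7: 4, 8: 2}
lengths = np.repeat(list(counts.keys()), list(counts.values()))

# Maximum-likelihood Gamma fit, pinning the location at zero to give
# the two-parameter Gamma used by fitdistrplus.
shape, loc, scale = stats.gamma.fit(lengths, floc=0)
mode = (shape - 1.0) * scale  # mode of a Gamma with shape > 1
```

The fitted mode lands near four letters, matching the peak of the tallies fed in.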

The word-length frequency distribution of the Voynich Manuscript clearly demonstrates a preference for four-letter words, not only breaking free of the confines of pure randomness, but also corresponding broadly with observed frequency patterns of the languages tested by Sigurd et al.

These analyses can only present a dim outline of the text itself, and we resist the awful temptation to attempt any form of decipherment. Certainly, the evidence here seems convincing enough that the Voynich Manuscript does represent a human language, but the statistics presented here are of little use in such an effort. It is likely, of course, that the most frequent words in the manuscript may, under certain assumptions, correspond to the most common words or particles in many languages — the definite article, the indefinite article, conjunctions, pronouns, and similar. Without deeper knowledge of the language, however, and with the range of scribing conventions and shortcuts commonplace in texts of the period, these techniques are too limited to do more than tantalise us with what we may never know.

Subjecting the text of the Voynich Manuscript to the crude frequency analyses presented here can support, although not prove, the view that the manuscript, regardless of its true content, is not simply random gibberish. Nor is the text likely to be the result of a simple mechanical process designed without knowledge of the statistical patterns of human languages. Neither is it likely to be any form of cryptogram more sophisticated than the simplest ciphers, as these would have tended to compromise the statistical properties that we have observed.

The demonstrable adherence to Zipf’s Law, and the fit to a Gamma distribution of similar shape to those of known languages, strongly suggest that the text is a representation of some natural language.

In the next post we will attempt blindly to wrench more secrets from the text itself through application of modern textual analysis techniques. Until then the Voynich Manuscript remains, silently obscure, beyond the reach of our faltering science.

While the world abounds with strange phenomena ripe for analysis in their raw state, there is a peculiar pleasure in scrutinising arcane information curated and obscured by the human mind.

The Voynich Manuscript is one of the most well-known and studied volumes of occult knowledge. The book’s most recent history involves its purchase in 1912 by Wilfrid Voynich, a rare book dealer, from a sale of manuscripts by the Society of Jesus at the Villa Mondragone, Frascati. Following several fruitless years of attempts to decipher the manuscript and discover its origin, or to interest others in it, Wilfrid Voynich died. The book passed through a number of other hands before being donated to Yale University by the noted rare book dealer Hans P. Kraus in 1969. It now resides in Yale’s Beinecke Rare Book and Manuscript Library with the designation MS 408.

Written almost entirely in an unknown script, barring a small number of words apparently in Latin and High German, the manuscript is compellingly illustrated with depictions of plants, herbs, human figures, and astronomical and astrological symbols. The manuscript has resisted all attempts at interpretation by cryptographers, historians, and linguists.

From a linguistic and cryptographic perspective, this lack of success in interpretation is not surprising. The two hundred and forty or so pages of the manuscript, while beautifully illuminated, present a sadly limited corpus of text for the purposes of traditional analysis.

In this short series of posts we will subject the Voynich Manuscript to a range of text analysis techniques, delving into its structure, gaining horrific insight into its composition, and skeptically assessing its credibility. The manuscript has been subjected to almost fifty years of furtive attempts by cryptographers, including the US National Security Agency and a menagerie of others from the distinguished to the deranged. We will crudely mimic some earlier results, and hopefully add our own confusion to the roiling mass of current research into the Voynich Manuscript.

Since its discovery, and throughout the ongoing unsuccessful attempts to decipher its contents, many have questioned the authenticity of the Voynich Manuscript. The theory that the entire book is a hoax, either by contemporary scribes or by more modern players, has been raised repeatedly over the years.

Radiocarbon dating in 2010 indicated that the manuscript’s parchment likely dates from the early 15th century; the volume of parchment in the manuscript, and its consistency across the document, make it unlikely, although not impossible, that the book is a modern-day hoax.

Other supporting evidence has been drawn from early mentions of the manuscript in correspondence. According to http://www.voynich.nu, which presents a far more detailed and thorough description of the research around the manuscript and its history than we could hope to offer here, the first extant mention of the manuscript can be found in a 1639 letter from Athanasius Kircher in Rome, replying to a letter forwarded from Georgius Barschius of Prague by the mathematician Theodor Moretus.

The letter refers to a “book of mysterious steganography” (*“libellum… …steganographici mysteriis”*) illustrated with pictures of plants, stars and chemical secrets that Kircher had not yet had time to decipher. Barschius had sought out Kircher’s expertise due to his fame at the time for claiming to have, erroneously as it later transpired, deciphered the hieroglyphic writing system of the Ancient Egyptian language. Later correspondence between Barschius and Kircher appears, according to Zandbergen^{1}, to suggest strongly from its description that the mysterious book in question is the Voynich Manuscript.

We now turn from historical sources to darker, more statistical realms. There is compelling support for the notion that, regardless of the true meaning of the book, its contents are drawn from a human language and are neither random symbols nor any form of sophisticated cipher.

One of the pillars of this argument is that certain statistical properties of the Voynich Manuscript’s text strongly resemble those of natural human languages, and are unlikely, although not impossible, to arise from random text, artificially generated text, or most forms of encipherment.

The most well-known of these statistical properties is the apparent adherence of the manuscript to Zipf’s Law. This law, made famous by the US linguist George Zipf, observes that in corpora of natural languages, the frequency of a word is inversely proportional to its rank when the words of a corpus are ordered by frequency. More plainly: the most common word in a language is likely to be roughly twice as common as the second most common word, three times as common as the third most common word, and so on. Whilst merely an approximation, this law can be seen to hold for most human languages, and for a range of other natural phenomena.
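In its idealised form, with an exponent of one, the law predicts harmonic frequencies down the ranking; a trivial sketch, assuming a hypothetical count for the top-ranked word:

```python
# Under an ideal Zipf's Law, the word of rank r occurs f(1) / r times.
f1 = 6000  # hypothetical frequency of the most common word
predicted = [f1 / r for r in range(1, 6)]
print(predicted)  # [6000.0, 3000.0, 2000.0, 1500.0, 1200.0]
```

Real corpora wobble around these ideal ratios, particularly at the extremes of the ranking.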

Random gibberish, on the other hand, would most likely not follow Zipf’s Law, although carefully crafted gibberish certainly could. Rugg has demonstrated that a simple mechanical procedure can produce randomised text that adheres to Zipf’s Law, although the example he provides is both somewhat contrived and also presupposes a knowledge of this statistical quirk of human languages in the first place. Given that the physical makeup of the Voynich Manuscript dates to the early 15th century, some four centuries before Zipf popularised this mathematical assessment of human languages, the argument that it is a contemporary act of calligraphic glossolalia seems strained.

Similarly, most forms of cryptography beyond the simplest substitution ciphers would skew the text away from Zipf’s Law. It is notable that the Voynich Manuscript predates even works such as Trithemius’s Steganographia, or the Book of Soyga and its magic tables of letters that so obsessed John Dee.

In contrast, however, it has been claimed that other features of the text raise doubts. One of the most commonly stated counter-arguments to the natural-language hypothesis for the Voynich text is that some words are repeated an unnatural number of times. Depending on the transcription, individual words have been reported to be repeated up to five times in succession. Whilst this is not an impossible occurrence in human language, it is highly irregular.

The next post in this short series will focus on the Voynich Manuscript’s adherence, or lack thereof, to Zipf’s Law in full. Following that, we will see the extent to which other forms of modern textual analysis can be applied to dissect the arcane and unrelenting secrets of MS 408.

This post, however, will describe the contortions required to render the Voynich text suitable for our particular form of scrutiny.

Given the format and presentation of the text, we make several assumptions about the writing system contained in the Voynich Manuscript:

- It is written in an alphabet, or potentially an abjad or even an abugida^{2}, and not a logographic system. That the text is not logographic is justified by the small number of individual symbols. The distinction between the other systems is sufficiently subtle that it will not affect our analyses^{3}.
- The manuscript is written from left to right, and not right to left, vertically, or boustrophedon. This is uncontroversial and apparent from even a cursory inspection of the text itself: the horizontal flow of the writing is clear, with lines clearly starting at the left margin and ending before the right. The text is separated into paragraphs, of which the final line is justified to the left.

Due to the diligent activity of several generations of Voynich researchers, the text of the manuscript has been transcribed into a machine-readable format. As the alphabet is unknown, there are minor uncertainties in rendering the text, leading to a number of similar but competing transcriptions. The subtle details of the various transcription efforts, and their history, are available at: http://www.voynich.nu/transcr.html, with the raw data available at http://www.voynich.nu/data/. We have settled on the v101 transliteration by Glen Claston, rendered in the Intermediate Voynich Transliteration File Format (IVTFF) of Zandbergen. This is one of the more recent and widely-used transcriptions, and has the added advantage of being supported by the availability of a TrueType font. The underlying file is available here: http://www.voynich.nu/data/GC_ivtff_s.txt.

We perform the following steps to make the data usable for our analyses. For many scenarios, we would develop a generalisable set of steps to allow conversion of many documents to an appropriate form. Until and unless, however, a new cache of documents in the same language is found, it is simpler and easier to perform these one-time steps manually.

Firstly, we delete from the text all incomplete words, as marked in the IVTFF format. This includes:

- all text in angle brackets
- all words containing a “?”
- all words containing square brackets

Secondly, we tokenize the text and remove punctuation. The transcription of the Voynich manuscript that we have chosen uses the following punctuation:

- “.” is a space
- “,” is a potential space. For simplicity, we do not treat these as a space.

Finally, we organize the document in an appropriate form to be imported into an R data frame, or tidyverse tibble.

The above steps were performed manually in the Vim text editor.
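An equivalent cleanup can be sketched in Python; the IVTFF-style sample line below is hypothetical, and the transcription’s real markup is considerably richer:

```python
import re

def clean_ivtff_line(line):
    # Drop inline angle-bracket markup (locators and comments).
    line = re.sub(r"<[^>]*>", "", line)
    # "." marks a word break; "," is a potential break we ignore,
    # so the comma is simply deleted and the word kept whole.
    words = [w.strip().replace(",", "") for w in line.split(".")]
    # Discard empty and incomplete words ("?" or bracketed glyphs).
    return [w for w in words
            if w and "?" not in w and "[" not in w and "]" not in w]

print(clean_ivtff_line("<f1r.P.1;H> daiin.ch?dy.ok,al.[sh]or"))
```

The uncertain reading `ch?dy` and the bracketed `[sh]or` are discarded, while `ok,al` survives as the single word `okal`.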

The resulting raw data file is available here. This file can be read into R simply by use of the `read_csv` function from the tidyverse’s readr package:

voynich_tbl <- read_csv( "data/voynich_raw.txt", col_names = FALSE ) %>%
  rename( folio = X1, text = X2 )

As a first, horrifying glance into the forms of analysis that this allows, we can now use our raw data to identify the most repeated words in the manuscript, according to our transcription. Encoding the entirety of the text as a run-length encoding conveniently yields a sequential list of words and the number of times that each is repeated *in sequence*; we can then simply extract the largest number of repetitions for each word in the corpus.
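The run-length approach can be sketched in Python, with a hypothetical token stream standing in for the transcribed corpus:

```python
from itertools import groupby

# Hypothetical token stream standing in for the transcribed corpus.
tokens = ["daiin", "daiin", "daiin", "chol", "shedy", "shedy", "daiin"]

# Run-length encode the sequence, then keep each word's longest run,
# mirroring the rle() approach used in the R analysis.
max_runs = {}
for word, run in groupby(tokens):
    n = len(list(run))
    max_runs[word] = max(max_runs.get(word, 0), n)

print(max_runs)  # {'daiin': 3, 'chol': 1, 'shedy': 2}
```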

This simple analysis shows that, in the transcription we have chosen, the longest sequences of repeated words are only three words in length, occurring a total of five times in the text. While there are many other arguments against the potential validity of the Voynich Manuscript, word repetition does not in itself present a compelling reason to doubt that the text is a human language.

We have now reduced the strange and beautiful elegance of the Voynich Manuscript’s centuries-old illuminations to a crude, utilitarian abstraction. With this particular act of artistic and literary desecration complete, in the next post we will examine Zipf’s Law in more detail, and interrogate the extent to which this law supports or undermines the text’s authenticity.

In the previous three posts^{1} in our series delving into the cosmic horror of UFO sightings in the US, we have descended from the deceptively warm and sunlit waters of basic linear regression, through the increasingly frigid, stygian depths of Bayesian inference, generalised linear models, and the probabilistic programming language Stan.

In this final post we will explore the implications of the murky realms in which we find ourselves, and consider the awful choices that have led us to this point. We will therefore look, with merciful brevity, at the foul truth revealed by our models, but also consider the arcane philosophies that lie sleeping beneath.

Our crazed wanderings through dark statistical realms have led us eventually to a varying slope, varying intercept negative binomial generalised linear model, whose selection was justified over its simpler cousins via leave-one-out cross-validation (LOO-CV). By interrogating the range of hyperparameters of this model, we could reproduce an alluringly satisfying visual display of the posterior predictive distribution across the United States:

Further, our model provides us with insight into the individual per-state intercept \(\alpha\) and slope \(\beta\) parameters of the underlying linear model, demonstrating that there is variation between the rate of sightings in US states that cannot be accounted for by their ostensibly human population.

Interpreting these parameters, however, is not quite as simple as in a basic linear model^{2}. Most importantly, our negative binomial GLM employs a *log link* function to relate the linear model to the data:

$$\begin{eqnarray}
y &\sim& \mathbf{NegBinomial}(\mu, \phi)\\
\log(\mu) &=& \alpha + \beta x\\
\alpha &\sim& \mathcal{N}(0, 1)\\
\beta &\sim& \mathcal{N}(0, 1)\\
\phi &\sim& \mathbf{HalfCauchy}(2)
\end{eqnarray}$$

In a basic linear regression, \(y=\alpha+\beta x\), the \(\alpha\) parameter can be interpreted as the value of \(y\) when \(x\) is 0. Increasing the value of \(x\) by 1 results in a change in the \(y\) value of \(\beta\). We have, however, been drawn far beyond such naive certainties.

The \(\alpha\) and \(\beta\) coefficients in our negative binomial GLM produce the \(\log\) of the \(y\) value: the *mean* of the negative binomial in our parameterisation.

With a simple rearrangement, we can begin to understand the grim effects of this transformation:

$$\begin{array}{rrcl}
& \log(\mu) &=& \alpha + \beta x\\
\Rightarrow & \mu &=& \operatorname{e}^{\alpha + \beta x}
\end{array}$$

If we set \(x=0\):

$$\begin{eqnarray}

\mu_0 &=& \operatorname{e}^{\alpha}

\end{eqnarray}$$

The mean of the negative binomial when \(x\) is 0 is therefore \(\operatorname{e}^{\alpha}\). If we increase the value of \(x\) by 1:

$$\begin{eqnarray}

\mu_1 &=& \operatorname{e}^{\alpha + \beta}\\

&=& \operatorname{e}^{\alpha} \operatorname{e}^{\beta}

\end{eqnarray}$$

Which, if we recall the definition of the underlying mean of our model’s negative binomial, \(\mu_0\), above, is:

$$\mu_0 \operatorname{e}^{\beta}$$

The effect of an increase in \(x\) is therefore *multiplicative* with a log link: each increase of \(x\) by 1 causes the mean of the negative binomial to be further multiplied by \(\operatorname{e}^{\beta}\).
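A quick numerical check of this multiplicative behaviour, with hypothetical values for the coefficients:

```python
import numpy as np

alpha, beta = 1.2, 0.4  # hypothetical fitted coefficients

def mu(x):
    # Mean of the negative binomial under the log link.
    return np.exp(alpha + beta * x)

# Each unit increase in x multiplies the mean by e^beta,
# regardless of the starting value of x.
ratio = mu(5) / mu(4)
print(np.isclose(ratio, np.exp(beta)))  # True
```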

Despite this insidious complexity, in many senses our naive interpretation of these values still holds true. A higher value for the \(\beta\) coefficient does mean that the rate of sightings increases more swiftly with population.

With the full, unsettling panoply of US States laid out before us, any attempt to elucidate their many and varied deviations would be overwhelming. Broadly, we can see that both slopes and intercepts are generally restricted to a fairly close range, with the 50% and 95% credible intervals notably overlapping in many cases. Despite this, there are certain unavoidable abnormalities from which we cannot, must not, shrink:

- Only Pennsylvania presents a slope (\(\beta\)) parameter that could be considered as potentially zero, if we consider its 95% credible interval. The correlation between population and number of sightings is otherwise unambiguously positive.
- Delaware, whilst presenting a wide credible interval for its slope (\(\beta\)) parameter, stands out as suffering from the greatest rate of change in sightings as its population increases.
- Both California and Utah present suspiciously narrow credible intervals on their slope (\(\beta\)) parameters. The growth in sightings as the population increases therefore demonstrates a worrying consistency although, in both cases, this rate is amongst the lowest of all the states.

We can conclude, then, that while the *total* number of sightings in Delaware is currently low, any increase in numbers of residents there appears to possess a strange fascination for visitors from beyond the inky blackness of space. By contrast, whilst our alien observers have devoted significant resources to monitoring Utah and California, their apparent willingness to devote further effort to tracking those states’ burgeoning populations is low.

One of the fundamental elements of the Bayesian approach is its willing embrace of uncertainty. The outputs of our eldritch inferential processes are not *point estimates* of the outcome, as in certain other approaches, but instead *posterior predictive distributions* for those outcomes. As such, when we turn our minds to predicting new outcomes based on previously unseen data, our outcome is a *distribution* over possible values rather than a single estimate. Thus, at the dark heart of Bayesian inference is the belief that all uncertainty can be quantified as probability distributions.

The Bayesian approach as inculcated here has a *predictive* bent to it. These intricate methods lend themselves to forecasting a distribution of possibilities before the future unveils itself. Here, we gain a horrifying glimpse into the emerging occurrence of alien visitations to the US as its people busy themselves about their various concerns, scrutinised and studied, perhaps almost as narrowly as a man with a microscope might scrutinise the transient creatures that swarm and multiply in a drop of water.

The twisted reasoning underlying this series of posts has been not only to indoctrinate others into the hideous formalities of Bayesian inference, probabilistic programming, and the arcane subtleties of the Stan programming language, but also to expose our own minds to their horrors. As such, there is a tentative method to the madness of some of the choices made in this series, which we will now elucidate.

Perhaps the most jarring choice has been our decision to code these models in Stan directly, rather than using one of the excellent helper libraries that allow for more concise generation of the underlying Stan code. Both `brms` and `rstanarm` possess the capacity to spawn models such as ours with greater simplicity of specification and efficiency of output, due to a number of arcane tricks. As an exercise in internalising such forbidden knowledge, however, it is useful to address reality unshielded by such swaddling conveniences.

In fabricating models for more practical reasons, however, we would naturally turn to these tools unless our unspeakable demands go beyond their natural scope. As a personal choice, `brms` is appealing due to its more natural construction of readable per-model Stan code to be compiled. This allows for the grotesque internals of generated models to be inspected and, if required, twisted to whatever form we desire. `rstanarm`, by contrast, avoids per-model compilation by pre-compiling more generically applicable models, but its underlying Stan code is correspondingly more arcane for an unskilled neophyte.

The Stan models presented in previous posts have also been constructed as simply as possible and have avoided all but the most universally accepted tricks for improving speed and stability^{3}. Most notably, Stan presents specific functions for GLMs based on the Poisson and negative binomial distributions that apply standard link functions directly. As mentioned, we consider it more useful for personal and public indoctrination to use the basic, albeit `log`-form, parameterisations.

In concluding the dark descent of this series of posts on Bayesian inference, generalised linear models, and the unearthly effects of extraterrestrial visitations on humanity, we have applied numerous esoteric techniques to identify, describe, and quantify the relationship between human population and UFO sightings. The enigmatic model constructed throughout this and the previous three entries darkly implies that, while the rate of inexplicable aerial phenomena is inextricably and positively linked to humanity’s unchecked growth, there are nonetheless unseen factors that draw our non-terrestrial visitors to certain populations more than others, and that their focus and attention is ever more acute.

This series has inevitably fallen short of a full and meaningful elucidation of the techniques of Bayesian inference and Stan. From this first step on such a path, then, interested students of the bizarre and arcane would be well advised to draw on the following esoteric resources:

- McElreath’s Statistical Rethinking
- Gelman et al.’s Bayesian Data Analysis
- Stan Manual and Tutorials

Until then, watch the skies and archive your data.

In the previous post of this series unveiling the relationship between UFO sightings and population, we crossed the threshold of normality underpinning linear models to construct a *generalised linear model* based on the more theoretically satisfying Poisson distribution.

On inspection, however, this model revealed itself to be less well suited to the data than we had, in our tragic ignorance, hoped. While it appeared, on visual inspection, to capture some features of the data, the predictive posterior density plot demonstrated that it still fell short of addressing the subtleties of the original.

In this post, we will seek to overcome this sad lack in two ways: firstly, we will subject our models to pitiless mathematical scrutiny to assess their ability to describe the data. With our eyes irrevocably opened to these techniques, we will construct an ever more complex armillary with which to approach the unknowable truth.

Our previous post showed the different fit of the Poisson model to the data from the simple Gaussian linear model. When presented with a grim array of potential simulacra, however, it is crucial to have reliable and quantitative mechanisms to select amongst them.

The eldritch procedure most suited to this purpose, *model selection*, in our framework, draws on *information criteria* that express the *relative* effectiveness of models at creating sad mockeries of the original data. The original and most well-known such criterion is the *Akaike Information Criterion*, which has, in turn, spawned a multitude of successors applicable in different situations and with different properties. Here, we will make use of *Leave-One-Out Cross Validation* (LOO-CV)^{1} as the most applicable to the style of model and set of techniques applied here.
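As a sketch of how these quantities are obtained in practice, the `loo` package in R can compute LOO-CV estimates from a fitted Stan model, provided the model records its pointwise log-likelihood in a `generated quantities` block. The fitted-model object names below are illustrative assumptions, not code from this series:

```r
library(rstan)
library(loo)

# Pointwise log-likelihood matrices from each fitted model; assumes each
# Stan program defines a `log_lik` vector in its generated quantities.
log_lik_normal  <- extract_log_lik( fit_normal )
log_lik_poisson <- extract_log_lik( fit_poisson )

# LOO-CV estimates (including elpd_loo) for each model
loo_normal  <- loo( log_lik_normal )
loo_poisson <- loo( log_lik_poisson )

# Relative comparison: reports elpd_diff and its standard error
compare( loo_normal, loo_poisson )
```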

It is important to reiterate that these approaches do not speak to an absolute underlying truth; information criteria allow us to choose between models, assessing which has most closely assimilated the madness and chaos of the data. For LOO-CV, this results in an *expected log predictive density* (`elpd`) for each model. The model with the highest `elpd` is the least-warped mirror of reality amongst those we subject to scrutiny.

There are many fragile subtleties to model selection, of which we will mention only two here. Firstly, in general, the greater the number of predictors or variables incorporated into a model, the more closely it will be able to mimic the original data. This is problematic, in that a model can become *overfit* to the original data and thus be unable to represent previously unseen data accurately — it learns to mimic the form of the observed data at the expense of uncovering its underlying reality. The LOO-CV technique avoids this trap by, in effect, withholding data from the model to assess its ability to make accurate inferences on previously unseen data.

The second consideration in model selection is that the information criteria scores of models, such as the `elpd` in LOO-CV, are subject to *standard error* in their assessment; the score itself is not a perfect metric of model performance, but a cunning approximation. As such, we will only consider one model to have outperformed its competitors if the difference in their relative `elpd` is several times greater than this standard error.

With this understanding in hand, we can now ruthlessly quantify the effectiveness of the Gaussian linear model against the Poisson generalised linear model.

The original model presented before our subsequent descent into horror was a simple linear Gaussian, produced through use of `ggplot2`’s `geom_smooth` function. To compare this meaningfully against the Poisson model of the previous post, we must now recreate this model using the, by now hideously familiar, tools of Bayesian modelling with Stan.

With both models straining in their different directions towards the light, we apply LOO-CV to assess their effectiveness at predicting the data.

```
> compare( loo_normal, loo_poisson )
elpd_diff        se
  -8576.1     712.5
```

The information criterion shows that the complexity of the Poisson model does not, in fact, produce a more effective model than the false serenity of the Gaussian^{2}. The negative `elpd_diff` from the `compare` function supports the first of the two models, and its magnitude, over twelve times greater than the standard error, leaves little doubt that the difference is significant. We must, it seems, look further.

With these techniques for selecting between models in hand, then, we can move on to constructing ever more complex attempts to dispel the darkness.

The Poisson distribution, whilst appropriate for many forms of count data, suffers from fundamental limits to its understanding. The single parameter of the Poisson, \(\lambda\), enforces that the mean and variance of the data are equal. When such comforting falsehoods wither in the pale light of reality, we must move beyond the gentle chains in which the Poisson binds us.

The next horrific evolution, then, is the *negative binomial* distribution, which similarly speaks to count data, but presents a *dispersion* parameter (\(\phi\)) that allows the variance to exceed the mean^{3}.

With our arcane theoretical library suitably expanded, we can now excise the still-beating Poisson heart of our earlier generalised linear model and replace it with the more complex machinery of the negative binomial:

$$\begin{eqnarray}
y &\sim& \mathbf{NegBinomial}(\mu, \phi)\\
\log(\mu) &=& \alpha + \beta x\\
\alpha &\sim& \mathcal{N}(0, 1)\\
\beta &\sim& \mathcal{N}(0, 1)\\
\phi &\sim& \mathbf{HalfCauchy}(2)
\end{eqnarray}$$

As with the Poisson, our negative binomial generalised linear model employs a log link function to transform the linear predictor. The Stan code for this model is given below.
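A minimal sketch of such a program follows; the data names `x` (the scaled population predictor) and `y` (the sighting counts) are assumptions on our part. The `log_lik` quantities support the LOO-CV comparisons used throughout this post:

```stan
data {
  int<lower=1> N;        // number of observations
  vector[N] x;           // predictor: scaled state population
  int<lower=0> y[N];     // response: counts of sightings
}
parameters {
  real a;                // intercept (alpha)
  real b;                // slope (beta)
  real<lower=0> phi;     // dispersion
}
model {
  a ~ normal( 0, 1 );
  b ~ normal( 0, 1 );
  phi ~ cauchy( 0, 2 );  // half-Cauchy via the lower bound on phi
  y ~ neg_binomial_2_log( a + b * x, phi );  // log link applied directly
}
generated quantities {
  vector[N] log_lik;     // pointwise log-likelihood for LOO-CV
  for ( n in 1:N )
    log_lik[n] = neg_binomial_2_log_lpmf( y[n] | a + b * x[n], phi );
}
```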

With this model fit, we can compare its whispered falsehoods against both the original linear Gaussian model and the Poisson GLM:

```
> compare( loo_poisson, loo_negbinom )
elpd_diff        se
   8880.8     721.9
```

With the first comparison, it is clear that the sinuous flexibility offered by the dispersion parameter, \(\phi\), of the negative binomial allows that model to mould itself much more effectively to the data than the Poisson. The `elpd_diff` score is positive, indicating that the second of the two compared models is favoured; the difference is over twelve times the standard error, giving us confidence that the negative binomial model is meaningfully more effective than the Poisson.

Whilst superior to the Poisson, does this adaptive capacity allow the negative binomial model to render the naïve Gaussian linear model obsolete?

```
> compare( loo_normal, loo_negbinom )
elpd_diff        se
    304.7      30.9
```

The negative binomial model subsumes the Gaussian with little effort. The `elpd_diff` is almost ten times the standard error in favour of the negative binomial GLM, giving us confidence in choosing it. From here on, we will rely on the negative binomial as the core of our schemes.

The improvements we have seen with the negative binomial model allow us to discard the Gaussian and Poisson models with confidence. It is not, however, sufficient to fill the gaping void induced by our belief that the sightings of abnormal aerial phenomena in differing US states vary differently with their human population.

To address this question we must ascertain whether allowing our models to unpick the individual influence of states will improve their predictive ability. This, in turn, will lead us into the gnostic insanity of *hierarchical models*, in which we group predictors in our models to account for their shadowy underlying structures.

The first step on this path is to allow part of the linear function underpinning our model, specifically the intercept value, \(\alpha\), to vary between different US states. In a simple linear model, this causes the line of best fit for each state to meet the y-axis at a different point, whilst maintaining a constant slope for all states. In such a model, the result is a set of parallel lines of fit, rather than a single global truth.

This varying intercept can describe a range of possible phenomena for which the rate of change remains constant, but the baseline value varies. In such *hierarchical models* we employ a concept known as *partial pooling* to extract as much forbidden knowledge from the reluctant data as possible.

A set of entirely separate models, such as the per-state set of linear regressions presented in the first post of this series, employs a *no pooling* approach: the data of each state is treated separately, with an entirely different model fit to each. This certainly honours the uniqueness of each state, but cannot benefit from insights drawn from the broader range of data we have available, which we may reasonably assume to have some relevance.

By contrast, the global Gaussian, Poisson, and negative binomial models presented so far represent *complete pooling*, in which the entire set of data is considered a formless, protean amalgam without meaningful structure. This mindless, groping approach causes the unique features of each state to be lost amongst the anarchy and chaos.

A partial pooling approach instead builds a *global* mean intercept value across the dataset, but allows the intercept value for each individual state to deviate according to a governing probability distribution. This both accounts for the individuality of each group of observations, in our case the state, but also draws on the accumulated wisdom of the whole.

We now construct a partially-pooled varying intercept model, in which the parameters and observations for each US state in our dataset are individually indexed:

$$\begin{eqnarray}
y &\sim& \mathbf{NegBinomial}(\mu, \phi)\\
\log(\mu) &=& \alpha_i + \beta x\\
\alpha_i &\sim& \mathcal{N}(\mu_\alpha, \sigma_\alpha)\\
\beta &\sim& \mathcal{N}(0, 1)\\
\phi &\sim& \mathbf{HalfCauchy}(2)
\end{eqnarray}$$

Note that the intercept parameter, \(\alpha\), in the second line is now indexed by the state, represented here by the subscript \(i\). The slope parameter, \(\beta\), remains constant across all states.

This model can be rendered in Stan code as follows:
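A sketch of the varying-intercept program, extending the negative binomial model with a per-state index (the data names, including the `state` index array, are our own assumptions):

```stan
data {
  int<lower=1> N;                        // number of observations
  int<lower=1> N_state;                  // number of US states
  int<lower=1, upper=N_state> state[N];  // state index of each observation
  vector[N] x;                           // predictor: scaled state population
  int<lower=0> y[N];                     // response: counts of sightings
}
parameters {
  real mu_a;               // global mean intercept
  real<lower=0> sigma_a;   // between-state deviation of intercepts
  vector[N_state] a;       // per-state intercepts
  real b;                  // shared slope
  real<lower=0> phi;       // dispersion
}
model {
  mu_a ~ normal( 0, 1 );
  sigma_a ~ cauchy( 0, 2 );
  a ~ normal( mu_a, sigma_a );   // partial pooling of the intercepts
  b ~ normal( 0, 1 );
  phi ~ cauchy( 0, 2 );
  y ~ neg_binomial_2_log( a[state] + b * x, phi );
}
```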

Once the model has twisted itself into the most appropriate form for our data, we can now compare it against our previous completely-pooled model:

```
> compare( loo_negbinom, loo_negbinom_var_intercept )
elpd_diff        se
    363.2      28.8
```

Our transcendent journey from the statistical primordial ooze continues: the varying intercept model is favoured over the completely-pooled model by a significant margin.

Now that our minds have apprehended a startling glimpse of the implications of the varying intercept model, it is natural to consider taking a further terrible step and allowing both the slope and the intercept to vary^{4}.

With both the intercept and slope of the underlying linear predictor varying, an additional complexity raises its head: can we safely assume that these parameters, the intercept and slope, vary independently of each other, or may there be arcane correlations between them? Do states with a higher intercept also experience a higher slope in general, or is the opposite the case? Without prior knowledge to the contrary, we must allow our model to determine these possible correlations, or we are needlessly throwing away potential information in our model.

For a varying slope and intercept model, therefore, we must now include a *correlation matrix*, \(\Omega\), between the parameters of the linear predictor for each state in our model. This correlation matrix, as with all parameters in a Bayesian framework, must be expressed with a prior distribution from which the model can begin its evaluation of the data.

With deference to the authoritative, quaint, and curious volume of forgotten lore, we will use an LKJ prior for the correlation matrix without further discussion of the reasoning behind it.

$$\begin{eqnarray}
y &\sim& \mathbf{NegBinomial}(\mu, \phi)\\
\log(\mu) &=& \alpha_i + \beta_i x\\
\begin{bmatrix}
\alpha_i\\
\beta_i
\end{bmatrix} &\sim& \mathcal{N}\left(
\begin{bmatrix}
\mu_\alpha\\
\mu_\beta
\end{bmatrix}, \Omega \right)\\
\Omega &\sim& \mathbf{LKJCorr}(2)\\
\phi &\sim& \mathbf{HalfCauchy}(2)
\end{eqnarray}$$

This model has grown and gained a somewhat twisted complexity compared with the serene austerity of our earliest linear model. Despite this, each further step in the descent has followed its own perverse logic, and the progression should be clear. The corresponding Stan code follows:
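A sketch of such a program is given below; the data names are assumptions carried over from the earlier models. The per-state intercept and slope pairs are drawn from a multivariate normal whose covariance is assembled from the \(\Omega\) correlation matrix and per-parameter scales:

```stan
data {
  int<lower=1> N;                        // number of observations
  int<lower=1> N_state;                  // number of US states
  int<lower=1, upper=N_state> state[N];  // state index of each observation
  vector[N] x;                           // predictor: scaled state population
  int<lower=0> y[N];                     // response: counts of sightings
}
parameters {
  vector[2] mu_ab;              // global mean intercept and slope
  vector<lower=0>[2] sigma_ab;  // scales of the per-state deviations
  corr_matrix[2] Omega;         // correlation between intercept and slope
  vector[2] ab[N_state];        // per-state (intercept, slope) pairs
  real<lower=0> phi;            // dispersion
}
model {
  mu_ab ~ normal( 0, 1 );
  sigma_ab ~ cauchy( 0, 2 );
  Omega ~ lkj_corr( 2 );        // LKJ prior on the correlation matrix
  ab ~ multi_normal( mu_ab, quad_form_diag( Omega, sigma_ab ) );
  phi ~ cauchy( 0, 2 );
  for ( n in 1:N )
    y[n] ~ neg_binomial_2_log( ab[state[n]][1] + ab[state[n]][2] * x[n], phi );
}
```

In practice a non-centred parameterisation using a Cholesky factor (`lkj_corr_cholesky`) would be preferred for speed and stability, but the centred form above maps most directly onto the mathematical notation.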

The ultimate test of our faith, then, is whether the added complexity of the partially-pooled varying slope, varying intercept model is justified. Once again, we turn to the ruthless judgement of the LOO-CV:

```
> compare( loo_negbinom_var_intercept, loo_negbinom_var_intercept_slope )
elpd_diff        se
     13.3       2.4
```

In this final step we can see that our labours in the arcane have been rewarded. The final model is once again a significant improvement over its simpler relatives. Whilst the potential for deeper and more perfect models never ends, we will settle for now on this.

With our final model built, we can now begin to examine its mortifying implications. We will leave the majority of the subjective analysis for the next, and final, post in this series. For now, however, we can reinforce our quantitative analysis with visual assessment of the posterior predictive distribution output of our final model.

In comparison with earlier attempts, the varying intercept and slope model visibly captures the overall shape of the distribution with terrifying ease. As our wary confidence mounts in the mindless automaton we have fashioned, we can now examine its predictive ability on our original data.

The purpose of our endeavours is to show whether or not the frequency of extraterrestrial visitations is merely a sad reflection of the number of unsuspecting humans living in each state. After seemingly endless cryptic calculations, our statistical machinery implies that there are deeper mysteries here: allowing the relationship between sightings and the underlying linear predictors to vary by state more perfectly predicts the data. There are clearly other, hidden, factors in play.

More than that, however, our final model allows us to quantify these differences. We can now retrieve from the very bowels of our inferential process the per-state distribution of parameters for both the slope and intercept of the linear predictor.

It is important to note that, while we are still referring to the \(\alpha\) and \(\beta\) parameters as the slope and intercept, their interpretation is more complex in a generalised linear model with a \(\log\) link function than in the simple linear model. For now, however, this diagram is sufficient to show that the horror visited on innocent lives by our interstellar visitors is not purely arbitrary, but depends at least in part on geographical location.

With this malign inferential process finally complete we will turn, in the next post, to a trembling interpretation of the model and its dark implications for our collective future.

This post continues our series on developing statistical models to explore the arcane relationship between UFO sightings and population. The previous post is available here: Bayes vs. the Invaders! Part One: The 37th Parallel.

The simple linear model developed in the previous post is far from satisfying. It makes many unsupportable assumptions about the data and the form of the residual errors from the model. Most obviously, it relies on an underlying Gaussian (or *normal*) distribution for its understanding of the data. For our count data, some basic features of the Gaussian are inappropriate.

Most notably:

- a Gaussian distribution is continuous whilst counts are discrete — you can’t have 2.3 UFO sightings in a given day;
- the Gaussian can produce negative values, which are impossible when dealing with counts — you can’t have a negative number of UFO sightings;
- the Gaussian is symmetrical around its mean value whereas count data is typically *skewed*.

Moving from the safety and comfort of basic *linear regression*, then, we will delve into the madness and chaos of *generalized linear models* that allow us to choose from a range of distributions to describe the relationship between state population and counts of UFO sightings.

We will be working in a Bayesian framework, in which we assign a *prior distribution* to each parameter that allows, and requires, us to express some *prior knowledge* about the parameters of interest. These priors are the initial starting points for parameters, from which the model moves towards the underlying values as it learns from the data. The choice of priors can have significant effects not only on the outputs of the model, but also on its ability to function effectively; as such, it is an important, but also arcane and subtle, aspect of the Bayesian approach^{1}.

Practically speaking, a simple linear regression can be expressed in the following form:

$$y \sim \mathcal{N}(\mu, \sigma)$$

(Read as “\(y\) *is drawn from* a normal distribution with mean \(\mu\) and standard deviation \(\sigma\)”).

In the above expression, the model relies on a Gaussian, or *normal*, *likelihood* (\(\mathcal{N}\)) to describe the data — making assertions regarding how we believe the underlying data was generated. The Gaussian distribution is parameterised by a *location parameter* (\(\mu\)) and a standard deviation (\(\sigma\)).

If we were uninterested in prediction, we could describe the *shape* of the distribution of counts (\(y\)) without a predictor variable. In this approach, we could specify our model by providing *priors* for \(\mu\) and \(\sigma\) that express a level of belief in their likely values:

$$\begin{eqnarray}
y &\sim& \mathcal{N}(\mu, \sigma) \\
\mu &\sim& \mathcal{N}(0, 1) \\
\sigma &\sim& \mathbf{HalfCauchy}(2)
\end{eqnarray}$$

This provides an initial belief as to the likely shape of the data that informs, via arcane computational procedures, the model of how the observed data approaches the underlying truth^{2}.

This model is less than interesting, however. It simply defines a range of possible Gaussian distributions without unveiling the horror of the underlying relationships between unsuspecting terrestrial inhabitants and anomalous events.

To construct such a model, relating a *predictor* to a *response*, we express those relationships as follows:

$$\begin{eqnarray}
y &\sim& \mathcal{N}(\mu, \sigma) \\
\mu &=& \alpha + \beta x \\
\alpha &\sim& \mathcal{N}(0, 1) \\
\beta &\sim& \mathcal{N}(0, 1) \\
\sigma &\sim& \mathbf{HalfCauchy}(1)
\end{eqnarray}$$

In this model, the parameters of the likelihood are now probability distributions themselves. Following a traditional linear model, we now have an *intercept* (\(\alpha\)), and a *slope* (\(\beta\)) that relates the change in the predictor variable (\(x\)) to the change in the response. Each of these parameters is fitted according to the observed dataset.

We can now break free from the bonds of pure linear regression and consider other distributions that more naturally describe data of the form that we are considering. The awful power of GLMs is that they can use an underlying linear model, such as \(\alpha + \beta x\), as parameters to a range of likelihoods beyond the Gaussian. This allows the natural description of a vast and esoteric menagerie of possible data.

The second key element of a generalised linear model is the *link function* that transforms the relationship between the parameters and the data into a form suitable for our twisted calculations. We can consider the link function as acting on the linear predictor — such as \(\alpha + \beta x\) in our example model — to represent a different relationship via a range of possible functions, many of which are inextricably bound to certain likelihood functions.

For count data the most commonly-chosen likelihood is the Poisson distribution, whose sole parameter is the *arrival rate* (\(\lambda\)). While somewhat restricted, as we will see, we can begin our descent into madness by fitting a Poisson-based model to our observed data. For Poisson-based generalised linear models, the canonical link function is the *log* — our linear predictor, rather than directly being the parameter \(\lambda\) is instead the *logarithm* of \(\lambda\). The insidious effects of this on the output of the model will become all too obvious as we persist.

To fit a model, we will use the Stan probabilistic programming language. Stan allows us to write a program defining a statistical model which can then be fit to the data using Markov-Chain Monte Carlo (MCMC) methods. In effect, at a very abstract level, this approach uses random sampling to discover the values of the parameters that best fit the observed data^{3}.

Stan lets us specify models in the form given above, along with ways to pass in and define the nature and form of the data. This code can then be called from R using the `rstan` package.

In this, and subsequent, posts we will be using Stan code directly as both a learning and explanatory exercise. In typical usage, however, it is often more convenient to use one of two excellent R packages, `brms` or `rstanarm`, that allow for more compact and convenient specification of models, with well-specified raw Stan code generated automatically.

In seeking to take our first steps beyond the placid island of ignorance of the Gaussian, the Poisson distribution is a natural first choice for count data. Adapting the Gaussian model above, we can propose a predictive model for the entire population of states as follows:

$$\begin{eqnarray}
y &\sim& \mathbf{Poisson}(\lambda) \\
\log( \lambda ) &=& \alpha + \beta x \\
\alpha &\sim& \mathcal{N}(0, 1) \\
\beta &\sim& \mathcal{N}(0, 1)
\end{eqnarray}$$

The sole parameter of the Poisson is the *arrival rate* (\(\lambda\)) that we construct here from a population-wide intercept (\(\alpha\)) and slope (\(\beta\)). Note that, in contrast to earlier models, the linear predictor is subject to the \(\log\) *link function*.

The Stan code for the above model, and associated R code to run it, is below:
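A sketch of such a program follows; as before, the data names `x` and `y` are assumptions of ours. The `generated quantities` block draws from the posterior predictive distribution, which we make use of later:

```stan
data {
  int<lower=1> N;        // number of observations
  vector[N] x;           // predictor: scaled state population
  int<lower=0> y[N];     // response: counts of sightings
}
parameters {
  real a;                // intercept (alpha)
  real b;                // slope (beta)
}
model {
  a ~ normal( 0, 1 );
  b ~ normal( 0, 1 );
  y ~ poisson_log( a + b * x );   // log link applied directly
}
generated quantities {
  int y_sim[N];          // posterior predictive draws
  for ( n in 1:N )
    y_sim[n] = poisson_log_rng( a + b * x[n] );
}
```

On the R side, the model can then be fit with a call along the lines of `fit_ufo_pop_poisson <- rstan::stan( "ufo_poisson.stan", data=list( N=N, x=x, y=y ) )`, with the file name and data list again being illustrative.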

With this model encoded and fit, we can now peel back the layers of the procedure to see the extent to which it has endured the horror of our data.

The MCMC algorithm that underpins Stan — specifically Hamiltonian Monte Carlo (HMC) using the No U-Turn Sampler (NUTS) — attempts to find an island of stability in the space of possibilities that corresponds to the best fit to the observed data. To do so, the algorithm spawns a set of Markov chains that explore the parameter space. If the model is appropriate, and the data coherent, the set of Markov chains end up *converging* to exploring a similar, small set of possible states.

When modelling via this approach, a first check of the model’s chances of having fit correctly is to examine the so-called ‘traceplot’ that shows how well the separate Markov chains ‘mix’ — that is, converge to exploring the same area of the parameter space^{4}. For the Poisson model above, the traceplot can be created using the `bayesplot` library:
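A sketch of such a call, with the fitted-model object name matching the summary shown later in this post:

```r
library(bayesplot)

# Trace of each chain for the intercept and slope parameters; well-mixed
# chains should overlap into the fabled hairy caterpillar.
mcmc_trace( as.array( fit_ufo_pop_poisson ), pars = c( "a", "b" ) )
```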

These traceplots exhibit the characteristic insane scribbling of well-mixed chains often referred to, in hushed whispers, as weirdly reminiscent of a hairy caterpillar; the separate lines representing each chain are clearly overlapping and exploring the same forbidding regions. If, by contrast, the lines were largely separated or did not show the same space, there would be reason to believe that our model had become lost and unable to find a coherent voice amongst the myriad babbling murmurs of the data.

A second check on the sanity of the modelling process is to examine the output of the model itself to show the value of the fitted parameters of interest, and some diagnostic information:

```
fit_ufo_pop_poisson %>% summary( pars=c( "a", "b" ) ) %>% extract2( "summary" )
       mean      se_mean          sd      2.5%       25%       50%      75%     97.5%    n_eff      Rhat
a 4.0236045 1.026568e-04 0.004851688 4.0139485 4.0203329 4.0236485 4.026829 4.0330836 2233.626 0.9995597
b 0.5070227 6.206903e-05 0.002263160 0.5027733 0.5054245 0.5069979 0.508547 0.5115027 1329.477 1.0021745
```

For assessment of successful model fit, the Rhat (\(\hat{R}\)) value represents the extent to which the various Markov chains exploring the parameter space, of which there are four by default in Stan, are consistent with each other. As a rule of thumb, a value of \(\hat{R} \gt 1.1\) indicates that the model has not converged appropriately and may require a longer set of random sampling iterations, or an improved model. Here, the values of \(\hat{R}\) are close to the ideal value of 1.

As a final step, we should examine how well our model can reproduce the shape of the original data. Models aim to be eerily lifelike parodies of the truth; in a Bayesian framework, and in the Stan language, we can build into the model the ability to draw random samples from the *posterior predictive distribution* — the set of parameters that the model has learnt from the data — to create new possible values of the outcomes based on the observed inputs. This process can be repeated many times to produce a multiplicity of possible outcomes drawn from model, which we can then visualize to see graphically how well our model fits the observed data.

In the Stan code above, this is created in the `generated quantities` block. When using more convenient libraries such as `brms` or `rstanarm`, draws from the posterior predictive distribution can be obtained more simply after the model has been fit, through a range of helper functions. Here, we undertake the process manually.

We can see, then, how well the Poisson distribution, informed by our selection of priors, has shaped itself to the underlying data.

In the diagram above, the yellow line shows the densities of count values; the cyan lines show a sample of twisted mockeries spawned by our piscine approximations. The model has roughly captured the shape of the distribution of the original data, but demonstrates certain hideous dissimilarities — the peak of the posterior predictive distribution is significantly skewed away from the observed value.

To appreciate the full horror of what we have wrought we can plot the predictions of the model against the real data.

This shows a notably different line of best fit to that produced from the basic Gaussian model in the previous post. The most visible difference is the curved predictor resulting from the \(\log\) link function, which appears to account for the changes in the data very differently to the constrained absolute linearity of the previous Gaussian model^{5}. Whether this is more or less effective remains to be seen.

In this post we have opened our eyes to the weirdly non-linear possibilities of generalised linear models; sealed and bound this concept within the wild philosophy of Bayesian inference; and unleashed the horrifying capacities of Markov Chain Monte Carlo methods and their manifestation in the Stan language.

Applying the Poisson distribution to our records of extraterrestrial sightings, we have seen that we can, to some extent, create a mindless Golem that imperfectly mimics the original data. In the next post, we will delve more deeply into the esoteric possibilities of other distributions for count data, explore ways in which to account for arcane relationships across and between per-state observations, and show how we can compare the effectiveness of different models to select the final glimpse of dread truth that we inadvisably seek.

From our earlier studies of UFO sightings, a recurring question has been the extent to which the frequency of sightings of inexplicable otherworldly phenomena depends on the population of an area. Intuitively: where there are more people to catch a glimpse of the unknown, there will be more reports of alien visitors.

Is this hypothesis, however, true? Do UFO sightings closely follow population or are there other, less comforting, factors at work?

In this short series of posts, we will build a statistical model of UFO sightings in the United States, based on data previously scraped from the National UFO Reporting Centre and see how well we can predict the rate of UFO sightings based on state population.

This series of posts is part tutorial and part exploration of a set of modelling tools and techniques. Specifically, we will use Generalized Linear Models (GLMs), Bayesian inference, and the Stan probabilistic programming language to unveil the relationship between unsuspecting populations of US states and the dread sightings of extraterrestrial truth that they experience.

As mentioned, we will rely on data from NUFORC for extraterrestrial sightings.

For population data, we can rely on the FRED database for historical US state-level census data. The combination of these datasets provides us with a count of UFO sightings per year for each state, and the population of that state in that year.

The downloading and scraping code is included here:

For ease, we will treat each year’s count of sightings as *independent* from the previous year’s — we do not assume that the number of sightings in each year is based on the number of sightings in the previous year, but rather that it is due to the unknowable schemes of alien minds. (If extraterrestrial visitors were colonising areas in secrecy rather than making sporadic visits, and thus being seen repeatedly, we might not want to make such a bold assumption.) Each annual count will be treated as an individual, independent data point relating population to count, with each observation tagged by state.

For simplicity, particularly in building later models, we will restrict ourselves to sightings post 1990, roughly reflecting a period in which the NUFORC data sees a significant increase in reporting and thus relies less on historical reports. (NUFORC’s phone hotline has existed since 1974, and its web form since 1998.)

To begin, we start with the most basic form of model: a simple linear relationship between the count of sightings and the population of the state at that time. If sightings were purely dependent on population, it might be reasonable to assume that such a model would fit the data fairly well.

This relationship can be plotted with relative ease using the `geom_smooth()` function of `ggplot2` in R. For opening our eyes to the awful truth contained in the data, this is a useful first step.

While this graph does seem to support the argument that sightings increase with population *in general*, a closer inspection shows that the individual data points are clearly clustered. If we highlight the location of each data point, colouring points by US state, this becomes clearer:

This strongly suggests that, in preference to the simple linear model across all sightings, we might instead fit a linear model individually to each state:

The code to produce the above graphs from the NUFORC and FRED data is given below:
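A sketch of such plotting code, assuming a data frame `sightings` with (hypothetical) columns `population`, `count`, and `state`, one row per state-year:

```r
library(tidyverse)

# A single pooled linear fit across all states, with points coloured
# by state to expose the clustering
ggplot( sightings, aes( x = population, y = count ) ) +
  geom_point( aes( colour = state ) ) +
  geom_smooth( method = "lm" )

# A separate linear fit per state, produced by moving the colour
# aesthetic into the top-level mapping
ggplot( sightings, aes( x = population, y = count, colour = state ) ) +
  geom_point() +
  geom_smooth( method = "lm", se = FALSE )
```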

The plots shown here strongly indicate that the rate of dread interplanetary visitations per capita varies differently per state. It seems, therefore, that while the number of sightings is generally proportional to population, the specific relationship is state-dependent.

This simple linear model is, however, entirely unsatisfactory in describing the data, despite its support for the argument that different states have different underlying rates of sightings.

In the next post, therefore, we will delve deeper into the unsettling relationships between UFO sightings and the innocent humans to which they are drawn. To do so, we will have to consider a class of techniques that go beyond the normal distribution that underpins key assumptions of the simple linear models used here, and so move into the eldritch world of *generalized linear models*.

The Bigfoot Field Research Organisation has compiled a detailed database of Bigfoot sightings going back to the 1920s. Each sighting is dated, located to the nearest town or road, and contains a full description of the sighting. In many cases, sightings are accompanied by a follow-up report from the organisation itself.

As previously with UFO sightings and paranormal manifestations, our first step is to retrieve the data and parse it for analysis. Thankfully, the `bfro.net` dataset is relatively well-formatted; reports are broken down by region, with each report following a mainly standard format.

As before, we rely on the `rvest` package in R to explore and scrape the website. In this case, the key elements were to retrieve each state’s set of reports from the top-level page, and to retrieve the link for each report. Conveniently, these are in a standard format; the website also allows a printer-friendly mode that greatly simplifies scraping.

The scraping code is given here:
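In outline, the link-harvesting step looks like the following; the CSS selector, URL patterns, and example markup are illustrative assumptions rather than the site’s actual structure:

```r
library(rvest)

# Extract the link to each individual report from a parsed state-level
# index page. The "show_report" pattern is an assumed URL convention.
report_links <- function(page, base_url = "https://www.bfro.net") {
  hrefs <- html_attr(html_elements(page, "a"), "href")
  reports <- hrefs[grepl("show_report", hrefs)]
  paste0(base_url, "/GDB/", reports)
}

# Tiny inline document standing in for a downloaded index page.
index <- read_html('<html><body>
  <a href="show_report.asp?id=1">Report 1</a>
  <a href="about.asp">About</a>
  <a href="show_report.asp?id=2">Report 2</a>
</body></html>')

links <- report_links(index)
```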

With each page retrieved, we step through and parse each report. Again, each page is fairly well-formatted, and uses a standard set of tags for date, location, and similar. The report parsing code is given here:
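That parsing step can be sketched in base R as follows; the field labels and the sample fragment are invented for illustration:

```r
# Pull labelled fields (e.g. "STATE: Washington") out of the flat text
# of a report page. The labels used here are illustrative assumptions.
parse_report <- function(text) {
  fields <- c(date = "DATE", state = "STATE", county = "COUNTY")
  vapply(fields, function(label) {
    m <- regmatches(text, regexec(paste0(label, ":\\s*([^\n]+)"), text))[[1]]
    if (length(m) == 2) trimws(m[2]) else NA_character_
  }, character(1))
}

sample_report <- "YEAR: 1978\nDATE: July 4\nSTATE: Washington\nCOUNTY: Pierce County\nOBSERVED: A large figure crossed the road..."
fields <- parse_report(sample_report)
```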

With each report parsed into a form suitable for analysis, the final step in scraping the site is to geolocate the reports. As in previous posts, we rely on Google’s geolocation API. For each report, we extract an appropriate address and resolve it into a set of latitude and longitude coordinates. For the purposes of this initial scrape we restrict ourselves to North America, which comprises the large majority of reports on `bfro.net`. Geolocation code is included below. (Note that a Google Geolocation API key is required for this code to run.)
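The core of that step is constructing a request against Google’s Geocoding API endpoint; a minimal sketch, with the handling of the JSON response (e.g. via the jsonlite package) omitted:

```r
# Build a Google Geocoding API request URL for a report's address.
# "YOUR_API_KEY" is a placeholder; a real key is required to call it.
geocode_url <- function(address, key) {
  paste0(
    "https://maps.googleapis.com/maps/api/geocode/json",
    "?address=", utils::URLencode(address, reserved = TRUE),
    "&key=", key
  )
}

url <- geocode_url("Pierce County, Washington", "YOUR_API_KEY")
```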

With geolocated data in hand, we can now venture into the wilds. In which areas of North America are Sasquatch most commonly seen to roam? The plot below shows the overall density of Bigfoot sightings, with individual reports marked.

There are notable clusters around the Great Lakes, particularly in Southern Ontario, as well as in the Pacific Northwest. Smaller clusters exist in Florida, centered around Orlando. As with most report-based datasets, sightings are skewed towards areas of high population density.

The obvious first question to ask of such data is which, if any, environmental features correlate with these sightings. Other analyses of Bigfoot sightings, such as the seminal work of Lozier et al.^{1}, have suggested that forested regions are natural habitats for Sasquatch.

To answer this, we combine the underlying mapping data and Bigfoot sightings with bioclimatic data taken from the Global Land Cover Facility. Amongst other datasets, this provides us with an accurate, high-resolution land cover raster map, detailing vegetation for each 5-arcminute cell — approximately 10km on each side.

There are a range of bioclimatic variables in this dataset. The diagram below overlays all areas that are some form of forest onto the previous density plot.

The code for producing both of the above plots is given here:
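The shape of the density layer can be reproduced with ggplot2 and simulated coordinates, as in the sketch below; the `lon` and `lat` column names are assumptions, and the basemap and land-cover raster layers of the original plots are omitted:

```r
library(ggplot2)

# Density of sightings with individual reports overlaid. Simulated
# points stand in for the geolocated BFRO reports.
set.seed(2)
sightings <- data.frame(
  lon = rnorm(200, -122, 2),
  lat = rnorm(200, 47, 1)
)

p <- ggplot(sightings, aes(lon, lat)) +
  stat_density_2d(aes(fill = after_stat(level)),
                  geom = "polygon", alpha = 0.4) +
  geom_point(size = 0.5) +
  labs(fill = "Density")
```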

From this initial plot we can see that, whilst tree cover is certainly not a bad predictor of Bigfoot sightings, it is far from a definitive correlation. The largest cluster, around the US-Canada border near Toronto, is principally lakes; whilst the secondary cluster in Florida is neither significantly forested nor even close to the Everglades, which might have been expected. Conversely, there are significantly forested areas in which sightings are relatively rare.

The mystery of Bigfoot’s natural habitat and preferences is, therefore, very much unanswered from our initial analysis. With a broad range of variables still to explore — climate, altitude, food sources — future posts will attempt to determine what conditions lend themselves to strange survivals of pre-human primate activity. Perhaps changing conditions have pushed our far-distant cousins to previously unsuspected regions.

Until then, we keep following these trails into data unknown.

**References**

The NUFORC dataset, however, provides much more detailed information on individual sightings. The most significant immediate feature of each report, beyond its time and location, is the recorded shape of each object. Was the reported UFO saucer-shaped? Triangular? A flash of light? Or did the individual see more than one object moving in formation? By considering this aspect of the data we can interrogate more closely the nature of UFO sightings over the years.

The NUFORC dataset classifies each sighting as one of 46 possible shapes, with approximately three percent of entries not classified directly. Of those 46, several categories overlap: “Triangle”, “triangle”, and “Triangular”, for example, all appear. Additionally, the dataset contains both “other” and “unknown” categories.

With a minimal level of cleaning we are left with 26 categories, including the familiar circular objects, but also “crescent” (2 entries), “hexagon” (1 entry), and “cross” (356 entries). For easier representation and analysis, we have collapsed several infrequent and similar categories together, resulting in eight top-level categories distributed in the following way:
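That collapsing step can be sketched with a simple lookup table; the mapping shown is an abbreviated illustration, not the full 26-category scheme:

```r
# Collapse raw NUFORC shape labels into top-level categories.
collapse_shape <- function(shape) {
  lookup <- c(
    triangle = "triangular", triangular = "triangular", chevron = "triangular",
    circle = "round", disk = "round", sphere = "round", oval = "round",
    light = "light", fireball = "light", flash = "light"
  )
  out <- lookup[tolower(shape)]
  # Anything not in the lookup falls into the catch-all category.
  unname(ifelse(is.na(out), "other", out))
}

collapsed <- collapse_shape(c("Triangle", "DISK", "light", "hexagon"))
```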

We can clearly see from this that lights are the most commonly reported extraterrestrial manifestation, followed closely by the category of “round” objects, which perhaps most closely matches the traditional concept of a UFO sighting. This category does, however, extend to spheres, disks, ovals, domes, eggs, and cones.

This breakdown of frequency is somewhat deceptive: the sightings reported in the NUFORC database span from a reported 1400CE (a roughly-dated cave painting in Texas depicting a saucer-shaped object) to the present day. For reliability, we have discounted reports prior to 1900CE from our analysis. In our data, then, are these sightings consistent over time? Has the form and nature of our extraterrestrial visitors shifted in recent history? Are we naively assuming that all objects are from the same source, and with similar intentions?

At the most mundane level, the total volume of sightings has sharply increased since the early reports in the dataset. The total number of reported sightings in the 1940s was 144, compared with 4934 sightings in 2017 alone, and a peak of 8651 sightings in 2014.

Broken down by category, the total number of sightings since 1945 is shown below. We have removed sightings prior to 1945 from this diagram, as they were sufficiently low in volume that they were not visible. The most marked rise in sightings begins in the mid-1990s, with 502 sightings in 1994 rising to 1467 in 1995, and the overall upward trend continuing until its peak in 2014.

To understand the specific nature of visitations, however, it is useful to view sightings as a proportion of the total, rather than their absolute numbers.
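The conversion from counts to within-year proportions is straightforward; here the 502 sightings of 1994 and 1467 of 1995 mentioned above are split across two invented categories for illustration:

```r
# Convert per-year counts by category into within-year proportions.
counts <- data.frame(
  year     = rep(c(1994, 1995), each = 2),
  category = rep(c("round", "light"), times = 2),
  n        = c(300, 202, 700, 767)   # invented split of the real totals
)

totals <- tapply(counts$n, counts$year, sum)       # 502 and 1467
counts$proportion <- unname(counts$n / totals[as.character(counts$year)])
```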

It is clear that, allowing for the overall increase in numbers, the proportion of generically round UFOs has reduced since the 1950s, when they clearly dominated. The most significant increase has been the rise in triangular sightings, including “delta” and “chevron” shaped craft. This conceivably tracks the development of terrestrial military aircraft towards “delta wing” and similar profiles.

Since the mid-90s there has been a marked increase in sightings reported simply as “lights” — flashes, fireballs, flares, and similar. From 2000 onwards, the relative proportions appear broadly steady.

For specific cases, 1995 shows an oddly large proportion of unclassified “other” sightings, although these do not seem to be the result of any particular event. The largest share of these is in Seattle, with 38 sightings, but they are spread fairly evenly throughout the year.

Breaking down sightings according to specific times, rather than year by year, reveals some other points of interest. Firstly, sightings by month:
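The monthly tabulation itself is a one-liner once the dates are parsed; the `date_time` column name and the simulated dates below are assumptions:

```r
# Count reports per calendar month. Simulated dates stand in for the
# parsed NUFORC timestamps.
set.seed(3)
reports <- data.frame(
  date_time = as.Date("2017-01-01") + sample(0:364, 500, replace = TRUE)
)
reports$month <- as.integer(format(reports$date_time, "%m"))
by_month <- table(factor(reports$month, levels = 1:12))
```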

Sightings are much more common in the Northern hemisphere’s summer months, presumably due to higher numbers of people spending time outside and being in a position to spot anomalous phenomena.

Breaking down sightings by hour, we can see that sightings are far more common at night than during the day, with the lowest volume of sightings around 08:00, and the highest at 21:00. For both monthly and hourly breakdowns, the relative proportions of sightings by shape remain relatively constant, suggesting that UFO activity patterns are unrelated to shape. This consistency of behaviour suggests that, however they disguise themselves, the various forms of UFO may be drawn from a single source.

This is far from a definitive breakdown of UFO behaviour by their shape. In future posts we will explore whether differing shapes of UFO cluster geographically, and the extent to which contemporaneous sightings can be correlated by their shape and description.

You can keep up to date with our latest statistical esoterica on Twitter at @WeirdDataSci.

As always, keep delving.

**Code Note:**

In developing this entry we have moved on from the excellent work of Tim Renner in gathering and cleaning the NUFORC UFO dataset, and have developed our own scraping code. Most posts here have included source code at the bottom of each entry. As this post relied on more code than usual, however, and included multiple outputs, we are including only representative code. The full scraping and analysis code will be the focus of a future post.

This is, however, relatively unsatisfactory. It is much more interesting to know where such sightings and events occur. Are there particular haunts of restless spirits? Do mysterious beasts roam in particular regions more than others? To answer these questions, we need to delve into the specific locations of different reports.

The Paranormal Database does contain location information, but it is given very informally. To map this we can make use of Google’s Geolocation API to convert free text strings, such as “Felbrigg Hall, Norfolk” into usable latitude and longitude coordinates. (In this case: 52.907479, 1.259443.)

The geolocation is not perfect, but with sufficient manipulation of the service it was possible to produce geolocated coordinates for most of the entries in the database. In order to represent these meaningfully, we have also subdivided the entries into different types. The original Paranormal Database data is divided into twenty categories, which we have reduced to six for easier presentation. This includes collapsing the various kinds of haunting, from poltergeists to ‘post-mortem manifestations’, simply to hauntings. Similarly, we combine alien big cats and shuck into the broader family of cryptozoology.

With this in place, we can see the overall distribution of paranormal events in the British Isles.

As might be expected, London is a dark and sinister nexus of paranormal activity. Hauntings, as might be expected from their overall frequency, dominate the majority of the British Isles. Moving north, particularly as we reach the Scottish Highlands, cryptozoological sightings begin to challenge hauntings as the most common supernatural event. We can also see significant cryptozoology in the Hebrides, Orkney, and Shetland — the archipelagos that surround the Scottish mainland.

Wales, Ireland, and Cornwall are all significantly less densely haunted in the Paranormal Database, with the majority of sightings falling in England.

This overall view, however, combines a number of very different phenomena. Where, for example, are we most likely to receive a visitation from a restless spectre as opposed to being pursued by a savage and unnatural beast?

By breaking down the sightings into different types, and plotting a heatmap of event density over each, we can identify the regions in which different manifestations cluster.
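In ggplot2 terms, this amounts to a density layer faceted by event type. A minimal sketch with simulated coordinates follows; the `lon`, `lat`, and `type` column names are assumptions, and the basemap layer of the original plots is omitted:

```r
library(ggplot2)

# One density heatmap per event type, via faceting. Simulated points
# stand in for the geolocated Paranormal Database entries.
set.seed(4)
events <- data.frame(
  lon  = rnorm(300, -1.5, 1.5),
  lat  = rnorm(300, 53, 2),
  type = sample(c("haunting", "cryptozoology", "ufo"), 300, replace = TRUE)
)

p <- ggplot(events, aes(lon, lat)) +
  stat_density_2d(aes(fill = after_stat(density)),
                  geom = "raster", contour = FALSE) +
  facet_wrap(~ type) +
  labs(fill = "Event density")
```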

This view highlights several points of interest.

Firstly, London’s preeminent position does not hold for all forms of paranormal activity. Hauntings are extremely dense in London, although the rest of England is also well populated. As might be expected from the first diagram, the less population-dense regions further north have produced fewer sad echoes of mortality. Despite its general reputation, Edinburgh, while noticeably haunted, cannot compare with many regions of England.

Cryptozoologically, however, London is far from dominant. Whilst unknown beings may lurk in the foetid sewers of the capital, they clearly prefer the wide open spaces — both the Norfolk and Suffolk Broads are rife with cryptids, as are the Hebrides, Orkney, and Shetland noted earlier. Finally, visitors to Cornwall will pass through areas of increasing monstrous activity.

UFOs also appear to be attracted to East Anglia most strongly, and are otherwise most common in the large population centres of England. Less obviously, there is a noticeable density of UFO activity on the Pembrokeshire Coast, in the south-west of Wales.

Monsters, which in this classification includes werewolves, vampires, and dragons, produce a surprising cluster in North Wales, around Snowdonia. The most significant monstrous sightings, however, appear to be in Exmoor; again on the south-western tip of the British Isles.

The final categories of manifestation include legends, fairies, and a catch-all category of ‘other manifestations’ that includes mysterious orbs, talking trees, bleeding stones, and the supernatural impressions left by the work of John Dee. As might be expected, this last category is more uniformly distributed across the country, matching high-density population areas. There is, however, another notable cluster in Cornwall for this category.

In conclusion, then, the British Isles are teeming with paranormal activity. Entities from beyond the grave lie close at all times, with twisted monstrosities roaming the wild spaces. UFOs descend from the night sky to terrorise the coastal regions.

From this analysis, ghost hunters should concentrate on London for the best chance of a sighting, although almost any of the large centres of population provide a reasonable chance of spectral apparitions. Cryptid researchers should concentrate in East Anglia or head north to the islands beyond Scotland. Those seeking contact with extraterrestrials should focus particularly on the east coast of Suffolk, or travel to the south-west of Wales. Paranormal investigators whose interests lie in legends or monsters, or less specific strange entities, would be well-advised to visit Cornwall.

Code for the plotting elements of this analysis is given below, following on from the scraping and parsing code in our previous post. The geolocation step required a more significant effort, and will be the focus of a future code-based post.

You can keep up to date with our latest paranormal data mining on Twitter at @WeirdDataSci.

In more recent history, strange beasts have been rumoured to live wild in the open spaces, whether large predators escaped from zoos, the last survivals of prehistory, or spirits. Every village, town, and county has its own stories and traditions.

The Paranormal Database is a collection of both traditional and recent paranormal events in Britain and Ireland. It contains details of almost 20,000 hauntings, cryptozoological sightings, legends, monsters, UFOs, and other strange phenomena, with details of the date and time of sightings, the location, and brief descriptions.

The data is not easily accessible beyond directly reading pages, and required some effort and time to scrape and make usable. Paranormal Database entries contain names, dates, locations, and comments as unstructured text and so will require further effort to perform a more thorough analysis. The R code used to scrape the website is included at the end of this post.

To understand the range and breadth of the paranormal life of the British Isles, we will focus on the data stored in the Paranormal Database. For this initial entry, we will take a first look at the data and get an overview of what mind-numbing horrors are most commonly encountered by the unsuspecting traveller in the United Kingdom and beyond.

As we can see from the diagram and the frequency table, hauntings are by far the most common manifestation in paranormal Britain, being an order of magnitude greater than the number of legends recorded. Examining the list, beyond “Haunting Manifestation” we see that several of the most common types are variants: both poltergeist activity and unknown types of ghost represent a significant amount of the total events recorded.

Cryptozoology, in its various forms, is also well-represented. The phenomenon of the Black Shuck, a ghostly black dog, is one of the most frequent categories after the main cryptozoology category, and alien big cats are close behind. Dragons, werewolves, and vampires, perhaps, deserve to be classed more as monstrous entities than cryptozoological oddities and are, in any case, far less common.

In brief conclusion, then, the unquiet dead are by far the most numerous beings to trouble the unhappy folk of the British Isles; twisted mockeries of natural fauna are far from rare.

Do particular phenomena cluster in regions and, if so, where? Are werewolves truly more commonly seen when the moon is full? Have certain manifestations become more common as time passes? Are certain sightings clustered temporally as well as geographically? Are the most haunted areas also the most cryptozoologically active? With access to the full horror of the data we can begin to answer these questions about the darkest corners of the United Kingdom.

Full code for scraping the data and producing the plot is given below.

You can keep up to date with our latest visions of the statistical unknown on Twitter at @WeirdDataSci.