7

The philosophy of probability is a subject on which many books and papers have been written, so it is obviously of interest to philosophers. There are many ways in which the subject of probability and chance impinges on philosophical questions. One of these is the question of free will. Advocates of compatibilism argue that "An uncaused action would be entirely capricious and random and could not be attributed to any agent, much less interpreted as a free and responsible act" (quoted from the SEP).

This reference to randomness is potentially equivocal, so it prompts an important question:

Do we have different types of randomness? If so, have they been categorized and have they played major roles in discussing other philosophical questions?

  • Surely, once you qualify randomness, it is no longer random? And I don't understand what you mean about infinity growing and shrinking. Can you please show some examples? Commented Jul 7 at 19:34
  • @WeatherVane I think he means to say different sizes of infinity just like Cantor's infinities i.e., infinities of natural numbers vs. real numbers.
    – How why e
    Commented Jul 7 at 19:46
  • Can't speak to the growing/shrinking, but there are examples of differing qualities or types of infinity. Hilbert's Hotel is a classic example, and Cantor is credited with discovering different infinities, the most familiar case being the pairing of the set of natural numbers against the real numbers. As if to say there is more infinity between 0 and 1 than there is in all the natural numbers. Commented Jul 7 at 19:50
  • 2
    There are infinitely many lists - all of which are boring. Commented Jul 7 at 20:31
  • 3
    If this is a question about maths, it should go to the maths site, such as this question: math.stackexchange.com/questions/240673 . If not, the question should still be closed as lacking clarity, as it needs to specify randomness of what.
    – tkruse
    Commented Jul 7 at 21:39

6 Answers

12

The core intuition underlying randomness is that of unpredictability. For simplicity, I'll use a discrete-time stochastic process as an example.

We say a process X is "random" if knowing its value at some time T (i.e., X(T)) and some information up to time T, I(T), does not single out a unique value for X(T+1) -- given I(T), we observe (or could observe) multiple values for X(T+1).
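
As a minimal sketch of this definition (a hypothetical illustration in Python, not something from the literature), consider a one-step random walk: even after conditioning on I(T) = X(T), more than one successor value is observed:

    import random

    # X is a discrete-time process; here I(T) is simply the current value X(T).
    # Conditioning on I(T) does not single out one value of X(T+1).

    def step(x_t):
        """One step of a simple random walk: X(T+1) = X(T) +/- 1."""
        return x_t + random.choice([-1, 1])

    x_t = 0  # the same information I(T) on every trial
    successors = {step(x_t) for _ in range(1000)}
    print(successors)  # {-1, 1}: multiple values of X(T+1) given the same I(T)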

I emphasize "some information" because it helps us define different types of randomness.

First, there is "true randomness", also called "ontological" or "intrinsic" randomness. Here we take I(T) to be all information that exists: there is literally nothing left to know about the state of the world, and yet we cannot rule out all but one value. Quantum randomness may be an example of this (though this is far from conclusive -- see the literature on the foundations of quantum mechanics, e.g. Bohmian mechanics).

Next, there is the class of "pseudo-randomness", which satisfies the statistical properties of randomness (e.g., no autocorrelation, independence from external variables, etc.) but is generated by a deterministic underlying process. This is how computers generate random-looking sequences. These processes are so good at mimicking true randomness that we treat them as truly random for all practical purposes. Pseudo-randomness differs from true randomness in that knowing the algorithm and its parameters moves the process from random to completely non-random.
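
For instance, here is a toy linear congruential generator in Python (a deliberately simplified sketch; real generators such as the Mersenne Twister are far more elaborate). Knowing the algorithm, its parameters, and the seed makes the entire sequence reproducible, hence completely non-random:

    # A toy LCG: a deterministic process with random-looking output.
    def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
        x = seed
        for _ in range(n):
            x = (a * x + c) % m
            yield x / m  # scale to [0, 1)

    print(list(lcg(seed=42, n=5)))
    print(list(lcg(seed=42, n=5)))  # identical: same seed, same "random" sequence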

The final class of randomness I can envisage is "epistemic randomness", which arises when you lack some of the relevant details (i.e., your I(T) is incomplete). Pseudo-randomness is obviously a subset of this, but even perfectly deterministic systems with discoverable dynamics can still surprise us (e.g., chaotic processes). Here the randomness is due to our limitations: we cannot know the state precisely enough (chaos), or we do not know all the variables.
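
A standard illustration of the chaotic case (my own example, using the logistic map) is a fully deterministic system in which an unmeasurably small error in the initial condition grows until prediction fails:

    # The logistic map x -> r*x*(1-x) is deterministic, yet tiny uncertainty
    # in the initial state is amplified exponentially (chaos).
    def logistic(x, r=4.0):
        return r * x * (1 - x)

    x, y = 0.3, 0.3 + 1e-10  # two states no measurement could tell apart
    for _ in range(60):
        x, y = logistic(x), logistic(y)
    print(abs(x - y))  # order 1: the trajectories have completely diverged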

There are other ways to carve this up, but that is how I differentiate the different "flavors" of randomness.

  • 2
    "There is literally nothing left to know about the state of the world and yet we cannot rule out all but one value". This is kind of spooky stuff. So are the results of many quantum experiments. I've never thought about randomness from this perspective. It invites us to accept the possibility there may exist truths that we cannot ever intuitively understand or rigorously prove. Commented Jul 7 at 20:54
  • @MikeSteele correct. The question is whether true randomness is really coherent and, even then, whether it exists; e.g., Bohmian mechanics is deterministic, so all the spookiness comes from epistemic limitations and non-locality
    – Annika
    Commented Jul 7 at 20:58
  • 1
    Am I right in equating the "epistemic limitations" as some combination of not enough information and no way to intuitively understand the information we have and "non-locality" as being unpredictable in the microscopic, but predictable in the macroscopic? Commented Jul 7 at 21:03
  • @MikeSteele -- yeah, Bohm pushes our uncertainty out to the boundary conditions and non-locality refers to the pilot wave, which is modeled as nonlocal.
    – Annika
    Commented Jul 7 at 21:50
  • "These processes are so good at mimicking true randomness that we treat them as if they were truly random for all practical purposes." ─ except cryptographic key generation. (There are some pseudorandom generators suitable for cryptographic use, but their security relies on using "truly random" seeds. And they aren't used for key generation because the difficulty of brute-forcing depends on the number of "truly random" bits; if you use fewer "truly random" bits to seed a PRNG to generate your key, then only the seed bits needs to be brute-forced by an attacker.)
    – kaya3
    Commented Jul 8 at 9:20
6

Oooh. See this white paper on 5 types of randomness:

Type 0: Fixed numbers or known outcomes

  • fully predictable, either because it is a fixed process, or because the outcome is known already

Type 1: Pseudo-random

  • appears random, but comes from a deterministic process. Given enough time or knowledge, it can theoretically be reduced to Type 0

Type 2: Non-fully reducible

  • random outcomes where you can affect the probabilities, e.g. by gaining information to narrow down the outcome space. Most "real-world randomness" falls into this category; predicting health-care outcomes was given as an example

Type 3: Martingale random

  • random outcomes where you cannot affect the probabilities. A "fair bet" and a theoretical "fair coin" are cited as examples (the sketch after this list contrasts Types 2 and 3)

Type 4: Real randomness

  • Type 3 randomness whose source is not only unknown, but a priori unknowable.

    "If Type 4 randomness exists, then God plays dice; randomness is “baked in” to the universe. I suspect that if Type 4 randomness really does exist, then it will be impossible to prove."

  • Perhaps it would be impossible to disprove also? :-)
    – Scott Rowe
    Commented Jul 7 at 23:38
  • 1
    This is what I think. I think there are many things that are neither provable nor disprovable. It's these things, though, having to do with free will, determinism, morality, meaning, purpose, and consciousness, that are the most interesting. Commented Jul 8 at 1:17
  • 1
  • Perhaps you could approach this question from the perspective of information theory. Randomness is unpredictable, because the random information does not exist before it's generated. Random information is not calculated from existing information, an algorithm and a seed value; that would be pseudorandom. Random information must be extracted from some source of entropy, a stochastic or chaotic process, where no individual can decide the result. Throwing dice is an example of such a process: the player cannot decide the result. Commented Jul 8 at 7:37
4

Randomness has many different uses in practice. It is a negative term that can really only be understood by reference to what it is being contrasted with.

In fundamental physics, random is contrasted with deterministic. A process is random if from a given set of boundary conditions, more than one outcome is possible. This is a feature of quantum theory, at least under its most common interpretations. Randomness in this sense is irreducible: it does not indicate a lack of information.

When speaking more generally of natural phenomena, random indicates something that lacks any identifiable pattern, structure or predictability. This may be said of a process, or of a state of affairs that is the product of a process. This kind of randomness arises from a lack of complete information and does not mean the phenomenon is uncaused or nondeterministic. A toss of a coin might be random in this sense: its outcome has a cause and may be predictable in principle, but we describe it as random because we lack the information to predict it. We may still be able to describe such phenomena using stochastic models. For example, in statistical mechanics we might say that the motion of the molecules of a gas is random, but we are able to describe their behaviour using a statistical distribution. In the case of chaotic systems, it may be impossible to predict their behaviour because of limitations in the precision with which we can measure their boundary conditions.

In Shannon information theory, randomness is also related to lack of predictability. A highly random, or highly entropic, message or system contains more information than a low entropy one because it is less predictable and more surprising.
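
As a quick numerical sketch of this point (standard textbook material, not specific to this answer), the Shannon entropy H = -Σ p·log₂(p) of a fair coin exceeds that of a biased, more predictable one:

    from math import log2

    def entropy(probs):
        """Shannon entropy in bits: H = -sum(p * log2(p))."""
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))  # 1.0 bit: fair coin, maximally unpredictable
    print(entropy([0.9, 0.1]))  # ~0.47 bits: biased coin, more predictable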

In the context of statistical methods, particularly of the frequentist variety, randomness means lack of bias. A random data set is one that has been obtained in such a way that best efforts have been made to ensure that its acquisition is uncorrelated with the parameters of interest.
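
A toy illustration of acquisition bias (my own sketch, with made-up numbers): a convenience sample whose collection correlates with the quantity of interest gives a biased estimate, while a simple random sample does not:

    import random

    population = [random.gauss(170, 10) for _ in range(100_000)]  # e.g. heights

    convenience = sorted(population)[:1000]          # acquisition correlated with height
    simple_random = random.sample(population, 1000)  # "random" = unbiased acquisition

    print(sum(population) / len(population))  # true mean, ~170
    print(sum(convenience) / 1000)            # badly biased estimate, ~145
    print(sum(simple_random) / 1000)          # close to the true mean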

When speaking of events that are the result of human actions, random means having no rational explanation that we can discern. We might describe an act as random if it is clear that someone did it but we are unable to explain why. It does not mean uncaused.

In computing, we also sometimes speak of pseudo-random sequences or pseudo-random number generators. These are deterministic sequences that cannot be distinguished from a uniform distribution by any efficient procedure. They give the appearance of being highly entropic, but have a low Kolmogorov complexity.
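
A rough way to see both halves of that claim (an informal sketch; a general-purpose compressor is only a crude stand-in for Kolmogorov complexity):

    import random, zlib

    random.seed(123)
    data = bytes(random.getrandbits(8) for _ in range(10_000))

    # Looks highly entropic: a general-purpose compressor cannot shrink it...
    print(len(zlib.compress(data)))  # roughly 10,000 bytes
    # ...yet its true description (algorithm + seed) is tiny, i.e. the sequence
    # has low Kolmogorov complexity: "Mersenne Twister, seed=123, 10000 bytes".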

It is important to note what randomness, or chance, is not. Chance is not a thing that causes other things. People often say that something happened by chance. It is an important mistake to reify a negative concept like randomness and treat it as a concrete thing. If I say that a tile falling from my roof happened by chance, I mean only that I have no explanation of why it happened. It does not mean that the chaos monster paid me a visit and knocked it off. This may seem obvious, but it is very common to come across this kind of loose and confused thinking about chance.

1

Yes. Much as with infinity, there are a variety of approaches to distinguishing different kinds of randomness. (Also as with infinity, this leads to several different frameworks which aren't necessarily comparable, since they approach the topic from different perspectives, and each framework has its own various kinds of randomness inside it.)

Here are two (certainly not exhaustive) examples.

One approach, coming out of computability theory, is algorithmic randomness, which thinks of randomness as a property of a countable sequence and asks how difficult it is to predict the sequence's behavior using a finite initial segment. There are a number of good surveys, but I'll specifically point to this book, since the articles were written with a more philosophical audience in mind.
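
To give a flavor of the idea (a loose caricature, not Martin-Löf's or Schnorr's actual definitions): a sequence looks random to a given predictor if that predictor cannot beat chance when guessing the next bit from the finite prefix seen so far:

    import random

    def majority_predictor(prefix):
        """Guess whichever bit has been more common in the prefix so far."""
        return int(sum(prefix) * 2 >= len(prefix))

    def score(bits, predict):
        """Fraction of next-bit guesses the predictor gets right."""
        hits = sum(predict(bits[:i]) == bits[i] for i in range(1, len(bits)))
        return hits / (len(bits) - 1)

    periodic = [0, 0, 1] * 1000  # highly compressible, hence predictable
    noise = [random.getrandbits(1) for _ in range(3000)]

    print(score(periodic, majority_predictor))  # well above 0.5
    print(score(noise, majority_predictor))     # ~0.5: no edge over chance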

Another, rather different approach that I've been more personally involved in is combinatorial approaches to randomness, which concern mathematical structures containing several different "kinds" of information; we can then formalize randomness as the situation where information of one "kind" does not help us predict information of another kind. (If this all seems hopelessly abstract, the idea comes from thinking about random graphs: the properties of individual vertices are one kind of information, where there's an edge is a second kind of information, and a quasi-random graph is one where the first kind of information is of no help in determining the second kind.) Again, there's a large literature on this, with key search terms being "quasi-random hypergraphs".
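
Here is a crude numerical sketch of that idea (my own illustration, using an Erdős–Rényi random graph as a stand-in for a quasi-random one): the edge density inside an arbitrary large vertex subset matches the global density, so vertex information gives no help with edge information:

    import random
    from itertools import combinations

    n, p = 200, 0.5
    edges = {frozenset(e) for e in combinations(range(n), 2) if random.random() < p}

    def density(vertices):
        """Fraction of pairs within `vertices` that are joined by an edge."""
        pairs = list(combinations(vertices, 2))
        return sum(frozenset(e) in edges for e in pairs) / len(pairs)

    subset = random.sample(range(n), 100)  # pick vertices by any rule you like
    print(density(range(n)), density(subset))  # both close to p = 0.5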

1

Beyond probability theory, there are theories that deal with uncertain probabilities ("Knightian uncertainty"). These include possibility theory, Dempster-Shafer evidence theory, and others.

It should be noted that there is a famous consequence argument, given by Peter van Inwagen in his monograph "Metaphysics" (and also described in this video), where he argues that free will is compatible neither with determinism nor with indeterminism, from which he concludes that free will is impossible.

But he makes a mistake by equating indeterminism with stochastic randomness. He does not consider another option: that the most complete physical description of the universe is neither deterministic nor stochastic (probabilistic), but rather includes Knightian uncertainty, i.e. probabilities that are uncertain in principle.

It seems that any system that does not properly include the observer can be described by quantum mechanics or by a more advanced future stochastic theory. Most interpretations of quantum mechanics are based on the randomness of quantum events.

But as was shown by Thomas Breuer, quantum mechanics, or any other stochastic theory, is not universally valid; in other words, it cannot describe a system that properly includes the observer.

This means that the most complete physical description of a system that includes the observer can be neither stochastic nor deterministic (the deterministic case being a special case of the stochastic). This leaves room for possible free will of the observer.

  • I could spend weeks in these wonderful rabbit holes, and thank you for the Wayback Machine link! I'll leave that tab open (Thomas Breuer) and try to read it when I can. You know, some quantum experiments make me imagine time travel differently. I imagined a universe where backward time travel would not reveal events as they had once occurred, but a "random" variation of events that could have led to the very local snapshot of spacetime that we started from. Likewise, once having gone back, when we went forward again we would see events "randomly" being "selected". Don't think so, but interesting. Commented Jul 13 at 4:34
-1

There are several types of randomness, or several meanings for the word "random".

Mathematical randomness is a property of a series. It is the unpredictability of the next item in the series. There are some tools to detect mathematical randomness.

True randomness is a property of a single number. It means that the number is a product of a genuinely stochastic process, no-one has deliberately selected it.

Pseudorandomness, as the name implies, is fake randomness. Pseudorandom numbers are deliberately selected to create a false impression of randomness.

A pseudorandom series may pass and a truly random series may fail a test for mathematical randomness. The tests are not watertight.
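
For concreteness, here is one such tool (a simplified version of the frequency, or "monobit", test from the NIST statistical suite; my own sketch, not code from the suite itself):

    import random
    from math import erfc, sqrt

    def monobit_test(bits):
        """Return a p-value; small values suggest the 0/1 counts are biased."""
        s = sum(1 if b else -1 for b in bits)
        return erfc(abs(s) / sqrt(2 * len(bits)))

    bits = [random.getrandbits(1) for _ in range(10_000)]
    print(monobit_test(bits))          # typically well above 0.01: "passes"
    print(monobit_test([1] * 10_000))  # ~0.0: an obviously non-random series fails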

In physics, randomness refers to the probabilistic indeterminacy in all events.

In philosophy and in colloquial speech, "random" refers to anything that is not deliberately selected. If free will is seen as making deliberate selections, then randomness is the very opposite of free will.

2
  • 3
    "True randomness is a property of a single number. It means that the number is a product of a genuinely stochastic process, no-one has deliberately selected it." - doesn't that make it a property of the process, and not the single number?
    – TKoL
    Commented Jul 7 at 21:20
  • Adjective "random" refers to the number, not the process. The process is "stochastic". Commented Jul 8 at 6:03

