Statistics Across the Curriculum Glossary

A Faculty Development Program

  • The definitions for our meeting on Probability are sample space, event (in a sample space), probability (of an event), conditional probability (of an event given another event), Bayes’ rule, and independent (events in a sample space).
  • The definitions for our first meeting on Distributions are (discrete) random variable (on a sample space), probability distribution (of a discrete random variable), expected value (of a discrete random variable), variance (of a discrete random variable), binomial distribution, Poisson distribution, and joint probability distribution (of a pair of discrete random variables).
  • The definitions for our second meeting on Distributions are (continuous) random variable (on a sample space), probability distribution (of a continuous random variable), expected value (of a continuous random variable), variance (of a continuous random variable), normal distribution, exponential distribution, joint probability distribution (of a pair of continuous random variables), and independent (random variables).
  • The definitions for our third meeting on Distributions are random sample, sample mean, sample variance and standard deviation, unbiased estimator, Central Limit Theorem, confidence interval, and t distribution.
  • For our first meeting on Hypothesis Testing, Richard Ball prepared a handout on p values.

Bayes’ rule: The conditional probability of A given B is computed from the conditional probability of B given A using the rule P(A|B) = P(B|A)P(A)/P(B). An application of Bayes’ rule to epidemiology was provided at the request of Kaye Edwards.
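
As a quick numerical illustration (a sketch with made-up numbers, not the epidemiology handout itself): suppose a disease has prevalence P(D) = 0.01 and a test has sensitivity P(+|D) = 0.95 and false-positive rate P(+|not D) = 0.05. Then Bayes’ rule gives the probability of disease given a positive test:

    # Hypothetical numbers for illustration only.
    p_d = 0.01                # prevalence P(D)
    p_pos_given_d = 0.95      # sensitivity P(+|D)
    p_pos_given_not_d = 0.05  # false-positive rate P(+|not D)

    # Total probability: P(+) = P(+|D)P(D) + P(+|not D)P(not D)
    p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

    # Bayes' rule: P(D|+) = P(+|D)P(D)/P(+)
    p_d_given_pos = p_pos_given_d * p_d / p_pos
    print(p_d_given_pos)  # about 0.161

Even with a fairly accurate test, a positive result on a rare disease leaves the probability of disease surprisingly low, which is the point the epidemiology application makes.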

Binomial distribution: A random variable X is said to be binomially distributed, with probability distribution Bin(n,p), if the set of all possible values k of X is {0,1,2,…,n} and P(X=k) = (n choose k) p^k (1-p)^(n-k), where (n choose k) = n!/(k!(n-k)!). An application of the binomial distribution to error analysis was provided at the instigation of Rob Scarrow. The binomial distribution Bin(n,p) has expected value μ = np and variance σ^2 = np(1-p). These formulas follow from the fact that the sum of n independent Bin(1,p) random variables is a Bin(n,p) random variable, together with the easy computation that E(X) = p and V(X) = p(1-p) if P(X=1) = p and P(X=0) = 1-p.
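
A short Python sketch (illustrative parameters, not part of the handout) that evaluates the Bin(n,p) probabilities from the formula above and confirms μ = np and σ^2 = np(1-p) directly from the definitions:

    from math import comb

    n, p = 10, 0.3  # illustrative choice of parameters
    pmf = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

    mean = sum(k * pmf[k] for k in pmf)             # should equal np = 3.0
    var = sum((k - mean)**2 * pmf[k] for k in pmf)  # should equal np(1-p) = 2.1
    print(sum(pmf.values()), mean, var)             # 1.0, 3.0, 2.1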

Conditional probability (of an event given another event): The conditional probability of A given B (where P(B) is nonzero) is P(A|B) = P(A and B)/P(B). If the sample space is the set of all possible genotypes of a child born to parents whose genotypes are both Aa, then P({AA,Aa}|{Aa,aa}) = P(Aa)/P({Aa,aa}) = 2/3.
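
The genotype computation can be replayed in a few lines of Python (a sketch using the probabilities from the sample space entry):

    # Probabilities over the sample space {AA, Aa, aa} for an Aa x Aa cross.
    P = {"AA": 0.25, "Aa": 0.5, "aa": 0.25}

    def prob(event):
        """Probability of an event, given as a set of outcomes."""
        return sum(P[s] for s in event)

    A = {"AA", "Aa"}  # child exhibits the dominant trait
    B = {"Aa", "aa"}  # child carries the a allele
    # P(A|B) = P(A and B)/P(B)
    print(prob(A & B) / prob(B))  # 0.5/0.75 = 2/3, about 0.667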

Event (in a sample space): A subset of the sample space. If the sample space were the set of all possible genotypes of a child born to parents whose genotypes are both Aa, then the event that the child has genotype AA or Aa is the subset {AA,Aa} of the sample space {AA,Aa,aa}.

Expected value: The expected value E(X) of a discrete random variable X is the weighted average of its possible values k, where the weight multiplying k is p(k) = P(X=k). (See probability distribution of a discrete random variable.) If the probability distribution of X is defined by p(1)=3/4 and p(0)=1/4, then E(X) = (1)(3/4) + (0)(1/4) = 3/4. This definition comes from noticing that the average of a list of values is a weighted average of the distinct values in the list, where the weight multiplying a value is the number of times that value occurs in the list divided by the total number of values listed. So the average (2+2+1+2+3+1)/6 is (1)(2/6) + (2)(3/6) + (3)(1/6). The expected value of a continuous random variable is also the weighted average of its possible values. See Lynne Butler’s slide. If a random variable X is normally distributed, then E(X) is the mean μ of the normal distribution, the point at which the distribution has its highest value and about which it is symmetric. Note that E(cX+d) = cE(X) + d. If X and Y are jointly distributed random variables, then E(X+Y) = E(X) + E(Y) (whether or not X and Y are independent).
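
For the discrete case, the weighted-average definition is a one-line computation; the sketch below (illustrative only) also checks E(cX+d) = cE(X) + d for one choice of c and d:

    pmf = {1: 0.75, 0: 0.25}  # the distribution p(1)=3/4, p(0)=1/4

    def expected_value(pmf):
        """Weighted average of the possible values, with weights p(k)."""
        return sum(k * pk for k, pk in pmf.items())

    E = expected_value(pmf)
    print(E)  # 0.75

    # E(cX+d) = cE(X)+d, checked with c=2, d=1:
    c, d = 2, 1
    shifted = {c * k + d: pk for k, pk in pmf.items()}
    print(expected_value(shifted), c * E + d)  # both 2.5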

Independent (events in a sample space): Events A and B are independent if P(A|B) = P(A) (or P(B) = 0). For the sample space {AA, Aa, aa} of all possible genotypes of a child born to parents whose genotypes are both Aa, the event {AA,Aa} and the event {Aa, aa} are not independent, since P({AA,Aa})=3/4 but P({AA,Aa}|{Aa,aa})=2/3.
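
Continuing the sketch from the conditional probability entry, the non-independence of these two genotype events is a direct check of P(A and B) against P(A)P(B):

    P = {"AA": 0.25, "Aa": 0.5, "aa": 0.25}

    def prob(event):
        return sum(P[s] for s in event)

    A, B = {"AA", "Aa"}, {"Aa", "aa"}
    # Independence would mean P(A and B) = P(A)P(B).
    print(prob(A & B), prob(A) * prob(B))  # 0.5 vs 0.5625, so not independent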

Independent (random variables): A pair (X,Y) of jointly distributed discrete random variables is said to be independent if p(x,y) is the product of P(X=x) and P(Y=y), for all possible values x of X and y of Y. (See joint probability distribution.) A pair (X,Y) of jointly distributed continuous random variables is said to be independent if f(x,y) is the product of f_X(x) and f_Y(y), where f_X and f_Y are the marginal distributions of X and Y, respectively.
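
For discrete random variables the factorization test is mechanical; a sketch (with an illustrative joint table, chosen so the test succeeds) is:

    # Joint distribution of (X, Y) as a dict: (x, y) -> p(x, y).
    joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.15, (1, 1): 0.45}

    xs = {x for x, _ in joint}
    ys = {y for _, y in joint}
    px = {x: sum(joint[(x, y)] for y in ys) for x in xs}  # P(X=x)
    py = {y: sum(joint[(x, y)] for x in xs) for y in ys}  # P(Y=y)

    # Independent means p(x,y) = P(X=x)P(Y=y) for every pair (x,y).
    independent = all(
        abs(joint[(x, y)] - px[x] * py[y]) < 1e-12 for x in xs for y in ys
    )
    print(independent)  # True for this table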

Joint probability distribution: The joint probability distribution p(x,y) of a pair (X,Y) of discrete random variables is a nonnegative function defined for all pairs (x,y) of possible values of X and Y, respectively. It is defined by p(x,y) = P(X=x and Y=y). To make sense of the sum X+Y of two random variables, it is necessary to use their joint probability distribution. See slides from Lynne Butler’s presentation. The joint probability distribution f(x,y) of a pair (X,Y) of continuous random variables is a function defined for all pairs (x,y) of possible values x of X and y of Y, whose graph lies above the xy-plane. It is defined so that P( a < X < b and c < Y < d ) is the volume under the graph of f over the rectangle of points (x,y) where a < x < b and c < y < d. So the total volume under the graph of f is 1. In particular, the marginal distribution f_X for X is defined so that f_X(k) is the area under the curve where the graph of f intersects the plane x=k. Similarly, the marginal distribution f_Y for Y is defined so that f_Y(l) is the area under the curve where the graph of f intersects the plane y=l.
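
For the continuous case, the volumes and marginals can be approximated numerically. The following sketch uses the illustrative joint density f(x,y) = x + y on the unit square (an assumption chosen for simplicity, not an example from the slides) and a Riemann sum:

    # Illustrative joint density f(x,y) = x + y on 0 < x < 1, 0 < y < 1.
    def f(x, y):
        return x + y

    n = 1000
    h = 1.0 / n
    # Total volume under the graph of f should be 1.
    total = sum(
        f((i + 0.5) * h, (j + 0.5) * h) for i in range(n) for j in range(n)
    ) * h * h
    print(total)  # approximately 1.0

    # Marginal f_X(k): area of the slice of f at x = k.
    def f_X(k):
        return sum(f(k, (j + 0.5) * h) for j in range(n)) * h

    print(f_X(0.5))  # exact answer is k + 1/2 = 1.0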

Normal distribution: A binomial distribution Bin(n,p) where n is large and p is close to 1/2 is often approximated by the normal distribution with mean np and variance np(1-p). The standard normal distribution is pictured in Lynne Butler’s slide. It is symmetric about the vertical axis, so its expected value is 0. The integral used to calculate its variance is given in another slide. Other normal distributions are obtained from the standard normal distribution by a horizontal stretch by a factor of σ, followed by a vertical stretch by a factor of 1/σ (to keep the area under the graph equal to 1), followed by a translation right by μ units. The fact that this normal distribution has expected value μ and variance σ^2 is explained in yet another slide. If X and Y are independent normally distributed random variables, then X+Y is normally distributed.
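
The approximation claim can be seen numerically; the sketch below (with illustrative parameters) compares exact Bin(100, 1/2) probabilities with the matching normal density near the mean:

    from math import comb, exp, pi, sqrt

    n, p = 100, 0.5
    mu, sigma = n * p, sqrt(n * p * (1 - p))

    def normal_pdf(x, mu, sigma):
        """Density of the normal distribution with mean mu and sd sigma."""
        return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

    for k in (45, 50, 55):
        exact = comb(n, k) * p**k * (1 - p)**(n - k)
        approx = normal_pdf(k, mu, sigma)
        print(k, round(exact, 5), round(approx, 5))  # close agreement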

Poisson distribution: A random variable X is said to have a Poisson distribution, with parameter λ>0, if the set of all possible values k of X is {0,1,2,…} and P(X=k) = e^(-λ) λ^k / k!. The Poisson distribution with parameter λ has expected value μ = λ and variance σ^2 = λ. If p is very small and n is very large, the Poisson distribution with λ = np is a good approximation of the binomial distribution Bin(n,p).
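
A sketch (with illustrative parameters) comparing the Poisson probabilities with the corresponding Bin(n,p) probabilities for small p and large n:

    from math import comb, exp, factorial

    n, p = 1000, 0.003  # large n, small p
    lam = n * p         # lambda = np = 3.0

    for k in range(5):
        binom = comb(n, k) * p**k * (1 - p)**(n - k)
        poisson = exp(-lam) * lam**k / factorial(k)
        print(k, round(binom, 5), round(poisson, 5))  # nearly identical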

Probability (of an event): The probability of an event A in a sample space S is the probability that the experiment will give an outcome in A. So P(S) = 1 and P(not A) = 1 – P(A) for any event A. For the sample space {AA, Aa, aa} of all possible genotypes of a child born to parents whose genotypes are both Aa, the event {AA,Aa} has probability P({AA,Aa}) = P(AA) + P(Aa) = 1/4 + 1/2 = 3/4. That is, in the population of all such children, three fourths exhibit the dominant trait.

Probability distribution: The probability distribution of a discrete random variable X on a sample space S is the function p that assigns to each possible value k of X the probability that X equals k. That is, p(k) = P({s in S such that X(s)=k}). For the sample space of all possible genotypes of a child born to parents whose genotypes are both Aa, the random variable X defined by X(AA)=1, X(Aa)=1 and X(aa)=0 has probability distribution p defined by p(1)=3/4 and p(0)=1/4. (See also joint probability distribution.) A probability distribution for a continuous random variable is a function f(x), defined on an interval of real numbers x, with a graph that lies above the horizontal axis and for which the region underneath the graph has area equal to 1. A continuous random variable X is said to have probability distribution f if P( a < X < b ) is the area under the graph of f between the vertical lines through x=a and x=b.
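
In the continuous case, P( a < X < b ) as an area can be approximated by a Riemann sum. The sketch below uses the illustrative density f(x) = 2x on 0 < x < 1 (an assumption chosen so the exact areas are easy to check by hand):

    # Illustrative density f(x) = 2x on the interval (0, 1); total area is 1.
    def f(x):
        return 2 * x

    def prob_between(a, b, n=100000):
        """Approximate P(a < X < b) as the area under f from a to b."""
        h = (b - a) / n
        return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

    print(prob_between(0.0, 1.0))   # approximately 1.0
    print(prob_between(0.25, 0.5))  # exact answer is 0.5^2 - 0.25^2 = 0.1875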

Random variable: A random variable X on a sample space S is a function that assigns to each element s of S a number X(s). If the sample space is finite or countably infinite, the random variable is called discrete. See also probability distribution. For the sample space of all possible genotypes of a child born to parents whose genotypes are both Aa, one discrete random variable of interest is the function X defined by X(AA)=1, X(Aa)=1 and X(aa)=0. If the random variable assigns (to elements of the sample space) values covering an interval of real numbers, the random variable is called continuous. Any continuous random variable can be approximated by discrete random variables. For intuition, think about weighing 10,000 slices of Oatnut bread on more and more accurate scales. See data from Lynne’s experiment.
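
The weighing intuition can be mimicked in code: round simulated weights to coarser and coarser scales, and each rounded version is a discrete random variable approximating the continuous one. (This sketch uses made-up weights, not the data from Lynne’s experiment.)

    import random

    random.seed(0)
    # Hypothetical bread-slice weights, in grams (a continuous random variable).
    weights = [random.gauss(40.0, 1.5) for _ in range(10000)]

    for decimals in (0, 1, 2):
        rounded = [round(w, decimals) for w in weights]
        # Each rounding gives a discrete random variable, finitely many values.
        print(decimals, len(set(rounded)))  # finer scales -> more distinct values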

Sample space: The set S of all possible outcomes of an experiment (performed on individuals drawn from a population of interest). For example, if the experiment is to record the genotype of a child born to parents whose genotypes are both Aa, the sample space is S={AA, Aa, aa}. The probability that the child has genotype AA is P(AA)=1/4, the probability that the child has genotype Aa is P(Aa)=1/2, and the probability that the child has genotype aa is P(aa)=1/4.

Variance: The variance V(X) of a discrete random variable X with expected value or “mean” E(X)=μ is the weighted average of the squared deviations from the mean, where the weight multiplying a squared deviation (k-μ)^2 is p(k) = P(X=k). (See probability distribution of a discrete random variable.) If the probability distribution of X is defined by p(1)=3/4 and p(0)=1/4, then V(X) = (1 - 3/4)^2 (3/4) + (0 - 3/4)^2 (1/4) = (3/4)(1/4). The variance of a continuous random variable is also the weighted average of squared deviations from the mean. See Lynne Butler’s slide. If a random variable X is normally distributed, then V(X) is the square of the standard deviation σ of the normal distribution, where the distribution has inflection points at μ-σ and μ+σ. Note that V(cX+d) = c^2 V(X). If X and Y are jointly distributed independent random variables, then V(X+Y) = V(X) + V(Y).
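
The weighted-average definition of variance, and the rule V(cX+d) = c^2 V(X), can both be checked with a short sketch (illustrative only, using the same distribution as the expected value entry):

    pmf = {1: 0.75, 0: 0.25}  # the distribution p(1)=3/4, p(0)=1/4

    def mean(pmf):
        return sum(k * pk for k, pk in pmf.items())

    def variance(pmf):
        """Weighted average of squared deviations from the mean."""
        mu = mean(pmf)
        return sum((k - mu) ** 2 * pk for k, pk in pmf.items())

    V = variance(pmf)
    print(V)  # (3/4)(1/4) = 0.1875

    # V(cX+d) = c^2 V(X), checked with c=2, d=1:
    c, d = 2, 1
    shifted = {c * k + d: pk for k, pk in pmf.items()}
    print(variance(shifted), c**2 * V)  # both 0.75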
