
3. Discrete Random Variables and Probability Distributions

1. Random Variables

The name random variable captures two ideas:

  • variable: because different numerical values are possible;
  • random: because the observed value depends on which of the possible experimental outcomes results.

For a given sample space S of some experiment, a random variable is any rule that associates a number with each outcome in S.

Any random variable whose only possible values are 0 and 1 is called a Bernoulli random variable.

Two Types of Random Variables

  • A discrete random variable is an rv whose possible values either constitute a finite set or else can be listed in an infinite sequence in which there is a first element, a second element, and so on.
  • A continuous random variable is one whose set of possible values consists of an entire interval on the number line.

 

2. Probability Distributions for Discrete Random Variables

The probability distribution of X says how the total probability of 1 is distributed among the various possible X values.

The probability distribution or probability mass function (pmf) of a discrete rv is defined for every number x by p(x) = P(X=x) = P(all s in S: X(s)=x).

In words, for every possible value x of the random variable, the function specifies the probability of observing that value when the experiment is performed.
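
For example, for two tosses of a fair coin with X = the number of heads, p(0) = 0.25, p(1) = 0.5, p(2) = 0.25. A minimal Python sketch of this computation (the fair-coin setup is illustrative):

    from itertools import product

    # Sample space for two tosses of a fair coin; all four outcomes are equally likely.
    outcomes = ["".join(s) for s in product("HT", repeat=2)]  # ['HH', 'HT', 'TH', 'TT']

    # X(s) = number of heads in outcome s; p(x) = P(all s with X(s) = x).
    pmf = {}
    for s in outcomes:
        x = s.count("H")
        pmf[x] = pmf.get(x, 0) + 1 / len(outcomes)

    print(pmf)  # {2: 0.25, 1: 0.5, 0: 0.25}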

A Parameter of a Probability Distribution

Suppose p(x) depends on a quantity that can be assigned any one of a number of possible values, with each different value determining a different probability distribution. Such a quantity is called a parameter of the distribution. The collection of all probability distributions for different values of the parameter is called a family of probability distributions.
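
For example, a Bernoulli rv has pmf p(1) = α and p(0) = 1 - α; the parameter α picks out one member of the Bernoulli family. A small sketch (the α values are illustrative):

    def bernoulli_pmf(x, alpha):
        # pmf of a Bernoulli rv: p(1) = alpha, p(0) = 1 - alpha, 0 elsewhere.
        if x == 1:
            return alpha
        if x == 0:
            return 1 - alpha
        return 0.0

    # Each value of the parameter alpha determines a different member of the family.
    for alpha in (0.2, 0.5, 0.9):
        print(alpha, bernoulli_pmf(0, alpha), bernoulli_pmf(1, alpha))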

The Cumulative Distribution Function

The cumulative distribution function F(x) of a discrete rv X with pmf p(x) is defined for every number x by

F(x) = P(X≤x) = ∑p(y)    (sum over all possible values y satisfying y ≤ x)

For any number x, F(x) is the probability that the observed value of X will be at most x.
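
A sketch of F(x) computed directly from a pmf; the pmf values here are hypothetical:

    # Hypothetical pmf of a discrete rv X.
    pmf = {1: 0.05, 2: 0.10, 3: 0.15, 4: 0.25, 5: 0.45}

    def cdf(x, pmf):
        # F(x) = P(X <= x): sum p(y) over all possible values y with y <= x.
        return sum(p for y, p in pmf.items() if y <= x)

    print(cdf(3, pmf))    # 0.30 (up to floating-point rounding)
    print(cdf(3.7, pmf))  # 0.30 -- F is a step function, flat between possible values
    print(cdf(5, pmf))    # 1.0 (up to rounding)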

Another View of Probability Mass Functions

It is often helpful to think of a pmf as specifying a mathematical model for a discrete population. Once we have such a mathematical model for a population, we will use it to compute values of population characteristics and make inferences about such characteristics.

 

3. Expected Values of Discrete Random Variables

When computing the expectation of X, the population size is irrelevant as long as the pmf is given. The average or mean value of X is then a weighted average of the possible values, where the weights are the probabilities of those values.

The Expected Value of X

Let X be a discrete rv with set of possible values D and pmf p(x). The expected value or mean value of X, denoted by E(X) or μX, is E(X) = μX = ∑x*p(x), where the sum is over all x in D.
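
A one-line computation of E(X) from a hypothetical pmf:

    # Hypothetical pmf on D = {0, 1, 2, 3}.
    pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

    # E(X) = sum of x * p(x) over all x in D.
    mean = sum(x * p for x, p in pmf.items())
    print(mean)  # 0*0.1 + 1*0.2 + 2*0.3 + 3*0.4 = 2.0 (up to rounding)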

The Expected Value of a Function

If the rv X has set of possible values D and pmf p(x), then the expected value of any function h(X), denoted by E[h(X)] or μh(X), is computed by E[h(X)] = ∑h(x)*p(x), where the sum is over all x in D.

According to this proposition, E[h(X)] is computed in the same way that E(X) itself is, except that h(x) is substituted in place of x.
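
A sketch of the same idea in code, reusing a hypothetical pmf:

    # Hypothetical pmf on D = {0, 1, 2, 3}.
    pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

    def expect(h, pmf):
        # E[h(X)] = sum of h(x) * p(x) over all x in D.
        return sum(h(x) * p for x, p in pmf.items())

    print(expect(lambda x: x, pmf))      # E(X)   = 2.0 (up to rounding)
    print(expect(lambda x: x * x, pmf))  # E(X^2) = 5.0 (up to rounding)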

Rules of Expected Value

The h(X) function of interest is quite frequently a linear function aX + b. In this case, E[h(X)] is easily computed from E(X).

The expected value of a linear function equals the linear function evaluated at the expected value E(X).

PROPOSITION: E(aX + b) = a*E(X) + b
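
A quick numerical check of the proposition, with illustrative values of a and b:

    # Hypothetical pmf with E(X) = 2.0.
    pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

    a, b = 3, -1
    lhs = sum((a * x + b) * p for x, p in pmf.items())  # E(aX + b) directly
    rhs = a * sum(x * p for x, p in pmf.items()) + b    # a*E(X) + b
    print(lhs, rhs)  # both 5.0 (up to rounding)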

The Variance of X

We will use the variance of X to measure the amount of variability in the distribution of X.

Let X have pmf p(x) and expected value μ. Then the variance of X, denoted by V(X) or σX² or just σ², is

V(X) = ∑(x-μ)²*p(x) = E[(X-μ)²]    (sum over all x in D)

A Shortcut Formula for σ²

The number of arithmetic operations necessary to compute σ² can be reduced by using an alternative computing formula.

V(X) = σ² = E(X²) - [E(X)]²
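
A sketch checking that the defining formula and the shortcut formula agree on a hypothetical pmf:

    # Hypothetical pmf with E(X) = 2.0.
    pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

    mean = sum(x * p for x, p in pmf.items())                   # E(X)
    var_def = sum((x - mean) ** 2 * p for x, p in pmf.items())  # E[(X - mu)^2]
    ex2 = sum(x * x * p for x, p in pmf.items())                # E(X^2)
    var_short = ex2 - mean ** 2                                 # E(X^2) - [E(X)]^2
    print(var_def, var_short)  # both 1.0 (up to rounding)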

Rules of Variance

The variance of h(X) is the expected value of the squared difference between h(X) and its expected value: V[h(X)] = E{[h(X) - E(h(X))]²}. In particular, for the linear function h(X) = aX + b, V(aX + b) = a²*V(X) = a²σ², and the standard deviation of aX + b is |a|σ.
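
A numerical check that V(aX + b) = a²*V(X), with the same hypothetical pmf:

    # Hypothetical pmf with E(X) = 2.0 and V(X) = 1.0.
    pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

    def expect(h):
        return sum(h(x) * p for x, p in pmf.items())

    a, b = 3, -1
    mu_y = expect(lambda x: a * x + b)                 # E(aX + b)
    var_y = expect(lambda x: (a * x + b - mu_y) ** 2)  # V(aX + b) from the definition
    print(var_y, a ** 2 * 1.0)                         # both 9.0 (up to rounding)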

 

4. The Binomial Probability Distribution

An experiment for which conditions 1-4 are satisfied is called a binomial experiment:

  1. The experiment consists of a sequence of n trials, where n is fixed in advance of the experiment.
  2. The trials are identical, and each trial can result in one of the same two possible outcomes, which we denote by success (S) or failure (F).
  3. The trials are independent, so that the outcome on any particular trial does not influence the outcome on any other trial.
  4. The probability of success is constant from trial to trial; we denote this probability by p.

The Binomial Random Variable and Distribution

Given a binomial experiment consisting of n trials, the binomial random variable X associated with this experiment is defined as X = the number of S's among the n trials. Its pmf is denoted by b(x; n, p) = (n choose x)*p^x*(1-p)^(n-x),  x = 0, 1, ..., n.

Using Binomial Tables

For X ~ Bin(n,p), the cdf will be denoted by 

P(X≤x) = B(x;n,p) = ∑b(y;n,p) summed over y = 0, 1, ..., x,    for x = 0, 1, ..., n
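
A sketch computing b(x; n, p) and B(x; n, p) by hand and cross-checking against SciPy (assuming SciPy is available; n and p are illustrative):

    from math import comb
    from scipy.stats import binom  # assumes SciPy is installed

    n, p = 10, 0.3  # illustrative values

    def b(x):
        # b(x; n, p) = (n choose x) * p^x * (1-p)^(n-x)
        return comb(n, x) * p ** x * (1 - p) ** (n - x)

    # B(3; n, p) = b(0) + b(1) + b(2) + b(3)
    print(sum(b(y) for y in range(4)))  # ~0.6496
    print(binom.cdf(3, n, p))           # same value from SciPy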

The Mean and Variance of X

PROPOSITION: If X ~ Bin(n,p), then E(X) = np, V(X) = np(1-p) = npq, and σX = (npq)^(1/2), where q = 1-p.
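
A quick check of the proposition via SciPy (illustrative n and p):

    from scipy.stats import binom  # assumes SciPy is installed

    n, p = 10, 0.3
    mean, var = binom.stats(n, p, moments="mv")
    print(mean, var)               # 3.0 2.1
    print(n * p, n * p * (1 - p))  # np = 3.0, npq = 2.1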

 

5. Hypergeometric and Negative Binomial Distributions

When a sample of size n is drawn without replacement from a finite dichotomous population, the hypergeometric distribution is the exact probability model for the number of S's in the sample.

The binomial rv X is the number of S's when the number n of trials is fixed, whereas the negative binomial distribution arises from fixing the number of S's and letting the number of trials be random.

The Hypergeometric Distribution

The assumptions leading to the hypergeometric distribution are as follows:

  1. The population or set to be sampled consists of N individuals, objects, or elements (a finite population).
  2. Each individual can be characterized as a success (S) or a failure (F), and there are M successes in the population.
  3. A sample of n individuals is selected without replacement in such a way that each subset of size n is equally likely to be chosen.

If X is the number of S's in a completely random sample of size n drawn from a population consisting of M S's and (N-M) F's, then X has the hypergeometric distribution, with pmf h(x; n, M, N) = (M choose x)*(N-M choose n-x)/(N choose n) for integers x with max(0, n-N+M) ≤ x ≤ min(n, M).
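
A sketch computing h(x; n, M, N) by hand and via scipy.stats.hypergeom; note that SciPy's argument order differs from the textbook's (population size, then successes, then sample size). All values are illustrative:

    from math import comb
    from scipy.stats import hypergeom  # assumes SciPy is installed

    N, M, n = 20, 12, 5  # population size, successes in the population, sample size
    x = 3

    # h(x; n, M, N) = (M choose x) * (N-M choose n-x) / (N choose n)
    manual = comb(M, x) * comb(N - M, n - x) / comb(N, n)

    # CAUTION: SciPy's order is hypergeom.pmf(x, pop_size, successes, sample_size).
    print(manual, hypergeom.pmf(x, N, M, n))  # same value, ~0.3973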

The Mean and Variance of X

The mean and variance of the hypergeometric rv X having pmf h(x;n,M,N) are 

E(X) = n*(M/N)

V(X) = ((N-n)/(N-1))*n*(M/N)*(1-(M/N))

With p = M/N, the means of the binomial and hypergeometric rv's are equal, whereas the variances of the two rv's differ by the factor (N-n)/(N-1), often called the finite population correction factor.
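
A numerical check of the mean, variance, and finite population correction factor (illustrative values):

    from scipy.stats import hypergeom  # assumes SciPy is installed

    N, M, n = 20, 12, 5  # illustrative values
    mean, var = hypergeom.stats(N, M, n, moments="mv")

    p = M / N
    fpc = (N - n) / (N - 1)
    print(mean, n * p)                 # both 3.0: same mean as Bin(n, M/N)
    print(var, fpc * n * p * (1 - p))  # both ~0.9474: binomial variance times the FPC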

Approximating Hypergeometric Probabilities

Let the population size N and the number of population S's, M, get large with the ratio M/N approaching p. Then h(x;n,M,N) approaches b(x;n,p); so for n/N small, the two are approximately equal provided that p is not too near either 0 or 1.
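
A small demonstration of the convergence, holding n and M/N = p fixed while N grows (all values illustrative):

    from scipy.stats import binom, hypergeom  # assumes SciPy is installed

    n, p, x = 5, 0.6, 3
    for N in (20, 200, 2000):  # population grows, M/N stays at p
        M = int(N * p)
        print(N, hypergeom.pmf(x, N, M, n))  # approaches the binomial value below
    print(binom.pmf(x, n, p))                # b(3; 5, 0.6) ~= 0.3456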

The Negative Binomial Distribution 

The negative binomial rv and distribution are based on an experiment satisfying the following conditions:

  1. The experiment consists of a sequence of independent trials.
  2. Each trial can result in either a success (S) or a failure (F).
  3. The probability of success is constant from trial to trial, so P(S on trial i) = p for i = 1, 2, 3,...
  4. The experiment continues (trials are performed) until a total of r successes have been observed, where r is a specified positive integer.

The random variable of interest is X = the number of failures that precede the rth success.

X is called a negative binomial random variable because, in contrast to the binomial rv, the number of successes is fixed and the number of trials is random.
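
SciPy's nbinom uses exactly this convention (counting failures before the rth success). A sketch using the standard pmf nb(x; r, p) = (x+r-1 choose r-1)*p^r*(1-p)^x, with illustrative r and p:

    from math import comb
    from scipy.stats import nbinom  # assumes SciPy is installed

    r, p = 3, 0.4  # illustrative: P(S) = 0.4, stop at the 3rd success
    x = 4          # number of failures preceding the 3rd success

    # nb(x; r, p) = (x+r-1 choose r-1) * p^r * (1-p)^x
    manual = comb(x + r - 1, r - 1) * p ** r * (1 - p) ** x
    print(manual, nbinom.pmf(x, r, p))  # same value, ~0.1244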

 

6. The Poisson Probability Distribution

A random variable X is said to have a Poisson distribution if the pmf of X is p(x;λ) = e^(-λ)*λ^x/x!,  x = 0, 1, 2, ...,  for some λ > 0.

The Poisson Distribution as a Limit

PROPOSITION: Suppose that in a binomial pmf b(x;n,p), we let n→∞ and p→0 in such a way that np approaches a value λ>0. Then b(x;n,p)→p(x;λ).
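
A sketch of this limit, holding λ = np fixed while n grows (λ and x are illustrative):

    from scipy.stats import binom, poisson  # assumes SciPy is installed

    lam, x = 2.0, 3
    for n in (10, 100, 10000):
        p = lam / n                   # n -> infinity, p -> 0, with np = lambda fixed
        print(n, binom.pmf(x, n, p))  # approaches the Poisson value below
    print(poisson.pmf(x, lam))        # e^(-2) * 2^3 / 3! ~= 0.1804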

The Mean and Variance of X

PROPOSITION: If X has a Poisson distribution with parameter λ, then E(X) = V(X) = λ.
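
A quick check via SciPy (λ illustrative):

    from scipy.stats import poisson  # assumes SciPy is installed

    lam = 2.0
    mean, var = poisson.stats(lam, moments="mv")
    print(mean, var)  # 2.0 2.0 -- both equal lambda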

The Poisson Process

A common application: if events occur over time according to a Poisson process with rate α per unit time, then the number of events in any time interval of length t has a Poisson distribution with parameter λ = αt.

 
