Discrete probability distribution

In probability theory, a probability distribution is called discrete if it is characterized by a probability mass function. Thus, the distribution of a random variable "X" is discrete, and "X" is then called a discrete random variable, if

    \sum_u \Pr(X = u) = 1 \qquad (1)

as "u" runs through the set of all possible values of "X".

If a random variable is discrete, then the set of all values that it can assume with non-zero probability is finite or countably infinite, because the sum of uncountably many positive real numbers (defined as the least upper bound of the set of all finite partial sums) always diverges to infinity.
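
The countability step can be spelled out as follows (a standard argument, not included in the original): the set of values with positive probability decomposes as

    \{u : \Pr(X = u) > 0\} = \bigcup_{n \ge 1} \{u : \Pr(X = u) > \tfrac{1}{n}\},

and each set on the right-hand side contains fewer than n values, since n or more such values would already give a finite partial sum exceeding 1. A countable union of finite sets is countable.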

Typically, this set of possible values is a topologically discrete set in the sense that all its points are isolated points. But there are discrete random variables for which this countable set is dense on the real line.
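
A standard example of the latter (not worked out in the original text): enumerate the rational numbers as q_1, q_2, ... and set

    \Pr(X = q_n) = 2^{-n}, \qquad n = 1, 2, \dots

These probabilities are positive and sum to 1, so "X" is a discrete random variable, yet its set of possible values is dense in the real line.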

The Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution are among the most well-known discrete probability distributions.
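
As a brief illustration of two of these (assuming SciPy is available; the parameter values are arbitrary), the following Python sketch evaluates the binomial and Poisson probability mass functions and checks that the probabilities sum to 1 over the support:

    from scipy.stats import binom, poisson

    # Binomial(n=10, p=0.3): finite support {0, 1, ..., 10}.
    n, p = 10, 0.3
    binom_total = sum(binom.pmf(k, n, p) for k in range(n + 1))

    # Poisson(mu=4): countably infinite support {0, 1, 2, ...};
    # truncate the sum where the remaining tail is negligible.
    mu = 4.0
    poisson_total = sum(poisson.pmf(k, mu) for k in range(200))

    print(binom_total)    # approximately 1.0
    print(poisson_total)  # approximately 1.0, up to the truncated tail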

Alternative description

Equivalently, a discrete random variable can be defined as a random variable whose cumulative distribution function (cdf) increases only by jump discontinuities; that is, its cdf increases only where it "jumps" to a higher value and is constant between those jumps. The points where jumps occur are precisely the values which the random variable may take. The number of such jumps may be finite or countably infinite. The set of locations of such jumps need not be topologically discrete; for example, the cdf might jump at each rational number.
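
In symbols (a restatement not spelled out in the original, using u_i for the values the variable can take), the cdf of such a variable is the step function

    F(x) = \Pr(X \le x) = \sum_{u_i \le x} \Pr(X = u_i),

which is constant between the jump points and increases by \Pr(X = u_i) at each u_i.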

Representation in terms of indicator functions

For a discrete random variable "X", let u_0, u_1, ... be the values it can take with non-zero probability. Denote

    \Omega_i = \{\omega : X(\omega) = u_i\}, \qquad i = 0, 1, 2, \dots

These are disjoint sets, and by formula (1)

    \Pr\left(\bigcup_i \Omega_i\right) = \sum_i \Pr(\Omega_i) = \sum_i \Pr(X = u_i) = 1.

It follows that the probability that "X" takes any value except for u_0, u_1, ... is zero, and thus one can write "X" as

    X = \sum_i u_i \, 1_{\Omega_i}

except on a set of probability zero, where 1_A is the indicator function of "A". This may serve as an alternative definition of discrete random variables.
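
A minimal Python sketch of this representation (the sample space, values, and names below are illustrative assumptions, not from the article): the random variable is recovered, outcome by outcome, as the sum of its values weighted by the indicator functions of the events Omega_i.

    # Illustrative sample space: six equally likely outcomes, and a random
    # variable X taking values in {10, 20, 30}.
    outcomes = range(6)

    def X(omega):
        return [10, 20, 30][omega % 3]

    values = [10, 20, 30]                           # u_0, u_1, u_2
    events = [{w for w in outcomes if X(w) == u}    # Omega_i = {omega : X(omega) = u_i}
              for u in values]

    def indicator(A, omega):
        return 1 if omega in A else 0

    # X(omega) equals sum_i u_i * 1_{Omega_i}(omega) for every outcome.
    for omega in outcomes:
        assert X(omega) == sum(u * indicator(A, omega)
                               for u, A in zip(values, events))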

See also

* Stochastic vector

