 # Joint Probability Distribution

Given random variables X, Y, ..., defined on a probability space, the joint probability distribution for X, Y, ... is a probability distribution that gives the probability that each of X, Y, ... falls in any particular range or discrete set of values specified for that variable. In the case of only two random variables this is called a bivariate distribution, but the concept generalizes to any number of random variables, giving a multivariate distribution.

The joint probability distribution can be expressed either in terms of a joint cumulative distribution function or in terms of a joint probability density function (in the case of continuous variables) or a joint probability mass function (in the case of discrete variables).
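As a minimal sketch of the discrete case, consider two independent fair coin flips. The names `joint_pmf` and `joint_cdf` below are illustrative, not from the text:

```python
from itertools import product

# Hypothetical example: two independent fair coin flips X and Y,
# encoded 0 = tails, 1 = heads. The joint probability mass function
# assigns 1/4 to each of the four outcomes.
joint_pmf = {(x, y): 0.25 for x, y in product([0, 1], repeat=2)}

# A valid joint pmf must sum to 1 over all outcomes.
assert abs(sum(joint_pmf.values()) - 1.0) < 1e-12

def joint_cdf(a, b):
    """Joint cumulative distribution function: P(X <= a and Y <= b)."""
    return sum(p for (x, y), p in joint_pmf.items() if x <= a and y <= b)

print(joint_cdf(0, 0))  # P(X <= 0, Y <= 0) = 1/4
print(joint_cdf(1, 0))  # P(X <= 1, Y <= 0) = 1/2
```

The same pmf-as-dictionary representation extends directly to more than two variables by using longer key tuples.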

These in turn can be used to find two other types of distributions: the marginal distribution, which gives the probabilities for any one of the variables with no reference to any specific ranges of values for the other variables, and the conditional probability distribution, which gives the probabilities for any subset of the variables conditional on particular values of the remaining variables.
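Both derived distributions can be computed mechanically from a joint pmf. A minimal sketch, using a hypothetical joint pmf over two dependent binary variables (the numbers are made up for illustration):

```python
from collections import defaultdict

# Hypothetical joint pmf over (X, Y); values chosen so X and Y are dependent.
joint_pmf = {
    (0, 0): 0.30, (0, 1): 0.10,
    (1, 0): 0.20, (1, 1): 0.40,
}

# Marginal distribution of X: sum the joint pmf over all values of Y.
marginal_x = defaultdict(float)
for (x, y), p in joint_pmf.items():
    marginal_x[x] += p

def conditional_y_given_x(x):
    """Conditional distribution: P(Y = y | X = x) = P(X = x, Y = y) / P(X = x)."""
    px = marginal_x[x]
    return {y: p / px for (xi, y), p in joint_pmf.items() if xi == x}

print(dict(marginal_x))          # marginal of X, e.g. P(X = 0) = 0.4
print(conditional_y_given_x(1))  # P(Y = 0 | X = 1) = 1/3, P(Y = 1 | X = 1) = 2/3
```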

## Covariance and Correlation

In probability theory and statistics, covariance is a measure of the joint variability of two random variables. The covariance is positive if the greater values of one variable correspond mainly with the greater values of the other variable, and the same holds for the lesser values (that is, the variables tend to show similar behavior). In the opposite case, when the greater values of one variable correspond mainly to the lesser values of the other, the covariance is negative (that is, the variables tend to show opposite behavior).

The sign of the covariance therefore shows the tendency in the linear relationship between the variables. The magnitude of the covariance is not easy to interpret, because it is not normalized and hence depends on the magnitudes of the variables. The normalized version of the covariance, the correlation coefficient, however, shows by its magnitude the strength of the linear relation.
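A minimal sketch of both quantities in plain Python, using made-up data where the two variables move together, so the covariance comes out positive and the correlation close to +1:

```python
def covariance(xs, ys):
    """Sample covariance with the n - 1 denominator."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

def correlation(xs, ys):
    """Pearson correlation coefficient: covariance normalized into [-1, 1]."""
    return covariance(xs, ys) / (covariance(xs, xs) * covariance(ys, ys)) ** 0.5

# Illustrative data: ys tends to increase with xs.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 6]

print(covariance(xs, ys))   # positive, but units depend on the data scales
print(correlation(xs, ys))  # dimensionless, roughly 0.85 here
```

Note that the covariance changes if the data are rescaled (e.g., meters to centimeters), while the correlation coefficient does not; that scale invariance is what makes its magnitude interpretable.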

A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which in addition to serving as a descriptor of the sample also serves as an estimate of the population parameter.
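The distinction shows up concretely in the denominator. A sketch of the two conventional estimators (the data below are hypothetical):

```python
def population_covariance(xs, ys):
    """Covariance as a property of the full population: divide by n."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def sample_covariance(xs, ys):
    """Unbiased estimator from a sample: divide by n - 1 (Bessel's correction)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

xs = [1, 2, 3, 4]
ys = [2, 3, 5, 6]

# The sample estimate exceeds the population value by the factor n / (n - 1).
print(population_covariance(xs, ys))  # 1.75
print(sample_covariance(xs, ys))      # 7/3, about 2.33
```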

Covariance is an important measure in biology. Certain DNA sequences are conserved more than others among species, so to study the secondary and tertiary structures of proteins, or of RNA structures, sequences are compared in closely related species. If sequence changes are found, or no changes at all are found, in non-coding RNA (such as microRNA), sequences are found to be necessary for common structural motifs, such as an RNA loop. In genetics, covariance serves as the basis for computing the Genetic Relationship Matrix (GRM) (also known as the kinship matrix), enabling inference on population structure from samples with no known close relatives, as well as inference on the estimation of heritability of complex traits.

In the theory of evolution through natural selection, the Price equation describes how the frequency of a heritable trait changes over time. To give a mathematical description of evolution by natural selection, the equation uses a covariance between the trait and fitness. It provides a way to understand the effects that gene transmission and natural selection have on the proportion of genes within each new generation of a population. George R. Price derived the Price equation while re-deriving W. D. Hamilton's work on kin selection. Examples of the Price equation have been constructed for various evolutionary cases.
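A minimal numeric sketch of the Price equation, under the simplifying assumption of perfect transmission (offspring inherit the parental trait exactly), in which case the change in mean trait reduces to cov(w, z) / w̄. The trait values and fitnesses below are hypothetical:

```python
# z: trait value of each parent; w: fitness (number of offspring) of each parent.
z = [1.0, 2.0, 3.0]
w = [1.0, 2.0, 3.0]

n = len(z)
w_bar = sum(w) / n   # mean fitness
z_bar = sum(z) / n   # mean trait in the parent generation

# Population covariance between fitness and trait.
cov_wz = sum((wi - w_bar) * (zi - z_bar) for wi, zi in zip(w, z)) / n

# Price-equation prediction for the change in mean trait (perfect transmission).
predicted_change = cov_wz / w_bar

# Direct computation: the offspring generation's mean trait, weighting each
# parental trait by the number of offspring that parent leaves.
z_bar_next = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)

print(predicted_change)    # cov(w, z) / w_bar
print(z_bar_next - z_bar)  # matches the Price-equation prediction
```

Because fitness and trait covary positively here, the mean trait rises in the next generation, which is the selection effect the equation isolates.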

In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. In the broadest sense, correlation is any statistical association, though it commonly refers to the degree to which a pair of variables are linearly related. Familiar examples of dependent phenomena include the correlation between the height of parents and their offspring, and the correlation between the price of a good and the quantity buyers are willing to buy, as depicted in the so-called demand curve.

Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. In this case, there is a causal relationship, because extreme weather causes people to use more electricity for heating or cooling. In general, however, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e., correlation does not imply causation).