Problems on the classical definition of probability. Probability theory: formulas and examples of problem solving

In fact, formulas (1) and (2) are a shorthand notation of conditional probability based on a contingency table of characteristics. Let's return to the example discussed earlier (Fig. 1). Suppose we learn that a family is planning to buy a wide-screen television. What is the probability that this family will actually buy such a TV?

Fig. 1. Widescreen TV buying behavior

In this case we need to calculate the conditional probability P(purchase completed | purchase planned). Since we know that the family is planning to buy, the sample space does not consist of all 1000 families, but only of those planning to buy a wide-screen TV. Of the 250 such families, 200 actually bought this TV. Therefore, the probability that a family will actually buy a wide-screen TV, given that it has planned to do so, can be calculated using the following formula:

P (purchase completed | purchase planned) = number of families who planned and bought a wide-screen TV / number of families planning to buy a wide-screen TV = 200 / 250 = 0.8

Formula (2) gives the same result:

P(B|A) = P(A and B) / P(A)

where event A is that the family is planning to purchase a widescreen TV, and event B is that it will actually buy one. Substituting the actual data into the formula, we get:

P(B|A) = (200/1000) / (250/1000) = 0.8
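To make the arithmetic concrete, here is a minimal Python sketch (not from the original article; the variable names are my own) that recovers the same conditional probability from the Fig. 1 counts:

```python
# Conditional probability from the Fig. 1 contingency table.
planned_and_bought = 200   # families that planned the purchase and completed it
planned = 250              # families that planned the purchase
total = 1000               # all families surveyed

p_joint = planned_and_bought / total          # P(A and B) = 0.20
p_planned = planned / total                   # P(A) = 0.25
p_bought_given_planned = p_joint / p_planned  # P(B|A) = 0.20 / 0.25

print(p_bought_given_planned)  # 0.8
```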

Decision tree

In Fig. 1 families are divided into four categories: those who planned to buy a wide-screen TV and those who did not, and likewise those who bought such a TV and those who did not. The same classification can be performed using a decision tree (Fig. 2). The tree shown in Fig. 2 has two main branches corresponding to families who planned to purchase a widescreen TV and families who did not. Each of these branches splits into two additional branches corresponding to families that did and did not purchase a widescreen TV. The probabilities written at the ends of the two main branches are the unconditional probabilities of events A and A'. The probabilities written at the ends of the four additional branches are the joint probabilities of each combination of events A and B. Conditional probabilities are calculated by dividing the joint probability of events by the corresponding unconditional probability of each of them.

Fig. 2. Decision tree

For example, to calculate the probability that a family will buy a wide-screen television given that it has planned to do so, one must determine the probability of the event purchase planned and completed, and then divide it by the probability of the event purchase planned. Moving along the decision tree shown in Fig. 2, we get the same answer as before:

P(B|A) = P(A and B) / P(A) = 0.20 / 0.25 = 0.8

Statistical independence

In the example of buying a wide-screen TV, the probability that a randomly selected family purchased a wide-screen TV given that it planned to do so is 200/250 = 0.8. Recall that the unconditional probability that a randomly selected family purchased a wide-screen TV is 300/1000 = 0.3. This leads to a very important conclusion: prior information that the family was planning a purchase influences the probability of the purchase itself. In other words, these two events depend on each other. In contrast to this example, there are statistically independent events whose probabilities do not depend on each other. Statistical independence is expressed by the identity P(A|B) = P(A), where P(A|B) is the probability of event A given that event B has occurred, and P(A) is the unconditional probability of event A.

Note that events A and B are statistically independent if and only if P(A|B) = P(A). If, in a 2×2 contingency table of characteristics, this condition is satisfied for at least one combination of events A and B, it will also hold for every other combination. In our example, the events purchase planned and purchase completed are not statistically independent, because information about one event affects the probability of the other.

Let's look at an example that shows how to test the statistical independence of two events. Let us ask the 300 families who bought a widescreen TV whether they were satisfied with the purchase (Fig. 3), and determine whether the degree of satisfaction with the purchase and the type of TV are related.

Fig. 3. Data characterizing the degree of satisfaction of widescreen TV buyers

Judging by these data,

P(customer satisfied | HDTV purchased) = 64 / 80 = 0.80

At the same time,

P(customer satisfied) = 240 / 300 = 0.80

Therefore, the probability that a customer is satisfied given that the family purchased an HDTV equals the unconditional probability that a customer is satisfied, and these events are statistically independent: they are not related to each other.
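A quick way to check such a table for independence is to compare the conditional and unconditional probabilities directly. The sketch below assumes the counts implied by Fig. 3 (80 HDTV buyers, 64 of them satisfied; 240 satisfied among all 300 buyers):

```python
# Independence test: does P(satisfied | HDTV) equal P(satisfied)?
satisfied_and_hdtv = 64
hdtv = 80
satisfied = 240
total = 300

p_cond = satisfied_and_hdtv / hdtv   # P(satisfied | HDTV) = 0.80
p_uncond = satisfied / total         # P(satisfied) = 0.80

print(abs(p_cond - p_uncond) < 1e-12)  # True: the events are independent
```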

Probability multiplication rule

The formula for calculating conditional probability allows us to determine the probability of the joint event (A and B). Solving formula (1)

P(A|B) = P(A and B) / P(B)

for the joint probability P(A and B), we obtain the general rule for multiplying probabilities. The probability of the event (A and B) equals the probability of event A given that event B occurs, multiplied by the probability of event B:

(3) P(A and B) = P(A|B) * P(B)

Consider, for example, the 80 families who bought a widescreen HDTV (Fig. 3). The table shows that 64 of these families are satisfied with the purchase and 16 are not. Suppose that two families are randomly selected from among them. Determine the probability that both customers will be satisfied. Using formula (3), we obtain:

P(A and B) = P(A|B) * P(B)

where event A is that the second family is satisfied with its purchase, and event B is that the first family is satisfied with its purchase. The probability that the first family is satisfied with its purchase is 64/80. However, the probability that the second family is also satisfied depends on the first family's response. If the first family is not returned to the sample after the survey (sampling without replacement), the number of respondents is reduced to 79. If the first family was satisfied with its purchase, the probability that the second family is also satisfied is 63/79, since only 63 families satisfied with their purchase remain in the sample. Thus, substituting the specific data into formula (3), we get the following answer:

P(A and B) = (63/79)(64/80) = 0.638.

Therefore, the probability that both families are satisfied with their purchases is 63.8%.

Suppose instead that after the survey the first family is returned to the sample (sampling with replacement). Determine the probability that both families will be satisfied with their purchase. In this case the probability of being satisfied is the same for each of the two families and equals 64/80. Therefore, P(A and B) = (64/80)(64/80) = 0.64. Thus, the probability that both families are satisfied with their purchases is 64.0%. This example shows that the second choice does not depend on the first. Thus, replacing the conditional probability P(A|B) in formula (3) with the probability P(A), we obtain the formula for multiplying the probabilities of independent events.
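The two sampling schemes can be contrasted in a few lines of Python; exact fractions avoid rounding (the numbers are those from Fig. 3):

```python
from fractions import Fraction

satisfied, buyers = 64, 80

# Without replacement: the second draw depends on the first, formula (3).
p_without = Fraction(satisfied - 1, buyers - 1) * Fraction(satisfied, buyers)
print(float(p_without))  # ~0.638

# With replacement: the draws are independent, formula (4).
p_with = Fraction(satisfied, buyers) * Fraction(satisfied, buyers)
print(float(p_with))     # 0.64
```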

The rule for multiplying the probabilities of independent events. If events A and B are statistically independent, the probability of the event (A and B) equals the probability of event A multiplied by the probability of event B:

(4) P(A and B) = P(A)P(B)

Conversely, if this rule holds for events A and B, then they are statistically independent. Thus, there are two ways to establish the statistical independence of two events:

  1. Events A and B are statistically independent of each other if and only if P(A|B) = P(A).
  2. Events A and B are statistically independent of each other if and only if P(A and B) = P(A)P(B).

If, in a 2×2 contingency table, one of these conditions is satisfied for at least one combination of events A and B, it will also hold for every other combination.

Unconditional probability of an elementary event

(5) P(A) = P(A|B1)P(B1) + P(A|B2)P(B2) + … + P(A|Bk)P(Bk)

where events B1, B2, …, Bk are mutually exclusive and exhaustive.

Let us illustrate the application of this formula using the example in Fig. 1. Using formula (5), we obtain:

P(A) = P(A|B1)P(B1) + P(A|B2)P(B2) = (200/300)(300/1000) + (50/700)(700/1000) = 0.20 + 0.05 = 0.25

where P(A) is the probability that the purchase was planned, P(B1) is the probability that the purchase is completed, and P(B2) is the probability that the purchase is not completed.
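As a sanity check, here is a short sketch of formula (5) with the Fig. 1 counts (the breakdown 200/300 and 50/700 follows from the table: 200 of the 300 buyers and 50 of the 700 non-buyers had planned the purchase):

```python
# Law of total probability, formula (5).
# B1 = purchase completed, B2 = purchase not completed, A = purchase planned.
p_b1, p_b2 = 300 / 1000, 700 / 1000
p_a_given_b1 = 200 / 300  # planned, among those who bought
p_a_given_b2 = 50 / 700   # planned, among those who did not buy

p_a = p_a_given_b1 * p_b1 + p_a_given_b2 * p_b2
print(p_a)  # 0.25
```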

BAYES' THEOREM

The conditional probability of an event takes into account the information that some other event has occurred. This approach can be used both to refine a probability in the light of newly received information and to calculate the probability that an observed effect is the consequence of a specific cause. The procedure for refining probabilities in this way is based on Bayes' theorem, first developed by Thomas Bayes in the 18th century.

Let's assume that the company mentioned above is researching the market for a new TV model. In the past, 40% of the TV models created by the company were successful, while 60% were not. Before announcing the release of a new model, marketing specialists carefully research the market and record demand. In the past, favorable forecasts had been given for 80% of the models that proved successful, and also for 30% of the models that proved unsuccessful. The marketing department has given a favorable forecast for the new model. What is the probability that the new TV model will be in demand?

Bayes' theorem can be derived from the definitions of conditional probability (1) and (2). To calculate the probability P(B|A), take formula (2):

P(B|A) = P(A and B) / P(A)

and substitute for P(A and B) its value from formula (3):

P(A and B) = P(A|B) * P(B)

Substituting formula (5) for P(A), we obtain Bayes' theorem:

(6) P(Bi|A) = P(A|Bi)P(Bi) / [P(A|B1)P(B1) + P(A|B2)P(B2) + … + P(A|Bk)P(Bk)]

where events B1, B2, …, Bk are mutually exclusive and exhaustive.

Let us introduce the following notation: event S - the TV is in demand; event S' - the TV is not in demand; event F - a favorable forecast; event F' - an unfavorable forecast. Suppose that P(S) = 0.4, P(S') = 0.6, P(F|S) = 0.8, P(F|S') = 0.3. Applying Bayes' theorem, we get:

P(S|F) = P(F|S)P(S) / [P(F|S)P(S) + P(F|S')P(S')] = (0.8)(0.4) / [(0.8)(0.4) + (0.3)(0.6)] = 0.32 / 0.50 = 0.64

The probability of demand for the new TV model, given a favorable forecast, is 0.64. Thus, the probability of a lack of demand given a favorable forecast is 1 − 0.64 = 0.36. The calculation process is shown in Fig. 4.
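Formula (6) is mechanical enough to wrap in a small helper. The function below is an illustrative sketch (the name `bayes` is my own, not from the article); applied to the TV example it reproduces the 0.64:

```python
def bayes(prior, likelihood):
    """Posterior probabilities P(Bi|A) by formula (6).

    prior[i] = P(Bi) for mutually exclusive, exhaustive hypotheses Bi;
    likelihood[i] = P(A|Bi) for the observed event A.
    """
    evidence = sum(p * l for p, l in zip(prior, likelihood))  # denominator P(A)
    return [p * l / evidence for p, l in zip(prior, likelihood)]

# Hypotheses: S (TV in demand), S' (not in demand); observed event: F.
posterior = bayes(prior=[0.4, 0.6], likelihood=[0.8, 0.3])
print(posterior[0])  # P(S|F) = 0.64
```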

Fig. 4. (a) Calculations using the Bayes formula to estimate the probability of demand for televisions; (b) Decision tree for studying demand for the new TV model

Let's look at an example of using Bayes' theorem for medical diagnostics. The probability that a person suffers from a particular disease is 0.03. A medical test can check whether this is so. If the person is truly sick, the probability of an accurate diagnosis (the test says the person is sick when he really is) is 0.9. If the person is healthy, the probability of a false positive diagnosis (the test says the person is sick when he is healthy) is 0.02. Suppose the medical test gives a positive result. What is the probability that the person is actually sick? What is the probability of an accurate diagnosis?

Let us introduce the following notation: event D - the person is sick; event D' - the person is healthy; event T - the diagnosis is positive; event T' - the diagnosis is negative. From the conditions of the problem it follows that P(D) = 0.03, P(D') = 0.97, P(T|D) = 0.90, P(T|D') = 0.02. Applying formula (6), we obtain:

P(D|T) = P(T|D)P(D) / [P(T|D)P(D) + P(T|D')P(D')] = (0.90)(0.03) / [(0.90)(0.03) + (0.02)(0.97)] = 0.027 / 0.0464 ≈ 0.582

The probability that a person with a positive diagnosis is really sick is 0.582 (see also Fig. 5). Note that the denominator of the Bayes formula is equal to the probability of a positive diagnosis, i.e. 0.0464.
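The same hypothetical `bayes` helper from the previous sketch handles the diagnostic numbers:

```python
# Hypotheses: D (sick), D' (healthy); observed event: T (positive test).
posterior = bayes(prior=[0.03, 0.97], likelihood=[0.90, 0.02])
print(round(posterior[0], 3))  # P(D|T) = 0.582
```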

Probability theory is a branch of mathematics that studies the patterns of random phenomena: random events, random variables, their properties and operations on them.

For a long time, probability theory did not have a clear definition; one was formulated only in 1929. The emergence of probability theory as a science dates back to the Middle Ages and the first attempts at a mathematical analysis of games of chance (coin tossing, dice, roulette). The 17th-century French mathematicians Blaise Pascal and Pierre Fermat, while studying the prediction of winnings in gambling, discovered the first probabilistic regularities that arise when throwing dice.

Probability theory arose as a science from the belief that certain patterns underlie mass random events. Probability theory studies these patterns.

Probability theory deals with the study of events whose occurrence is not known with certainty. It allows you to judge the degree of probability of the occurrence of some events compared to others.

For example: it is impossible to determine unambiguously the result "heads" or "tails" of a single coin toss, but with repeated tossing approximately equal numbers of "heads" and "tails" appear, which means that the probability of getting "heads" (or "tails") is 50%.

A trial in this case is the realization of a certain set of conditions, here the toss of the coin. The trial can be repeated an unlimited number of times, and the set of conditions includes random factors.
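This stabilization of frequencies is easy to observe in simulation. A minimal sketch using Python's standard library (the 50% figure is the limiting value, not a guarantee for any finite series):

```python
import random

def heads_frequency(n_tosses, seed=0):
    """Relative frequency of heads in n_tosses simulated fair-coin trials."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

for n in (10, 100, 10_000, 1_000_000):
    print(n, heads_frequency(n))  # frequencies settle near 0.5 as n grows
```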

The result of a trial is an event. An event can be:

  1. Certain (always occurs as a result of the trial).
  2. Impossible (never occurs).
  3. Random (may or may not occur as a result of the trial).

For example, when tossing a coin, an impossible event is that the coin lands on its edge, and a random event is the appearance of "heads" or "tails". A specific trial outcome is called an elementary event. As a result of a trial, only elementary events occur. The set of all possible, distinct, specific outcomes of a trial is called the space of elementary events.

Basic concepts of the theory

Probability is the degree of possibility of the occurrence of an event. When the reasons for some possible event actually to occur outweigh the contrary reasons, the event is called probable; otherwise it is called unlikely or improbable.

A random variable is a quantity that, as a result of a trial, can take one or another value, and it is not known in advance which one. For example: the number of calls to a fire station per day, the number of hits out of 10 shots, and so on.

Random variables can be divided into two categories.

  1. A discrete random variable is a quantity that, as a result of a trial, can take particular values with certain probabilities, forming a countable set (a set whose elements can be numbered). This set can be either finite or infinite. For example, the number of shots before the first hit on the target is a discrete random variable, because this quantity can take an infinite, albeit countable, number of values.
  2. A continuous random variable is a quantity that can take any value from some finite or infinite interval. Obviously, the number of possible values of a continuous random variable is infinite.

A probability space is a concept introduced by A.N. Kolmogorov in the 1930s to formalize the concept of probability; it gave rise to the rapid development of probability theory as a rigorous mathematical discipline.

A probability space is a triple (Ω, F, P) (sometimes enclosed in angle brackets: ⟨Ω, F, P⟩), where

Ω is an arbitrary set, the elements of which are called elementary events, outcomes, or points;
F is a sigma-algebra of subsets of Ω, called (random) events;
P is a probability measure, or probability, i.e. a sigma-additive finite measure such that P(Ω) = 1.

The de Moivre-Laplace theorem is one of the limit theorems of probability theory, established by Laplace in 1812. It states that the number of successes in repeated independent trials of the same random experiment with two possible outcomes is approximately normally distributed. It allows one to find an approximate value of a probability.

If in each of n independent trials the probability of occurrence of some random event is equal to p (0 < p < 1), and m is the number of trials in which it actually occurs, then for large n the probability of the inequality a < (m − np)/√(np(1−p)) < b is close to the value of the Laplace integral Φ(b) − Φ(a).
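The quality of the approximation can be checked numerically. The sketch below compares the exact binomial probability with the Laplace integral for the illustrative values n = 600, p = 0.5 (chosen by me, not taken from the text):

```python
from math import comb, erf, sqrt

n, p = 600, 0.5
mu, sigma = n * p, sqrt(n * p * (1 - p))
a, b = -1.0, 1.0  # bounds for the normalized deviation (m - np) / sigma

# Exact binomial probability that a < (m - np)/sigma < b.
exact = sum(comb(n, m) * p**m * (1 - p)**(n - m)
            for m in range(n + 1)
            if a < (m - mu) / sigma < b)

# Laplace integral Phi(b) - Phi(a), with Phi the standard normal CDF.
phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
print(exact, phi(b) - phi(a))  # the two values nearly coincide for large n
```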

The distribution function in probability theory is a function characterizing the distribution of a random variable or random vector: F(x) is the probability that the random variable X takes a value less than or equal to x, where x is an arbitrary real number. Under known conditions it completely determines the random variable.

The expected value is the mean value of a random variable (with respect to the probability distribution of the random variable, considered in probability theory). In the English-language literature it is denoted E[X], in the Russian literature M[X]; in statistics the notation μ is often used.

Let a probability space (Ω, F, P) and a random variable X defined on it be given; that is, by definition, X: Ω → R is a measurable function. Then, if the Lebesgue integral of X over the space Ω exists, it is called the mathematical expectation, or mean value, and is denoted E[X].

The variance of a random variable is a measure of the spread of the random variable, that is, of its deviation from the mathematical expectation. It is denoted D[X] in the Russian literature and Var X in the foreign literature; in statistics the notation σ² is often used. The square root of the variance is called the standard deviation or standard spread.

Let X be a random variable defined on some probability space. Then

D[X] = E[(X − E[X])²]

where the symbol E denotes the mathematical expectation.
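For a discrete random variable given by a finite table of values and probabilities, both definitions reduce to weighted sums. A small sketch with a made-up illustrative distribution:

```python
# An illustrative discrete distribution: (value, probability) pairs.
dist = [(0, 0.2), (1, 0.5), (2, 0.3)]

mean = sum(x * p for x, p in dist)                    # E[X]
variance = sum((x - mean) ** 2 * p for x, p in dist)  # D[X] = E[(X - E[X])^2]
std_dev = variance ** 0.5                             # standard deviation

print(mean, variance, std_dev)  # 1.1, 0.49, 0.7
```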

In probability theory, two random events are called independent if the occurrence of one of them does not change the probability of the occurrence of the other. Similarly, two random variables are called dependent if the value of one of them affects the probabilities of the values of the other.

The simplest form of the law of large numbers is Bernoulli's theorem, which states that if the probability of an event is the same in all trials, then as the number of trials increases the frequency of the event tends to the probability of the event and ceases to be random.

The law of large numbers in probability theory states that the arithmetic mean of a finite sample from a fixed distribution is close to the theoretical mean of that distribution. Depending on the type of convergence, a distinction is made between the weak law of large numbers, where convergence is in probability, and the strong law of large numbers, where convergence is almost sure.

The general meaning of the law of large numbers is that the joint action of a large number of identical and independent random factors leads to a result that, in the limit, does not depend on chance.

Methods for estimating probability from the analysis of a finite sample rest on this property. A clear example is forecasting election results from a survey of a sample of voters.
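The effect is easy to reproduce: sample means of a Uniform(0, 1) variable (theoretical mean 0.5) approach 0.5 as the sample grows. An illustrative sketch:

```python
import random

rng = random.Random(42)

def sample_mean(n):
    """Arithmetic mean of n draws from Uniform(0, 1)."""
    return sum(rng.random() for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))  # drifts toward the theoretical mean 0.5
```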

Central limit theorems are a class of theorems in probability theory stating that the sum of a sufficiently large number of weakly dependent random variables of approximately the same scale (no single term dominates or makes a decisive contribution to the sum) has a distribution close to normal.

Since many random variables in applications are formed under the influence of several weakly dependent random factors, their distribution is considered normal. In this case, the condition must be met that none of the factors is dominant. Central limit theorems in these cases justify the use of the normal distribution.
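A simulation makes the statement tangible: sums of 48 independent Uniform(0, 1) terms should be approximately normal with mean 48 · 0.5 = 24 and standard deviation √(48/12) = 2. A sketch (the sample sizes are my own choice):

```python
import random
from statistics import mean, stdev

rng = random.Random(7)

# 10,000 sums, each of 48 independent Uniform(0, 1) terms.
sums = [sum(rng.random() for _ in range(48)) for _ in range(10_000)]

print(mean(sums), stdev(sums))  # close to 24 and 2, as the CLT predicts
```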

In economics, as in other areas of human activity or in nature, we constantly have to deal with events that cannot be accurately predicted. Thus, the sales volume of a product depends on demand, which can vary significantly, and on a number of other factors that are almost impossible to take into account. Therefore, when organizing production and carrying out sales, you have to predict the outcome of such activities on the basis of either your own previous experience, or similar experience of other people, or intuition, which to a large extent also relies on experimental data.

In order to somehow evaluate the event in question, it is necessary to take into account or specially organize the conditions in which this event is recorded.

The realization of a certain set of conditions or actions under which the event in question is observed is called an experiment or trial.

An event is called random if, as a result of the experiment, it may or may not occur.

An event is called certain if it necessarily occurs as a result of the given experiment, and impossible if it cannot occur in this experiment.

For example, snowfall in Moscow on November 30 is a random event. The daily sunrise can be considered a certain event. Snowfall at the equator can be considered an impossible event.

One of the main tasks in probability theory is the task of determining a quantitative measure of the possibility of an event occurring.

Algebra of events

Events are called incompatible if they cannot be observed together in the same experiment. Thus, the presence of two cars and the presence of three cars in one store for sale at the same time are two incompatible events.

The sum of events is the event consisting of the occurrence of at least one of these events.

An example of a sum of events is the presence of at least one of two products in the store.

The product of events is the event consisting of the simultaneous occurrence of all of these events.

An event consisting of the appearance of two goods in the store at the same time is a product of events: the appearance of one product and the appearance of the other product.

Events form a complete group if at least one of them is certain to occur in the experiment.

Example. A port has two berths for receiving ships. Three events can be considered: the absence of ships at the berths, the presence of one ship at one of the berths, and the presence of two ships at the two berths. These three events form a complete group of events.

Two uniquely possible events that form a complete group are called opposite.

If one of two opposite events is denoted by A, then the opposite event is usually denoted by Ā.

Classical and statistical definitions of event probability

Each of the equally possible results of a trial (experiment) is called an elementary outcome. They are usually denoted by letters. For example, when a die is thrown, there are six elementary outcomes in total, according to the number of points on the faces.

From elementary outcomes you can create a more complex event. Thus, the event of an even number of points is determined by three outcomes: 2, 4, 6.

A quantitative measure of the possibility of the occurrence of the event in question is probability.

The most widely used definitions of the probability of an event are the classical and the statistical.

The classical definition of probability is associated with the concept of a favorable outcome.

The outcome is called favorable to a given event if its occurrence entails the occurrence of this event.

In the example given, the event "an even number of points on the upturned face" has three favorable outcomes, while the total number of possible outcomes is six. This means that the classical definition of the probability of an event can be used here.

Classical definition. The probability of an event equals the ratio of the number of outcomes favorable to it to the total number of possible outcomes:

(1.1) P(A) = m / n

where P(A) is the probability of the event, m is the number of outcomes favorable to the event, and n is the total number of possible outcomes.

In the considered example

P = 3/6 = 0.5
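The same m/n count can be done by brute-force enumeration, which scales to less obvious events. A sketch for the die example:

```python
# Classical probability P(A) = m/n by enumerating equally possible outcomes.
outcomes = list(range(1, 7))                     # n = 6 faces of the die
favorable = [k for k in outcomes if k % 2 == 0]  # m = 3 outcomes: 2, 4, 6

print(len(favorable) / len(outcomes))            # P = 3/6 = 0.5
```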

The statistical definition of probability is associated with the concept of the relative frequency of occurrence of an event in experiments.

The relative frequency of occurrence of an event is calculated using the formula

W(A) = m / n

where m is the number of occurrences of the event in a series of n experiments (trials).

Statistical definition. The probability of an event is the number around which the relative frequency stabilizes (settles) as the number of experiments increases without limit.

In practical problems, the relative frequency for a sufficiently large number of trials is taken as the probability of the event.

From these definitions of the probability of an event it is clear that the inequality 0 ≤ P(A) ≤ 1 is always satisfied.

To determine the probability of an event from formula (1.1), combinatorics formulas are often used to find the number of favorable outcomes and the total number of possible outcomes.

PROBABILITY as an ontological category reflects the measure of the possibility of the emergence of any entity under any conditions. In contrast to the mathematical and logical interpretations of this concept, ontological probability does not bind itself to the necessity of quantitative expression. Its meaning is revealed in the context of understanding determinism and the nature of development in general.


PROBABILITY

a concept characterizing the quantitative measure of the possibility of the occurrence of a certain event under certain conditions. In scientific knowledge there are three interpretations of probability. The classical concept of probability, which arose from the mathematical analysis of gambling and was most fully developed by B. Pascal, J. Bernoulli, and P. Laplace, defines probability as the ratio of the number of favorable cases to the total number of all equally possible ones. For example, when throwing a die with 6 faces, each face can be expected to land with probability 1/6, since no face has an advantage over another. Such symmetry of experimental outcomes is specially taken into account when organizing games, but is relatively rare in the study of objective events in science and practice. The classical interpretation gave way to the statistical concept of probability, which is based on actually observing the occurrence of a certain event over the course of long experience under precisely fixed conditions. Practice confirms that the more often an event occurs, the greater the degree of objective possibility of its occurrence, or probability. Therefore the statistical interpretation rests on the concept of relative frequency, which can be determined experimentally. Probability as a theoretical concept never coincides with the empirically determined frequency; in many cases, however, it differs little in practice from the relative frequency found as the result of long observation. Many statisticians regard probability as the "double" of the relative frequency determined by the statistical study of the results of observations or experiments. Less realistic was the definition of probability as the limit of the relative frequencies of mass events, or collectives, proposed by R. Mises. As a further development of the frequency approach to probability, the dispositional, or propensity, interpretation of probability was put forward (K. Popper, J. Hacking, M. Bunge, T. Settle). According to this interpretation, probability characterizes the property of the generating conditions, for example of an experimental setup, to produce a sequence of massive random events. It is precisely this disposition that gives rise to physical propensities, or predispositions, which can be checked by means of relative frequencies.

The statistical interpretation of probability dominates scientific cognition, because it reflects the specific nature of the regularities inherent in mass phenomena of a random character. In many physical, biological, economic, demographic, and other social processes it is necessary to take into account the action of many random factors that are characterized by a stable frequency. Identifying these stable frequencies and estimating them quantitatively by means of probability makes it possible to reveal the necessity that makes its way through the cumulative action of many accidents. This is where the dialectic of the transformation of chance into necessity finds its manifestation (see F. Engels, in: K. Marx and F. Engels, Works, vol. 20, pp. 535-36).

Logical, or inductive, probability characterizes the relationship between the premises and the conclusion of non-demonstrative and, in particular, inductive reasoning. Unlike deduction, the premises of induction do not guarantee the truth of the conclusion but only make it more or less plausible. This plausibility, with precisely formulated premises, can sometimes be assessed by means of probability. The value of this probability is most often determined by means of comparative concepts (greater than, less than, or equal to), and sometimes numerically. The logical interpretation is often used to analyze inductive reasoning and to construct various systems of probabilistic logic (R. Carnap, R. Jeffrey). In the semantic conceptions of logical probability, probability is often defined as the degree to which one statement is confirmed by others (for example, a hypothesis by its empirical data).

In connection with the development of theories of decision making and games, the so-called personalistic interpretation of probability has become widespread. Although probability here expresses the degree of belief of the subject in the occurrence of a certain event, the probabilities themselves must be chosen so that the axioms of the probability calculus are satisfied. Therefore probability under this interpretation expresses not so much subjective as reasonable belief. Consequently, decisions made on the basis of such probabilities will be rational, because they disregard the psychological idiosyncrasies and inclinations of the subject.

From the epistemological point of view, the difference between the statistical, logical, and personalistic interpretations of probability is that while the first characterizes the objective properties and relations of mass phenomena of a random nature, the last two analyze the features of subjective, cognitive human activity under conditions of uncertainty.

PROBABILITY

one of the most important concepts of science, characterizing a special systemic vision of the world, its structure, evolution, and cognition. The specificity of the probabilistic view of the world is revealed through the inclusion among the basic concepts of existence of the concepts of randomness, independence, and hierarchy (the idea of levels in the structure and determination of systems).

Ideas about probability originated in antiquity and related to the characteristics of our knowledge; the existence of probabilistic knowledge was recognized, differing both from reliable knowledge and from false knowledge. The impact of the idea of probability on scientific thinking and on the development of knowledge is directly related to the development of probability theory as a mathematical discipline. The mathematical doctrine of probability originated in the 17th century, when a core of concepts admitting quantitative (numerical) characterization and expressing the probabilistic idea was developed.

Intensive applications of probability to the development of cognition occurred in the second half of the 19th and the first half of the 20th century. Probability entered the structures of such fundamental sciences of nature as classical statistical physics, genetics, quantum theory, and cybernetics (information theory). Accordingly, probability personifies the stage in the development of science that is now defined as non-classical science. To reveal the novelty and features of the probabilistic way of thinking, it is necessary to proceed from an analysis of the subject of probability theory and the foundations of its numerous applications. Probability theory is usually defined as the mathematical discipline that studies the regularities of mass random phenomena under certain conditions. Randomness means that within the framework of mass character the existence of each elementary phenomenon does not depend on and is not determined by the existence of the other phenomena. At the same time, the mass character of the phenomena itself has a stable structure and contains certain regularities. A mass phenomenon is quite strictly divided into subsystems, and the relative number of elementary phenomena in each of the subsystems (the relative frequency) is very stable. This stability is compared with probability. A mass phenomenon as a whole is characterized by a probability distribution, that is, by specifying the subsystems and their corresponding probabilities. The language of probability theory is the language of probability distributions. Accordingly, probability theory is defined as the abstract science of operating with distributions.

Probability gave rise in science to ideas about statistical regularities and statistical systems. The latter are, in essence, systems formed from independent or quasi-independent entities, and their structure is characterized by probability distributions. But how is it possible to form systems from independent entities? It is usually assumed that for the formation of systems with integral characteristics it is necessary that sufficiently stable connections exist between their elements, cementing the systems. Stability is conferred on statistical systems by the presence of external conditions, the external environment, external rather than internal forces. The very definition of probability is always based on specifying the conditions for the formation of the initial mass phenomenon. Another important idea characterizing the probabilistic paradigm is the idea of hierarchy (subordination). This idea expresses the relationship between the characteristics of individual elements and the integral characteristics of systems: the latter are, as it were, built on top of the former.

The importance of probabilistic methods in cognition lies in the fact that they make it possible to study and theoretically express the patterns of structure and behavior of objects and systems that have a hierarchical, “two-level” structure.

The analysis of the nature of probability is based on its frequency, statistical interpretation. At the same time, for a very long time an understanding of probability prevailed in science that was called logical, or inductive, probability. Logical probability is interested in questions of the validity of a separate, individual judgment under certain conditions. Is it possible to assess the degree of confirmation (reliability, truth) of an inductive conclusion (a hypothetical conclusion) in quantitative form? During the development of probability theory such questions were repeatedly discussed, and people began to speak of degrees of confirmation of hypothetical conclusions. This measure of probability is determined by the information available to a given person, his experience, his views on the world, and his psychological mindset. In all such cases the magnitude of probability is not amenable to strict measurement and practically lies outside the competence of probability theory as a consistent mathematical discipline.

The objective, frequentist interpretation of probability established itself in science with considerable difficulty. Initially the understanding of the nature of probability was strongly influenced by the philosophical and methodological views characteristic of classical science. Historically, the development of probabilistic methods in physics occurred under the determining influence of the ideas of mechanics: statistical systems were interpreted as simply mechanical. Since the corresponding problems were not solved by the strict methods of mechanics, assertions arose that turning to probabilistic methods and statistical laws is the result of the incompleteness of our knowledge. In the history of the development of classical statistical physics numerous attempts were made to substantiate it on the basis of classical mechanics, but they all failed. The basis of probability is that it expresses the structural features of a certain class of systems, other than mechanical ones: the state of the elements of these systems is characterized by instability and a special (not reducible to mechanics) character of interactions.

The entry of probability into cognition leads to the denial of the concept of rigid determinism, to the denial of the basic model of being and cognition developed in the process of the formation of classical science. The basic models represented by statistical theories have a different, more general character: they include the ideas of randomness and independence. The idea of probability is associated with the disclosure of the internal dynamics of objects and systems, which cannot be entirely determined by external conditions and circumstances.

The concept of a probabilistic vision of the world, based on the absolutization of ideas about independence (as, before it, the paradigm of rigid determination), has now revealed its limitations, which manifests itself most strongly in the transition of modern science to analytical methods for studying complex systems and to the physical and mathematical foundations of self-organization phenomena.


The probability of an event is the ratio of the number of elementary outcomes favorable to a given event to the number of all equally possible outcomes of the experiment in which this event may appear. The probability of event A is denoted P(A) (here P is the first letter of the French word probabilité). According to the definition,
(1.2.1) P(A) = m / n
where m is the number of elementary outcomes favorable to event A, and n is the number of all equally possible elementary outcomes of the experiment, forming a complete group of events.
This definition of probability is called classical. It arose at the initial stage of the development of probability theory.

The probability of an event has the following properties:
1. The probability of a certain event is equal to one. Let us denote a certain event by Ω. For a certain event m = n, therefore
(1.2.2) P(Ω) = n/n = 1
2. The probability of an impossible event is zero. Let us denote an impossible event by ∅. For an impossible event m = 0, therefore
(1.2.3) P(∅) = 0/n = 0
3. The probability of a random event is expressed as a positive number less than one. Since for a random event the inequalities 0 < m < n, or 0 < m/n < 1, are satisfied, then
(1.2.4) 0 < P(A) < 1
4. The probability of any event satisfies the inequalities
(1.2.5) 0 ≤ P(A) ≤ 1
This follows from relations (1.2.2) - (1.2.4).

Example 1. An urn contains 10 balls of equal size and weight, of which 4 are red and 6 are blue. One ball is drawn from the urn. What is the probability that the drawn ball will be blue?

Solution. We denote the event "the drawn ball turned out to be blue" by the letter A. This trial has 10 equally possible elementary outcomes, of which 6 favor event A. In accordance with formula (1.2.1), we obtain

P(A) = m/n = 6/10 = 0.6

Example 2. All natural numbers from 1 to 30 are written on identical cards and placed in an urn. After thoroughly shuffling the cards, one card is removed from the urn. What is the probability that the number on the card taken is a multiple of 5?

Solution. Let us denote by A the event "the number on the taken card is a multiple of 5". In this trial there are 30 equally possible elementary outcomes, of which 6 outcomes favor event A (the numbers 5, 10, 15, 20, 25, 30). Hence,

P(A) = 6/30 = 0.2

Example 3. Two dice are tossed and the sum of points on the top faces is calculated. Find the probability of event B, that the top faces of the dice show a total of 9 points.

Solution. In this trial there are 6² = 36 equally possible elementary outcomes. Event B is favored by 4 outcomes: (3;6), (4;5), (5;4), (6;3); therefore

P(B) = 4/36 = 1/9

Example 4. A natural number not exceeding 10 is chosen at random. What is the probability that this number is prime?

Solution. Let us denote by the letter C the event "the chosen number is prime". In this case n = 10 and m = 4 (the prime numbers 2, 3, 5, 7). Therefore the required probability

P(C) = 4/10 = 0.4

Example 5. Two symmetrical coins are tossed. What is the probability that there are numbers on the top sides of both coins?

Solution. Let us denote by the letter D the event "there is a number on the top side of each coin". In this trial there are 4 equally possible elementary outcomes: (G, G), (G, C), (C, G), (C, C). (The notation (G, C) means that the first coin shows the coat of arms and the second a number.) Event D is favored by one elementary outcome, (C, C). Since m = 1 and n = 4, then

P(D) = 1/4 = 0.25

Example 6. What is the probability that a two-digit number chosen at random has the same digits?

Solution. Two-digit numbers are the numbers from 10 to 99; there are 90 such numbers in total. Nine of them have identical digits (the numbers 11, 22, 33, 44, 55, 66, 77, 88, 99). Since in this case m = 9 and n = 90, then

P(A) = 9/90 = 0.1

where A is the event "a number with identical digits".

Example 7. From the letters of the word differential, one letter is chosen at random. What is the probability that this letter will be: a) a vowel, b) a consonant, c) the letter h?

Solution. The word differential has 12 letters, of which 5 are vowels and 7 are consonants. There is no letter h in this word. Let us denote the events: A - "a vowel letter", B - "a consonant letter", C - "the letter h". The number of favorable elementary outcomes is m = 5 for event A, m = 7 for event B, and m = 0 for event C. Since n = 12, then

P(A) = 5/12, P(B) = 7/12, and P(C) = 0.

Example 8. Two dice are tossed and the number of points on the top of each dice is noted. Find the probability that both dice show the same number of points.

Solution. Let's denote this event by the letter A. Event A is favored by 6 elementary outcomes: (1;1), (2;2), (3;3), (4;4), (5;5), (6;6). The total number of equally possible elementary outcomes forming a complete group of events is n = 6² = 36. This means that the required probability

P(A) = 6/36 = 1/6

Example 9. The book has 300 pages. What is the probability that a randomly opened page will have a serial number divisible by 5?

Solution. From the conditions of the problem it follows that there are n = 300 equally possible elementary outcomes forming a complete group of events. Of these, m = 60 favor the occurrence of the specified event. Indeed, a number that is a multiple of 5 has the form 5k, where k is a natural number, and 5k ≤ 300, whence k ≤ 60. Hence,

P(A) = 60/300 = 0.2

where A is the event "the opened page has a serial number that is a multiple of 5".

Example 10. Two dice are tossed and the sum of points on the top faces is calculated. What is more likely - getting a total of 7 or 8?

Solution. Let us denote the events: A - "a total of 7 points is rolled", B - "a total of 8 points is rolled". Event A is favored by 6 elementary outcomes: (1;6), (2;5), (3;4), (4;3), (5;2), (6;1), and event B by 5 outcomes: (2;6), (3;5), (4;4), (5;3), (6;2). The number of all equally possible elementary outcomes is n = 6² = 36. Hence, P(A) = 6/36 = 1/6 and P(B) = 5/36.

So P(A) > P(B); that is, rolling a total of 7 points is a more likely event than rolling a total of 8 points.
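Example 10 can be verified by enumerating all 36 outcomes:

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=2))  # all 36 equally likely outcomes

p7 = sum(1 for r in rolls if sum(r) == 7) / len(rolls)  # 6/36
p8 = sum(1 for r in rolls if sum(r) == 8) / len(rolls)  # 5/36

print(p7 > p8)  # True: a total of 7 is more likely than a total of 8
```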

Tasks

1. A natural number not exceeding 30 is chosen at random. What is the probability that this number is a multiple of 3?
2. An urn contains a red and b blue balls, identical in size and weight. What is the probability that a ball drawn at random from this urn will be blue?
3. A number not exceeding 30 is chosen at random. What is the probability that this number is a divisor of 30?
4. An urn contains a blue and b red balls, identical in size and weight. One ball is taken from this urn and set aside; this ball turned out to be red. After this, another ball is drawn from the urn. Find the probability that the second ball is also red.
5. A natural number not exceeding 50 is chosen at random. What is the probability that this number is prime?
6. Three dice are tossed and the sum of points on the top faces is calculated. What is more likely - to get a total of 9 or 10 points?
7. Three dice are tossed and the sum of the points rolled is calculated. What is more likely - to get a total of 11 (event A) or 12 points (event B)?

Answers

1. 1/3. 2. b/(a+b). 3. 0.2. 4. (b-1)/(a+b-1). 5. 0.3. 6. p1 = 25/216 is the probability of getting 9 points in total; p2 = 27/216 is the probability of getting 10 points in total; p2 > p1. 7. P(A) = 27/216, P(B) = 25/216, P(A) > P(B).

Questions

1. What is the probability of an event called?
2. What is the probability of a certain event?
3. What is the probability of an impossible event?
4. What are the limits of the probability of a random event?
5. What are the limits of the probability of any event?
6. What definition of probability is called classical?