An Objective Theory of Probability by Donald Gillies


with probability 1. We can criticize this approach in exactly the same way as the law of large numbers view discussed in Chapter 5. First of all we should avoid approximating the finite by the infinite wherever possible, and we can do so here. Secondly, to obtain practical results from (1), we have to use a methodological rule for neglecting probabilities which does not correspond to the rule normally applied in statistics.

(ii) Independence and gambling systems

Our deduction of the law of excluded gambling systems depends crucially on the assumption of independence for the original sequence of random variables: ξ₁, ξ₂, ..., ξₙ, .... Conversely, a gambling system can be considered as a test of independence. To see this, let us consider again the elementary test described in Chapter 5. The test consisted of observing the number m of 1's in the first n observed values of the sequence ξ₁, ..., ξₙ, ... of random variables. If m/n differed too much from p, our probability hypothesis involving independence and identical distribution was regarded as falsified. It is easy, however, to imagine a finite sequence of 0's and 1's which would pass this test, but whose form would lead us to conjecture that the hypothesis was false. Consider for example the case p = ½. If we obtained the sequence 0101...01 in 5,000 tosses, the hypothesis would certainly pass our elementary test, but it would be obvious that such a result could not have been given by a sequence of independent random variables. The dependence of each result on its predecessor is quite clear. This observation naturally suggests a second test which would have succeeded in falsifying the hypothesis. Let us now consider not the whole sample but a subsample consisting of every second member of the original sample. Such a subsample is reasonably large (2,500 members) and, according to our original hypothesis, is generated by a sequence of independent identically distributed random variables. Thus, had we applied our elementary test again to this subsample, the result would have been a falsification. We can generalize this to any probability hypothesis involving a sequence of random variables which are postulated to be (a) independent and (b) identically distributed. Suppose we test such a hypothesis by collecting a finite initial sequence of observed values of the random variables and calculating a statistic from this sample in the usual way. We can obtain a new test by taking any sufficiently large subset of the original sample and repeating the procedure of the original test. In this way we obtain a number of tests which, taken together, are much more severe than the original test and which in particular test the independence assumptions involved.
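The point can be illustrated with a minimal computational sketch, which is not part of the text: the function name elementary_test, the choice of a 1.96-standard-error criterion for "differing too much from p", and the Python setting are all assumptions introduced for the example.

```python
# Sketch of the elementary test and the subsample test, under the assumptions
# stated above. The alternating sequence 0101...01 passes the elementary test
# but fails it when the test is repeated on every second member.

import random
from math import sqrt

def elementary_test(sample, p, z_crit=1.96):
    """Pass if m/n does not differ 'too much' from p; 'too much' is taken here
    (as an assumption) to mean more than z_crit standard errors of the
    binomial proportion under the hypothesis."""
    n = len(sample)
    m = sum(sample)                       # number of 1's observed
    se = sqrt(p * (1 - p) / n)            # standard error of m/n under the hypothesis
    return abs(m / n - p) <= z_crit * se  # True = hypothesis passes the test

p = 0.5
n = 5000

# The alternating sequence 0101...01 discussed above:
alternating = [i % 2 for i in range(n)]

# It passes the elementary test, since m/n is exactly 1/2 ...
print(elementary_test(alternating, p))      # True

# ... but the subsample of every second member consists entirely of 1's,
# and applying the same test to it falsifies the hypothesis.
every_second = alternating[1::2]            # 2,500 members, all equal to 1
print(elementary_test(every_second, p))     # False

# A genuinely independent Bernoulli(p) sequence typically passes both tests.
iid = [1 if random.random() < p else 0 for _ in range(n)]
print(elementary_test(iid, p), elementary_test(iid[1::2], p))
```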

Unfortunately this method of obtaining severe tests is, apparently at least, liable to just the same objection which arose when we were discussing random sequences in the context of the frequency theory. Suppose the distribution associated with our hypothesis is the elementary one: P(1) = p, P(0) = 1 − p. Suppose our sample contains n elements. Let us select the subset of this sample which contains just the 1's.
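A minimal sketch, again an illustrative assumption rather than part of the text, makes the difficulty with this selection concrete: a subsample chosen by the observed values themselves has a relative frequency of 1's equal to 1 whatever the true p, so repeating the elementary test on it would "falsify" even a correct hypothesis.

```python
# Sketch (assumed, not from the text): the value-dependent selection
# "keep just the 1's" yields a degenerate subsample.

import random

p = 0.5
n = 5000
sample = [1 if random.random() < p else 0 for _ in range(n)]  # genuinely independent values

# Value-dependent selection: keep just the 1's.
ones_only = [x for x in sample if x == 1]

# Its relative frequency of 1's is 1 regardless of p, so the elementary test
# applied to this subsample would reject even a true hypothesis.
print(sum(ones_only) / len(ones_only))   # always 1.0
```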


