Thus far, we have used statistical methods to reach conclusions by seeing how compatible the observations were with the null hypothesis that the treatment had no effect. When the data were unlikely to occur if this null hypothesis were true, we rejected it and concluded that the treatment had an effect. We used a test statistic (F, t, z, or χ²) to quantify the difference between the actual observations and those we would expect if the null hypothesis of no effect were true. We concluded that the treatment had an effect if the value of this test statistic was bigger than 95% of the values that would occur if the treatment had no effect. When this is so, it is common for medical investigators to report a statistically significant effect. On the other hand, when the test statistic is not big enough to reject the hypothesis of no treatment effect, investigators often report no statistically significant difference and then discuss their results as if they had proven that the treatment had no effect. All they really did was fail to demonstrate that it did have an effect. The distinction between positively demonstrating that a treatment had no effect and failing to demonstrate that it did have an effect is subtle but very important, especially in light of the small numbers of subjects included in most clinical studies.*
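
To make this logic concrete, here is a minimal sketch of a two-sample t test in Python. The data are simulated, and the numbers (urine outputs, 10 subjects per group) are assumptions chosen purely for illustration, not taken from the text.

```python
import numpy as np
from scipy import stats

# Simulated data for illustration only: daily urine production (mL)
# in a control group and a hypothetical drug-treated group.
rng = np.random.default_rng(0)
control = rng.normal(loc=1200, scale=200, size=10)
treated = rng.normal(loc=1400, scale=200, size=10)

t, p = stats.ttest_ind(treated, control)
alpha = 0.05  # P < 0.05 corresponds to |t| exceeding the value that
              # cuts off the most extreme 5% of its distribution when
              # the null hypothesis of no effect is true.
print(f"t = {t:.2f}, P = {p:.3f}")

if p < alpha:
    print("Reject the null hypothesis: statistically significant effect.")
else:
    # Failing to reject is NOT a demonstration of no effect; the study
    # may simply lack the power to detect one.
    print("Fail to reject the null hypothesis of no treatment effect.")
```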

As already mentioned in our discussion of the t test, the ability to detect a treatment effect with a given level of confidence depends on the size of the treatment effect, the variability within the population, and the size of the samples used in the study. Just as bigger samples make it more likely that you will detect an effect, smaller samples make it harder. In practical terms, this means that studies of therapies that involve only a few subjects and fail to reject the null hypothesis of no treatment effect may reach that result simply because the sample was too small to give the statistical procedures the power to detect the effect, even though the treatment did have one. Conversely, considerations of the power of a test permit you to compute the sample size needed to detect a treatment effect of a given size that you believe is present.
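
As a sketch of such a computation, using the statsmodels library (the effect size of half a standard deviation and the 80% power target are assumptions chosen for illustration):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a small two-group study: effect size d = 0.5 standard
# deviations, n = 10 subjects per group, alpha = 0.05 (two-sided).
power = analysis.solve_power(effect_size=0.5, nobs1=10, alpha=0.05)
print(f"Power with n = 10 per group: {power:.2f}")  # about 0.19

# Sample size per group needed to detect d = 0.5 with 80% power.
n = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"n per group for 80% power: {n:.0f}")        # about 64
```

A study with 10 subjects per group would detect a half-standard-deviation effect less than one time in five, which illustrates why "no statistically significant difference" in a small study is weak evidence of no effect.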

* This problem arises particularly in small clinical studies in which there are no “failures” in the treatment group, a situation that often leads to overly optimistic assessments of therapeutic efficacy. See Hanley JA, Lippman-Hand A. If nothing goes wrong, is everything all right? Interpreting zero numerators. JAMA. 1983;249:1743–1745.
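
Hanley and Lippman-Hand's "rule of three" makes the point quantitatively: if 0 failures are observed in n subjects, the exact one-sided 95% upper confidence bound on the true failure rate p satisfies (1 - p)^n = 0.05, which is approximately 3/n. A minimal sketch (n = 20 is an arbitrary illustrative choice):

```python
# Zero failures observed in n subjects: upper 95% confidence bound on
# the true failure rate.
n = 20
exact = 1 - 0.05 ** (1 / n)   # solves (1 - p)**n = 0.05
print(f"Exact upper bound: {exact:.3f}")  # about 0.139
print(f"Rule of three:     {3 / n:.3f}")  # 0.150
```

So even a “perfect” record in 20 patients is consistent with a true failure rate of nearly 14%.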

Now, we make a radical departure from everything that has preceded: we assume that the treatment does have an effect.

Figure 6-1 shows the same population of people we studied in Figure 4-3 except that this time the drug given to increase daily urine production works. ...
