Risk ratios, odds ratios, and rate ratios are also estimates based on samples, and we can also calculate confidence intervals for them. Again, most frequently we will be using 95% confidence intervals. Consider a hypothetical example. A small study compared the risk of heart disease in diabetics and non-diabetics. The exposure is diabetes, and the outcome of interest is heart disease. We found a risk ratio of 7.0, and the 95% confidence interval for this risk ratio ranged from 1.3 to 12. So, we are not really certain whether diabetes has just a very modest impact on the risk of heart disease or whether, in fact, it increases the risk very substantially, by as much as 12 times. Clearly, in this situation we would like to have more information, more confidence, and more precision in the estimate.

If we were to repeat this study with a larger sample, we wouldn't necessarily get the same point estimate. Here, hypothetically, we repeated the study and got a risk ratio of 6.0, with a 95% confidence interval of 5.1 to 7.2. Now we have a somewhat lower point estimate, but a much narrower confidence interval, so we have much more confidence in this estimate and much more precision regarding the true impact of diabetes on the risk of heart disease.

The other thing we can determine from the confidence interval is whether the results meet the usual criterion for statistical significance. The 95% confidence interval means that we are 95% confident that the true risk ratio lies somewhere within that range. That means there is a less than 5% probability that the true value lies outside that interval, and if the null value lies outside the confidence interval, then the probability that the sample data we've obtained are compatible with the null must be less than 5%. Consequently, if the 95% confidence interval lies entirely above or entirely below the null value, the results would be considered statistically significant.
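As a concrete sketch, the 95% confidence interval for a risk ratio is usually computed on the log scale: take the natural log of the risk ratio, add and subtract 1.96 standard errors, and exponentiate back. The 2x2 table counts below are made up for illustration (they are not the data behind the 7.0 example above); the formula itself is the standard log-scale one.

```python
import math

# Hypothetical 2x2 table counts (illustrative only):
#                 disease   no disease   total
# exposed            14         26         40
# unexposed           2         38         40
a, n1 = 14, 40   # cases and total among the exposed
c, n0 = 2, 40    # cases and total among the unexposed

rr = (a / n1) / (c / n0)  # risk ratio (point estimate)

# Standard error of ln(RR): sqrt(1/a - 1/n1 + 1/c - 1/n0)
se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)

lo = math.exp(math.log(rr) - 1.96 * se_log)  # lower 95% limit
hi = math.exp(math.log(rr) + 1.96 * se_log)  # upper 95% limit

print(f"RR = {rr:.1f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With these made-up counts the interval is wide (roughly 1.7 to 29) even though the point estimate is 7.0, mirroring the imprecision of a small study.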
Here is another hypothetical result in which we have an estimated risk ratio greater than 1.0, but the 95% confidence interval is wide, and it includes the null value. Here the null value is inside the confidence interval, so the null is potentially compatible with the sample data, and therefore the p-value is greater than 0.05. This result is not statistically significant; we would not reject the null hypothesis. And here are two hypothetical results in which the point estimate is below 1.0, suggesting a decreased risk. Both confidence intervals are broad, but the top one includes the null value of 1.0. If the 95% confidence interval includes the null value, then the findings are potentially compatible with a risk ratio of 1.0, so the findings are not statistically significant. The lower confidence interval is also broad, but it does not include the null, so it does meet the criterion of p<0.05. Here are several more examples. The upper one on the left shows a fairly narrow confidence interval and suggests a statistically significant decrease in risk in the exposed group, so in this case the p-value is less than 0.05. The next two, on the right side, are both confidence intervals that exclude the null, so these would be considered statistically significant, and yet the lower of the two is much broader and therefore much less precise. So, they are both statistically significant, but clearly the upper of the two is much more precise. The lower three examples show confidence intervals that vary in width, but all three would be considered not statistically significant, because all three embrace, or include, the null value. Nevertheless, the lower two of these results are much narrower and much more precise, which brings us to the next point. Here are two results which are both not statistically significant because the 95% confidence interval includes the null value. But do you view these two results differently?
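The decision rule running through all of these examples — significant if and only if the interval excludes the null value of 1.0 — can be written as a one-line check. The function name and the example intervals here are hypothetical.

```python
def excludes_null(lower: float, upper: float, null: float = 1.0) -> bool:
    """True if the CI lies entirely above or entirely below the null value,
    i.e., the result meets the usual p < 0.05 criterion."""
    return upper < null or lower > null

# CI includes 1.0 -> not statistically significant
print(excludes_null(0.4, 1.3))   # False
# CI lies entirely below 1.0 -> significant decrease in risk
print(excludes_null(0.2, 0.9))   # True
# CI lies entirely above 1.0 -> significant increase in risk
print(excludes_null(1.3, 12.0))  # True
```

Note that the check says nothing about precision: a very wide interval of (1.3, 12.0) and a narrow one of (5.1, 7.2) both exclude the null.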
I think you should, because in the upper result the point estimate suggests a very minimal increase in risk, and the confidence interval is quite narrow. Even if there were some impact of this exposure, the confidence interval indicates that it would be modest at most, and therefore it would be unnecessary to repeat the study. But in the lower example the point estimate suggests that there might be a fairly substantial increase in risk, perhaps four or five-fold, and while the result is not statistically significant because the 95% confidence interval includes the null, it is a very imprecise estimate, and there may be something going on here that we missed because of the small sample size. So, in the lower study we might be interested in repeating the study with a larger sample in order to get a more precise estimate, so that we don't potentially miss something causing an important increase in risk. To summarize, the 95% confidence intervals for risk ratios, odds ratios, and rate ratios give us a handle on the precision of those estimates, but from them we can also figure out whether or not the results would meet the usual criterion for statistical significance. Here is a way to remember this. In the upper example I have shown a 95% confidence interval that includes the null, and my cartoon shows a situation in which the null is being "embraced"; in other words, there is compatibility between the null hypothesis and the observed results. So in this situation the results are compatible with the null, and we would not reject the null hypothesis. In the lower two examples the null is not included in the confidence intervals, and therefore the null is being rejected and the results are statistically significant. So the 95% confidence interval gives us some additional information that the p-value by itself does not.
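The point about repeating a study with a larger sample can be illustrated numerically: if the same proportions were observed in a study four times as large, the point estimate would not change, but the interval would be far narrower. The counts below are again hypothetical, and the interval is computed with the standard log-scale formula.

```python
import math

def rr_ci(a, n1, c, n0, z=1.96):
    """Risk ratio with a 95% CI computed on the log scale."""
    rr = (a / n1) / (c / n0)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)
    return rr, math.exp(math.log(rr) - z*se), math.exp(math.log(rr) + z*se)

small = rr_ci(14, 40, 2, 40)     # hypothetical small study
large = rr_ci(56, 160, 8, 160)   # same proportions, 4x the sample size

print(f"small study: RR {small[0]:.1f}, 95% CI ({small[1]:.2f}, {small[2]:.2f})")
print(f"large study: RR {large[0]:.1f}, 95% CI ({large[1]:.2f}, {large[2]:.2f})")

# The ratio of upper to lower limit is a scale-free measure of CI width
print(f"width ratios: {small[2]/small[1]:.1f} vs {large[2]/large[1]:.1f}")
```

Both studies give the same risk ratio of 7.0, but the larger study's interval is much tighter, and here it also excludes the null, so the larger study's result would be statistically significant.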
It gives us information about the precision of the estimate; it gives us some feel for the sample size that was utilized; and it also indicates whether or not the results met the usual criterion for statistical significance.