Do youth participating in the program demonstrate a significant reduction in perceived stress from intake to discharge?
A paired-samples $t$-test is used when you have two sets of observations on the same sample (the youth) and you want to determine whether the mean difference between the two observations (the stress score at intake and the stress score at discharge) is statistically different from zero.
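Concretely, the test statistic is computed from each youth's paired difference (intake score minus discharge score), where $\bar{d}$ is the mean of those differences, $s_d$ their standard deviation, and $n$ the number of youth with both scores:

$$ t = \frac{\bar{d}}{s_d / \sqrt{n}}, \qquad df = n - 1 $$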
Null Hypothesis ($H_0$): There is no significant difference in the mean perceived stress scores from intake ($\mu_{\text{intake}}$) to discharge ($\mu_{\text{discharge}}$).
Alternative Hypothesis ($H_a$): There is a significant reduction in the mean perceived stress scores from intake to discharge ($\mu_{\text{intake}} > \mu_{\text{discharge}}$).
(This is a one-tailed test since you are specifically hypothesizing a reduction.)
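As a minimal sketch of how this could be run, assuming the scores are stored as two equal-length arrays in the same youth order (the data below are hypothetical) and SciPy 1.6 or later for the alternative argument:

```python
import numpy as np
from scipy import stats

# Hypothetical PSS scores: each position i is the same youth measured twice.
intake = np.array([28, 31, 25, 30, 27, 33, 29, 26, 32, 24])
discharge = np.array([22, 27, 24, 25, 21, 28, 26, 23, 27, 22])

# One-tailed paired-samples t-test:
# alternative="greater" tests whether the mean of (intake - discharge)
# is greater than zero, i.e., whether perceived stress decreased.
t_stat, p_value = stats.ttest_rel(intake, discharge, alternative="greater")
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")
```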
Intake Stress Score ($X_1$): The perceived stress score measured when the youth started the program (e.g., using a standardized measure like the Perceived Stress Scale, or PSS).
Discharge Stress Score ($X_2$): The perceived stress score measured when the youth completed the program.
The statistical output will provide a $p$-value. This value represents the probability of observing the reduction in stress scores (or an even greater reduction) purely by chance, assuming the null hypothesis is true.
If the $p$-value is less than the significance level (typically $\alpha = 0.05$): You reject the null hypothesis. This allows you to conclude that the observed reduction in perceived stress from intake to discharge is statistically significant, meaning it is unlikely to be due to chance alone and is consistent with the program having an effect.
If the $p$-value is greater than or equal to 0.05: You fail to reject the null hypothesis. This means the observed reduction is not large enough or consistent enough to rule out chance, and you cannot conclude that the program had a significant effect on reducing perceived stress.
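Continuing the hypothetical sketch above, where p_value was returned by ttest_rel, the decision step is simply a comparison against $\alpha$:

```python
alpha = 0.05  # conventional significance level

# p_value comes from the paired t-test sketch above.
if p_value < alpha:
    print("Reject H0: the reduction in perceived stress is statistically significant.")
else:
    print("Fail to reject H0: the observed reduction could plausibly be due to chance.")
```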
Beyond the statistical significance, you should also consider:
Effect Size: While the $p$-value tells you whether the difference is real, the effect size (e.g., Cohen's $d$) tells you the magnitude of the reduction; a short sketch of the calculation follows this list. A statistically significant result might have a small effect size, indicating that the reduction, though real, is minor.
Clinical Significance: Does the reduction, even if statistically significant, translate into a meaningful improvement in the youth's daily life? For example, a drop of 2 points on a 40-point scale might be statistically significant but not large enough to impact functioning.
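One common convention for Cohen's $d$ with paired data (sometimes written $d_z$) divides the mean of the paired differences by their standard deviation; a minimal sketch, reusing the hypothetical arrays from above:

```python
import numpy as np

# Same hypothetical intake/discharge arrays as in the t-test sketch above.
intake = np.array([28, 31, 25, 30, 27, 33, 29, 26, 32, 24])
discharge = np.array([22, 27, 24, 25, 21, 28, 26, 23, 27, 22])

# Cohen's d for paired samples: mean of the differences / SD of the differences.
differences = intake - discharge
cohens_d = differences.mean() / differences.std(ddof=1)
print(f"Cohen's d = {cohens_d:.2f}")  # rough benchmarks: 0.2 small, 0.5 medium, 0.8 large
```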
In summary, the answer to your question hinges on running the paired-samples $t$-test and examining the resulting $p$-value and effect size.