Classification of Distal Radius Fractures in Children
Abstract and Introduction
Abstract
Background We wanted to test the reliability of a commonly used classification of distal radius fractures in children.
Methods 105 consecutive fractures of the distal radius in children were rated on two occasions three months apart by 3 groups of doctors: 4 junior registrars, 4 senior registrars and 4 orthopedic consultants. The fractures were classified as buckle, greenstick, complete or physeal. Kappa statistics were used to analyze inter- and intraobserver reliability (an illustrative kappa computation is sketched after the abstract).
Results The kappa value for interobserver agreement at the first reading was 0.59 for the junior registrars, 0.63 for the senior registrars and 0.66 for the consultants. The mean kappa value for intraobserver reliability was 0.79 for the senior registrars, 0.74 for the consultants and 0.66 for the junior registrars.
Conclusions We conclude that the classification tested in this study is reliable and reproducible when applied by raters experienced in fracture management, although reliability varies with the experience of the raters. Experienced raters can apply the classification dependably and thereby avoid unnecessary follow-up appointments.
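
For readers unfamiliar with the statistic, Cohen's kappa measures agreement between two raters corrected for chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance. The Python sketch below is illustrative only; the ratings, function name and variable names are hypothetical and are not data from this study.

# Illustrative sketch only: Cohen's kappa for two raters classifying
# fractures into the four categories named in the study. The ratings
# below are hypothetical, not data from the study.
from collections import Counter

CATEGORIES = ["buckle", "greenstick", "complete", "physeal"]

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance."""
    n = len(rater_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in CATEGORIES) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of ten radiographs by two raters.
a = ["buckle", "greenstick", "complete", "physeal", "buckle",
     "greenstick", "buckle", "complete", "physeal", "greenstick"]
b = ["buckle", "greenstick", "complete", "physeal", "greenstick",
     "greenstick", "buckle", "complete", "buckle", "greenstick"]
print(f"kappa = {cohens_kappa(a, b):.2f}")

With the hypothetical ratings above the script prints kappa = 0.73, which by the commonly cited Landis and Koch benchmarks (0.61 to 0.80) would be interpreted as substantial agreement, the same range as most of the values reported in this study.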