Radiology

Interobserver agreement and performance of concurrent AI assistance for radiographic evaluation of knee osteoarthritis

Authors:

Mathias W. Brejnebøl, Anders Lenskjold, Katharina Ziegeler, Huib Ruitenbeek, Felix C. Müller, Janus U. Nybing, Jacob J. Visser, Loes M. Schiphouwer, et al.

Abstract

Background

Due to conflicting findings in the literature, there are concerns about a lack of objectivity in grading knee osteoarthritis (KOA) on radiographs.

Purpose

To examine how artificial intelligence (AI) assistance affects the performance and interobserver agreement of radiologists and orthopedists of various experience levels when evaluating KOA on radiographs according to the established Kellgren-Lawrence (KL) grading system.
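For context, the KL system is an ordinal five-point scale. The sketch below is a minimal Python reference; the grade descriptions paraphrase common summaries in the literature, and exact wording varies between sources.

from enum import IntEnum

class KLGrade(IntEnum):
    """Kellgren-Lawrence grades for radiographic knee osteoarthritis."""
    NONE = 0      # no radiographic features of osteoarthritis
    DOUBTFUL = 1  # doubtful joint space narrowing, possible osteophytic lipping
    MINIMAL = 2   # definite osteophytes, possible joint space narrowing
    MODERATE = 3  # multiple osteophytes, definite narrowing, some sclerosis
    SEVERE = 4    # large osteophytes, marked narrowing, severe sclerosis

def has_definite_oa(grade: KLGrade) -> bool:
    # KL grade >= 2 is the conventional threshold for definite radiographic OA.
    return grade >= KLGrade.MINIMAL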

AI tool

The commercially available Conformité Européenne–certified AI tool RBknee (version 2.1; Radiobotics) was used in this study. The tool analyzed all weight-bearing frontal radiographs and the corresponding lateral projection radiographs of the knee or knees, outputting the KL grade for the frontal view and the presence or absence of patellar osteophytes for the lateral view. The AI tool was not trained on images from any of the participating centers.
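A purely hypothetical sketch of the input/output contract just described follows; the field names and types are assumptions for illustration, not RBknee's actual interface. They simply restate the contract: weight-bearing frontal view in, KL grade out; lateral view in, patellar osteophyte flag out.

from dataclasses import dataclass

@dataclass
class KneeReport:
    # Hypothetical result container; field names are illustrative only.
    kl_grade: int                # 0-4, derived from the weight-bearing frontal view
    patellar_osteophytes: bool   # presence/absence, derived from the lateral view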

Results

Seventy-five studies were included from each center, totaling 225 studies (mean patient age, 55 years ± 15 [SD]; 113 female patients). The KL grade distribution was KL-0, 24.0% (n = 54); KL-1, 28.0% (n = 63); KL-2, 21.8% (n = 49); KL-3, 18.7% (n = 42); and KL-4, 7.6% (n = 17). Eleven readers completed their readings. Three of the six junior readers showed higher KL grading performance with AI assistance than without it (area under the receiver operating characteristic curve, without vs with AI: 0.81 ± 0.017 [SEM] vs 0.88 ± 0.011 [P < .001]; 0.76 ± 0.018 vs 0.86 ± 0.013 [P < .001]; and 0.89 ± 0.011 vs 0.91 ± 0.009 [P = .008]). Interobserver agreement for KL grading among all readers was also higher with AI assistance (without vs with AI: κ = 0.77 ± 0.018 [SEM] vs 0.85 ± 0.013; P < .001). Board-certified radiologists achieved almost perfect agreement for KL grading when assisted by AI (κ = 0.90 ± 0.01), which was higher than the agreement achieved by the reference readers independently (κ = 0.84 ± 0.017; P = .01).
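The abstract does not state the exact statistical machinery behind these figures. The sketch below illustrates one common construction with simulated grades (not the study's data): a linearly weighted Cohen's kappa between two readers' ordinal KL grades, with a bootstrap estimate of the standard error.

import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Simulated KL grades (0-4) for 225 studies from two hypothetical readers;
# reader B mostly agrees with reader A to within one grade.
reader_a = rng.integers(0, 5, size=225)
reader_b = np.clip(reader_a + rng.integers(-1, 2, size=225), 0, 4)

# Linear weights penalize near-misses less than distant disagreements,
# which suits an ordinal scale such as KL grading.
kappa = cohen_kappa_score(reader_a, reader_b, weights="linear")

# Bootstrap SEM: resample studies with replacement and take the standard
# deviation of the resampled kappa values.
boot = [
    cohen_kappa_score(reader_a[idx], reader_b[idx], weights="linear")
    for idx in (rng.integers(0, 225, size=225) for _ in range(2000))
]
print(f"kappa = {kappa:.2f} +/- {np.std(boot, ddof=1):.3f} (bootstrap SEM)")

On the customary Landis-Koch scale, kappa values above 0.80 are read as "almost perfect" agreement, which is the sense in which the term is used above.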

Conclusion

AI assistance improved junior readers' radiographic KOA grading performance and increased interobserver agreement for osteoarthritis grading across readers of all experience levels.

Open access

This research is open access.

Published regularly since 1923 by the Radiological Society of North America (RSNA), Radiology has long been recognized as the authoritative reference for the most current, clinically relevant, and highest-quality research in the field of radiology.
