Quality control
Both the imaging data and the clinical/behavioral data have undergone quality
control to ensure a high-quality product.
Several levels of quality control for the imaging data have been implemented.
The first level is conducted at the site immediately after data acquisition.
Once data arrives at the Data Coordinating Center (DCC), it is archived
and then converted to MINC, the medical image file format developed
at the Brain Imaging Centre. Here, some automated quality control takes
place to ensure correct labeling and proper acquisition parameters.
MRI data then undergo a series of quality control measures; these are
described in detail below.
Several levels of quality control for the collection of clinical/behavioral
data have been implemented. Initially, examiners and interviewers were
trained on standardized administration and scoring procedures for the
various instruments. Periodically throughout data collection, regular
reviews of audiotapes, videotapes, and completed test forms are conducted
by the Clinical Coordinating Center (CCC) to ensure proper administration
and scoring. Once data are transferred to the DCC, automated and visual
quality controls are applied. A summary of CCC and DCC quality control
procedures is provided below. For additional
details, see the study protocol, procedure manuals, and published methods
papers (Evans et al., 2006; Almli et al., 2007).
Disclaimer
While every effort has been made to ensure the accuracy of the data,
it is possible that errors exist. NIH and the study investigators do
not and cannot warrant the results that may be obtained by using the
data. NIH and the study investigators disclaim all warranties as to
the accuracy of the data in the study database or the performance or
fitness of the data for any particular purpose.
Description of MRI quality control procedures
The first level of quality control is conducted at the site immediately
after data acquisition. Image files are transferred from the console
to a Study Work Station (SWS) where they are reviewed using visualization
software. Data is then transferred to the Data Coordinating Center (DCC)
in Montreal.
Data arrives at the DCC in DICOM format. This data is archived and then
converted to MINC, the medical image file format developed at the Brain
Imaging Centre. The two assigned identification numbers (labels) are
checked to make sure they match across all volumes sent for the particular
subject. The acquisition parameters for each volume for the subject
are checked against the scanning protocol automatically. If there is
a problem with the identification labels, the site is contacted and
asked to relabel and resend the data. Once the labels pass verification,
the data is inserted into the database with a status flag set to PENDING
for the subject and for each image volume. The MRI data are then ready
for a review of the image quality.
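The intake logic described above (label matching, automated parameter checks, and the PENDING status flag) might be sketched as follows. This is a hypothetical illustration only: the DCC's actual software is not published, and the field names, protocol keys, and status strings here are invented for the example.

```python
# Hypothetical sketch of the automated checks run when DICOM volumes
# arrive at the DCC. Field names and protocol values are illustrative,
# not the study's actual schema.

def labels_match(volumes):
    """All volumes for a subject must carry the same two ID labels."""
    ids = {(v["subject_id"], v["visit_id"]) for v in volumes}
    return len(ids) == 1

def parameters_ok(volume, protocol):
    """Compare acquisition parameters against the scanning protocol."""
    return all(volume.get(k) == v for k, v in protocol.items())

def intake(volumes, protocol):
    """Verify labels; on success, flag every volume PENDING for review."""
    if not labels_match(volumes):
        return "RELABEL_AND_RESEND"   # site is contacted to relabel/resend
    for v in volumes:
        v["status"] = "PENDING"       # awaiting image-quality review
        v["params_ok"] = parameters_ok(v, protocol)
    return "INSERTED"
```

In this sketch a label mismatch blocks database insertion entirely, mirroring the text: the site must relabel and resend before any volume is flagged PENDING.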
A verification image (a set of sample slices from the full volume) is
created from the native data, containing a mid-transverse, a sagittal,
and a coronal slice through the volume. These are the images used to
organize the visual inspection described below.
For each volume, an automatic program runs a number of scripts on all
of the selected slices to accomplish the following tasks:
- A histogram of the volume is computed.
- Movement artifacts are estimated.
- Noise and contrast are estimated.
- Image preprocessing (cropping, image intensity non-uniformity correction,
intensity normalization) is completed.
Using the acquisition parameters, the T1-, T2-, and PD-weighted anatomical
volumes are pre-selected. Once these steps are completed, the data are
ready for visual inspection.
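The automated per-volume measures listed above could be approximated as follows. This is a simplified NumPy sketch under stated assumptions (noise estimated from a corner region assumed to contain only air; contrast taken crudely as the spread of above-mean intensities); the study's actual scripts and estimators are not specified here.

```python
import numpy as np

def qc_metrics(volume):
    """Rough per-volume QC measures: an intensity histogram plus simple
    noise and contrast estimates. Illustrative only; not the study's
    actual algorithms."""
    # Intensity histogram over the whole volume.
    hist, _ = np.histogram(volume, bins=256)
    # Estimate noise from a background corner assumed to contain air.
    background = volume[:10, :10, :10]
    noise = float(background.std())
    # Crude contrast proxy: spread of the above-mean (foreground) voxels.
    foreground = volume[volume > volume.mean()]
    contrast = float(foreground.std())
    return {"histogram": hist, "noise": noise, "contrast": contrast}
```

Measures like these can only pre-screen volumes; as the text notes, the final quality rating still rests on the visual inspection.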
The goal of the visual inspection is to rate:
- the amount of movement artifacts – either within the slices
or volume, or between packets for the multi-packet acquisitions.
- the level of intensity homogeneity within slices, between slices
and throughout the volume (the amount of noise in the scan).
- the level of contrast between grey matter, white matter and CSF.
- the adherence to the scanning protocol in terms of coverage of
the head/brain from left to right, top to bottom and front to back.
- the amount of geometric distortion due to susceptibility artifacts.
- the appearance of any other artifacts in the images.
Individual volumes are given a “PASS” or “FAIL”
quality control status, based on the categories above.
Passing MRI datasets: The complete dataset for a subject
visit is then given a “PASS” or “FAIL” status
determined by the successful acquisition of the structural MRIs (T1W,
PD/T2W; either full protocol or fallback protocol [used where a limited
protocol was necessitated by the level of subject compliance]). A dataset
must have a successful T1W and a PD/T2W to receive a final “PASS”
status. If a scan is failed, a rescan is requested (assuming there is
time remaining within the allowable age window).
Description of clinical/behavioral quality control
procedures at the Clinical Coordinating Center (CCC)
The following is a summary of the clinical/behavioral quality control
procedures that are implemented at the CCC:
1. All quality control materials (e.g., videotapes/audiotapes, paper
copies of completed test booklets/score sheets, questionnaire forms,
etc.) from the Pediatric Study Centers are sent to the CCC in a timely
fashion, i.e., as soon as possible after completion of testing and scoring,
but no later than two weeks after testing.
2. The sampled materials are logged, tracked, and monitored by the
CCC.
3. In some instances, portions of the reviews are accomplished at a
collaborating site based on its expertise with specific instruments.
In such cases, the CCC provides the site with the materials to be reviewed.
4. Each instrument is rated as:
a. Passing: defined as ≥ 90% agreement with the standard for
item administration and scoring (required for valid data when testing
real ‘scanned’ subjects);
b. Provisionally passing: minor problems, potentially passing (may
or may not yield valid data); or
c. Administered/scored incorrectly: major problems, < 90% agreement
with the standard for item administration and scoring (invalid data
when testing real ‘scanned’ subjects).
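The three-way rating in item 4 can be summarized in a small function. The 90% threshold comes from the text; treating “minor problems only” as a boolean input is an assumption made for this sketch, since the text does not give an exact rule separating categories b and c below 90% agreement.

```python
def rate_instrument(agreement_pct, minor_problems_only=False):
    """Map percent agreement with the administration/scoring standard
    to the three CCC rating categories. The 90% cutoff is from the
    text; the minor_problems_only flag is an illustrative assumption."""
    if agreement_pct >= 90:
        return "Passing"
    if minor_problems_only:
        return "Provisionally passing"
    return "Administered/scored incorrectly"
```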
5. Written feedback about administration and scoring performance is
provided to each rater/tester by the evaluator. This feedback is provided
in the form of a checklist review with space for specific comments.
The written feedback is also forwarded to the site’s behavioral investigator
and principal investigator (as well as any others designated by the
principal investigator), and the DCC.
6. Copies of the evaluation checklists and comments are retained at
the CCC.
7. The CCC enters all quality control related data into a database
to consolidate results for monitoring the overall process.
8. The CCC also enters individual evaluation results into the Examiner
Certification fields of the DCC database. These fields are used to ‘flag’
the performance of individual testers/raters and their associated testing
data.
9. Technically incomplete, insufficient or poor video/audio recordings
(e.g., image or sound not adequate for accurate evaluation, missing
tests or parts of tests) of testing are not reviewed. Such situations
are rated as “No quality control decision,” and the tester/rater
has to redo the testing and recording with additional practice children
(i.e., non-subjects) for submission to the CCC for evaluation.
10. If “correctable” errors (e.g., certain scoring errors)
are noted during the evaluation process, the rater/tester is required
to correct the error(s) on the score sheets/booklets and in the DCC
database. The tester/rater notifies the evaluator by sending the corrected
score sheets/booklets back to the evaluator (via fax or some form of
express mail), who confirms that the corrections have been made
in the DCC database. If a subject’s data profile was already sent
to the DCC, the tester/rater contacts the DCC to request access to the
subject’s data so that corrections can be made. Failure to send
corrected materials to the evaluator and to make required corrections
in the DCC database results in the test (or subtest, as appropriate)
being rated, “Administered/scored incorrectly.” The time
allotted to complete this process is one week following notification,
barring extenuating circumstances approved by the CCC.
Description of clinical/behavioral quality control
at the Data Coordinating Center (DCC)
Clinical/behavioral quality control of the entry of each subject’s
data into the database is centralized at the Data Coordinating Center
(DCC), following the steps outlined below:
1. Staff at each Pediatric Study Center enter data using a standard
laptop provided to each site and upload the data into the DCC database.
2. Using a random schedule, the DCC requests hard copies of the
paper-and-pencil forms for which data were hand-entered.
- The list of hard copies requested is made available to each site
via an RSS Channel link.
- For objective 1, visits 1 and 2, and objective 2 subjects, one in
three subjects' forms was randomly selected. For objective 1, visit
3, one in six subjects' profiles was randomly selected for review.
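The sampling schedule above (one in three, or one in six for objective 1, visit 3) could be sketched as follows. This is an illustrative sketch only; the DCC's actual selection procedure and any identifiers are not published, and the function and parameter names here are invented.

```python
import random

def select_for_review(subject_ids, objective, visit, seed=0):
    """Randomly sample subjects for hard-copy review at the stated
    rates: 1 in 6 for objective 1, visit 3; otherwise 1 in 3.
    Hypothetical sketch; a fixed seed is used for reproducibility."""
    rate = 6 if (objective == 1 and visit == 3) else 3
    k = max(1, len(subject_ids) // rate)
    rng = random.Random(seed)
    return rng.sample(subject_ids, k)
```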
3. Once a hard copy of the subject’s data is received at the
DCC, the subject's data is moved into a review and approval stage, during
which sites have ‘Read Only’ privileges. The entered data
is confirmed against the hard copies.
4. When necessary, data corrections are requested using a web-based
system. The sites are given ‘Read/Write’ access to those
instruments for which data corrections are needed. Feedback identifies
clerical, input, and scoring inconsistencies or errors. Potential scoring
errors encountered over the course of the DCC clinical/behavioral quality
control are brought to the attention of the Clinical Coordinating Center
(CCC).
5. Upon completion of corrections, the subject's data become ‘Read
Only’ and the subject’s profile is tagged as having been ‘Hard
copy quality controlled.’ The DCC maintains a log of the errors
identified and the feedback interactions with each Pediatric Study Center.