ISSN:2167-7964
Journal of Radiology

Redefining Image Quality Analysis

Bruce Reiner*
Maryland Veterans Affairs Medical Center, USA
Corresponding Author : Bruce Reiner
Maryland Veterans Affairs Medical Center, USA
Tel: 410-251-1729
E-mail: breiner1@comcast.net
Received April 26, 2014; Accepted April 27, 2014; Published May 05, 2014
Citation: Reiner B (2014) Redefining Image Quality Analysis. OMICS J Radiology 3:e126. doi: 10.4172/2167-7964.1000e126
Copyright: © 2014 Reiner B. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Abstract

Quality assurance in medical imaging depends upon multiple steps in the collective imaging chain, including image quality analysis, which defines the diagnostic capabilities of the imaging dataset. Conventional methods for image quality analysis fall primarily into two categories: operator-performed analysis using subjective numerical scoring of perceived image quality, and computerized analysis using objective measurements that correlate with the human visual system. A number of deficiencies exist in these conventional methods, which often fail to take into account clinical context, patient attributes, and data segmentation. Incorporating these variables into an alternative standardized methodology for image quality analysis would expand the practicality, utility, and granularity of image quality analysis, while leading to the creation of a referenceable database with real-time decision support applications.

Existing Practice
Image quality analysis is one of several quality assurance steps in the medical imaging chain, each of which ultimately impacts clinical outcomes and the perceived quality of medical imaging service delivery. A number of clinical, workflow, economic, cultural, and technologic factors contribute to the overall success of image QA in medical imaging, many of which have undergone significant change over the past two decades [1].
In conventional practice, image quality analysis (image QA) is customarily performed by the same technologist tasked with performing the imaging exam in question, which creates a potential conflict of interest [2]. In addition, the subjective nature of conventional image QA practice and the lack of established image quality standards often create a high degree of inter-observer QA variability [3]. As productivity and workflow take on heightened importance in the current era of reduced reimbursements and profitability, quality initiatives such as image QA can suffer as a result [4]. With the digitization of medical imaging practice, the physical layout and location of personnel in the imaging department has transitioned from a centralized to a peripheral model, diminishing interaction between staff (both technologists and radiologists) and thereby reducing opportunities for education and consultation relating to image QA [1]. While technology has always played an integral role in the evolution of medical imaging, technologic innovation in image QA has been relatively quiescent, largely due to the lack of mandated standards and the fact that QA in itself is not revenue generating [5].
The net result is that image QA in its current form is highly subjective, inconsistent, and often idiosyncratic [2]. The negative outcomes associated with poor image QA are not isolated, and can have adverse consequences on other imaging chain events including image interpretation and reporting. Reinventing image QA has the potential to positively affect clinical outcomes while also providing an important quality differentiator for medical imaging service providers.
Methodology of Image Quality Analysis
In its current form, the methodology for image QA can be divided into two primary groups: human versus computerized assessment. Human methods for image QA, such as those created by the International Labour Office (ILO) [6] and RadLex [7], subjectively rate image quality using a scaled numerical system. Computerized methods such as mean squared error (MSE), peak signal-to-noise ratio (PSNR), and just-noticeable difference (JND) metrics utilize objective physical measurements in an attempt to correlate with the human visual system [8-10].
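For concreteness, the two most common objective measures named above can be computed in a few lines. The following is a minimal sketch, assuming two co-registered 8-bit grayscale images stored as NumPy arrays; the function names are illustrative rather than taken from any particular QA package.

```python
import numpy as np

def mse(reference: np.ndarray, test: np.ndarray) -> float:
    """Mean squared error between a reference image and a test image."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in decibels; higher values indicate closer agreement."""
    error = mse(reference, test)
    if error == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / error)
```

Both measures operate purely on pixel differences, which is precisely why they correlate only imperfectly with perceived diagnostic quality and motivate perceptually weighted metrics such as JND.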
In everyday clinical practice, image QA is largely binary in nature: the operator determines whether the imaging exam performed is diagnostic or non-diagnostic. In the event that the exam is determined to be non-diagnostic, it is repeated, but this is a relatively rare event [11]. When it does occur, the technologist will either electronically discard the quality-deficient imaging dataset (without a permanent record) or add the new images to the original dataset for interpretation. QA data are rarely recorded in a centralized database for longitudinal review and analysis, and this lack of comprehensive QA data handicaps identification of QA deficiencies and of opportunities for systematic and individual improvement. When retrospective image QA is performed, it is largely reactionary in nature, driven by an adverse clinical event (e.g. diagnostic error).
Current image QA assessment largely views the imaging dataset in a single, all-inclusive fashion (i.e. generalized image QA). Image QA is treated as the sum of all parts, with minimal attention or differential weighting applied to individual components within the collective imaging dataset. In reality, individual components of an imaging dataset can differ dramatically from one another in overall clinical impact. When clinical context (i.e. clinical indication) is factored into the analysis, it becomes obvious that certain anatomic structures and/or organ systems have different degrees of clinical importance (based upon Bayesian analysis) and contribution to diagnosis. As a simple example, a chest CT angiography performed to evaluate for pulmonary embolus should place far greater QA emphasis on the pulmonary arteries than on the lung fields or chest wall. A lung cancer screening chest CT, on the other hand, shifts the clinical priority to the lung fields and decreases the relative importance of vascular anatomy. This underscores the reality that individual segments of the collective imaging dataset should not be viewed as equivalent in the image QA process, but should instead be differentially weighted according to clinical context (i.e. segmented image QA).
This concept of “segmented” image QA can also extend to other portions of the imaging dataset. In the case of more complex imaging exams (e.g. MRI, triple-phase CT), the collective imaging dataset can consist of multiple sequences, orthogonal planes, and reconstruction algorithms, each of which can be viewed as an individual “segment” of the collective dataset. The determination of each individual segment’s contribution to quality assessment of the collective imaging dataset will depend upon a number of factors including (but not limited to) the clinical context, data redundancy, and the patient’s historical imaging data. Data redundancy refers to the fact that imaging data from a single imaging dataset can be represented in multiple formats (e.g. sagittal and coronal T2-weighted MR images). If one data format is compromised by a QA deficiency but another format provides the requisite data at high quality, the relative significance of the QA deficiency is minimized. Historical imaging data refers to the fact that individual patients will frequently have multiple correlating imaging studies in their data repositories, which may be of direct relevance to the current imaging dataset and impact QA analysis. As an example, a patient undergoing sequential CT exams to assess clinical response to oncologic treatment will have well-documented baseline pathology on prior serial CT exams. As a result, a QA deficiency on the current CT exam may have less significance than on a first-time CT exam, assuming the deficiency does not impair diagnosis of “high priority” anatomy.
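To illustrate how such differential weighting might be expressed computationally, the following hypothetical sketch scores the pulmonary embolus example above. The segment names, weights, and 1-10 scores are invented for illustration and do not represent a published standard; in practice the weights would be drawn from context-specific guidelines or a referenceable database.

```python
# Illustrative sketch only: segment names, weights, and scores are hypothetical.
# Weights encode clinical context (CT pulmonary angiography emphasizes the
# pulmonary arteries over the lung fields and chest wall).

def segmented_qa_score(segment_scores: dict[str, float],
                       context_weights: dict[str, float]) -> float:
    """Weighted average of per-segment quality scores (e.g. on a 1-10 scale)."""
    total_weight = sum(context_weights.get(s, 0.0) for s in segment_scores)
    if total_weight == 0:
        raise ValueError("No applicable weights for the scored segments")
    weighted = sum(score * context_weights.get(segment, 0.0)
                   for segment, score in segment_scores.items())
    return weighted / total_weight

# Hypothetical CT pulmonary angiography exam performed to exclude embolus
pe_weights = {"pulmonary_arteries": 0.60, "lung_fields": 0.25, "chest_wall": 0.15}
scores = {"pulmonary_arteries": 9.0, "lung_fields": 6.0, "chest_wall": 5.0}
print(round(segmented_qa_score(scores, pe_weights), 2))  # context-weighted composite
```

Swapping in a lung cancer screening weight set would raise the influence of the lung field score and lower that of the vascular anatomy, mirroring the shift in clinical priority described above.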
Another deficiency of conventional QA methodology is that patients are largely viewed as a homogeneous population. In reality, marked variability exists in the diverse population of patients who undergo imaging exams. This patient diversity can have a profound impact on image quality analysis, yet it is currently overlooked or under-emphasized. Individual attributes such as patient size, age, compliance, morbidity, and medical/surgical history will ultimately affect image quality and should be an integral component of image quality analysis. It would be unreasonable to place the same image quality expectations on a portable chest radiograph (performed for evaluation of chest pain) of a 76-year-old intensive care unit patient as on that of an 18-year-old emergency room patient. One solution could be to create a “patient profile” which scores individual patient attributes in a standardized fashion, thereby providing a reproducible method of incorporating patient diversity into image quality analysis.
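One possible shape for such a profile is sketched below. The attribute set and scoring scales are hypothetical, intended only to show how patient diversity could be captured in a standardized, reproducible form alongside each quality score.

```python
from dataclasses import dataclass

# Hypothetical "patient profile": attributes and scales are illustrative, not a
# published standard. Each imaging exam's QA score could reference one of these
# profiles so that quality expectations are interpreted in context.

@dataclass
class PatientProfile:
    age_years: int
    body_habitus: int      # e.g. 1 (slight) to 5 (very large)
    compliance: int        # e.g. 1 (unable to cooperate) to 5 (fully cooperative)
    acuity: int            # e.g. 1 (ambulatory outpatient) to 5 (critically ill)
    relevant_history: str  # coded or free-text medical/surgical history

# The two patients from the portable chest radiograph example above
icu_patient = PatientProfile(age_years=76, body_habitus=3, compliance=2,
                             acuity=5, relevant_history="CABG, COPD")
er_patient = PatientProfile(age_years=18, body_habitus=2, compliance=5,
                            acuity=2, relevant_history="none")
```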
Innovation Opportunities
Rather than viewing conventional image quality analysis as an isolated event, a preferred approach would be to create a standardized methodology which can be prospectively integrated directly into workflow for all imaging exams, in order to create a comprehensive, referenceable database. The methodology for image quality analysis could utilize a combination of existing techniques, with the addition of external analysis (e.g. by an unbiased third party) in the hope of quantifying and reducing subjective variability. As these standardized, referenceable databases expand in size and scope, they in turn can be used for the creation of new technology such as computerized image quality algorithms [2], context- and patient-specific best practice guidelines and standards, and real-time decision support tools (e.g. point-of-care protocol optimization).
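A minimal sketch of what one standardized, referenceable QA record might contain is shown below; the field names and structure are assumptions for illustration rather than an existing schema, and in practice such records would need to align with DICOM and institutional identifiers.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical record tying together the elements discussed above: exam identity,
# clinical context, segmented scores, the standardized patient profile, and the
# reviewer. Field names are assumptions for illustration.

@dataclass
class ImageQARecord:
    exam_id: str               # accession number or study identifier
    modality: str              # e.g. "CT", "MR", "CR"
    clinical_indication: str   # coded or free-text clinical context
    segment_scores: dict       # per-segment quality scores
    overall_score: float       # context-weighted composite score
    patient_profile_id: str    # link to the standardized patient profile
    reviewer: str              # technologist, radiologist, or third-party reviewer
    recorded_at: datetime = field(default_factory=datetime.now)
```

Accumulating such records prospectively, rather than only after adverse events, is what would make longitudinal analysis, benchmarking, and decision support feasible.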
Image quality scoring methodologies can be expanded to include targeted and patient-specific measures, which can provide increased granularity and context specificity to the database and derived analytics. Educational and training resources can be derived from these databases to facilitate improved understanding and consistency of image quality analysis. Incorporating selected key images from the corresponding imaging datasets which best exemplify the quality scoring and deficiencies could lead to the creation of an image-centric quality database, which would far surpass the educational value of a static numerical database alone [12].
The ability to routinely record, track, and analyze standardized image quality data could also provide valuable insights into the relationship between image quality and other steps in the imaging chain (e.g. follow-up recommendations, diagnostic accuracy and confidence in reporting), as well as clinical outcomes. These data could in turn create opportunities for quality-based differentiation of imaging providers, tools for patient empowerment (e.g. data-driven provider selection), comparative technology assessment, and meaningful economic reimbursement directly tied to quality data. The opportunities for redefining image quality analysis are immense and should be proactively embraced by the medical imaging community as a means of improving clinical outcomes and long-term economic viability.

References
