Softcopy Interpretation
Radiologists have been looking at film images ever since Roentgen first discovered x-rays and obtained an image of his wife’s hand. However, since the 1980s, radiologists have leapt into the digital world and view images on computer monitors with increasing frequency. Picture archiving and communication systems (PACS), Cross-Enterprise Document Sharing for Imaging (XDS-I), and teleradiology networks are becoming commonplace, and many radiology residents currently are trained with digital rather than film displays. [1, 2, 3, 4, 5, 6, 7]
For many radiologists, the technical details of network architectures, bandwidths, digital archives, and digital imaging and communications in medicine (DICOM) interface compatibility are of little concern. What is most important from the clinical perspective is that the required patient images are available when needed, are available quickly, and are of diagnostic quality. Of these, diagnostic quality may be the primary factor from the clinician’s standpoint (see the images below). [8, 9, 10, 11, 12, 13]
Quality can be affected at a number of points in the digital chain before the final image is presented to the radiologist’s eye-brain system. Of most concern to the radiologist is the final step in this chain, the presentation of the clinical image on a display device, because this is what the radiologist examines and uses to make a diagnostic decision. Thus, radiologists must be aware of the issues that affect the display of digital radiographic images.
Recent research has focused on issues of privacy protection and security in the transmission of these images, including the use of chaotic maps [14] and watermarks. [15] Other researchers are focusing on establishing standardized and consistent reporting procedures. [16, 17]
As the transition from film to monitors began to take place, it became evident that, similar to many digital images, the radiographic image on the computer monitor did not appear the same as the image on film. Initially, radiologists were skeptical and did not trust digital displays for routine clinical use. Many perceptual and ergonomic issues arose when the use of film began to decrease. Compared with the traditional method of viewing film on a light box, monitors typically are less bright, have less spatial resolution, have less contrast (dynamic range), and have a limited viewing area.
These factors must be addressed, and researchers in medical image perception have begun to investigate them. Eventually, radiologists must ensure that switching to a different viewing medium neither negatively affects diagnostic accuracy nor significantly affects workflow. If adapting to and using a new type of workstation or viewing system takes too much time, radiologists are not likely to make the transition easily or quickly. [18, 19, 20, 21]
A number of findings relate monitor luminance, tone scale, and interface design to perceptual factors that affect the clinical reading environment and diagnostic accuracy. With respect to diagnostic accuracy, performance usually is approximately the same with film as with monitor viewing. However, other aspects of performance may be affected by a change in display modality. [22, 23, 24, 25, 26]
In one study, 3 bone radiologists and 3 orthopedic surgeons read findings from 27 patients with bone trauma, once from film and once from a monitor. [27] They searched for fractures, dislocations or subluxations, ligamentous injuries, and soft-tissue swellings or effusions. The readers were required to indicate the presence and/or absence of each feature for each patient. Eye position was recorded as they searched the images (see the image below). [28, 29, 30, 31, 32, 33]
Diagnostic performance was statistically equivalent for film and monitor viewing (film = 86% true positives, 13% false positives; monitor = 80% true positives, 21% false positives), although film reading was slightly better. Viewing time and other measures of visual search performance differed significantly, as determined from the eye-position recordings. Average viewing time was 46.45 seconds for film versus 91.15 seconds for the monitor, approximately twice as long.
The primary difference between film and monitor readings was in the visual dwell times associated with lesion-free areas of images. Average dwell time on true-negative areas was significantly longer with the monitor than with film, and because most areas on an image are lesion free, the extended dwell times incrementally produced significantly longer viewing times. [34, 35]
Additionally, readers took twice as long (Student t [t] = 4.84, degrees of freedom [df] = 107, statistical probability [p] = 0.0001) to first fixate (have the eye land on) the lesion of interest with a monitor (4.67 seconds into the search) than with film (2.35 seconds into the search).
Viewing times with the monitor were extended by the 20% of fixation clusters generated during the search that were on the image-processing menu and tool bar rather than on the diagnostic image. Therefore, the computer interface may have been a distraction. Increasingly, the best interfaces are found to be simple and uncluttered and require little training to use.
This study demonstrates how factors other than diagnostic accuracy can be important. Extended viewing times per patient can yield decreased workflow, increased fatigue, and, possibly, decreased performance over time. Developing an easy-to-use, nondistracting interface is also crucial to promote the use of PACS and teleradiology systems by clinical radiologists. [8, 36, 37]
Certain physical characteristics of the display monitor also can affect diagnostic performance; therefore, examine these features when considering a monitor purchase for teleradiology and/or PACS applications in the clinical setting. Note that all of these display parameters apply to both the traditional cathode ray tube (CRT) display [38] and the multitude of liquid crystal displays (LCDs) that have become available for use in radiology. LCDs, of course, may come with their own set of limitations (eg, degradation of image quality when viewing from off-axis angles).
For example, monitor luminance reportedly affects diagnostic performance. The best monitors currently available are approximately 4 times less bright than a typical radiographic view box (1000 vs 250 foot-lambert). In one study, diagnostic performance was better with a high-luminance monitor (140 foot-lambert) than with a relatively low-luminance monitor (80 foot-lambert). [39] Eye position was recorded as 50 pairs of mammograms were viewed on each monitor (see the image below).
As in previous studies, no significant difference in diagnostic performance was found (alternative free response receiver operating characteristic [AFROC] A1 for 80 foot-lambert = 0.9594, for 140 foot-lambert = 0.9695; t = 1.685, df = 5, p = 0.1528). However, once again, the eye-position recording revealed significantly different viewing times: 52.71 seconds versus 48.99 seconds for 80 foot-lambert versus 140 foot-lambert, respectively (t = 1.99, df = 299, p = 0.047). Concerning dwell times associated with decisions, again it was found that true-negative dwell times were affected most and were significantly longer with the 80 foot-lambert versus the 140 foot-lambert monitor.
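As an aside, the two-tailed p values reported above can be recovered from the t statistics. A minimal sketch in Python, using the normal approximation to the t distribution (adequate for the large-df viewing-time comparison; the helper name is illustrative):

```python
import math

def two_tailed_p_normal(t_stat):
    """Two-tailed p value via the normal approximation to the t
    distribution (reasonable when df is large, eg, df = 299)."""
    return math.erfc(abs(t_stat) / math.sqrt(2.0))

# Viewing-time comparison above: t = 1.99, df = 299 -> reported p = 0.047
print(round(two_tailed_p_normal(1.99), 3))  # ~0.047
```

For small df (eg, the AFROC comparison with df = 5), the exact t distribution should be used instead, because the normal approximation understates the p value.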
These results suggest that changes in digital display luminance may affect the radiologist’s ability to easily determine that lesion-free (ie, normal) image locations are normal. Luminance changes may increase the time required to search an image thoroughly and determine whether it is lesion free.
Performance is better with a perceptually linearized display curve (eg, DICOM curve) than with a nonlinearized curve (eg, Society of Motion Picture and Television Engineers pattern used to calibrate monitors). The DICOM standard curve was developed to match monitor output (relative to gray levels) to the perceptual capabilities of the human visual system. [40]
The idea for perceptual linearization derives from the display of images on a monitor using 2 nonlinear mappings.
The first map takes recorded image data (actual numeric values from the digital image) and transforms them into luminance values on the monitor screen, which represent the monitor’s display function or characteristic curve.
The second map transforms the display luminance according to the brightness response of the human visual system.
The optimal perceived dynamic range of the display (which affects contrast and, therefore, perception, especially of low-contrast lesions) depends crucially on the optimal combination of these 2 mappings. Standardization of display curves is important in PACS and teleradiology because the systems allow radiologists to send images from one location to another and to use different monitors for viewing. Images on one monitor must look the same on all monitors. The DICOM display standard tries to realize this ideal by setting up a standard display curve and certain other quality-control measures.
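The interplay of the 2 mappings can be illustrated with a small sketch. Assuming, purely for illustration, a CRT-like display function (gamma 2.2, 400 cd/m2 peak) and a Stevens-type cube-root brightness response for the visual system, a perceptually linearizing lookup table pre-distorts pixel values so that the composed map from pixel value to perceived brightness is linear:

```python
# Illustrative models only; real calibration uses measured display curves
# and the DICOM standard curve rather than these textbook approximations.
GAMMA, LMAX = 2.2, 400.0  # assumed display gamma and peak luminance (cd/m2)

def display_luminance(dac):
    """Map 1: driving level (0-255) -> screen luminance (display function)."""
    return LMAX * (dac / 255.0) ** GAMMA

def perceived(luminance):
    """Map 2: luminance -> perceived brightness (cube-root approximation)."""
    return luminance ** (1.0 / 3.0)

def linearizing_lut():
    """Driving level per pixel value, chosen so equal pixel steps yield
    equal perceived-brightness steps through the composed mappings."""
    b_max = perceived(display_luminance(255))
    lut = []
    for pixel in range(256):
        target_brightness = b_max * pixel / 255.0      # equal perceptual steps
        target_luminance = target_brightness ** 3.0    # invert map 2
        dac = 255.0 * (target_luminance / LMAX) ** (1.0 / GAMMA)  # invert map 1
        lut.append(round(dac))
    return lut

lut = linearizing_lut()
```

Sending pixel values through such a lookup table is what "perceptual linearization" amounts to in practice; the DICOM standard replaces these toy curves with a measured display function and a standardized model of the visual response.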
In a related study, similar to the study that compared monitors of different luminances, a series of 50 mammograms were used to compare the performances of perceptually linearized displays with those of nonperceptually linearized displays. [41] AFROC analysis indicated that diagnostic performance was significantly higher with the perceptually linearized display (A1 = 0.972) than with the nonlinearized display (A1 = 0.951, t = 5.42, df = 5, p = 0.003).
Eye-position data also revealed significant differences in dwell time and visual search. With the nonlinearized display, total viewing time was longer, dwell times associated with all types of decisions (true and false, positive and negative) were longer (especially true negatives), and significantly more fixation clusters were generated during searches than with the linearized display. Thus, the choice of monitor display curve may significantly affect the radiologist’s ability to detect lesions and to decide that truly negative images are negative; a nonperceptually linearized display extends overall viewing time.
Display resolution currently is an important topic in digital radiology and PACS; radiologists prefer as much resolution as possible. However, the higher the resolution of the monitor, the higher the cost. Black and white (B&W) monitors that maximize dynamic range (required for most gray-scale images) are also typically more expensive than color monitors. The most common resolutions of the typical desktop computer monitor are 1024 x 1280 pixels or 1200 x 1600 pixels. Desktop monitors are typically color, which degrades the dynamic range (blacks are not as black, and whites are not as white as with a B&W monitor).
Nuclear medicine (single photon emission computed tomography [SPECT], positron emission tomography [PET]), computed tomography (CT) scanning, and magnetic resonance imaging (MRI) all produce images that are either 256 x 256 pixels or 512 x 512 pixels (for each slice). If one were to view these images on a slice-by-slice basis or by scrolling through them, a 1200 x 1600–pixel monitor potentially would suffice, because these slices easily can be displayed at full resolution on such a monitor. With the advent of techniques that merge images from different modalities and use color overlays to highlight certain types of information, color monitors are desirable.
The concern with low-resolution color monitors is that they cannot provide the maximum contrast resolution and spatial resolution needed for modalities that require B&W displays of high-resolution images to fully exploit the dynamic range. A typical computed radiography (CR) image is approximately 2300 x 1700 pixels, making a medium-to-high-resolution monitor a requirement if the entire image is to be viewed at full resolution. The new digital mammography systems can produce images as large as 4800 x 6400 pixels.
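The arithmetic behind this concern is simple: to show the whole image at once, it must be minified by whichever dimension overflows the display the most. A small sketch (the function name is illustrative):

```python
def minification_factor(img_w, img_h, mon_w, mon_h):
    """Smallest uniform downscale factor that fits the whole image on
    screen (1.0 means the image already fits at full resolution)."""
    return max(img_w / mon_w, img_h / mon_h, 1.0)

# A 4800 x 6400 digital mammogram on a 2048 x 2560 (5-megapixel) display:
print(minification_factor(4800, 6400, 2048, 2560))  # 2.5
```

At a factor of 2.5, each displayed pixel summarizes roughly a 2.5 x 2.5 block of acquired data, which is why zoom or magnification is needed to inspect the original pixels.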
There is some debate in radiology about whether it is necessary to display an image at full resolution all at once or whether it is acceptable to compress the image to view it all at one time, followed by zoom or magnification to access the original data and to view specific portions of the image at full resolution. A significant amount of work is being done to resolve this debate.
In general, monitors of at least 2048 x 2560 pixels should be used for primary diagnostic interpretation in radiology. This resolution will suit most modalities, especially given adequate image-processing support. Currently, 5000 x 5000–pixel monitors are available, but cost and lifetime issues are still of great concern with these displays. As noted, other characteristics need to be considered when purchasing a display device for radiology (eg, luminance, price, dynamic range); thus, spatial resolution alone should not be the deciding factor.
Although the CRT display monitor is currently the most common and reliable display device, other technologies are making an impact in digital radiology. Flat panel devices hold a lot of promise, especially in terms of display luminance. Once the angle-of-regard problem (viewing the display off-center) has been fully solved, these new devices could represent a viable alternative to high-resolution CRT monitors. Because flat panel technology is also undergoing research and development by many companies for the commercial market, radiology may benefit significantly in terms of cost if the technology can be easily adapted for clinical use.
Radiographic images today are displayed on both medical-grade (MG) and commercial off-the-shelf (COTS) color LCDs. COTS displays are readily available and usually less expensive than MG grayscale displays, making them very attractive for both large and small practices. Most LCD panels are backlit with cold-cathode fluorescent lamps (CCFLs), but newer ones use light-emitting diodes (LEDs) and have thinner profiles, lower power consumption, and reportedly longer lifetimes.
In both cases, the liquid crystal elements regulate how much backlight passes through the panel: the backlight brightness determines the maximum display luminance, and the amount of light the panel can block sets the minimum luminance. Both CCFL and LED backlights degrade or dim with time and amount of use, so displays need to be replaced once the luminance is no longer in compliance with established standards (minimum 1.0 cd/m2; maximum at least 350 cd/m2, or 420 cd/m2 for mammography; luminance ratio >250). MG displays often have embedded tools that monitor and adjust backlight levels, so they are more stable than COTS displays, which require more regular manual recalibration. MG displays also typically have better luminance uniformity (~15%) than COTS displays (>20%), owing to embedded technology that compensates for pixel luminance variation.
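The luminance criteria above can be collected into a simple conformance check (a hypothetical QC helper; the thresholds are those cited in the text):

```python
def luminance_compliant(l_min, l_max, mammography=False):
    """Check measured minimum/maximum luminance (cd/m2) against the
    criteria cited above: min >= 1.0, max >= 350 (420 for mammography),
    and a luminance ratio greater than 250."""
    required_max = 420.0 if mammography else 350.0
    return l_min >= 1.0 and l_max >= required_max and l_max / l_min > 250.0

print(luminance_compliant(1.2, 400.0))                    # True
print(luminance_compliant(1.2, 400.0, mammography=True))  # False: max < 420
```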
One main reason luminance is so important is its effect on perceived contrast, which can vary considerably because (1) the human eye adjusts to the average brightness to which it is exposed and (2) as brightness diverges from the point of adaptation, subtle contrast changes (ie, lesions in radiographic images) become more and more difficult to perceive. Very briefly, contrast sensitivity of the human visual system (HVS) can be quantified using just-noticeable differences (JNDs), or detection thresholds, which represent perceivable changes in luminance for a given displayed luminance. Most of the seminal work on modeling contrast sensitivity was done by Barten and is still used today. He determined an average HVS response based on data (detection of sinusoidal contrast patterns on different luminance backgrounds) collected from a large sample of subjects and showed that the HVS is nonlinear. In other words, the percent contrast change required for a JND at high background luminance is lower than that required for a JND at low background luminance.
This means that in order to optimize the perceptibility of diagnostic image information, calibration methods must account for the capabilities and limitations (ie, contrast sensitivity nonlinearity) of the HVS. Barten’s model provides a means to accomplish this by producing perceptual linearity across grayscale values (by setting luminance values so changes in pixel value correspond to equal JNDs), so changes in pixel values across a grayscale range are perceived to have similar contrast. Basically, information at low luminance levels is not lost at the expense of being able to perceive information at high levels and vice versa.
The Digital Imaging and Communications in Medicine (DICOM) Part 14 grayscale standard display function (GSDF) accomplishes this, and studies have shown that diagnostic accuracy is better with a DICOM-calibrated than uncalibrated display. Although the DICOM GSDF is not perfect, it is the most widely used calibration method in radiology today.
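The GSDF itself is an analytic function mapping a JND index j (1-1023) to luminance. A minimal Python rendering, using the coefficient values published in DICOM PS3.14:

```python
import math

# DICOM PS3.14 GSDF coefficients: log10 luminance is a rational
# polynomial in ln(j), for JND index j in [1, 1023]
A, B, C, D = -1.3011877, -2.5840191e-2, 8.0242636e-2, -1.0320229e-1
E, F, G, H = 1.3646699e-1, 2.8745620e-2, -2.5468404e-2, -3.1978977e-3
K, M = 1.2992634e-4, 1.3635334e-3

def gsdf_luminance(j):
    """Luminance in cd/m2 at JND index j per the DICOM grayscale
    standard display function."""
    x = math.log(j)
    num = A + C * x + E * x**2 + G * x**3 + M * x**4
    den = 1 + B * x + D * x**2 + F * x**3 + H * x**4 + K * x**5
    return 10.0 ** (num / den)

print(round(gsdf_luminance(1), 3))  # ~0.05 cd/m2, the bottom of the curve
```

Calibration then assigns each pixel value an equal span of JND indices between the display's measured minimum and maximum luminance, so that equal pixel steps are equally perceptible. Note that one JND corresponds to a larger percentage luminance change at the dark end of the curve than at the bright end, which is exactly the HVS nonlinearity described above.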
The use and impact of the DICOM GSDF on MG grayscale displays is rather well known, but as noted previously, color displays are being used more widely in radiology for diagnostic interpretation than ever before. To date, the DICOM GSDF includes no specific recommendations for color displays in radiology, and the de facto standard is to use the GSDF for calibrating the color display of grayscale images. Although older studies showed clear differences between color and monochrome displays in terms of achievable diagnostic accuracy (monochrome superior to color), most recent studies show that even high-quality COTS color displays can yield equivalent levels of diagnostic accuracy if properly calibrated and maintained. The American College of Radiology (ACR) standard recommends that all monitors be set to a white point corresponding to the CIE daylight standard D65, or a color temperature of about 6500 K.
With the advent of electronic health records and the Integrating the Healthcare Enterprise (IHE) initiative, proper calibration of color displays is becoming increasingly important. Radiologists and other clinicians are not only viewing grayscale radiographs during patient care but also color medical images such as pathology whole slide images (WSI), digital ophthalmology images, dermatology images, and a host of other visible-light images. In current practice, there is very little guidance on the calibration or characterization of medical color displays. One possibility is to use color-device profiles conforming to the International Color Consortium (ICC) specification for color management of digital imaging systems, as it provides a standardized architecture, profile format, and data structure for color management and color data interchange between different color imaging devices.
Methods are being proposed for display of both monochrome and color medical images, but few have been validated with respect to impact on diagnostic performance. It may be that separate calibration schemes will be required for displaying monochrome versus color images on color displays until an all-in-one method suitable for simultaneous display of both types of images has been devised. For example, one could start by taking 2 standard pathology slides that are scanned and displayed. One slide is embedded with 9 filters having colors purposely selected for hematoxylin and eosin (H&E)–stained WSIs, and the other slide is an H&E-stained mouse embryo. The displayed images are compared with a standard to identify inaccurate display of color and its causes.
Other methods address display characterization and the tools used for calibration. One recent study characterized 3 probes for measuring display color: a modification of a small-spot luminance probe and 2 conic probes based on black frusta. The authors found significant differences among the probes that affect the measurements used to quantify display color; they therefore proposed a method for evaluating the performance of color calibration kits for LCD monitors, along with a universal platform (Virtual Display) that emulates tone reproduction curves.
A more recent method developed by Silverstein et al included implementing a black-level correction and encoding it so that it was compatible with the ICC color profile structure. They found that color reproduction accuracy improved dramatically using their proposed methodology for color display characterization and profiling using a series of COTS displays with varying preset calibrations.
In one of the only studies to examine the impact of color management and calibration on diagnostic accuracy, this method was used to compare a calibrated versus uncalibrated (out of the box) COTS NEC 2690 display for diagnosing a set of WSI breast biopsy images. Although diagnostic performance with the color-calibrated display was higher than with the uncalibrated display, no statistically significant differences in diagnostic accuracy were observed. However, viewing time was significantly shorter with the calibrated display, suggesting a slight advantage diagnostically for a properly calibrated and color-managed display and a significant potential advantage in terms of improved workflow.
Digital displays of radiographs also make possible the true clinical use of computer-aided diagnosis (CAD) schemes. The goal of CAD is similar to the goal of perceptual feedback discussed earlier (ie, to provide the radiologist with an additional look at an image, with the potential lesion locations indicated). However, instead of using eye-position information, CAD uses a variety of image-processing algorithms to detect and occasionally classify probable lesion sites. Methods of using CAD information by radiologists in the clinic and CAD’s effect on diagnostic performance are becoming topics of interest. [42]
Although CAD systems perform well, computers still miss lesions that the radiologist is able to find. Radiologists and a CAD system independently examined a series of 80 mammograms for microcalcification clusters, with the following findings [43] :
The CAD system had a true-positive rate of 83%, with 0.5 false positives per image.
The radiologists had true-positive rates of 78-90% and false-positive rates of 0.03-0.20 per image.
When the locations of the CAD system’s and the radiologists’ true and false positives were examined, all but 5% of the true microcalcification clusters were identified by the CAD system, the radiologists, or both.
Of the detected clusters, 10% were detected by CAD but missed by the radiologist, and 11% were missed by CAD and detected by a minimum of 1 radiologist.
Examination of the lesion features revealed that CAD detected microcalcifications that radiologists judged to have few or no visible features but occasionally missed those with obvious but nontypical features.
As radiologists, be aware that CAD is not perfect, and learn how to use CAD as a supplement to perceptual search strategies. Do not eliminate perceptual search of images, and do not rely on CAD to detect 100% of lesions. Continue to search the entire image to better decide if a suggestive region indicated by CAD is a true lesion or a false positive.
In general, CAD will probably help most radiologists to an extent for certain types of images. Those radiologists with more experience are less likely to benefit from CAD than radiologists with less experience in terms of diagnostic performance, but CAD prompts (when accuracy is high, without excessive false-positive results) may help in other ways, such as improving workflow. This may be especially true in such areas as CAD for lung CT imaging, in which a significant number of images must be viewed, potentially leading to increased distraction or inattention on the part of the radiologist. CAD in this situation may help find potential lesions and may also help focus the radiologist’s search.
One study also used eye-position recordings to study perceptual strategies of experienced mammographers versus residents reading mammograms with and without CAD information. [44] Significant differences based on the level of expertise were found. Experienced mammographers spent more time (104 seconds) doing a more thorough search of the images before they accessed CAD prompts than did residents (86 seconds). During the search without CAD, mammographers also fixated more of the lesions than residents; therefore, when the experienced mammographers accessed the CAD pointers, they used them more to confirm suspicions about potential lesions. This hypothesis was supported in interviews with the readers after the study.
The experienced mammographers in the study noted that in at least 95% of patients in whom they had detected the lesion prior to CAD, they merely glanced at the lesion with CAD to make sure the CAD was pointing to the same lesion and location. The rest of the time, they looked a little longer because the CAD prompt was not always pointing at the center of the lesion; thus, the location had to be verified more carefully. Residents appeared to use CAD to guide them to an initial inspection of potential lesions.
With this strategy, less experienced readers may not be as likely as more experienced readers to discover lesions that the CAD system did not detect. This also was confirmed after the study was completed. The residents tended to state that they were not able to detect too many lesions on first glance or were very unsure of the lesions they detected. They also said that they tended to wait for the CAD information because it was apt to take too long to search without it and because there were too many confusing structures to deal with without the help from CAD. However, this may not have been a good strategy.
The study also showed that CAD’s usefulness in helping radiologists determine if lesions were present was affected. For mammographers, 50% of lesions missed without CAD were detected and reported with CAD. The original number of false negatives before CAD was higher for residents than for the experienced mammographers, and only 33% of the missed lesions were detected correctly and reported with CAD. Thus, although the residents decided to wait for the CAD information, it only helped them with approximately one third of the missed lesions.
CAD can help identify a lesion but without the necessary experience to interpret what is seen, the residents do not benefit as greatly from the CAD help as experienced mammographers, who are more prepared to be able to interpret CAD findings. These and similar results may have significant implications for CAD’s use in the clinical environment. Residents or radiologists who are not expert in mammography may require explicit instruction to conduct thorough searches of images before using CAD. [45]
In addition to CAD, digital displays provide the radiologist with viewing aids that are not available with film. For example, general image processing (eg, window-level operations, high-pass filters, low-pass filters) typically is standard with most digital-display workstations. [46]
A number of types of image processing do not appear to improve diagnostic accuracy, while other types do. For example, one study found that radiologists’ decisions were equally as likely to change from false negative to true positive as from true positive to false negative when image processing was used. [47] The conclusion is that, at least for the types of images and image-processing functions used in the study, image processing did not affect diagnostic performance significantly.
Other types of viewing aids with digital displays that may be more helpful to the radiologist include 3-dimensional (3-D) displays (especially with CT scanning, MRI, ultrasound [US]) and color. Traditionally, film-based radiographic images have been displayed only in gray scale, with a dye (typically blue) as the single color added to the film base to reduce eye strain. With digital images, color has been used occasionally and may become accepted more widely in the future.
Currently, the most successful application of color to radiographic images may be in Doppler US for tracking flow information. In a single image, the radiologist can view both anatomy and function. A similar technique has been used in MRI, CT scanning, and nuclear medicine imaging, especially with image registration that compares images taken at different times or in 2 modalities and with 3-D rendering of image data. [48, 49, 50, 51]
Whether color applications will be used on a regular basis, especially with 2-D computer radiography images, remains to be seen. However, keep in mind that if color displays are used, a new set of standards will be required for quality control and calibration of color monitors to maintain image fidelity among monitors and over time.
As we move further into the 21st century and radiology completes the transition to a filmless practice, image perception and observer performance concerns will not decrease. In fact, the importance of medical image perception research may increase. Technology will present better and different images to the radiologist. How will radiologists deal perceptually with advances such as compressed images, color addition, and computer-aided prompts superimposed over images?
Currently, these new methods of presenting images and image information are under investigation. Soon, radiologists will be asked to view these new types of images. Image perception research will help us guide basic research, understand how to present this new information to radiologists, and understand how to improve diagnostic performance by improving perception of images. Perhaps most importantly, radiologists will require continuous feedback and opinion concerning PACS and teleradiology systems used in clinical practice. [9, 10, 11, 12, 13]
Carrino JA. Digital imaging overview. Semin Roentgenol. 2003 Jul. 38(3):200-15. [Medline].
Davison BD, Tello R, Blickman JG. World Wide Web program for optimizing and assessing medical student performance during the radiology clerkship. Acad Radiol. 2000 Apr. 7(4):260-3. [Medline].
Krupinski EA. Medical image perception issues for PACS deployment. Semin Roentgenol. 2003 Jul. 38(3):231-43. [Medline].
Mehta A, Schultz T, Dreyer KJ. Empowering the online educator. Acad Radiol. 2000 Mar. 7(3):196-7. [Medline].
Mertelmeier T. Why and how is soft copy reading possible in clinical practice?. J Digit Imaging. 1999 Feb. 12(1):3-11. [Medline].
Wang J, Langer S. A brief review of human perception factors in digital displays for picture archiving and communications systems. J Digit Imaging. 1997 Nov. 10(4):158-68. [Medline].
Ribeiro LS, Rodrigues RP, Costa C, Oliveira JL. Enabling outsourcing XDS for imaging on the public cloud. Stud Health Technol Inform. 2013. 192:33-7. [Medline].
Jiang Y, Nishikawa RM, Schmidt RA. Relative gains in diagnostic accuracy between computer-aided diagnosis and independent double reading. Proc SPIE Int Soc Opt Eng. 2000. 3981:10-5.
Chen JY, Sippel Schmidt TM, Carr CD, Kahn CE Jr. Enabling the Next-Generation Radiology Report: Description of Two New System Standards. Radiographics. 2017 Oct 6. 160106. [Medline].
European Society of Radiology (ESR). ESR teleradiology survey: results. Insights Imaging. 2016 Aug. 7 (4):463-79. [Medline].
Hunter TB, Weinstein RS, Krupinski EA. State medical licensure for telemedicine and teleradiology. Telemed J E Health. 2015 Apr. 21 (4):315-8. [Medline].
Hunter TB, Krupinski EA. University-Based Teleradiology in the United States. Healthcare (Basel). 2014 Apr 15. 2 (2):192-206. [Medline].
European Society of Radiology (ESR). ESR white paper on teleradiology: an update from the teleradiology subgroup. Insights Imaging. 2014 Feb. 5 (1):1-8. [Medline].
Fu C, Meng WH, Zhan YF, Zhu ZL, Lau FC, Tse CK, et al. An efficient and secure medical image protection scheme based on chaotic maps. Comput Biol Med. 2013 Sep. 43(8):1000-10. [Medline].
Nyeem H, Boles W, Boyd C. A review of medical image watermarking requirements for teleradiology. J Digit Imaging. 2013 Apr. 26(2):326-43. [Medline]. [Full Text].
Larson DB, Towbin AJ, Pryor RM, Donnelly LF. Improving consistency in radiology reporting through the use of department-wide standardized structured reporting. Radiology. 2013 Apr. 267(1):240-50. [Medline].
Kuttner S, Bujila R, Kortesniemi M, Andersson H, Kull L, Østerås BH, et al. A proposed protocol for acceptance and constancy control of computed tomography systems: a Nordic Association for Clinical Physics (NACP) work group report. Acta Radiol. 2013 Mar 1. 54(2):188-98. [Medline].
Scott WW Jr, Bluemke DA, Mysko WK. Interpretation of emergency department radiographs by radiologists and emergency medicine physicians: teleradiology workstation versus radiograph readings. Radiology. 1995 Apr. 195(1):223-9. [Medline].
Guidelines and Standards Committee of the Commission on General, Small, and Rural Practice. ACR Practice Guideline for Radiologist Coverage of Imaging Performed in Hospital Emergency Departments (Revised 2008). American College of Radiology. [Full Text].
Moise A, Atkins MS. Design requirements for radiology workstations. J Digit Imaging. 2004 Jun. 17(2):92-9. [Medline].
Silva E 3rd, Breslau J, Barr RM, Liebscher LA, Bohl M, Hoffman T, et al. ACR white paper on teleradiology practice: a report from the Task Force on Teleradiology Practice. J Am Coll Radiol. 2013 Aug. 10(8):575-85. [Medline].
Krupinski EA, Roehrig H, Fan J, Yoneda T. Monochrome versus color softcopy displays for teleradiology: observer performance and visual search efficiency. Telemed J E Health. 2007 Dec. 13(6):675-81. [Medline].
Krupinski EA. Choosing the right monitor and interface for viewing medical images. Accessed: January 16, 2001. [Full Text].
Song K-S, Lee JS, Kim HY. Effect of monitor luminance on the detection of a solitary pulmonary nodule: ROC analysis. Proc SPIE Int Soc Opt Eng. 1999. 3663:212-6.
Hemminger BM, Dillon AW, Johnston RE. Effect of display luminance on the feature detection rates of masses in mammograms. Med Phys. 1999 Nov. 26(11):2266-72.
Ly CK. SoftCopy Display Quality Assurance Program at Texas Children's Hospital. J Digit Imaging. 2002. 15 Suppl 1:33-40. [Medline].
Lund PJ, Krupinski EA, Pereles S. Comparison of conventional and computed radiography: assessment of image quality and reader performance in skeletal extremity trauma. Acad Radiol. 1997 Aug. 4(8):570-6. [Medline].
Ishigaki T, Endo T, Ikeda M. Subtle pulmonary disease: detection with computed radiography versus conventional chest radiography. Radiology. 1996 Oct. 201(1):51-60. [Medline].
Gennari RC, Gur D, Miketic LM. Nonportable CR of the chest: radiologists' acceptance. Proc SPIE Int Soc Opt Eng. 1994. 2166:105-10.
Krupinski EA. An eye-movement study on the use of CAD information during mammographic search. Paper presented at: Seventh Far West Image Perception Conference; October 16-18, 1997; Tucson, Ariz.
Krupinski EA, Evanoff M, Ovitt T. Influence of image processing on chest radiograph interpretation and decision changes. Acad Radiol. 1998 Feb. 5(2):79-85. [Medline].
Nodine CF, Liu H, Miller WT Jr. Observer performance in the localization of tubes and catheters on digital chest images: the role of expertise and image enhancement. Acad Radiol. 1996 Oct. 3(10):834-41. [Medline].
Rehm K, Ovitt TW. Digital image processing of chest radiographs to compensate for the limitations of video displays. J Electronic Imaging. 1993. 2:264-71.
Krupinski EA, Lund PJ. Differences in time to interpretation for evaluation of bone radiographs with monitor and film viewing. Acad Radiol. 1997 Mar. 4(3):177-82. [Medline].
van Beek EJ, Mullan B, Thompson B. Evaluation of a real-time interactive pulmonary nodule analysis system on chest digital radiographic images: a prospective study. Acad Radiol. 2008 May. 15(5):571-5. [Medline].
Watson AB, ed. Digital Images and Human Vision. Cambridge, Mass: MIT Press. 1993.
Yousem DM, Beauchamp NJ Jr. Clinical input into designing a PACS. J Digit Imaging. 2000 Feb. 13(1):19-24. [Medline].
Roehrig H, Blume H, Ji TL. Performance tests and quality control of cathode ray tube displays. J Digit Imaging. 1990 Aug. 3(3):134-45. [Medline].
Krupinski EA, McNeill K, Ovitt TW. Patterns of use and satisfaction with a university-based teleradiology system. J Digit Imaging. 1999 May. 12(2 Suppl 1):166-7. [Medline].
American College of Radiology, National Electrical Manufacturers Association. Digital Imaging and Communications in Medicine (DICOM) supplement 28: grayscale standard display function. Rosslyn, Va: National Electrical Manufacturers Association; 1998.
Krupinski EA, Roehrig H. The influence of a perceptually linearized display on observer performance and visual search. Acad Radiol. 2000 Jan. 7(1):8-13. [Medline].
Good WF, Wang XH, Zheng B. Preliminary investigations of multi-view CAD of mammography employing features derived from ipsilateral pairs. In: Lemke LU, Vannier MW, Inamura K, Farman AG, eds. CARS ’99: Computer Assisted Radiology and Surgery. Bridgewater, NJ: Excerpta Medica. 1999:383-7.
Krupinski EA, Nishikawa RM. Comparison of eye position versus computer identified microcalcification clusters on mammograms. Med Phys. 1997 Jan. 24(1):17-23. [Medline].
Krupinski E, Roehrig H, Furukawa T. Influence of film and monitor display luminance on observer performance and visual search. Acad Radiol. 1999 Jul. 6(7):411-8. [Medline].
ACR Guidelines and Standards Committee of the Commission on Medical Physics. Practice Guideline for Determinants of Image Quality in Digital Mammography (2007). American College of Radiology. [Full Text].
Muka E, Blume HR, Daly SJ. Display of medical images on CRT soft-copy displays: a tutorial. Proc SPIE Int Soc Opt Eng. 1995. 2431:341-59.
Krupinski EA. Differences in viewing time for mammograms displayed on film versus a CRT monitor. In: Karssemeijer N, Thijssen M, Hendriks J, van Erning L, eds. Digital Mammography '98. Boston, Mass: Kluwer Academic Publishers. 1998:337-43.
Aoki S, Osawa S, Yoshioka N. Velocity-coded color MR angiography. AJNR Am J Neuroradiol. 1998 Apr. 19(4):691-3. [Medline].
Rehm K, Strother SC, Anderson JR. Display of merged multimodality brain images using interleaved pixels with independent color scales. J Nucl Med. 1994 Nov. 35(11):1815-21. [Medline].
Shepherd AJ. Calibrating screens for continuous colour displays. Spat Vis. 1997. 11(1):57-74. [Medline].
Krupinski EA, Roehrig H. Pulmonary nodule detection and visual search: P45 and P104 monochrome versus color monitor displays. Acad Radiol. 2002 Jun. 9(6):638-45. [Medline].
Elizabeth A Krupinski, PhD, FSPIE, FSIIM, FATA, FAIMBE Professor and Vice Chair of Research, Department of Radiology and Imaging Sciences, Emory University School of Medicine
Elizabeth A Krupinski, PhD, FSPIE, FSIIM, FATA, FAIMBE is a member of the following medical societies: Radiological Society of North America
Disclosure: Nothing to disclose.
Bernard D Coombs, MB, ChB, PhD Consulting Staff, Department of Specialist Rehabilitation Services, Hutt Valley District Health Board, New Zealand
Disclosure: Nothing to disclose.
Robert L DeLaPaz, MD Director, Professor, Department of Radiology, Division of Neuroradiology, Columbia University College of Physicians and Surgeons
Robert L DeLaPaz, MD is a member of the following medical societies: American Society of Neuroradiology, Association of University Radiologists, Radiological Society of North America
Disclosure: Nothing to disclose.
Eugene C Lin, MD Attending Radiologist, Teaching Coordinator for Cardiac Imaging, Radiology Residency Program, Virginia Mason Medical Center; Clinical Assistant Professor of Radiology, University of Washington School of Medicine
Eugene C Lin, MD is a member of the following medical societies: American College of Nuclear Medicine, American College of Radiology, Radiological Society of North America, Society of Nuclear Medicine and Molecular Imaging
Disclosure: Nothing to disclose.
John M Lewin, MD Section Chief, Breast Imaging, Diversified Radiology of Colorado, PC; Associate Clinical Professor, Department of Preventative Medicine and Biometrics, University of Colorado School of Medicine
John M Lewin, MD is a member of the following medical societies: American College of Radiology, American Roentgen Ray Society, Radiological Society of North America, Society of Breast Imaging
Disclosure: Received consulting fee from Hologic, Inc. for consulting; Received grant/research funds from Hologic, Inc. for research.